# `Tinkex.Types.ModelInput`
[🔗](https://github.com/North-Shore-AI/tinkex/blob/v0.4.0/lib/tinkex/types/model_input.ex#L1)

Model input containing chunks of encoded text and/or images.

Mirrors the Python `tinker.types.ModelInput` type.

# `chunk`

```elixir
@type chunk() ::
  Tinkex.Types.EncodedTextChunk.t()
  | Tinkex.Types.ImageChunk.t()
  | Tinkex.Types.ImageAssetPointerChunk.t()
```

# `t`

```elixir
@type t() :: %Tinkex.Types.ModelInput{chunks: [chunk()]}
```

# `append`

```elixir
@spec append(t(), chunk()) :: t()
```

Append a chunk to the ModelInput.

Returns a new ModelInput with the given chunk appended to the end.

## Examples

    iex> input = ModelInput.empty()
    iex> chunk = %EncodedTextChunk{tokens: [1, 2, 3], type: "encoded_text"}
    iex> ModelInput.append(input, chunk)
    %ModelInput{chunks: [%EncodedTextChunk{tokens: [1, 2, 3], type: "encoded_text"}]}

# `append_int`

```elixir
@spec append_int(t(), integer()) :: t()
```

Append a single token to the ModelInput.

Token-aware append: if the last chunk is an EncodedTextChunk, extends its
tokens; otherwise adds a new EncodedTextChunk with that single token.

## Examples

    iex> input = ModelInput.from_ints([1, 2])
    iex> ModelInput.append_int(input, 3) |> ModelInput.to_ints()
    [1, 2, 3]

    iex> input = ModelInput.empty()
    iex> ModelInput.append_int(input, 42) |> ModelInput.to_ints()
    [42]

# `empty`

```elixir
@spec empty() :: t()
```

Create an empty ModelInput with no chunks.

## Examples

    iex> ModelInput.empty()
    %ModelInput{chunks: []}

# `from_ints`

```elixir
@spec from_ints([integer()]) :: t()
```

Create ModelInput from a list of token IDs.
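A minimal sketch of the resulting struct, inferred from the spec above and the chunk shape shown in the `append/2` example:

```elixir
alias Tinkex.Types.ModelInput

# Wraps the token IDs in a single encoded-text chunk (shape inferred
# from the append/2 example; not independently verified here).
ModelInput.from_ints([1, 2, 3])
#=> %ModelInput{chunks: [%EncodedTextChunk{tokens: [1, 2, 3], type: "encoded_text"}]}
```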

# `from_text`

```elixir
@spec from_text(
  String.t(),
  keyword()
) :: {:ok, t()} | {:error, Tinkex.Error.t()}
```

Create ModelInput from raw text.

Tokenizes the provided `text` via `Tinkex.Tokenizer.encode/3` and returns a
tuple using the same `{:ok, ...} | {:error, ...}` contract. Chat templates
are **not** applied; callers must supply fully formatted prompts.

## Options

  * `:model_name` (required) - Model name used to resolve the tokenizer.
  * `:training_client` - Forwarded to tokenizer resolution.
  * Any other options supported by `Tinkex.Tokenizer.encode/3`.
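A hedged usage sketch; `"example/model"` is a hypothetical model name used only for illustration, not a value the library ships with:

```elixir
alias Tinkex.Types.ModelInput

# :model_name is required so the tokenizer can be resolved.
case ModelInput.from_text("Hello, world!", model_name: "example/model") do
  {:ok, input} -> input
  {:error, error} -> raise error
end
```

Because chat templates are not applied, the `text` argument should already be the fully formatted prompt you want tokenized.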

# `from_text!`

```elixir
@spec from_text!(
  String.t(),
  keyword()
) :: t()
```

Create ModelInput from raw text, raising on failure.

See `from_text/2` for options and behavior.

# `length`

```elixir
@spec length(t()) :: non_neg_integer()
```

Get the total length (token count) of the ModelInput.

For image chunks, `expected_tokens` must be set; otherwise `length/1` will
raise to mirror Python SDK guardrails.
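For text-only inputs this is simply the total token count across chunks, e.g.:

```elixir
alias Tinkex.Types.ModelInput

# Three tokens in, length 3 out.
input = ModelInput.from_ints([10, 20, 30])
ModelInput.length(input)
#=> 3
```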

# `to_ints`

```elixir
@spec to_ints(t()) :: [integer()]
```

Extract all token IDs from the ModelInput.

Only valid when every chunk is an `EncodedTextChunk`; raises if the input contains image chunks.
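A quick sketch of the text-only round trip with `from_ints/1`:

```elixir
alias Tinkex.Types.ModelInput

# from_ints/1 then to_ints/1 recovers the original token IDs.
[7, 8, 9]
|> ModelInput.from_ints()
|> ModelInput.to_ints()
#=> [7, 8, 9]
```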

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
