# `LlamaCppEx.Chat`
[🔗](https://github.com/nyo16/llama_cpp_ex/blob/main/lib/llama_cpp_ex/chat.ex#L1)

Chat template formatting using llama.cpp's Jinja template engine.

Converts a list of chat messages into a formatted prompt string
using the model's embedded chat template. Uses the full Jinja engine
from llama.cpp's common library, which supports `enable_thinking` and
arbitrary `chat_template_kwargs`.

## Examples

    {:ok, prompt} = LlamaCppEx.Chat.apply_template(model, [
      %{role: "system", content: "You are helpful."},
      %{role: "user", content: "Hi!"}
    ])

    # Disable thinking (for Qwen3 and similar models)
    {:ok, prompt} = LlamaCppEx.Chat.apply_template(model, messages,
      enable_thinking: false
    )

# `message`

```elixir
@type message() :: %{role: String.t(), content: String.t()} | {String.t(), String.t()}
```
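Both message shapes are accepted. Going by the field order in the type, a two-tuple reads as `{role, content}` (an assumption; the map form is the one used throughout the examples):

```elixir
# Map and tuple forms can be mixed in the same list;
# the tuple shorthand is assumed to be {role, content}.
messages = [
  %{role: "system", content: "You are helpful."},
  {"user", "Hi!"}
]
```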

# `apply_template`

```elixir
@spec apply_template(LlamaCppEx.Model.t(), [message()], keyword()) ::
  {:ok, String.t()} | {:error, String.t()}
```

Applies the model's chat template to a list of messages using the Jinja engine.

## Options

  * `:add_assistant` - Whether to add the assistant turn prefix. Defaults to `true`.
  * `:enable_thinking` - Whether to enable thinking/reasoning mode. Defaults to `true`.
  * `:chat_template_kwargs` - Extra template variables as a list of `{key, value}` string tuples.
    Defaults to `[]`.
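The options above can be combined in one call. A sketch assuming a loaded `model` whose template understands a `"custom_system_suffix"` variable (that variable name is purely illustrative, not part of the library):

```elixir
messages = [
  %{role: "system", content: "You are helpful."},
  %{role: "user", content: "Summarize this."}
]

# Disable the reasoning block and pass an extra template variable.
# Both keys and values in :chat_template_kwargs are strings.
{:ok, prompt} =
  LlamaCppEx.Chat.apply_template(model, messages,
    enable_thinking: false,
    chat_template_kwargs: [{"custom_system_suffix", "Be brief."}]
  )
```

Whether a given kwarg has any effect depends entirely on the model's embedded Jinja template; unknown variables are simply available to the template and ignored if unused.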

---

*Consult [api-reference.md](api-reference.md) for a complete listing.*
