# `Planck.AI.Context`
[🔗](https://github.com/alexdesousa/planck/blob/v0.1.0/lib/planck/ai/context.ex#L1)

Everything sent to the LLM in a single request: system prompt, conversation
history, and available tools.

Inference parameters (temperature, max_tokens, etc.) are NOT stored here —
they are passed as keyword options at the `Planck.AI.stream/3` or
`Planck.AI.complete/3` call site and forwarded directly to `req_llm`.
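For illustration, a hedged sketch of this split: the struct carries only prompt data, while inference options travel as a keyword list at the call site. The second (`model`) argument and its string format are assumptions about `complete/3`'s signature, not confirmed by this page; only `temperature` and `max_tokens` are named above.

```elixir
# Build the context — fields as defined in t/0 below.
context = %Planck.AI.Context{
  system: "You are a helpful coding assistant.",
  messages: [
    %Planck.AI.Message{role: :user, content: [{:text, "Hello"}]}
  ],
  tools: []
}

# Inference parameters are NOT part of the struct; they are passed as
# keyword options and forwarded to req_llm. The model argument here is
# a hypothetical placeholder.
Planck.AI.complete(context, "provider:model-name",
  temperature: 0.2,
  max_tokens: 512
)
```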

## Examples

    iex> %Planck.AI.Context{
    ...>   system: "You are a helpful coding assistant.",
    ...>   messages: [
    ...>     %Planck.AI.Message{role: :user, content: [{:text, "Hello"}]}
    ...>   ],
    ...>   tools: []
    ...> }

# `t`

```elixir
@type t() :: %Planck.AI.Context{
  messages: [Planck.AI.Message.t()],
  system: String.t() | nil,
  tools: [Planck.AI.Tool.t()]
}
```

---

*Consult [api-reference.md](api-reference.md) for a complete listing.*
