# `Agentic.Loop.Stages.LLMCall`

Makes the LLM API call and stores the response in context.

Uses the `llm_chat` callback from `ctx.callbacks` to make the actual API call.
The response is stored in `ctx.last_response` for the next stage
(typically `ModeRouter`) to process.
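
As an illustration of that flow, here is a minimal sketch of such a stage. The `run/1` signature, the `build_params/1` helper, and the exact shape of `ctx` are assumptions made for this example, not the module's actual implementation:

```elixir
defmodule LLMCallSketch do
  # Hypothetical stage: pulls the llm_chat callback out of the context,
  # invokes it with the built params, and stores the reply for the next stage.
  def run(ctx) do
    llm_chat = ctx.callbacks.llm_chat

    case llm_chat.(build_params(ctx)) do
      {:ok, response} -> {:ok, %{ctx | last_response: response}}
      {:error, reason} -> {:error, reason}
    end
  end

  # Assumed helper: in the real module the params also carry routing and
  # cache-control metadata (see the sections below).
  defp build_params(ctx) do
    %{"messages" => ctx.transcript}
  end
end
```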

## Model Routing

Resolves the best model route via `Agentic.ModelRouter` before each call.
The resolved route is passed to the callback under the `"_route"` key.
If routing fails, the stage falls back to invoking the callback directly.
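
A sketch of that routing step, under the assumption that `Agentic.ModelRouter` exposes a `resolve/1` function returning `{:ok, route}` or an error tuple (the real function name and return shape may differ):

```elixir
# Hypothetical helper: attach the resolved route to the params, or leave
# them untouched so the callback is invoked directly when routing fails.
defp params_with_route(params, ctx) do
  case Agentic.ModelRouter.resolve(ctx) do
    {:ok, route} ->
      # Resolved route travels to the host callback under "_route".
      Map.put(params, "_route", route)

    {:error, _reason} ->
      # Routing failed: fall back to the bare params.
      params
  end
end
```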

## Cache Awareness (V1.2)

Separates params into a stable prefix (system prompt, workspace snapshot,
tool definitions) and a volatile suffix (recent transcript). Computes a
`stable_prefix_hash` so the host can detect when the prefix has changed and
pass cache-boundary hints to the LLM provider.

The params map sent to `llm_chat` includes a `"cache_control"` key with the following fields (sketched below):
- `"stable_hash"` — hash of the stable prefix content
- `"prefix_changed"` — boolean, true when prefix differs from last call

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
