# `LlmCore.Agent.Loop`
[🔗](https://github.com/fosferon/llm_core/blob/v0.3.0/lib/llm_core/agent/loop.ex#L1)

Agentic tool-calling loop.

Iterates: call LLM → feed response through the iteration pipeline →
if the pipeline says `:continue`, append messages and call the LLM again;
if `:done`, return the final response.

The loop owns iteration control and message accumulation. The pipeline
(`LlmCore.Agent.Pipeline.Iteration`) owns per-iteration processing logic.

## Architecture Mirror

This mirrors the common `reduce_while` iteration pattern:

    GrooveExecutor                     Agent.Loop
    ─────────────                      ──────────
    Enum.reduce_while over steps       Enum.reduce_while over iterations
    execute_step(step, ctx)            llm_send_fn.(messages, opts)
    StepwiseEngine.handle_event(...)   Pipeline.Iteration (ALF)
    {:cont, {:ok, rt, ctx, tokens}}    {:cont, {:ok, state}}
    {:halt, {:error, reason}}          {:halt, {:error, reason}}
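
The right-hand column can be sketched as a self-contained `reduce_while` skeleton. This is illustrative only: `pipeline_step` stands in for `Pipeline.Iteration`, and the error atom on budget exhaustion is an assumption, not the module's actual internals.

```elixir
# Illustrative skeleton of Agent.Loop's control flow; not the real
# implementation. `llm_send_fn` and `pipeline_step` stand in for the
# actual collaborators (the LLM call and Pipeline.Iteration).
defmodule LoopSketch do
  def run(messages, llm_send_fn, pipeline_step, max_iterations) do
    1..max_iterations
    |> Enum.reduce_while(messages, fn _i, msgs ->
      case llm_send_fn.(msgs, []) do
        {:ok, response} ->
          case pipeline_step.(response, msgs) do
            # Text-only response: stop and return it with the transcript.
            {:done, final} -> {:halt, {:ok, final, msgs}}
            # Tool calls were dispatched: loop again with new messages.
            {:continue, new_msgs} -> {:cont, new_msgs}
          end

        {:error, reason} ->
          {:halt, {:error, reason}}
      end
    end)
    |> case do
      {:ok, _, _} = ok -> ok
      {:error, _} = err -> err
      # reduce_while fell through with a bare accumulator: budget spent.
      _msgs -> {:error, :max_iterations_exceeded}
    end
  end
end
```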

## Usage

    {:ok, response, messages} =
      LlmCore.Agent.Loop.run(
        [%{role: :user, content: "Research Elixir ALF"}],
        &my_llm_send/2,
        tools: my_tools,
        resolve_tool: &MyResolver.resolve/1,
        max_iterations: 10
      )
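
The `MyResolver.resolve/1` callback referenced above might look like the sketch below. The field names on `LlmToolkit.Tool.Call` (`:name`, `:arguments`) are assumptions; since structs match plain map patterns, the clauses are written against maps:

```elixir
defmodule MyResolver do
  # Resolver sketch for the usage example above. The :name/:arguments
  # field names are assumed; adjust to the actual LlmToolkit.Tool.Call
  # struct. `my_llm_send/2` (not shown) would wrap your provider client
  # and return {:ok, %LlmCore.LLM.Response{}} | {:error, term()}.
  def resolve(%{name: "search", arguments: %{"query" => q}}) do
    {:ok, "results for: " <> q}
  end

  def resolve(%{name: name}) do
    {:error, "unknown tool: " <> name}
  end
end
```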

# `llm_send_fn`

```elixir
@type llm_send_fn() :: ([map()], keyword() ->
                          {:ok, LlmCore.LLM.Response.t()} | {:error, term()})

```

# `opts`

```elixir
@type opts() :: [
  tools: [LlmToolkit.Tool.t()],
  resolve_tool: (LlmToolkit.Tool.Call.t() ->
                   {:ok, String.t()} | {:error, String.t()}),
  resolver_module: module() | nil,
  max_iterations: pos_integer(),
  on_iteration: (LlmCore.Agent.Context.t() -&gt; :ok) | nil,
  pipeline_opts: keyword(),
  llm_opts: keyword()
]
```
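
Assembling this keyword list might look like the following sketch; every value here is illustrative (empty tools, stub resolver), not a recommended configuration.

```elixir
# Illustrative opts for run/3; tools and the resolver are stubs.
opts = [
  tools: [],
  resolve_tool: fn _call -> {:ok, "noop"} end,
  max_iterations: 5,
  # Invoked with the pipeline context after each iteration.
  on_iteration: fn _ctx -> :ok end,
  # Forwarded to llm_send_fn as its second argument.
  llm_opts: [temperature: 0.0]
]
```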

# `run`

```elixir
@spec run([map()], llm_send_fn(), opts()) ::
  {:ok, LlmCore.LLM.Response.t(), [map()]} | {:error, term()}
```

Runs the agentic loop.

Calls `llm_send_fn` with the current messages and tool definitions.
If the LLM responds with tool calls, the response flows through the
iteration pipeline which dispatches tools, collects results, and
builds new messages. The loop repeats until the LLM produces a text-only
response or the iteration budget is exhausted.

## Parameters

  * `messages` — Initial message list (system prompt, history, user message)
  * `llm_send_fn` — `fn(messages, opts) -> {:ok, Response.t()} | {:error, term()}`
  * `opts` — Configuration keyword list:
    * `:tools` — (required) list of `LlmToolkit.Tool.t()` definitions
    * `:resolve_tool` — (required) `fn(Call.t()) -> {:ok, string} | {:error, string}`
    * `:resolver_module` — optional module implementing `ToolResolver` behaviour.
      When set, `DispatchTools` checks for dispatch recipes via
      `resolver_module.dispatch_recipe/1`.
    * `:max_iterations` — iteration ceiling (default: 10)
    * `:on_iteration` — optional callback invoked with the pipeline context
      after each iteration
    * `:pipeline_opts` — options forwarded to `Pipeline.Iteration.ensure_started/1`
    * `:llm_opts` — extra options forwarded to `llm_send_fn`

## Returns

  * `{:ok, final_response, final_messages}` — LLM produced a text response
  * `{:error, reason}` — Budget exceeded, LLM error, or pipeline error

## Telemetry

Emits `[:llm_core, :agent, :complete]` on loop exit with measurements
`%{total_iterations: N}` and metadata `%{tool_calls_count: N}`.
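
A handler for this event can be attached with the standard `:telemetry` API. This sketch assumes the `:telemetry` package is available; the handler id is arbitrary.

```elixir
# Log every loop completion using the measurements and metadata
# documented above. The handler function receives
# (event, measurements, metadata, config).
:telemetry.attach(
  "agent-loop-complete-logger",
  [:llm_core, :agent, :complete],
  fn _event, %{total_iterations: n}, %{tool_calls_count: calls}, _config ->
    IO.puts("agent loop finished: #{n} iteration(s), #{calls} tool call(s)")
  end,
  nil
)
```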

---

*Consult [api-reference.md](api-reference.md) for a complete listing.*
