# `LangChain.Chains.ChainCallbacks`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L1)

Defines the callbacks fired by an LLMChain and LLM module.

A callback handler is a map that defines the specific callback event with a
function to execute for that event.

## Example

A sample configured callback handler that forwards received data to a specific
LiveView.

    live_view_pid = self()

    my_handlers = %{
      on_llm_new_delta: fn _chain, new_deltas -> send(live_view_pid, {:received_delta, new_deltas}) end,
      on_message_processed: fn _chain, new_message -> send(live_view_pid, {:received_message, new_message}) end,
      on_error_message_created: fn _chain, new_message -> send(live_view_pid, {:received_message, new_message}) end
    }

    model = SomeLLM.new!(%{...})

    chain =
      %{llm: model}
      |> LLMChain.new!()
      |> LLMChain.add_callback(my_handlers)

# `chain_callback_handler`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L297)

```elixir
@type chain_callback_handler() :: %{
  optional(:on_llm_new_delta) => llm_new_delta(),
  optional(:on_llm_new_message) => llm_new_message(),
  optional(:on_llm_ratelimit_info) => llm_ratelimit_info(),
  optional(:on_llm_token_usage) => llm_token_usage(),
  optional(:on_llm_response_headers) => llm_response_headers(),
  optional(:on_message_processed) => chain_message_processed(),
  optional(:on_message_processing_error) => chain_message_processing_error(),
  optional(:on_error_message_created) => chain_error_message_created(),
  optional(:on_tool_call_identified) => chain_tool_call_identified(),
  optional(:on_tool_execution_started) => chain_tool_execution_started(),
  optional(:on_tool_execution_completed) => chain_tool_execution_completed(),
  optional(:on_tool_execution_failed) => chain_tool_execution_failed(),
  optional(:on_tool_interrupted) => chain_tool_interrupted(),
  optional(:on_tool_response_created) => chain_tool_response_created(),
  optional(:on_llm_error) => chain_llm_error(),
  optional(:on_error) => chain_error(),
  optional(:on_retries_exceeded) => chain_retries_exceeded()
}
```

The supported set of callbacks for an LLM module.

# `chain_error`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L284)

```elixir
@type chain_error() :: (LangChain.Chains.LLMChain.t(), LangChainError.t() -> any())
```

Executed when the chain encounters a terminal error and is returning an error
result to the caller.

Unlike `on_llm_error` which fires on every individual LLM failure (including
transient ones), this callback fires exactly **once** when the chain has
exhausted all recovery options (retries, fallbacks) and is giving up.

This is the chain-level "final answer is an error" signal. Use this for
application-level error handling -- updating UI state, notifying users,
recording failures.

## Examples

Scenarios where this fires:
- All retry attempts exhausted
- All fallback models failed
- Unrecoverable error (e.g., invalid request)
- Rescued exception during chain execution

    callback_handler = %{
      on_error: fn _chain, error ->
        send(live_view_pid, {:chain_error, error})
      end
    }

- First argument: LLMChain.t() - Chain state at time of failure
- Second argument: LangChainError.t() - The terminal error

The handler's return value is discarded.

# `chain_error_message_created`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L113)

```elixir
@type chain_error_message_created() :: (LangChain.Chains.LLMChain.t(),
                                  LangChain.Message.t() ->
                                    any())
```

Executed when an LLMChain, in response to an error from the LLM, generates a
new, automated response message intended to be returned to the LLM.

# `chain_llm_error`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L251)

```elixir
@type chain_llm_error() :: (LangChain.Chains.LLMChain.t(), LangChainError.t() ->
                        any())
```

Executed when an individual LLM API call fails with an error.

This fires on **every** LLM call failure, including transient errors that may
be retried or recovered from via fallbacks. It provides visibility into errors
that would otherwise be invisible when retries succeed.

Use this callback for diagnostic/observational purposes -- logging, metrics,
debug dashboards. The chain may continue executing after this callback fires.

## Examples

Common scenarios where this fires:
- Rate limit errors (may be retried)
- Overloaded/server errors (may fall back to another model)
- Authentication errors (terminal)
- Network timeouts (may be retried)

In a retry loop: fires once per failed attempt, not just when retries are
exhausted. In a fallback chain: fires for each model that fails before the
next one is tried.

    callback_handler = %{
      on_llm_error: fn _chain, error ->
        Logger.warning("LLM call failed: #{inspect(error)}")
      end
    }

- First argument: LLMChain.t() - Current chain state
- Second argument: LangChainError.t() - The error from the LLM call

The handler's return value is discarded.

# `chain_message_processed`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L106)

```elixir
@type chain_message_processed() :: (LangChain.Chains.LLMChain.t(),
                              LangChain.Message.t() ->
                                any())
```

Executed when an LLMChain has completed processing a received assistant
message. This fires when a message is complete, either after streaming deltas
have been assembled or after a full message is received when not streaming.

This is the best way to be notified when a message is "done" and should be
handled by the application.

The handler's return value is discarded.
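
A minimal sketch of such a handler, forwarding the completed message to the
calling process (the `caller` and `handlers` names are illustrative, and a
plain map can stand in for a `LangChain.Message` struct when exercising it):

```elixir
caller = self()

handlers = %{
  on_message_processed: fn _chain, message ->
    # Forward the finished assistant message to the caller (e.g. a LiveView).
    send(caller, {:assistant_message, message})
  end
}
```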

# `chain_message_processing_error`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L122)

```elixir
@type chain_message_processing_error() :: (LangChain.Chains.LLMChain.t(),
                                     LangChain.Message.t() ->
                                       any())
```

Executed when processing of a received message errors or fails. The erroring
message is included in the callback, along with the state of processing that
was completed before the error.

The handler's return value is discarded.
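
A minimal sketch: logging the partially processed message for later
inspection (a plain map can stand in for the `LangChain.Message` struct):

```elixir
require Logger

handlers = %{
  on_message_processing_error: fn _chain, message ->
    # Log whatever portion of the message was assembled before the failure.
    Logger.warning("Message processing failed: #{inspect(message)}")
  end
}
```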

# `chain_retries_exceeded`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L292)

```elixir
@type chain_retries_exceeded() :: (LangChain.Chains.LLMChain.t() -> any())
```

Executed when the chain has failed multiple times, using up the
`max_retry_count` and causing the process to abort and return an error.

The handler's return value is discarded.
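
Note that this handler takes a single argument, the chain itself. A minimal
sketch (names here are illustrative):

```elixir
caller = self()

handlers = %{
  # This handler receives only the chain; there is no second argument.
  on_retries_exceeded: fn chain ->
    send(caller, {:retries_exceeded, chain})
  end
}
```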

# `chain_tool_call_identified`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L154)

```elixir
@type chain_tool_call_identified() :: (LangChain.Chains.LLMChain.t(),
                                 LangChain.Message.ToolCall.t(),
                                 LangChain.Function.t() ->
                                   any())
```

Executed when a tool call is identified during streaming, before execution begins.

This fires as soon as we have enough information to identify the tool (at minimum, the `name` field).
The tool call may be incomplete: `call_id` might not be available yet, and `arguments` may be partial.

This callback provides early notification for UI feedback like "Searching web..." while the LLM
is still streaming the complete tool call.

Timing:
- Fires: As soon as tool name is detected in streaming deltas
- Before: Tool arguments are fully received
- Before: Tool execution begins

Arguments:
- First: LLMChain.t() - Current chain state
- Second: ToolCall.t() - Tool call struct (may be incomplete, but has name)
- Third: Function.t() - Function definition (includes display_text)

The handler's return value is discarded.

## Example

    callback_handler = %{
      on_tool_call_identified: fn _chain, tool_call, func ->
        IO.puts("Tool identified: #{func.display_text || tool_call.name}")
      end
    }

# `chain_tool_execution_completed`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L182)

```elixir
@type chain_tool_execution_completed() :: (LangChain.Chains.LLMChain.t(),
                                     LangChain.Message.ToolCall.t(),
                                     LangChain.Message.ToolResult.t() ->
                                       any())
```

Executed when a single tool execution completes successfully.

Fires after individual tool execution, before results are aggregated.
Useful for showing per-tool success indicators.

- First argument: LLMChain.t()
- Second argument: ToolCall that was executed
- Third argument: ToolResult that was generated

The handler's return value is discarded.
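
A minimal sketch of a per-tool success indicator; plain maps stand in for the
`ToolCall` and `ToolResult` structs, and the names are illustrative:

```elixir
caller = self()

handlers = %{
  on_tool_execution_completed: fn _chain, tool_call, tool_result ->
    # Report which tool finished and what it produced.
    send(caller, {:tool_done, tool_call.name, tool_result.content})
  end
}
```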

# `chain_tool_execution_failed`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L195)

```elixir
@type chain_tool_execution_failed() :: (LangChain.Chains.LLMChain.t(),
                                  LangChain.Message.ToolCall.t(),
                                  term() ->
                                    any())
```

Executed when a single tool execution fails.

Fires when tool execution raises an exception or returns an error result.

- First argument: LLMChain.t()
- Second argument: ToolCall that failed
- Third argument: Error reason or exception

The handler's return value is discarded.
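
A minimal sketch that logs the failure (a plain map stands in for the
`ToolCall` struct):

```elixir
require Logger

handlers = %{
  on_tool_execution_failed: fn _chain, tool_call, reason ->
    # `reason` may be an error tuple, a message, or a rescued exception.
    Logger.error("Tool #{tool_call.name} failed: #{inspect(reason)}")
  end
}
```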

# `chain_tool_execution_started`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L168)

```elixir
@type chain_tool_execution_started() :: (LangChain.Chains.LLMChain.t(),
                                   LangChain.Message.ToolCall.t(),
                                   LangChain.Function.t() ->
                                     any())
```

Executed when the chain begins executing a tool call.

This fires immediately before tool execution starts, allowing UIs to show
real-time feedback like "Searching the web..." or "Creating file...".

- First argument: LLMChain.t()
- Second argument: ToolCall struct being executed
- Third argument: Function struct for the tool (includes display_text)

The handler's return value is discarded.
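
A minimal sketch for driving that kind of UI feedback; plain maps stand in
for the `ToolCall` and `Function` structs:

```elixir
caller = self()

handlers = %{
  on_tool_execution_started: fn _chain, tool_call, function ->
    # Prefer the function's display text when present, else the raw tool name.
    send(caller, {:tool_started, function.display_text || tool_call.name})
  end
}
```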

# `chain_tool_interrupted`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L208)

```elixir
@type chain_tool_interrupted() :: (LangChain.Chains.LLMChain.t(),
                             [LangChain.Message.ToolResult.t()] ->
                               any())
```

Executed when one or more tools return an interrupt signal.

Fires once per tool execution batch with all interrupted results.
The tool is paused and awaiting external input to continue.

- First argument: LLMChain.t()
- Second argument: List of ToolResult structs with `is_interrupt: true`

The handler's return value is discarded.
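
A minimal sketch: notifying the caller that external input is needed. Plain
maps stand in for the interrupted `ToolResult` structs:

```elixir
caller = self()

handlers = %{
  on_tool_interrupted: fn _chain, interrupted_results ->
    # One callback per batch: all interrupted ToolResults arrive together.
    send(caller, {:awaiting_input, length(interrupted_results)})
  end
}
```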

# `chain_tool_response_created`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L216)

```elixir
@type chain_tool_response_created() :: (LangChain.Chains.LLMChain.t(),
                                  LangChain.Message.t() ->
                                    any())
```

Executed when the chain uses one or more tools and the resulting ToolResults
are generated as part of a tool response message.

The handler's return value is discarded.

# `llm_new_delta`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L47)

```elixir
@type llm_new_delta() :: (LangChain.Chains.LLMChain.t(),
                    [LangChain.MessageDelta.t()] ->
                      any())
```

Executed when an LLM is streaming a response and a new MessageDelta (or token)
was received.

- `:index` is optionally present if the LLM supports sending `n` versions of a
  response.

The return value is discarded.
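
A minimal sketch, similar to the module-level example above: the deltas
arrive as a list, so a handler typically iterates over them. Plain maps stand
in for the `MessageDelta` structs:

```elixir
caller = self()

handlers = %{
  on_llm_new_delta: fn _chain, deltas ->
    # Forward each delta to the caller as it streams in.
    Enum.each(deltas, fn delta -> send(caller, {:delta, delta}) end)
  end
}
```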

# `llm_new_message`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L54)

```elixir
@type llm_new_message() :: (LangChain.Chains.LLMChain.t(), LangChain.Message.t() ->
                        any())
```

Executed when an LLM is not streaming and a full message was received.

The return value is discarded.

# `llm_ratelimit_info`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L65)

```elixir
@type llm_ratelimit_info() :: (LangChain.Chains.LLMChain.t(),
                         info :: %{required(String.t()) => any()} ->
                           any())
```

Executed when an LLM (typically a service) responds with rate limiting
information.

The specific rate limit information depends on the LLM. The callback receives
a map containing all the available information.

The return value is discarded.
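
A minimal sketch that logs whatever the service reported. Because `info` is a
string-keyed map whose exact keys vary by service, the handler inspects the
whole map rather than assuming specific keys:

```elixir
require Logger

handlers = %{
  on_llm_ratelimit_info: fn _chain, info ->
    # `info` is a string-keyed map; the exact keys depend on the service.
    Logger.info("Rate limit info: #{inspect(info)}")
  end
}
```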

# `llm_response_headers`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L94)

```elixir
@type llm_response_headers() :: (LangChain.Chains.LLMChain.t(),
                           response_headers :: map() ->
                             any())
```

Executed when an LLM response is received through an HTTP response. The entire
set of raw response headers can be received and processed.

The return value is discarded.

## Example

A function declaration that matches the signature.

    def handle_llm_response_headers(chain, response_headers) do
      # This demonstrates how to send the response headers to a
      # LiveView assuming the LiveView's pid was stored in the chain's
      # custom_context.
      send(chain.custom_context.live_view_pid, {:req_response_headers, response_headers})

      IO.inspect(response_headers)
    end

# `llm_token_usage`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/chain_callbacks.ex#L73)

```elixir
@type llm_token_usage() :: (LangChain.Chains.LLMChain.t(), LangChain.TokenUsage.t() ->
                        any())
```

Executed when an LLM response reports the token usage in a
`LangChain.TokenUsage` struct. The data returned depends on the LLM.

The return value is discarded.
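
A minimal sketch that logs the reported usage. It reads the `input` and
`output` fields of the `LangChain.TokenUsage` struct; a plain map with the
same keys can stand in for the struct:

```elixir
require Logger

handlers = %{
  on_llm_token_usage: fn _chain, usage ->
    # Reads the TokenUsage struct's input/output token counts.
    Logger.info("Token usage - input: #{usage.input}, output: #{usage.output}")
  end
}
```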

---

*Consult [api-reference.md](api-reference.md) for complete listing*
