# `LangChain.Chains.LLMChain`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1)

Define an LLMChain. This is the heart of the LangChain library.

The chain manages tools and a tool map, delta tracking, the messages
exchanged during a run, `last_message` tracking, conversation messages, and
verbose logging. Messages and tool results support multi-modal ContentParts,
enabling richer responses (text, images, files, thinking, etc.), and
ToolResult content can be a list of ContentParts. The chain also supports
`async_tool_timeout` and improved fallback handling.
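
For example, a user message can combine multiple ContentParts (a sketch;
the `ContentPart.image!/2` options and the image variable are illustrative):

    alias LangChain.Message
    alias LangChain.Message.ContentPart

    Message.new_user!([
      ContentPart.text!("Describe this image:"),
      # base64_image_data is assumed to hold base64-encoded image bytes
      ContentPart.image!(base64_image_data, media: :png)
    ])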

## Callbacks

Callbacks are fired as specific events occur in the chain as it is running.
The set of events is defined in `LangChain.Chains.ChainCallbacks`.

To be notified of an event you care about, register a callback handler with
the chain. Multiple callback handlers can be assigned. The callback handler
assigned to the `LLMChain` is not provided to an LLM chat model. For callbacks
on a chat model, set them there.

### Registering a callback handler

A handler is a map where each key names the callback to fire and the value
is the function to run. Refer to the documentation for each callback, as
their arguments vary.

If we want to be notified when an LLM Assistant chat response message has been
processed and it is complete, this is how we could receive that event in our
running LiveView:

    live_view_pid = self()

    handler = %{
      on_message_processed: fn _chain, message ->
        send(live_view_pid, {:new_assistant_response, message})
      end
    }

    LLMChain.new!(%{...})
    |> LLMChain.add_callback(handler)
    |> LLMChain.run()

In the LiveView, a `handle_info` function executes with the received message.
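
For example, a matching `handle_info/2` clause might look like this (a
sketch; the assign name is illustrative):

    def handle_info({:new_assistant_response, message}, socket) do
      # Store the completed assistant message for rendering
      {:noreply, assign(socket, :last_assistant_message, message)}
    end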

## Fallbacks

When running a chain, the `:with_fallbacks` option can be used to provide a
list of fallback chat models to try when a failure is encountered.

When working with language models, you may often encounter issues from the
underlying APIs, whether these be rate limiting, downtime, or something else.
Therefore, as you move your LLM applications into production, it becomes
increasingly important to safeguard against these. That's what fallbacks are
designed to provide.

A **fallback** is an alternative plan that may be used in an emergency.

A `before_fallback` function can be provided to alter or return a different
chain to use with the fallback LLM model. This is important because the
prompts needed often differ for a fallback LLM. If your OpenAI completion
fails, a different prompt may be needed when retrying with an Anthropic
fallback.

### Fallback for LLM API Errors

This is perhaps the most common use case for fallbacks. A request to an LLM
API can fail for a variety of reasons - the API could be down, you could have
hit rate limits, any number of things. Therefore, using fallbacks can help
protect against these types of failures.

## Fallback Examples

A simple fallback that tries a different LLM chat model:

    fallback_llm = ChatAnthropic.new!(%{stream: false})

    {:ok, updated_chain} =
      %{llm: ChatOpenAI.new!(%{stream: false})}
      |> LLMChain.new!()
      |> LLMChain.add_message(Message.new_system!("OpenAI system prompt"))
      |> LLMChain.add_message(Message.new_user!("Why is the sky blue?"))
      |> LLMChain.run(with_fallbacks: [fallback_llm])

Note the `with_fallbacks: [fallback_llm]` option when running the chain.

This example uses the `:before_fallback` option to provide a function that can
modify or return an alternate chain when used with a certain LLM. Also note
the utility function `LangChain.Utils.replace_system_message!/2` is used for
swapping out the system message when falling back to a different LLM.

    fallback_llm = ChatAnthropic.new!(%{stream: false})

    {:ok, updated_chain} =
      %{llm: ChatOpenAI.new!(%{stream: false})}
      |> LLMChain.new!()
      |> LLMChain.add_message(Message.new_system!("OpenAI system prompt"))
      |> LLMChain.add_message(Message.new_user!("Why is the sky blue?"))
      |> LLMChain.run(
        with_fallbacks: [fallback_llm],
        before_fallback: fn chain ->
          case chain.llm do
            %ChatAnthropic{} ->
              # replace the system message
              %LLMChain{
                chain
                | messages:
                    Utils.replace_system_message!(
                      chain.messages,
                      Message.new_system!("Anthropic system prompt")
                    )
              }

            _open_ai ->
              chain
          end
        end
      )

See `LangChain.Chains.LLMChain.run/2` for more details.

## Run Until Tool Used

The `run_until_tool_used/3` function makes it easy to instruct an LLM to use a
set of tools and then call a specific tool to present the results. This is
particularly useful for complex workflows where you want the LLM to perform
multiple operations and then finalize with a specific action.

This works well for receiving a final structured output after multiple tools
are used.

When the specified tool is successfully called, the chain stops processing and
returns the result. This prevents unnecessary additional LLM calls and
provides a clear termination point for your workflow.

    {:ok, %LLMChain{} = updated_chain, %ToolResult{} = tool_result} =
      %{llm: ChatOpenAI.new!(%{stream: false})}
      |> LLMChain.new!()
      |> LLMChain.add_tools([special_search, report_results])
      |> LLMChain.add_message(Message.new_system!())
      |> LLMChain.add_message(Message.new_user!("..."))
      |> LLMChain.run_until_tool_used("final_summary")

The function returns a tuple with three elements:
- `:ok` - Indicating success
- The updated chain with all messages and tool calls
- The specific tool result that matched the requested tool name

### Using Multiple Tool Names

You can also provide a list of tool names to stop when any one of them is called:

    {:ok, %LLMChain{} = updated_chain, %ToolResult{} = tool_result} =
      %{llm: ChatOpenAI.new!(%{stream: false})}
      |> LLMChain.new!()
      |> LLMChain.add_tools([search_tool, summary_tool, report_tool])
      |> LLMChain.add_message(Message.new_system!())
      |> LLMChain.add_message(Message.new_user!("..."))
      |> LLMChain.run_until_tool_used(["summary_tool", "report_tool"])

This variant is useful when you have multiple tools that could serve as valid
endpoints for your workflow, and you want the LLM to choose the most appropriate
one based on the context.

To prevent runaway function calls, a default `max_runs` value of 25 is set.
You can adjust this as needed:

    # Allow up to 50 runs before erroring with "exceeded_max_runs"
    LLMChain.run_until_tool_used(chain, "final_summary", max_runs: 50)

The function also supports fallbacks, allowing you to gracefully handle LLM
failures:

    LLMChain.run_until_tool_used(chain, "final_summary",
      max_runs: 10,
      with_fallbacks: [fallback_llm],
      before_fallback: fn chain ->
        # Modify chain before using fallback LLM
        chain
      end
    )

See `LangChain.Chains.LLMChain.run_until_tool_used/3` for more details.

## Async Tool Timeout

When tools are defined with `async: true`, they execute in parallel using Elixir's
`Task.async/1`. The `async_tool_timeout` setting controls how long to wait for
these parallel tasks to complete.

**Important**: This timeout only applies to tools with `async: true`. Synchronous
tools (the default) run inline and are not subject to this timeout.

### Default Behavior

The default is `:infinity`, meaning async tools can run indefinitely. This is
appropriate for human-interactive agents where the user can manually stop
execution if needed.

For automated or unattended agents, consider setting a finite timeout.

### Configuration Levels

Timeout can be configured at three levels (highest precedence first):

1. **Chain-level** - Set when creating an LLMChain:

       LLMChain.new!(%{
         llm: model,
         async_tool_timeout: 10 * 60 * 1000  # 10 minutes
       })

2. **Application-level** - Set in config/runtime.exs:

       config :langchain, async_tool_timeout: 5 * 60 * 1000  # 5 minutes

3. **Library default** - `:infinity` (no timeout)

### When to Use Async Tools

Mark a tool as `async: true` when:
- The operation may take significant time (web requests, file processing)
- Multiple such operations can run in parallel safely
- The tool has no side effects that depend on ordering

    Function.new!(%{
      name: "web_search",
      async: true,  # Enables parallel execution
      function: fn args, ctx -> ... end
    })

### Timeout Values

- `:infinity` - No timeout (wait forever)
- Integer - Milliseconds (e.g., `300_000` for 5 minutes)

# `message_processor`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L349)

```elixir
@type message_processor() :: (t(), LangChain.Message.t() -> processor_return())
```

A message processor is an arity 2 function that takes an
`LangChain.Chains.LLMChain` and a `LangChain.Message`. It is used to
"pre-process" the received message from the LLM. Processors can be chained
together to perform a sequence of transformations.

The processor returns a tuple tagged with `:cont` or `:halt`. If `:cont` is
returned, the message is used as the next message in the chain. If `:halt`
is returned, the halting message is returned to the LLM as an error and no
further processors handle the message.

An example of this is the `LangChain.MessageProcessors.JsonProcessor` which
parses the message content as JSON and returns the parsed data as a map. If
the content is not valid JSON, the processor returns a halting message with an
error message for the LLM to respond to.
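
As a sketch, a custom processor that trims whitespace from plain-string
content might look like this (hypothetical; assumes the message content is a
binary and passes anything else through unchanged):

    alias LangChain.Message

    trim_processor = fn %LLMChain{} = _chain, %Message{} = message ->
      case message.content do
        content when is_binary(content) ->
          # Replace the message content with a trimmed copy and continue
          {:cont, %Message{message | content: String.trim(content)}}

        _other ->
          # Not a plain string (e.g. ContentParts) - pass through unchanged
          {:cont, message}
      end
    end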

# `processor_return`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L330)

```elixir
@type processor_return() ::
  {:cont, LangChain.Message.t()} | {:halt, t(), LangChain.Message.t()}
```

The expected return types for a Message processor function. When successful,
it returns `:cont` with a Message to use as a replacement. When it fails,
`:halt` is returned along with an updated `LLMChain.t()` and a new user
message to be returned to the LLM reporting the error.

# `t`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L322)

```elixir
@type t() :: %LangChain.Chains.LLMChain{
  _tool_map: term(),
  async_tool_timeout: term(),
  callbacks: term(),
  current_failure_count: term(),
  custom_context: term(),
  delta: term(),
  exchanged_messages: term(),
  last_message: term(),
  llm: term(),
  max_retry_count: term(),
  message_processors: term(),
  messages: term(),
  needs_response: term(),
  tools: term(),
  verbose: term(),
  verbose_deltas: term()
}
```

# `add_callback`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1887)

```elixir
@spec add_callback(t(), LangChain.Chains.ChainCallbacks.chain_callback_handler()) ::
  t()
```

Add another callback to the list of callbacks.

# `add_message`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1283)

```elixir
@spec add_message(t(), LangChain.Message.t()) :: t()
```

Add a received Message struct to the chain. The LLMChain tracks the
`last_message` received and the complete list of messages exchanged. Depending
on the message role, the chain may be in a pending or incomplete state where
a response from the LLM is anticipated.

For assistant messages with tool_calls, the tool_calls are automatically
augmented with display_text from the corresponding Function definitions.
This ensures display_text is available to all downstream consumers.

# `add_messages`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1314)

```elixir
@spec add_messages(t(), [LangChain.Message.t()]) :: t()
```

Add a set of Message structs to the chain. This enables quickly building a chain
for submitting to an LLM.
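
For example:

    chain
    |> LLMChain.add_messages([
      Message.new_system!("You are a helpful assistant."),
      Message.new_user!("Why is the sky blue?")
    ])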

# `add_tools`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L422)

```elixir
@spec add_tools(
  t(),
  LangChain.NativeTool.t() | LangChain.Function.t() | [LangChain.Function.t()]
) ::
  t() | no_return()
```

Add a tool to an LLMChain.

# `apply_deltas`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1012)

```elixir
@spec apply_deltas(t(), list()) ::
  {:ok, t()} | {:error, t(), LangChain.LangChainError.t()}
```

Apply a list of deltas to the chain. When the final delta is received that
completes the message, the LLMChain is updated to clear the `delta` and the
`last_message` and list of messages are updated. The message is processed and
fires any registered callbacks.

# `apply_prompt_templates`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1328)

```elixir
@spec apply_prompt_templates(
  t(),
  [LangChain.Message.t() | LangChain.PromptTemplate.t()],
  %{
    required(atom()) => any()
  }
) :: t() | no_return()
```

Apply a set of PromptTemplates to the chain. The list of templates can also
include Messages with no templates. Provide the inputs to apply to the
templates for rendering as a message. The prepared messages are applied to the
chain.
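
A sketch of applying a template with inputs (assumes `PromptTemplate.new!/1`
accepts a `:role` and EEx-style `:text`):

    chain
    |> LLMChain.apply_prompt_templates(
      [
        Message.new_system!("You are a helpful assistant."),
        PromptTemplate.new!(%{role: :user, text: "Tell me about <%= @topic %>."})
      ],
      %{topic: "the BEAM"}
    )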

# `cancel_delta`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1806)

Remove an incomplete MessageDelta from `delta` and add a Message with the
desired status to the chain.
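
A sketch, assuming the second argument is the status to give the resulting
message:

    # Convert the partial delta into a message marked as :cancelled
    chain = LLMChain.cancel_delta(chain, :cancelled)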

# `cancel_delta`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1818)

Same as `cancel_delta/2` but stores an optional error in the message's
metadata under `:streaming_error`. This preserves the error reason through the
chain so higher layers (like the Sagents Agent and AgentServer) can detect and
surface it.

# `common_validation`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L398)

# `delta_to_message_when_complete`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1161)

```elixir
@spec delta_to_message_when_complete(t()) ::
  {:ok, t()} | {:error, t(), LangChain.LangChainError.t()}
```

Convert any hanging delta of the chain to a message and append to the chain.

If the delta is `nil`, the chain is returned unmodified.

# `drop_delta`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1135)

```elixir
@spec drop_delta(t()) :: t()
```

Drop the current delta. This is useful when needing to ignore a partial or
complete delta because the message may be handled in a different way.

# `execute_step`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L469)

```elixir
@spec execute_step(t()) :: {:ok, t()} | {:error, t(), LangChain.LangChainError.t()}
```

Execute a single LLM call step.

This is the core primitive that modes use to call the LLM. It:

1. Sends the chain's messages and tools to the LLM
2. Processes the LLM's response (message or streaming deltas)
3. Adds the response to the chain's messages
4. Sets `needs_response` based on whether tool calls are pending

Returns `{:ok, updated_chain}` or `{:error, chain, reason}`.

This function does NOT execute tool calls — use `execute_tool_calls/1` for that.

## Usage in Custom Modes

    def run(chain, opts) do
      case LLMChain.execute_step(chain) do
        {:ok, updated_chain} ->
          updated_chain = LLMChain.execute_tool_calls(updated_chain)
          if updated_chain.needs_response, do: run(updated_chain, opts), else: {:ok, updated_chain}

        {:error, _chain, _reason} = error ->
          error
      end
    end

# `execute_tool_call`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1717)

```elixir
@spec execute_tool_call(
  LangChain.Message.ToolCall.t(),
  LangChain.Function.t(),
  Keyword.t()
) ::
  LangChain.Message.ToolResult.t()
```

Execute the tool call with the tool. Returns the tool's message response.

# `execute_tool_calls`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1358)

```elixir
@spec execute_tool_calls(t(), context :: nil | %{required(atom()) => any()}) :: t()
```

If the `last_message` from the Assistant includes one or more `ToolCall`s, then the linked
tool is executed. If there is no `last_message` or the `last_message` is
not a `tool_call`, the LLMChain is returned with no action performed.
This makes it safe to call any time.

The `context` is additional data that will be passed to the executed tool.
The value given here will override any `custom_context` set on the LLMChain.
If not set, the global `custom_context` is used.
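
For example (the context map is illustrative):

    # Execute any pending tool calls using the chain's custom_context
    updated_chain = LLMChain.execute_tool_calls(chain)

    # Or override the context for this execution only
    updated_chain = LLMChain.execute_tool_calls(chain, %{user_id: 42})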

# `execute_tool_calls_with_decisions`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1533)

```elixir
@spec execute_tool_calls_with_decisions(t(), [LangChain.Message.ToolCall.t()], [map()]) ::
  t()
```

Execute tool calls with human decisions (approve, edit, reject).

This is used for Human-in-the-Loop workflows where tool calls need human approval
before execution. Each decision controls how the corresponding tool call is handled:

- `:approve` - Execute the tool with original arguments
- `:edit` - Execute the tool with modified arguments from the decision
- `:reject` - Create an error result without executing the tool

Returns the updated chain with tool results added and callbacks fired.

## Parameters

  * `chain` - The LLMChain instance
  * `tool_calls` - List of ToolCall structs to execute
  * `decisions` - List of decision maps, one per tool call. Each decision must have:
    - `:type` - One of `:approve`, `:edit`, or `:reject`
    - `:arguments` - (optional, required for `:edit`) The modified arguments map

## Examples

    decisions = [
      %{type: :approve},
      %{type: :edit, arguments: %{"path" => "modified.txt"}},
      %{type: :reject}
    ]

    updated_chain = LLMChain.execute_tool_calls_with_decisions(chain, tool_calls, decisions)

# `increment_current_failure_count`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1857)

```elixir
@spec increment_current_failure_count(t()) :: t()
```

Increments the internal `current_failure_count`. Returns the updated struct
with the incremented count.

# `merge_delta`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1041)

```elixir
@spec merge_delta(
  t(),
  LangChain.MessageDelta.t()
  | LangChain.TokenUsage.t()
  | {:error, LangChain.LangChainError.t()}
) :: t()
```

Merge a received MessageDelta struct into the chain's current delta. The
LLMChain tracks the current merged MessageDelta state. This is able to merge
in TokenUsage received after the final delta.
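
A sketch of merging streamed deltas as they arrive (assumes `deltas` is a
list of received `MessageDelta` structs):

    chain =
      Enum.reduce(deltas, chain, fn delta, acc ->
        LLMChain.merge_delta(acc, delta)
      end)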

# `merge_deltas`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1026)

```elixir
@spec merge_deltas(t(), list()) :: t() | {:error, t(), LangChain.LangChainError.t()}
```

Merge a list of deltas into the chain.

# `message_processors`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L436)

```elixir
@spec message_processors(t(), [message_processor()]) :: t()
```

Register a set of processors to be applied to received assistant messages.
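
For example, registering the built-in
`LangChain.MessageProcessors.JsonProcessor` (a sketch; assumes
`JsonProcessor.new!/0`):

    alias LangChain.MessageProcessors.JsonProcessor

    chain
    |> LLMChain.message_processors([JsonProcessor.new!()])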

# `new`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L372)

```elixir
@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}
```

Start a new LLMChain configuration.

    {:ok, chain} = LLMChain.new(%{
      llm: %ChatOpenAI{model: "gpt-3.5-turbo", stream: true},
      messages: [Message.new_system!("You are a helpful assistant.")]
    })

# `new!`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L388)

```elixir
@spec new!(attrs :: map()) :: t() | no_return()
```

Start a new LLMChain configuration and return it or raise an error if invalid.

    chain = LLMChain.new!(%{
      llm: %ChatOpenAI{model: "gpt-3.5-turbo", stream: true},
      messages: [Message.new_system!("You are a helpful assistant.")]
    })

# `process_message`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1239)

```elixir
@spec process_message(t(), LangChain.Message.t()) :: t()
```

Process a newly received message from the LLM. Messages with a role of
`:assistant` may be processed through the `message_processors` before being
generally available or being notified through a callback.

# `quick_prompt`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1338)

```elixir
@spec quick_prompt(t(), String.t()) :: t()
```

Convenience function for setting the prompt text for the LLMChain using
prepared text.
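
For example:

    {:ok, updated_chain} =
      chain
      |> LLMChain.quick_prompt("Why is the sky blue?")
      |> LLMChain.run()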

# `replace_tool_result`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1708)

```elixir
@spec replace_tool_result(t(), String.t(), LangChain.Message.ToolResult.t()) :: t()
```

Replace a tool result in the chain's messages by `tool_call_id`.

Delegates to `Message.replace_tool_result/3`.
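
For example (the `tool_call_id` and replacement result are illustrative):

    updated_chain =
      LLMChain.replace_tool_result(chain, "call_abc123", updated_tool_result)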

# `reset_current_failure_count`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1866)

```elixir
@spec reset_current_failure_count(t()) :: t()
```

Reset the internal current_failure_count to 0. Useful after receiving a
successfully returned and processed message from the LLM.

# `reset_current_failure_count_if`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L1875)

```elixir
@spec reset_current_failure_count_if(t(), (-> boolean())) :: t()
```

Reset the internal current_failure_count to 0 if the function provided returns
`true`. Helps to make the change conditional.

# `run`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L593)

```elixir
@spec run(t(), Keyword.t()) ::
  {:ok, t()}
  | {:ok, t(), term()}
  | {:pause, t()}
  | {:error, t(), LangChain.LangChainError.t()}
```

Run the chain on the LLM using messages and any registered functions. This
formats the request for a ChatLLMChain where messages are passed to the API.

When successful, it returns `{:ok, updated_chain}`.

## Options

- `:mode` - It defaults to run the chain one time, stopping after receiving a
  response from the LLM. Supports `:until_success`, `:while_needs_response`,
  `:step`, or a module implementing the `LangChain.Chains.LLMChain.Mode`
  behaviour.

- `mode: :until_success` - (for non-interactive processing done by the LLM
  where it may repeatedly fail and need to re-try) Repeatedly evaluates a
  received message through any message processors, returning any errors to the
  LLM until it either succeeds or exceeds the `max_retry_count`. This includes
  evaluating received `ToolCall`s until they succeed. If an LLM makes 3
  ToolCalls in a single message and 2 succeed while 1 fails, the successful
  responses are returned to the LLM along with the failure response, giving
  the LLM an opportunity to resend only the failed `ToolCall` until it
  succeeds or exceeds the `max_retry_count`. In essence, once we have a
  successful response from the LLM, we don't return anything more to it and
  don't expect any further responses.

- `mode: :while_needs_response` - (for interactive chats that make
  `ToolCalls`) Repeatedly evaluates functions and submits to the LLM so long
  as we still expect to get a response. Best fit for conversational LLMs where
  a `ToolResult` is used by the LLM to continue. After all `ToolCall` messages
  are evaluated, the `ToolResult` messages are returned to the LLM giving it
  an opportunity to use the `ToolResult` information in an assistant response
  message. In essence, this mode always gives the LLM the last word.

- `mode: :step` - (for step-by-step execution control) Executes one step of
  the chain: makes an LLM call, processes the message, executes any tool
  calls, and then stops. This allows the caller to inspect messages and
  modify the chain between steps before deciding whether to continue by
  calling `run` again. Perfect for scenarios where you need to examine
  each message, update context, or modify the chain state before proceeding.

- `mode: MyCustomMode` - Pass a module implementing the
  `LangChain.Chains.LLMChain.Mode` behaviour to use a custom execution loop.
  The module's `run/2` callback receives the chain and the full opts keyword
  list.

- `should_continue?` - (for automated stepped execution with conditional
  stopping) Needs to be used with `mode: :step`, this option accepts a function
  that receives the updated chain after each step and returns a boolean
  indicating whether to continue. This internally handles the loop logic,
  making stepped execution more streamlined for scenarios where you need
  to inspect the chain state to determine when to stop (e.g., max iterations,
  completion conditions, error thresholds). The function signature is
  `(LLMChain.t() -> boolean())`.

- `with_fallbacks: [...]` - Provide a list of chat models to use as a fallback
  when one fails. This helps a production system remain operational when an
  API limit is reached, an LLM service is overloaded or down, or something
  else new and exciting goes wrong.

  When all fallbacks fail, a `%LangChainError{type: "all_fallbacks_failed"}`
  is returned in the error response.

- `before_fallback: fn chain -> modified_chain end` - A `before_fallback`
  function is called before the LLM call is made. **NOTE: When provided, it
  also fires for the first attempt.** This allows a chain to be modified or
  replaced before running against the configured LLM. This is helpful, for
  example, when a different system prompt is needed for Anthropic vs OpenAI.

## Mode Examples

**Use Case**: A chat with an LLM where functions are available to the LLM:

    LLMChain.run(chain, mode: :while_needs_response)

This will execute any functions the LLM calls, return the results to the
LLM, and give it a chance to respond to the results.

**Use Case**: An application that exposes a function to the LLM, but we want
to stop once the function is successfully executed. When errors are
encountered, the LLM should be given error feedback and allowed to try again.

    LLMChain.run(chain, mode: :until_success)

**Use Case**: Automated stepped execution with a continuation function.
When you want step-by-step control but prefer the loop to be handled
internally based on a condition function.

    should_continue_fn = fn updated_chain ->
      # Continue while we need a response and haven't hit max iterations
      updated_chain.needs_response && Enum.count(updated_chain.exchanged_messages) < 10
    end

    {:ok, final_chain} = LLMChain.run(chain, mode: :step, should_continue?: should_continue_fn)

**Use Case**: Step-by-step execution where you need control of the loop.
Inspect the result of each step and decide whether to continue. This is
useful for debugging, or for stopping when a guardrail signals or a specific
condition is met.

    {:ok, updated_chain} = LLMChain.run(chain, mode: :step)
    # Inspect the result, check tool calls, etc.
    if should_continue?(updated_chain) do
      # Optionally modify the chain before continuing
      modified_chain = updated_chain
        |> LLMChain.update_custom_context(%{iteration_count: get_iteration_count() + 1})
        |> LLMChain.add_message(Message.new_user!("Continue with the next step"))

      {:ok, final_chain} = LLMChain.run(modified_chain, mode: :step)
    end

**Use Case**: Custom execution mode:

    LLMChain.run(chain, mode: MyApp.Modes.Custom)

# `run_until_tool_used`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L805)

```elixir
@spec run_until_tool_used(t(), [String.t()] | String.t(), Keyword.t()) ::
  {:ok, t(), LangChain.Message.t()}
  | {:error, t(), LangChain.LangChainError.t()}
```

Run the chain until a specific tool call is made. This makes it easy for an
LLM to make multiple tool calls and call a specific tool to return a result,
signaling the end of the operation.

This function accepts either a single tool name as a string, or a list of tool
names. When provided with a list, the chain stops when any one of the specified
tools is called.

## Examples

With a single tool name:

    {:ok, %LLMChain{} = updated_chain, %ToolResult{} = tool_result} =
      chain
      |> LLMChain.run_until_tool_used("final_summary")

With multiple tool names:

    {:ok, %LLMChain{} = updated_chain, %ToolResult{} = tool_result} =
      chain
      |> LLMChain.run_until_tool_used(["summary_tool", "report_tool"])

## Options

- `max_runs`: The maximum number of times to run the chain. To prevent runaway
  calls, it defaults to 25. When exceeded, a `%LangChainError{type: "exceeded_max_runs"}`
  is returned in the error response.

- `with_fallbacks: [...]` - Provide a list of chat models to use as a fallback
  when one fails. This helps a production system remain operational when an
  API limit is reached, an LLM service is overloaded or down, or something
  else new and exciting goes wrong.

  When all fallbacks fail, a `%LangChainError{type: "all_fallbacks_failed"}`
  is returned in the error response.

- `before_fallback: fn chain -> modified_chain end` - A `before_fallback`
  function is called before the LLM call is made. **NOTE: When provided, it
  also fires for the first attempt.** This allows a chain to be modified or
  replaced before running against the configured LLM. This is helpful, for
  example, when a different system prompt is needed for Anthropic vs OpenAI.

# `update_custom_context`
[🔗](https://github.com/brainlid/langchain/blob/v0.8.0/lib/chains/llm_chain.ex#L973)

```elixir
@spec update_custom_context(
  t(),
  context_update :: %{required(atom()) =&gt; any()},
  opts :: Keyword.t()
) ::
  t() | no_return()
```

Update the LLMChain's `custom_context` map. Passing in a `context_update` map
will by default merge the map into the existing `custom_context`.

Use the `:as` option to:
- `:merge` - Merge update changes in. Default.
- `:replace` - Replace the context with the `context_update`.
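
For example:

    # Merge new keys into the existing context (the default)
    chain = LLMChain.update_custom_context(chain, %{user_id: 123})

    # Replace the entire context
    chain = LLMChain.update_custom_context(chain, %{user_id: 456}, as: :replace)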

---

*Consult [api-reference.md](api-reference.md) for complete listing*
