LangChain.Chains.LLMChain (LangChain v0.3.0-rc.0)

Summary

Types

A message processor is an arity-2 function that takes an LLMChain and a Message. It is used to "pre-process" the received message from the LLM. Processors can be chained together to perform a sequence of transformations.

The expected return types for a Message processor function. When successful, it returns :continue with a Message to use as a replacement. When it fails, :halt is returned along with an updated LLMChain.t() and a new user message to be returned to the LLM reporting the error.

t()

Functions

Define an LLMChain. This is the heart of the LangChain library.

Add another callback to the list of callbacks.

Add a LangChain.ChatModels.LLMCallbacks callback map to the chain's :llm model if it supports the :callback key.

Add a received Message struct to the chain. The LLMChain tracks the last_message received and the complete list of messages exchanged. Depending on the message role, the chain may be in a pending or incomplete state where a response from the LLM is anticipated.

Add a set of Message structs to the chain. This enables quickly building a chain for submitting to an LLM.

Add a tool to an LLMChain.

Apply a received MessageDelta struct to the chain. The LLMChain tracks the current merged MessageDelta state. When the final delta is received that completes the message, the LLMChain is updated to clear the delta and the last_message and list of messages are updated.

Apply a list of deltas to the chain.

Apply a set of PromptTemplates to the chain. The list of templates can also include Messages with no templates. Provide the inputs to apply to the templates for rendering as a message. The prepared messages are applied to the chain.

Remove an incomplete MessageDelta from delta and add a Message with the desired status to the chain.

Convert any hanging delta of the chain to a message and append to the chain.

Execute the tool call with the tool. Returns the tool's message response.

If the last_message from the Assistant includes one or more ToolCalls, then the linked tool is executed. If there is no last_message or the last_message is not a tool_call, the LLMChain is returned with no action performed. This makes it safe to call any time.

Increments the internal current_failure_count. Returns an incremented and updated struct.

Register a set of processors to run on received assistant messages.

Start a new LLMChain configuration.

Start a new LLMChain configuration and return it or raise an error if invalid.

Process a newly received message from the LLM. Messages with a role of :assistant may be processed through the message_processors before being generally available or being notified through a callback.

Convenience function for setting the prompt text for the LLMChain using prepared text.

Reset the internal current_failure_count to 0. Useful after receiving a successfully returned and processed message from the LLM.

Reset the internal current_failure_count to 0 if the function provided returns true. Helps to make the change conditional.

Run the chain on the LLM using messages and any registered functions. This formats the request for a chat-style LLM where messages are passed to the API.

Update the LLMChain's custom_context map. Passing in a context_update map will by default merge the map into the existing custom_context.

Types

@type message_processor() :: (t(), LangChain.Message.t() -> processor_return())

A message processor is an arity-2 function that takes an LLMChain and a Message. It is used to "pre-process" the received message from the LLM. Processors can be chained together to perform a sequence of transformations.

@type processor_return() ::
  {:continue, LangChain.Message.t()} | {:halt, t(), LangChain.Message.t()}

The expected return types for a Message processor function. When successful, it returns :continue with a Message to use as a replacement. When it fails, :halt is returned along with an updated LLMChain.t() and a new user message to be returned to the LLM reporting the error.
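
For example, a minimal sketch of a processor that rejects empty assistant content. The emptiness check and error text are illustrative, not part of the library:

reject_empty = fn chain, message ->
  if message.content in [nil, ""] do
    # Report the problem back to the LLM as a new user message.
    {:halt, chain, LangChain.Message.new_user!("ERROR: An empty response is not valid. Please try again.")}
  else
    # Hand the (possibly transformed) message along to the next processor.
    {:continue, message}
  end
end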

@type t() :: %LangChain.Chains.LLMChain{
  _tool_map: term(),
  callbacks: term(),
  current_failure_count: term(),
  custom_context: term(),
  delta: term(),
  last_message: term(),
  llm: term(),
  max_retry_count: term(),
  message_processors: term(),
  messages: term(),
  needs_response: term(),
  tools: term(),
  verbose: term(),
  verbose_deltas: term()
}

Functions

%LangChain.Chains.LLMChain{}

Define an LLMChain. This is the heart of the LangChain library.

The chain deals with tools, a tool map, delta tracking, last_message tracking, conversation messages, and verbose logging. This helps by separating these responsibilities from the LLM, making it easier to support additional LLMs because the focus is on communication and formats instead of all the extra logic.

Callbacks

Callbacks are fired as specific events occur in the chain as it is running. The set of events are defined in LangChain.Chains.ChainCallbacks.

To be notified of an event you care about, register a callback handler with the chain. Multiple callback handlers can be assigned. The callback handler assigned to the LLMChain is not provided to an LLM chat model. For callbacks on a chat model, set them there.

Registering a callback handler

A handler is a map whose keys name the callbacks to fire, with a function assigned to each key. Refer to the documentation for each callback function, as the arguments vary.

If we want to be notified when an LLM Assistant chat response message has been processed and it is complete, this is how we could receive that event in our running LiveView:

live_view_pid = self()

handler = %{
  on_message_processed: fn _chain, message ->
    send(live_view_pid, {:new_assistant_response, message})
  end
}

LLMChain.new!(%{...})
|> LLMChain.add_callback(handler)
|> LLMChain.run()

In the LiveView, a handle_info function executes with the received message.
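
A matching handle_info clause might look like this (the assign key is illustrative):

def handle_info({:new_assistant_response, message}, socket) do
  # Show the completed assistant message in the UI.
  {:noreply, assign(socket, :last_response, message.content)}
end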

add_callback(chain, additional_callback)

Add another callback to the list of callbacks.

add_llm_callback(chain, callback_map)

@spec add_llm_callback(t(), map()) :: t()

Add a LangChain.ChatModels.LLMCallbacks callback map to the chain's :llm model if it supports the :callback key.
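
A minimal sketch, assuming the on_llm_new_delta event from LangChain.ChatModels.LLMCallbacks (see that module for the full set of events and their arguments):

llm_handler = %{
  on_llm_new_delta: fn _model, delta ->
    # Write each streamed chunk to the console as it arrives.
    IO.write(delta.content || "")
  end
}

chain = LLMChain.add_llm_callback(chain, llm_handler)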

add_message(chain, new_message)

@spec add_message(t(), LangChain.Message.t()) :: t()

Add a received Message struct to the chain. The LLMChain tracks the last_message received and the complete list of messages exchanged. Depending on the message role, the chain may be in a pending or incomplete state where a response from the LLM is anticipated.
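
For example:

chain
|> LLMChain.add_message(Message.new_system!("You are a helpful assistant."))
|> LLMChain.add_message(Message.new_user!("Why is the sky blue?"))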

add_messages(chain, messages)

@spec add_messages(t(), [LangChain.Message.t()]) :: t()

Add a set of Message structs to the chain. This enables quickly building a chain for submitting to an LLM.

add_tools(chain, tools)

@spec add_tools(t(), LangChain.Function.t() | [LangChain.Function.t()]) ::
  t() | no_return()

Add a tool to an LLMChain.
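
A sketch of building and adding a single tool. The tool itself and its {:ok, result} return shape are illustrative; see LangChain.Function for the supported options:

locator = LangChain.Function.new!(%{
  name: "get_location",
  description: "Returns the user's current city.",
  function: fn _args, _context ->
    # A real implementation would look this up rather than hard-code it.
    {:ok, "Portland, OR"}
  end
})

chain = LLMChain.add_tools(chain, locator)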

apply_delta(chain, new_delta)

@spec apply_delta(t(), LangChain.MessageDelta.t()) :: t()

Apply a received MessageDelta struct to the chain. The LLMChain tracks the current merged MessageDelta state. When the final delta is received that completes the message, the LLMChain is updated to clear the delta and the last_message and list of messages are updated.

apply_deltas(chain, deltas)

@spec apply_deltas(t(), list()) :: t()

Apply a list of deltas to the chain.

apply_prompt_templates(chain, templates, inputs)

@spec apply_prompt_templates(
  t(),
  [LangChain.Message.t() | LangChain.PromptTemplate.t()],
  %{
    required(atom()) => any()
  }
) :: t() | no_return()

Apply a set of PromptTemplates to the chain. The list of templates can also include Messages with no templates. Provide the inputs to apply to the templates for rendering as a message. The prepared messages are applied to the chain.
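
For example, using an EEx-style PromptTemplate (the template text and input are illustrative):

template = LangChain.PromptTemplate.new!(%{
  role: :user,
  text: "Explain <%= @topic %> in one paragraph."
})

chain = LLMChain.apply_prompt_templates(chain, [template], %{topic: "pattern matching"})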

cancel_delta(chain, message_status)

Remove an incomplete MessageDelta from delta and add a Message with the desired status to the chain.

common_validation(changeset)

delta_to_message_when_complete(chain)

@spec delta_to_message_when_complete(t()) :: t()

Convert any hanging delta of the chain to a message and append to the chain.

If the delta is nil, the chain is returned unmodified.

execute_tool_call(call, function, opts \\ [])

Execute the tool call with the tool. Returns the tool's message response.

execute_tool_calls(chain, context \\ nil)

@spec execute_tool_calls(t(), context :: nil | %{required(atom()) => any()}) :: t()

If the last_message from the Assistant includes one or more ToolCalls, then the linked tool is executed. If there is no last_message or the last_message is not a tool_call, the LLMChain is returned with no action performed. This makes it safe to call any time.

The context is additional data that will be passed to the executed tool. The value given here will override any custom_context set on the LLMChain. If not set, the global custom_context is used.
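
For example, passing per-call context (the map contents are illustrative):

# The context map is handed to each executed tool, overriding custom_context.
chain = LLMChain.execute_tool_calls(chain, %{user_id: 42})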

increment_current_failure_count(chain)

@spec increment_current_failure_count(t()) :: t()

Increments the internal current_failure_count. Returns an incremented and updated struct.

message_processors(chain, processors)

@spec message_processors(t(), [message_processor()]) :: t()

Register a set of processors to run on received assistant messages.
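
A sketch registering the library's JsonProcessor so assistant responses are parsed as JSON (assuming LangChain.MessageProcessors.JsonProcessor is available in this release):

alias LangChain.MessageProcessors.JsonProcessor

# Processors run in order on each received assistant message.
chain = LLMChain.message_processors(chain, [JsonProcessor.new!()])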

new(attrs \\ %{})

@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}

Start a new LLMChain configuration.

{:ok, chain} = LLMChain.new(%{
  llm: %ChatOpenAI{model: "gpt-3.5-turbo", stream: true},
  messages: [Message.new_system!("You are a helpful assistant.")]
})

new!(attrs \\ %{})

@spec new!(attrs :: map()) :: t() | no_return()

Start a new LLMChain configuration and return it or raise an error if invalid.

chain = LLMChain.new!(%{
  llm: %ChatOpenAI{model: "gpt-3.5-turbo", stream: true},
  messages: [Message.new_system!("You are a helpful assistant.")]
})

process_message(chain, message)

@spec process_message(t(), LangChain.Message.t()) :: t()

Process a newly received message from the LLM. Messages with a role of :assistant may be processed through the message_processors before being generally available or being notified through a callback.

quick_prompt(chain, text)

@spec quick_prompt(t(), String.t()) :: t()

Convenience function for setting the prompt text for the LLMChain using prepared text.
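
A minimal sketch (the prompt text is illustrative):

{:ok, _chain, response} =
  chain
  |> LLMChain.quick_prompt("Why is the sky blue?")
  |> LLMChain.run()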

reset_current_failure_count(chain)

@spec reset_current_failure_count(t()) :: t()

Reset the internal current_failure_count to 0. Useful after receiving a successfully returned and processed message from the LLM.

reset_current_failure_count_if(chain, fun)

@spec reset_current_failure_count_if(t(), (-> boolean())) :: t()

Reset the internal current_failure_count to 0 if the function provided returns true. Helps to make the change conditional.

run(chain, opts \\ [])

@spec run(t(), Keyword.t()) ::
  {:ok, t(), LangChain.Message.t() | [LangChain.Message.t()]}
  | {:error, t(), String.t()}

Run the chain on the LLM using messages and any registered functions. This formats the request for a chat-style LLM where messages are passed to the API.

When successful, it returns {:ok, updated_chain, message_or_messages}.

Options

  • :mode - Defaults to running the chain one time, stopping after receiving a response from the LLM. Supports :until_success and :while_needs_response (see the example after this list).

  • mode: :until_success - (for non-interactive processing done by the LLM where it may repeatedly fail and need to retry) Repeatedly evaluates a received message through any message processors, returning any errors to the LLM until it either succeeds or exceeds the max_retry_count. This includes evaluating received ToolCalls until they succeed. If an LLM makes 3 ToolCalls in a single message and 2 succeed while 1 fails, the successful responses are returned to the LLM along with the failure response of the remaining ToolCall, giving the LLM an opportunity to resend the failed ToolCall, and only the failed ToolCall, until it succeeds or exceeds the max_retry_count. In essence, once we have a successful response from the LLM, we don't return any more to it and don't want any further responses.

  • mode: :while_needs_response - (for interactive chats that make ToolCalls) Repeatedly evaluates functions and submits to the LLM so long as we still expect to get a response. Best fit for conversational LLMs where a ToolResult is used by the LLM to continue. After all ToolCall messages are evaluated, the ToolResult messages are returned to the LLM giving it an opportunity to use the ToolResult information in an assistant response message. In essence, this mode always gives the LLM the last word.

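For example, an interactive run that evaluates tool calls and returns their results to the LLM (the message text is illustrative):

{:ok, updated_chain, response} =
  chain
  |> LLMChain.add_message(Message.new_user!("What's the weather where I am?"))
  |> LLMChain.run(mode: :while_needs_response)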

update_custom_context(chain, context_update, opts \\ [])

@spec update_custom_context(
  t(),
  context_update :: %{required(atom()) => any()},
  opts :: Keyword.t()
) ::
  t() | no_return()

Update the LLMChain's custom_context map. Passing in a context_update map will by default merge the map into the existing custom_context.

Use the :as option to:

  • :merge - Merge update changes in. Default.
  • :replace - Replace the context with the context_update.
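
For example (the key names are illustrative):

# Merge (the default) keeps existing keys and adds or overwrites these.
chain = LLMChain.update_custom_context(chain, %{user_id: 42})

# Replace discards the existing custom_context entirely.
chain = LLMChain.update_custom_context(chain, %{user_id: 42}, as: :replace)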