LangChain.ChatModels.ChatReqLLM (LangChain v0.6.2)


ChatModel adapter using the req_llm library as the HTTP/LLM backend.

Provides access to any provider supported by req_llm (Anthropic, OpenAI, Google Gemini, Groq, Ollama, AWS Bedrock, etc.) through the unified LangChain framework.

Model Specification

The model field takes a req_llm-format specifier string: "provider:model_id".

Usage

alias LangChain.ChatModels.ChatReqLLM
alias LangChain.Chains.LLMChain
alias LangChain.Message

# Anthropic via req_llm
llm = ChatReqLLM.new!(%{model: "anthropic:claude-haiku-4-5"})

# OpenAI
llm = ChatReqLLM.new!(%{model: "openai:gpt-4o"})

# Ollama local model
llm = ChatReqLLM.new!(%{model: "ollama:llama3", base_url: "http://localhost:11434"})

# Groq with streaming
llm = ChatReqLLM.new!(%{model: "groq:llama-3.3-70b-versatile", stream: true})

{:ok, chain} =
  %{llm: llm}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()
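When `stream: true` is set, partial responses arrive as message deltas. A minimal sketch of consuming them via LangChain's chain-level callback mechanism — the `on_llm_new_delta` handler name follows LangChain's callback conventions, but its exact shape may differ across versions, so treat this as illustrative:

```elixir
alias LangChain.ChatModels.ChatReqLLM
alias LangChain.Chains.LLMChain
alias LangChain.Message

# Sketch: assumes LangChain's chain callback map with an
# :on_llm_new_delta handler; adjust to your installed version.
handler = %{
  on_llm_new_delta: fn _model, delta ->
    # Write partial content to stdout as chunks stream in.
    IO.write(delta.content || "")
  end
}

{:ok, chain} =
  %{llm: ChatReqLLM.new!(%{model: "groq:llama-3.3-70b-versatile", stream: true})}
  |> LLMChain.new!()
  |> LLMChain.add_callback(handler)
  |> LLMChain.add_message(Message.new_user!("Tell me a short joke."))
  |> LLMChain.run()
```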

Tool Use

Tools are translated to req_llm format automatically. The callback field in the req_llm Tool struct is set to a stub — tool execution remains the LLMChain's responsibility, as with all other ChatModel adapters.
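A sketch of the flow this implies: the tool is defined and executed on the LangChain side, while this adapter only forwards its schema to the provider. The tool name, schema, and handler below are illustrative, assuming LangChain's `Function.new!/1` and `LLMChain.add_tools/2`:

```elixir
alias LangChain.Function
alias LangChain.Chains.LLMChain

# Illustrative tool definition; name and schema are made up for the sketch.
weather =
  Function.new!(%{
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters_schema: %{
      type: "object",
      properties: %{city: %{type: "string"}},
      required: ["city"]
    },
    # Executed by the LLMChain when the model calls the tool —
    # never by the stubbed req_llm callback.
    function: fn %{"city" => city}, _context -> {:ok, "Sunny in #{city}"} end
  })

chain |> LLMChain.add_tools([weather])
```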

Provider Options

Provider-specific options (e.g. thinking, tool_choice, seed) can be passed via provider_opts:

ChatReqLLM.new!(%{
  model: "anthropic:claude-haiku-4-5",
  provider_opts: %{"thinking" => %{"type" => "enabled", "budget_tokens" => 2000}}
})

Summary

Functions

call(model, prompt, functions \\ [])
Call the LLM via req_llm with a prompt or list of messages.

function_to_req_llm_tool(fun)
Convert a single LangChain.Function to a ReqLLM.Tool with a stub callback.

functions_to_req_llm_tools(functions)
Convert a list of LangChain Function structs to ReqLLM.Tool structs.

message_to_req_llm_messages(msg)
Convert a single LangChain Message to a list of ReqLLM.Message structs.

messages_to_req_llm_context(messages)
Convert a list of LangChain messages to a ReqLLM.Context.

new(attrs \\ %{})
Create a ChatReqLLM configuration.

new!(attrs \\ %{})
Create a ChatReqLLM configuration, raising on error if invalid.

retry_on_fallback?(arg1)
Determine if an error should be retried via a fallback LLM.

translate_finish_reason(other)
Translate a req_llm finish_reason atom to a LangChain.Message status atom.

translate_usage(usage)
Translate a req_llm usage map to a LangChain.TokenUsage struct.

Types

t()

@type t() :: %LangChain.ChatModels.ChatReqLLM{
  api_key: term(),
  base_url: term(),
  callbacks: term(),
  max_tokens: term(),
  model: term(),
  provider_opts: term(),
  receive_timeout: term(),
  req_opts: term(),
  stream: term(),
  temperature: term(),
  verbose_api: term()
}

Functions

call(model, prompt, functions \\ [])

Call the LLM via req_llm with a prompt or list of messages.

content_part_to_req_llm(content_part)

@spec content_part_to_req_llm(LangChain.Message.ContentPart.t()) ::
  ReqLLM.Message.ContentPart.t() | nil

Convert a LangChain ContentPart to a ReqLLM.Message.ContentPart.

Returns nil for unsupported types (they are filtered out of the content list).

do_process_response(model, response)

@spec do_process_response(t(), ReqLLM.Response.t()) ::
  LangChain.Message.t() | {:error, LangChain.LangChainError.t()}

Convert a ReqLLM.Response to a LangChain.Message.

function_to_req_llm_tool(fun)

@spec function_to_req_llm_tool(LangChain.Function.t()) :: ReqLLM.Tool.t()

Convert a single LangChain.Function to a ReqLLM.Tool with a stub callback.

The stub callback is never invoked in normal LangChain operation — the tool definition is only used for schema generation (telling the LLM what tools exist).

functions_to_req_llm_tools(functions)

@spec functions_to_req_llm_tools([LangChain.Function.t()] | nil) :: [ReqLLM.Tool.t()]

Convert a list of LangChain Function structs to ReqLLM.Tool structs.

Each tool gets a stub callback — tool execution remains the LLMChain's responsibility.

message_to_req_llm_messages(msg)

@spec message_to_req_llm_messages(LangChain.Message.t()) :: [ReqLLM.Message.t()]

Convert a single LangChain Message to a list of ReqLLM.Message structs.

Most roles map 1-to-1. The :tool role expands to one message per ToolResult.

messages_to_req_llm_context(messages)

@spec messages_to_req_llm_context([LangChain.Message.t()]) :: ReqLLM.Context.t()

Convert a list of LangChain messages to a ReqLLM.Context.

Tool messages are expanded: a single LangChain :tool message (which may carry multiple ToolResult structs) becomes one ReqLLM.Message per result, matching the one-result-per-message convention expected by OpenAI-compatible providers.

new(attrs \\ %{})

@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}

Create a ChatReqLLM configuration.
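Per the `@spec`, `new/1` returns a tagged tuple rather than raising; a minimal handling sketch (the field values are illustrative):

```elixir
alias LangChain.ChatModels.ChatReqLLM

case ChatReqLLM.new(%{model: "openai:gpt-4o", temperature: 0.2}) do
  {:ok, model} ->
    model

  {:error, %Ecto.Changeset{} = changeset} ->
    # Validation failures (e.g. a missing model spec) surface here.
    raise "invalid ChatReqLLM config: #{inspect(changeset.errors)}"
end
```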

new!(attrs \\ %{})

@spec new!(attrs :: map()) :: t() | no_return()

Create a ChatReqLLM configuration, raising on error if invalid.

retry_on_fallback?(arg1)

@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()

Determine if an error should be retried via a fallback LLM.
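A sketch of how this is typically exercised, assuming `LLMChain.run/2` accepts a `:with_fallbacks` option (an assumption about the chain API — verify against your LangChain version): when the primary model fails with an error this function deems retryable, the chain is re-run against the backup model.

```elixir
alias LangChain.ChatModels.ChatReqLLM
alias LangChain.Chains.LLMChain
alias LangChain.Message

primary = ChatReqLLM.new!(%{model: "anthropic:claude-haiku-4-5"})
backup = ChatReqLLM.new!(%{model: "openai:gpt-4o"})

# Sketch: :with_fallbacks is assumed here, not confirmed by this page.
{:ok, chain} =
  %{llm: primary}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run(with_fallbacks: [backup])
```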

translate_finish_reason(other)

@spec translate_finish_reason(atom() | nil) :: atom()

Translate a req_llm finish_reason atom to a LangChain.Message status atom.

translate_stream_chunk(arg1)

@spec translate_stream_chunk(ReqLLM.StreamChunk.t()) :: [LangChain.MessageDelta.t()]

Translate a single ReqLLM.StreamChunk to a list of LangChain.MessageDelta structs.

Returns an empty list for chunks that produce no LangChain deltas (e.g. empty content, non-terminal metadata).

translate_usage(usage)

@spec translate_usage(map() | nil) :: LangChain.TokenUsage.t() | nil

Translate a req_llm usage map to a LangChain.TokenUsage struct.