ChatModel adapter using the req_llm library as the HTTP/LLM backend.
Provides access to any provider supported by req_llm (Anthropic, OpenAI, Google Gemini, Groq, Ollama, AWS Bedrock, etc.) through LangChain's unified chat model interface.
Model Specification
The model field takes a req_llm-format specifier string: "provider:model_id".
Usage
alias LangChain.ChatModels.ChatReqLLM
alias LangChain.Chains.LLMChain
alias LangChain.Message
# Anthropic via req_llm
llm = ChatReqLLM.new!(%{model: "anthropic:claude-haiku-4-5"})
# OpenAI
llm = ChatReqLLM.new!(%{model: "openai:gpt-4o"})
# Ollama local model
llm = ChatReqLLM.new!(%{model: "ollama:llama3", base_url: "http://localhost:11434"})
# Groq with streaming
llm = ChatReqLLM.new!(%{model: "groq:llama-3.3-70b-versatile", stream: true})
{:ok, chain} =
  %{llm: llm}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()
Tool Use
Tools are translated to req_llm format automatically. The callback field in the
req_llm Tool struct is set to a stub — tool execution remains the LLMChain's
responsibility, as with all other ChatModel adapters.
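A minimal sketch of wiring a tool into a chain backed by this adapter, assuming the standard LangChain.Function API (the tool name, schema, and weather lookup are illustrative):

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatReqLLM
alias LangChain.Function
alias LangChain.Message

# The function's own callback (not the req_llm stub) is what the
# LLMChain executes when the model calls the tool.
weather_tool =
  Function.new!(%{
    name: "get_weather",
    description: "Get the current weather for a city.",
    parameters_schema: %{
      "type" => "object",
      "properties" => %{"city" => %{"type" => "string"}},
      "required" => ["city"]
    },
    function: fn %{"city" => city}, _context ->
      # Hypothetical lookup; replace with a real data source.
      {:ok, "It is sunny in #{city}."}
    end
  })

{:ok, chain} =
  %{llm: ChatReqLLM.new!(%{model: "anthropic:claude-haiku-4-5"})}
  |> LLMChain.new!()
  |> LLMChain.add_tools([weather_tool])
  |> LLMChain.add_message(Message.new_user!("What's the weather in Paris?"))
  |> LLMChain.run(mode: :while_needs_response)
```

Running with mode: :while_needs_response lets the chain execute the tool call and feed the result back to the model automatically.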
Provider Options
Provider-specific options (e.g. thinking, tool_choice, seed) can be passed
via provider_opts:
ChatReqLLM.new!(%{
  model: "anthropic:claude-haiku-4-5",
  provider_opts: %{"thinking" => %{"type" => "enabled", "budget_tokens" => 2000}}
})
Summary
Functions
Call the LLM via req_llm with a prompt or list of messages.
Convert a LangChain ContentPart to a ReqLLM.Message.ContentPart.
Convert a ReqLLM.Response to a LangChain.Message.
Convert a single LangChain.Function to a ReqLLM.Tool with a stub callback.
Convert a list of LangChain Function structs to ReqLLM.Tool structs.
Convert a single LangChain Message to a list of ReqLLM.Message structs.
Convert a list of LangChain messages to a ReqLLM.Context.
Create a ChatReqLLM configuration.
Create a ChatReqLLM configuration, raising on error if invalid.
Determine if an error should be retried via a fallback LLM.
Translate a req_llm finish_reason atom to a LangChain.Message status atom.
Translate a single ReqLLM.StreamChunk to a list of LangChain.MessageDelta structs.
Translate a req_llm usage map to a LangChain.TokenUsage struct.
Functions
Call the LLM via req_llm with a prompt or list of messages.
@spec content_part_to_req_llm(LangChain.Message.ContentPart.t()) :: ReqLLM.Message.ContentPart.t() | nil
Convert a LangChain ContentPart to a ReqLLM.Message.ContentPart.
Returns nil for unsupported types (they are filtered out of the content list).
@spec do_process_response(t(), ReqLLM.Response.t()) :: LangChain.Message.t() | {:error, LangChain.LangChainError.t()}
Convert a ReqLLM.Response to a LangChain.Message.
@spec function_to_req_llm_tool(LangChain.Function.t()) :: ReqLLM.Tool.t()
Convert a single LangChain.Function to a ReqLLM.Tool with a stub callback.
The stub callback is never invoked in normal LangChain operation — the tool definition is only used for schema generation (telling the LLM what tools exist).
@spec functions_to_req_llm_tools([LangChain.Function.t()] | nil) :: [ReqLLM.Tool.t()]
Convert a list of LangChain Function structs to ReqLLM.Tool structs.
Each tool gets a stub callback — tool execution remains the LLMChain's responsibility.
@spec message_to_req_llm_messages(LangChain.Message.t()) :: [ReqLLM.Message.t()]
Convert a single LangChain Message to a list of ReqLLM.Message structs.
Most roles map 1-to-1. The :tool role expands to one message per ToolResult.
@spec messages_to_req_llm_context([LangChain.Message.t()]) :: ReqLLM.Context.t()
Convert a list of LangChain messages to a ReqLLM.Context.
Tool messages are expanded: a single LangChain :tool message (which may carry
multiple ToolResult structs) becomes one ReqLLM.Message per result, matching
the one-result-per-message convention expected by OpenAI-compatible providers.
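A sketch of that expansion, assuming the standard LangChain message-construction helpers (the tool_call_id values and content are illustrative):

```elixir
alias LangChain.ChatModels.ChatReqLLM
alias LangChain.Message
alias LangChain.Message.ToolResult

# One LangChain :tool message carrying two ToolResults...
tool_message =
  Message.new_tool_result!(%{
    tool_results: [
      ToolResult.new!(%{tool_call_id: "call_1", content: "72F"}),
      ToolResult.new!(%{tool_call_id: "call_2", content: "Sunny"})
    ]
  })

# ...becomes one ReqLLM.Message per result in the resulting context,
# matching the one-result-per-message convention.
context = ChatReqLLM.messages_to_req_llm_context([tool_message])
```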
@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}
Create a ChatReqLLM configuration.
Create a ChatReqLLM configuration, raising on error if invalid.
@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()
Determine if an error should be retried via a fallback LLM.
Translate a req_llm finish_reason atom to a LangChain.Message status atom.
@spec translate_stream_chunk(ReqLLM.StreamChunk.t()) :: [LangChain.MessageDelta.t()]
Translate a single ReqLLM.StreamChunk to a list of LangChain.MessageDelta structs.
Returns an empty list for chunks that produce no LangChain deltas (e.g. empty content, non-terminal metadata).
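The deltas this returns accumulate the same way as any other adapter's deltas; a sketch of merging them manually with LangChain.MessageDelta (the chunk contents here are illustrative, not real req_llm output):

```elixir
alias LangChain.MessageDelta

# Two deltas as they might come back from translate_stream_chunk/1.
deltas = [
  MessageDelta.new!(%{role: :assistant, content: "Hel"}),
  MessageDelta.new!(%{content: "lo!", status: :complete})
]

# Fold the stream into a single accumulated delta.
merged =
  Enum.reduce(deltas, nil, fn
    delta, nil -> delta
    delta, acc -> MessageDelta.merge_delta(acc, delta)
  end)
```

In normal operation the LLMChain performs this accumulation itself and surfaces the merged result through its callbacks.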
@spec translate_usage(map() | nil) :: LangChain.TokenUsage.t() | nil
Translate a req_llm usage map to a LangChain.TokenUsage struct.