ReqLLM.Response (ReqLLM v1.3.0)
High-level representation of an LLM turn.
Always contains a Context (full conversation history including
the newly-generated assistant/tool messages) plus rich metadata and, when
streaming, a lazy Stream of ReqLLM.StreamChunk structs.
This struct eliminates the need for manual message extraction and context building in multi-turn conversations and tool calling workflows.
Examples
# Basic response usage
{:ok, response} = ReqLLM.generate_text("anthropic:claude-3-sonnet", context)
ReqLLM.Response.text(response) #=> "Hello! I'm Claude."
ReqLLM.Response.usage(response) #=> %{input_tokens: 12, output_tokens: 4, total_cost: 0.016}
# Multi-turn conversation (no manual context building)
{:ok, response2} = ReqLLM.generate_text("anthropic:claude-3-sonnet", response.context)
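# Tool-calling workflow (sketch; assumes tools is a list of tool definitions
# passed via the :tools option and that the model chose to call one)
{:ok, response} = ReqLLM.generate_text("anthropic:claude-3-sonnet", context, tools: tools)
ReqLLM.Response.tool_calls(response)
#=> [%{name: "get_weather", arguments: %{location: "Paris"}}]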
Summary
Functions
Decode provider response data into a Response with structured object.
Decode provider streaming response data into a Response with object stream.
Decode provider response data into a canonical ReqLLM.Response.
Get the finish reason for this response.
Returns the first image content part (or nil if none).
Returns the binary data of the first :image part (or nil).
Returns the URL of the first :image_url part (or nil).
Extract image content parts from the response message.
Materialize a streaming response into a complete response.
Extracts the generated object from a Response.
Create a stream of structured objects from a streaming response.
Check if the response completed successfully without errors.
Get reasoning token count from the response usage.
Extract text content from the response message.
Create a stream of text content chunks from a streaming response.
Extract thinking/reasoning content from the response message.
Extract tool calls from the response message.
Unwraps the object from a structured output response, regardless of mode used.
Get usage statistics for this response.
Types
@type t() :: %ReqLLM.Response{
        context: ReqLLM.Context.t(),
        error: Exception.t() | nil,
        finish_reason: :stop | :length | :tool_calls | :content_filter | :error | nil,
        id: String.t(),
        message: ReqLLM.Message.t() | nil,
        model: String.t(),
        object: map() | nil,
        provider_meta: map(),
        stream: Enumerable.t() | nil,
        stream?: boolean(),
        usage: map() | nil
      }
Functions
@spec decode_object(
        term(),
        LLMDB.Model.t() | String.t() | {atom(), String.t(), keyword()} | {atom(), keyword()},
        keyword()
      ) :: {:ok, t()} | {:error, term()}
Decode provider response data into a Response with structured object.
Similar to decode_response/2 but specifically for object generation responses. Extracts the structured object from tool calls and validates it against the schema.
Parameters
raw_data - Raw provider response data
model_spec - Model specification (supports all formats from ReqLLM.model/1)
schema - Schema definition for validation
Returns
{:ok, %ReqLLM.Response{}} with the object field populated on success
{:error, reason} on failure
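Examples
# Sketch: raw_json stands for an already-received provider payload;
# the schema below is illustrative
schema = [name: [type: :string, required: true], age: [type: :pos_integer]]
{:ok, response} = ReqLLM.Response.decode_object(raw_json, "anthropic:claude-3-sonnet", schema)
response.object
#=> %{"name" => "John", "age" => 30}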
@spec decode_object_stream(
        term(),
        LLMDB.Model.t() | String.t() | {atom(), String.t(), keyword()} | {atom(), keyword()},
        keyword()
      ) :: {:ok, t()} | {:error, term()}
Decode provider streaming response data into a Response with object stream.
Similar to decode_response/2 but for streaming object generation. The response will contain a stream of structured objects.
Parameters
raw_data - Raw provider streaming response data
model_spec - Model specification (supports all formats from ReqLLM.model/1)
schema - Schema definition for validation
Returns
{:ok, %ReqLLM.Response{}} with the stream populated on success
{:error, reason} on failure
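Examples
# Sketch: raw_stream stands for the raw provider event stream;
# schema is the same illustrative keyword schema as above
{:ok, response} = ReqLLM.Response.decode_object_stream(raw_stream, "anthropic:claude-3-sonnet", schema)
response
|> ReqLLM.Response.object_stream()
|> Stream.each(&IO.inspect/1)
|> Stream.run()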
@spec decode_response(
        term(),
        LLMDB.Model.t() | String.t() | {atom(), String.t(), keyword()} | {atom(), keyword()}
      ) :: {:ok, t()} | {:error, term()}
Decode provider response data into a canonical ReqLLM.Response.
This is a façade function that accepts raw provider data and a model specification, and directly calls the provider's decode_response/1 callback for zero-ceremony decoding.
Supports Model struct, string, and tuple inputs, automatically resolving model specifications using ReqLLM.model/1.
Parameters
raw_data - Raw provider response data or Stream
model_spec - Model specification in any format supported by ReqLLM.model/1:
  - String: "anthropic:claude-3-sonnet"
  - Tuple: {:anthropic, "claude-3-sonnet", temperature: 0.7}
  - LLMDB.Model struct: %LLMDB.Model{provider: :anthropic, id: "claude-3-sonnet"}
Returns
{:ok, %ReqLLM.Response{}} on success
{:error, reason} on failure
Examples
{:ok, response} = ReqLLM.Response.decode_response(raw_json, "anthropic:claude-3-sonnet")
{:ok, response} = ReqLLM.Response.decode_response(raw_json, model_struct)
{:ok, response} = ReqLLM.Response.decode_response(raw_json, {:anthropic, "claude-3-sonnet"})
@spec finish_reason(t()) :: :stop | :length | :tool_calls | :content_filter | :error | nil
Get the finish reason for this response.
Examples
iex> ReqLLM.Response.finish_reason(response)
:stop
@spec image(t()) :: ReqLLM.Message.ContentPart.t() | nil
Returns the first image content part (or nil if none).
image_data/1 returns the binary data of the first :image part (or nil).
image_url/1 returns the URL of the first :image_url part (or nil).
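Examples
# Sketch: assumes response came from an image-capable model and the first
# image part carries inline binary data rather than a URL
ReqLLM.Response.image(response)      #=> %ReqLLM.Message.ContentPart{type: :image, ...}
ReqLLM.Response.image_data(response) #=> <<137, 80, 78, 71, ...>>
ReqLLM.Response.image_url(response)  #=> nil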
@spec images(t()) :: [ReqLLM.Message.ContentPart.t()]
Extract image content parts from the response message.
Returns a list of ReqLLM.Message.ContentPart where type is :image or :image_url.
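Examples
# Sketch: returns [] when the message contains no image parts
ReqLLM.Response.images(response)
#=> [%ReqLLM.Message.ContentPart{type: :image, ...}]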
Materialize a streaming response into a complete response.
Consumes the entire stream, builds the complete message, and returns a new response with the stream consumed and message populated.
Examples
{:ok, complete_response} = ReqLLM.Response.join_stream(streaming_response)
Extracts the generated object from a Response.
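Examples
# Sketch: assumes a completed (non-streaming) structured-output response,
# e.g. from ReqLLM.generate_object
ReqLLM.Response.object(response)
#=> %{"name" => "John", "age" => 30}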
@spec object_stream(t()) :: Enumerable.t()
Create a stream of structured objects from a streaming response.
Only yields valid objects from tool call stream chunks, filtering out metadata and other chunk types.
Examples
response
|> ReqLLM.Response.object_stream()
|> Stream.each(&IO.inspect/1)
|> Stream.run()
Check if the response completed successfully without errors.
Examples
iex> ReqLLM.Response.ok?(response)
true
Get reasoning token count from the response usage.
Returns the number of reasoning tokens used by reasoning models (GPT-5, o1, o3, etc.) during their internal thinking process. Returns 0 if no reasoning tokens were used.
Examples
iex> ReqLLM.Response.reasoning_tokens(response)
64
Extract text content from the response message.
Returns the concatenated text from all content parts in the assistant message. Returns nil when no message is present. For streaming responses, this may be nil until the stream is joined.
Examples
iex> ReqLLM.Response.text(response)
"Hello! I'm Claude and I can help you with questions."
@spec text_stream(t()) :: Enumerable.t()
Create a stream of text content chunks from a streaming response.
Only yields content from :content type stream chunks, filtering out metadata and other chunk types.
Examples
response
|> ReqLLM.Response.text_stream()
|> Stream.each(&IO.write/1)
|> Stream.run()
Extract thinking/reasoning content from the response message.
Returns the concatenated thinking content if the message contains thinking parts, empty string otherwise.
Examples
iex> ReqLLM.Response.thinking(response)
"The user is asking about the weather..."
Extract tool calls from the response message.
Returns a list of tool calls if the message contains them, empty list otherwise.
Always returns normalized maps with .name and .arguments fields.
Examples
iex> ReqLLM.Response.tool_calls(response)
[%{name: "get_weather", arguments: %{location: "San Francisco"}}]
Unwraps the object from a structured output response, regardless of mode used.
Handles extraction from:
- json_schema mode: parses from content
- tool modes: extracts from tool call arguments
Examples
{:ok, object} = ReqLLM.Response.unwrap_object(response)
#=> {:ok, %{"name" => "John", "age" => 30}}
Get usage statistics for this response.
Examples
iex> ReqLLM.Response.usage(response)
%{input_tokens: 12, output_tokens: 8, total_tokens: 20, reasoning_tokens: 64, input_cost: 0.01, output_cost: 0.02, total_cost: 0.03}