ReqLLM.Response (ReqLLM v1.0.0-rc.5)
High-level representation of an LLM turn.
Always contains a Context (the full conversation history, including the newly generated assistant/tool messages) plus rich metadata and, when streaming, a lazy Stream of ReqLLM.StreamChunk structs.
This struct eliminates the need for manual message extraction and context building in multi-turn conversations and tool calling workflows.
Examples
# Basic response usage
{:ok, response} = ReqLLM.generate_text("anthropic:claude-3-sonnet", context)
ReqLLM.Response.text(response) #=> "Hello! I'm Claude."
ReqLLM.Response.usage(response) #=> %{input_tokens: 12, output_tokens: 4, total_cost: 0.016}
# Multi-turn conversation (no manual context building)
{:ok, response2} = ReqLLM.generate_text("anthropic:claude-3-sonnet", response.context)
# Tool calling loop
{:ok, final_response} = ReqLLM.Response.handle_tools(response, tools)
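The tool-calling loop above keeps generating until the model stops requesting tools. A self-contained sketch of that control flow, with a stubbed generate/2 standing in for ReqLLM.generate_text/2 so it runs without the library (the stub and its return shape are illustrative only; handle_tools/2 automates this for real responses):

```elixir
defmodule ToolLoopSketch do
  # Stub generator: the first call "requests" a tool, the second finishes.
  # A real loop would call ReqLLM.generate_text/2 with response.context.
  def generate(_model, context) when length(context) < 2,
    do: %{finish_reason: :tool_calls, context: context ++ [:tool_result]}

  def generate(_model, context),
    do: %{finish_reason: :stop, context: context, text: "done"}

  # Keep re-generating with the updated context until finish_reason is :stop.
  def run(model, context) do
    case generate(model, context) do
      %{finish_reason: :tool_calls, context: ctx} -> run(model, ctx)
      %{finish_reason: :stop} = resp -> resp
    end
  end
end

ToolLoopSketch.run("anthropic:claude-3-sonnet", [:user_msg])
#=> %{finish_reason: :stop, context: [:user_msg, :tool_result], text: "done"}
```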
Summary
Functions
decode_object/3 - Decode provider response data into a Response with structured object.
decode_object_stream/3 - Decode provider streaming response data into a Response with object stream.
decode_response/2 - Decode provider response data into a canonical ReqLLM.Response.
finish_reason/1 - Get the finish reason for this response.
join_stream/1 - Materialize a streaming response into a complete response.
object/1 - Extract the generated object from a Response.
object_stream/1 - Create a stream of structured objects from a streaming response.
ok?/1 - Check if the response completed successfully without errors.
reasoning_tokens/1 - Get reasoning token count from the response usage.
text/1 - Extract text content from the response message.
text_stream/1 - Create a stream of text content chunks from a streaming response.
thinking/1 - Extract thinking/reasoning content from the response message.
tool_calls/1 - Extract tool calls from the response message.
unwrap_object/1 - Unwrap the object from a structured output response, regardless of the mode used.
usage/1 - Get usage statistics for this response.
Types
@type t() :: %ReqLLM.Response{
        context: ReqLLM.Context.t(),
        error: Exception.t() | nil,
        finish_reason: :stop | :length | :tool_calls | :content_filter | :error | nil,
        id: String.t(),
        message: ReqLLM.Message.t() | nil,
        model: String.t(),
        object: map() | nil,
        provider_meta: map(),
        stream: Enumerable.t() | nil,
        stream?: boolean(),
        usage: map() | nil
      }
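The fields above lend themselves to pattern matching. A minimal sketch, using a bare map in place of a real %ReqLLM.Response{} (which you would get back from ReqLLM.generate_text/2) so it runs without the library; the field names match the type above:

```elixir
# Sketch only: a bare map stands in for %ReqLLM.Response{}.
response = %{finish_reason: :stop, usage: %{input_tokens: 12, output_tokens: 4}}

result =
  case response do
    # Successful completion: pull out the output token count.
    %{finish_reason: :stop, usage: %{output_tokens: out}} -> {:ok, out}
    # Anything else: surface the finish reason as an error.
    %{finish_reason: reason} -> {:error, reason}
  end

result
#=> {:ok, 4}
```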
Functions
@spec decode_object(term(), ReqLLM.Model.t() | String.t(), keyword()) :: {:ok, t()} | {:error, term()}
Decode provider response data into a Response with structured object.
Similar to decode_response/2 but specifically for object generation responses. Extracts the structured object from tool calls and validates it against the schema.
Parameters
raw_data - Raw provider response data
model - Model specification
schema - Schema definition for validation
Returns
{:ok, %ReqLLM.Response{}} with the object field populated on success
{:error, reason} on failure
@spec decode_object_stream(term(), ReqLLM.Model.t() | String.t(), keyword()) :: {:ok, t()} | {:error, term()}
Decode provider streaming response data into a Response with object stream.
Similar to decode_response/2 but for streaming object generation. The response will contain a stream of structured objects.
Parameters
raw_data - Raw provider streaming response data
model - Model specification
schema - Schema definition for validation
Returns
{:ok, %ReqLLM.Response{}} with stream populated on success
{:error, reason} on failure
@spec decode_response(term(), ReqLLM.Model.t() | String.t()) :: {:ok, t()} | {:error, term()}
Decode provider response data into a canonical ReqLLM.Response.
This is a façade function that accepts raw provider data and a model specification, and directly calls the provider's decode_response/1 callback for zero-ceremony decoding.
Supports both Model struct and string inputs, automatically resolving model strings using Model.from!/1.
Parameters
raw_data - Raw provider response data or Stream
model - Model specification (Model struct or string like "anthropic:claude-3-sonnet")
Returns
{:ok, %ReqLLM.Response{}} on success
{:error, reason} on failure
Examples
{:ok, response} = ReqLLM.Response.decode_response(raw_json, "anthropic:claude-3-sonnet")
{:ok, response} = ReqLLM.Response.decode_response(raw_json, model_struct)
@spec finish_reason(t()) :: :stop | :length | :tool_calls | :content_filter | :error | nil
Get the finish reason for this response.
Examples
iex> ReqLLM.Response.finish_reason(response)
:stop
join_stream(response)
Materialize a streaming response into a complete response.
Consumes the entire stream, builds the complete message, and returns a new response with the stream consumed and message populated.
Examples
{:ok, complete_response} = ReqLLM.Response.join_stream(streaming_response)
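Conceptually, joining reduces the chunk stream into accumulated content. A hand-rolled sketch of that reduction (the %{type: ..., text: ...} chunk shape is assumed here for illustration; join_stream/1 handles the real ReqLLM.StreamChunk structs and also rebuilds the message and usage):

```elixir
# Concatenate :content chunks, ignore metadata chunks.
chunks = [
  %{type: :content, text: "Hello"},
  %{type: :meta, usage: %{output_tokens: 2}},
  %{type: :content, text: ", world!"}
]

text =
  chunks
  |> Enum.filter(&(&1.type == :content))
  |> Enum.map_join("", & &1.text)

text
#=> "Hello, world!"
```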
object(response)
Extract the generated object from a Response.
@spec object_stream(t()) :: Enumerable.t()
Create a stream of structured objects from a streaming response.
Only yields valid objects from tool call stream chunks, filtering out metadata and other chunk types.
Examples
response
|> ReqLLM.Response.object_stream()
|> Stream.each(&IO.inspect/1)
|> Stream.run()
ok?(response)
Check if the response completed successfully without errors.
Examples
iex> ReqLLM.Response.ok?(response)
true
reasoning_tokens(response)
Get reasoning token count from the response usage.
Returns the number of reasoning tokens used by reasoning models (GPT-5, o1, o3, etc.) during their internal thinking process. Returns 0 if no reasoning tokens were used.
Examples
iex> ReqLLM.Response.reasoning_tokens(response)
64
text(response)
Extract text content from the response message.
Returns the concatenated text from all content parts in the assistant message. Returns nil when no message is present. For streaming responses, this may be nil until the stream is joined.
Examples
iex> ReqLLM.Response.text(response)
"Hello! I'm Claude and I can help you with questions."
@spec text_stream(t()) :: Enumerable.t()
Create a stream of text content chunks from a streaming response.
Only yields content from :content type stream chunks, filtering out metadata and other chunk types.
Examples
response
|> ReqLLM.Response.text_stream()
|> Stream.each(&IO.write/1)
|> Stream.run()
thinking(response)
Extract thinking/reasoning content from the response message.
Returns the concatenated thinking content if the message contains thinking parts, empty string otherwise.
Examples
iex> ReqLLM.Response.thinking(response)
"The user is asking about the weather..."
tool_calls(response)
Extract tool calls from the response message.
Returns a list of tool calls if the message contains them, empty list otherwise.
Examples
iex> ReqLLM.Response.tool_calls(response)
[%{name: "get_weather", arguments: %{location: "San Francisco"}}]
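A returned tool call like the one above is typically dispatched to application code before looping back to the model. A minimal sketch, assuming the %{name: ..., arguments: ...} shape shown in the example; the handlers map and run_tool helper are hypothetical, not part of ReqLLM's API:

```elixir
# Hypothetical registry mapping tool names to handler functions.
handlers = %{"get_weather" => fn args -> "Sunny in #{args.location}" end}

# Look up the handler for a tool call and invoke it with the arguments.
run_tool = fn call, handlers ->
  case Map.fetch(handlers, call.name) do
    {:ok, fun} -> {:ok, fun.(call.arguments)}
    :error -> {:error, {:unknown_tool, call.name}}
  end
end

run_tool.(%{name: "get_weather", arguments: %{location: "San Francisco"}}, handlers)
#=> {:ok, "Sunny in San Francisco"}
```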
unwrap_object(response)
Unwrap the object from a structured output response, regardless of the mode used.
Handles extraction from:
- json_schema mode: parses from content
- tool modes: extracts from tool call arguments
Examples
{:ok, object} = ReqLLM.Response.unwrap_object(response)
#=> {:ok, %{"name" => "John", "age" => 30}}
usage(response)
Get usage statistics for this response.
Examples
iex> ReqLLM.Response.usage(response)
%{input_tokens: 12, output_tokens: 8, total_tokens: 20, reasoning_tokens: 64, input_cost: 0.01, output_cost: 0.02, total_cost: 0.03}
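In a multi-turn conversation you often want totals across turns. A sketch of aggregating per-response usage maps by merging and summing (the key set here is trimmed for illustration; real usage maps carry the full field set shown above):

```elixir
# Usage maps collected from successive responses.
usages = [
  %{input_tokens: 12, output_tokens: 8},
  %{input_tokens: 30, output_tokens: 15}
]

# Merge each usage map into the accumulator, summing values on key collisions.
total =
  Enum.reduce(usages, %{}, fn usage, acc ->
    Map.merge(acc, usage, fn _key, a, b -> a + b end)
  end)

total
#=> %{input_tokens: 42, output_tokens: 23}
```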