ReqLLM.Response (ReqLLM v1.0.0-rc.5)


High-level representation of an LLM turn.

Always contains a Context (the full conversation history, including the newly generated assistant/tool messages) plus rich metadata and, when streaming, a lazy Stream of ReqLLM.StreamChunk structs.

This struct eliminates the need for manual message extraction and context building in multi-turn conversations and tool calling workflows.

Examples

# Basic response usage
{:ok, response} = ReqLLM.generate_text("anthropic:claude-3-sonnet", context)
ReqLLM.Response.text(response)   #=> "Hello! I'm Claude."
ReqLLM.Response.usage(response)  #=> %{input_tokens: 12, output_tokens: 4, total_cost: 0.016}

# Multi-turn conversation (no manual context building)
{:ok, response2} = ReqLLM.generate_text("anthropic:claude-3-sonnet", response.context)

# Tool calling loop
{:ok, final_response} = ReqLLM.Response.handle_tools(response, tools)
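
# Streaming usage (a sketch; streaming_response is any Response with
# stream?: true, however it was obtained; see text_stream/1 below)
streaming_response
|> ReqLLM.Response.text_stream()
|> Stream.each(&IO.write/1)
|> Stream.run()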

Summary

Functions

decode_object(raw_data, model_input, schema)
  Decode provider response data into a Response with structured object.

decode_object_stream(raw_data, model_input, schema)
  Decode provider streaming response data into a Response with object stream.

decode_response(raw_data, model_input)
  Decode provider response data into a canonical ReqLLM.Response.

finish_reason(response)
  Get the finish reason for this response.

join_stream(response)
  Materialize a streaming response into a complete response.

object(response)
  Extracts the generated object from a Response.

object_stream(response)
  Create a stream of structured objects from a streaming response.

ok?(response)
  Check if the response completed successfully without errors.

reasoning_tokens(response)
  Get reasoning token count from the response usage.

text(response)
  Extract text content from the response message.

text_stream(response)
  Create a stream of text content chunks from a streaming response.

thinking(response)
  Extract thinking/reasoning content from the response message.

tool_calls(response)
  Extract tool calls from the response message.

unwrap_object(response)
  Unwraps the object from a structured output response, regardless of mode used.

usage(response)
  Get usage statistics for this response.

Types

t()

@type t() :: %ReqLLM.Response{
  context: ReqLLM.Context.t(),
  error: Exception.t() | nil,
  finish_reason: :stop | :length | :tool_calls | :content_filter | :error | nil,
  id: String.t(),
  message: ReqLLM.Message.t() | nil,
  model: String.t(),
  object: map() | nil,
  provider_meta: map(),
  stream: Enumerable.t() | nil,
  stream?: boolean(),
  usage: map() | nil
}
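
A Response is an ordinary struct, so it can be pattern matched directly. A
minimal sketch (consume_stream/1 is a hypothetical helper, not part of this
module):

case response do
  %ReqLLM.Response{error: error} when not is_nil(error) -> {:error, error}
  %ReqLLM.Response{stream?: true, stream: stream} -> consume_stream(stream)
  %ReqLLM.Response{message: message} -> message
end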

Functions

decode_object(raw_data, model_input, schema)

@spec decode_object(term(), ReqLLM.Model.t() | String.t(), keyword()) ::
  {:ok, t()} | {:error, term()}

Decode provider response data into a Response with structured object.

Similar to decode_response/2 but specifically for object generation responses. Extracts the structured object from tool calls and validates it against the schema.

Parameters

  • raw_data - Raw provider response data
  • model_input - Model specification (Model struct or string)
  • schema - Schema definition for validation

Returns

  • {:ok, %ReqLLM.Response{}} with object field populated on success
  • {:error, reason} on failure
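
Examples

# A minimal sketch; raw_json stands for an already-decoded provider payload
# and the schema keys are illustrative, not part of the API
schema = [name: [type: :string, required: true], age: [type: :pos_integer]]
{:ok, response} = ReqLLM.Response.decode_object(raw_json, "anthropic:claude-3-sonnet", schema)
response.object
#=> %{"name" => "John", "age" => 30}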

decode_object_stream(raw_data, model_input, schema)

@spec decode_object_stream(term(), ReqLLM.Model.t() | String.t(), keyword()) ::
  {:ok, t()} | {:error, term()}

Decode provider streaming response data into a Response with object stream.

Similar to decode_response/2 but for streaming object generation. The response will contain a stream of structured objects.

Parameters

  • raw_data - Raw provider streaming response data
  • model_input - Model specification (Model struct or string)
  • schema - Schema definition for validation

Returns

  • {:ok, %ReqLLM.Response{}} with stream populated on success
  • {:error, reason} on failure
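
Examples

# A minimal sketch; raw_stream stands for the provider's streaming payload
# and schema uses the same keyword format as decode_object/3
{:ok, response} = ReqLLM.Response.decode_object_stream(raw_stream, "anthropic:claude-3-sonnet", schema)
response
|> ReqLLM.Response.object_stream()
|> Enum.to_list()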

decode_response(raw_data, model_input)

@spec decode_response(term(), ReqLLM.Model.t() | String.t()) ::
  {:ok, t()} | {:error, term()}

Decode provider response data into a canonical ReqLLM.Response.

This is a façade function that accepts raw provider data and a model specification, and directly calls the provider's decode_response/1 callback for zero-ceremony decoding.

Supports both Model struct and string inputs, automatically resolving model strings using Model.from!/1.

Parameters

  • raw_data - Raw provider response data or Stream
  • model - Model specification (Model struct or string like "anthropic:claude-3-sonnet")

Returns

  • {:ok, %ReqLLM.Response{}} on success
  • {:error, reason} on failure

Examples

{:ok, response} = ReqLLM.Response.decode_response(raw_json, "anthropic:claude-3-sonnet")
{:ok, response} = ReqLLM.Response.decode_response(raw_json, model_struct)

finish_reason(response)

@spec finish_reason(t()) ::
  :stop | :length | :tool_calls | :content_filter | :error | nil

Get the finish reason for this response.

Examples

iex> ReqLLM.Response.finish_reason(response)
:stop

join_stream(response)

@spec join_stream(t()) :: {:ok, t()} | {:error, term()}

Materialize a streaming response into a complete response.

Consumes the entire stream, builds the complete message, and returns a new response with the stream consumed and message populated.

Examples

{:ok, complete_response} = ReqLLM.Response.join_stream(streaming_response)
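
# After joining, message-based accessors work as usual (see text/1 below)
ReqLLM.Response.text(complete_response)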

object(response)

@spec object(t()) :: map() | nil

Extracts the generated object from a Response, or nil when no structured object is present.
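
Examples

iex> ReqLLM.Response.object(response)
%{"name" => "John", "age" => 30}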

object_stream(response)

@spec object_stream(t()) :: Enumerable.t()

Create a stream of structured objects from a streaming response.

Only yields valid objects from tool call stream chunks, filtering out metadata and other chunk types.

Examples

response
|> ReqLLM.Response.object_stream()
|> Stream.each(&IO.inspect/1)
|> Stream.run()

ok?(response)

@spec ok?(t()) :: boolean()

Check if the response completed successfully without errors.

Examples

iex> ReqLLM.Response.ok?(response)
true

reasoning_tokens(response)

@spec reasoning_tokens(t()) :: integer()

Get reasoning token count from the response usage.

Returns the number of reasoning tokens used by reasoning models (GPT-5, o1, o3, etc.) during their internal thinking process. Returns 0 if no reasoning tokens were used.

Examples

iex> ReqLLM.Response.reasoning_tokens(response)
64

text(response)

@spec text(t()) :: String.t() | nil

Extract text content from the response message.

Returns the concatenated text from all content parts in the assistant message. Returns nil when no message is present. For streaming responses, this may be nil until the stream is joined.

Examples

iex> ReqLLM.Response.text(response)
"Hello! I'm Claude and I can help you with questions."

text_stream(response)

@spec text_stream(t()) :: Enumerable.t()

Create a stream of text content chunks from a streaming response.

Only yields content from :content type stream chunks, filtering out metadata and other chunk types.

Examples

response
|> ReqLLM.Response.text_stream()
|> Stream.each(&IO.write/1)
|> Stream.run()

thinking(response)

@spec thinking(t()) :: String.t() | nil

Extract thinking/reasoning content from the response message.

Returns the concatenated thinking content if the message contains thinking parts, or an empty string otherwise.

Examples

iex> ReqLLM.Response.thinking(response)
"The user is asking about the weather..."

tool_calls(response)

@spec tool_calls(t()) :: [term()]

Extract tool calls from the response message.

Returns a list of tool calls if the message contains them, or an empty list otherwise.

Examples

iex> ReqLLM.Response.tool_calls(response)
[%{name: "get_weather", arguments: %{location: "San Francisco"}}]

unwrap_object(response)

@spec unwrap_object(t()) :: {:ok, map()} | {:error, term()}

Unwraps the object from a structured output response, regardless of mode used.

Handles extraction from:

  • json_schema mode: parses from content
  • tool modes: extracts from tool call arguments

Examples

{:ok, object} = ReqLLM.Response.unwrap_object(response)
#=> {:ok, %{"name" => "John", "age" => 30}}

usage(response)

@spec usage(t()) :: map() | nil

Get usage statistics for this response.

Examples

iex> ReqLLM.Response.usage(response)
%{input_tokens: 12, output_tokens: 8, total_tokens: 20, reasoning_tokens: 64, input_cost: 0.01, output_cost: 0.02, total_cost: 0.03}