The result of a text generation request.
Returned by `Omni.generate_text/3` and `Omni.StreamingResponse.complete/1`.
Wraps the assistant's message with generation metadata — extract
`response.message` to continue a multi-turn conversation.
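A hedged sketch of that multi-turn pattern. The exact call shape of `Omni.generate_text/3`, its `{:ok, response}` return, and the plain-map user turn are assumptions for illustration, not documented API:

```elixir
# Assumed shapes: Omni.generate_text/3 returning {:ok, %Omni.Response{}},
# and a plain role/content map for the next user turn. Both are guesses
# for illustration, not Omni's documented API.
{:ok, response} = Omni.generate_text(model, history, opts)

# Carry the assistant's reply forward, then append the next user turn.
history = history ++ [response.message, %{role: :user, content: "And then?"}]
{:ok, response} = Omni.generate_text(model, history, opts)
```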
Struct fields
- `:model` — the `%Model{}` that handled the request
- `:message` — the assistant's response message (the last assistant message)
- `:messages` — all messages from this generation. For single-step calls, `[response.message]`. For multi-step tool loops, includes assistant and tool-result messages from every step
- `:output` — validated, decoded map (or list) when the `:output` option was set
- `:stop_reason` — why generation ended: `:stop` (natural completion), `:length` (token limit reached), `:tool_use` (model invoked a tool), `:refusal` (declined due to content or safety policy), `:error`, or `:cancelled`
- `:error` — error description when `stop_reason` is `:error`, otherwise `nil`
- `:raw` — list of `{%Req.Request{}, %Req.Response{}}` tuples when `:raw` was set (one per generation step)
- `:usage` — cumulative `%Usage{}` token counts and costs for this generation
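Given these fields, callers typically branch on `:stop_reason`. A minimal sketch; the `handle_*` functions are hypothetical placeholders, not part of Omni:

```elixir
# Sketch: branching on why generation ended. The handle_* functions are
# hypothetical placeholders, not part of Omni's API.
case response.stop_reason do
  :stop      -> handle_complete(response.message)
  :length    -> handle_truncated(response.message)  # hit the token limit
  :tool_use  -> handle_tools(response.messages)     # assistant + tool-result steps
  :refusal   -> handle_refusal(response.message)
  :error     -> handle_error(response.error)        # response.error describes the failure
  :cancelled -> :cancelled
end
```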
Types
@type stop_reason() :: :stop | :length | :tool_use | :refusal | :error | :cancelled
Why generation ended.
@type t() :: %Omni.Response{
  error: String.t() | nil,
  message: Omni.Message.t() | nil,
  messages: [Omni.Message.t()],
  model: Omni.Model.t(),
  output: map() | list() | nil,
  raw: [{Req.Request.t(), Req.Response.t()}] | nil,
  stop_reason: stop_reason(),
  usage: Omni.Usage.t()
}
A generation response envelope.
Functions
@spec new(Enumerable.t()) :: t()
Creates a new response struct from a keyword list or map.
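Per the spec, `new/1` accepts any enumerable of field pairs (keyword list or map). A small hedged example; the `model` and `message` bindings are placeholders:

```elixir
# new/1 accepts a keyword list or map whose keys match the struct fields.
response =
  Omni.Response.new(
    model: model,        # a %Omni.Model{} (placeholder binding)
    message: message,    # the assistant's message (placeholder binding)
    messages: [message],
    stop_reason: :stop
  )
```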