# `Omni.Response`
[🔗](https://github.com/aaronrussell/omni/blob/v1.2.1/lib/omni/response.ex#L1)

The result of a text generation request.

Returned by `Omni.generate_text/3` and `Omni.StreamingResponse.complete/1`.
Wraps the assistant's message with generation metadata — extract
`response.message` to continue a multi-turn conversation.

## Struct fields

  * `:model` — the `%Model{}` that handled the request
  * `:message` — the assistant's response message (the last assistant message)
  * `:messages` — all messages from this generation. For single-step calls,
    `[response.message]`. For multi-step tool loops, includes assistant and
    tool-result messages from every step
  * `:output` — validated, decoded map or list when the `:output` option was
    set, otherwise `nil`
  * `:stop_reason` — why generation ended: `:stop` (natural completion),
    `:length` (token limit reached), `:tool_use` (model invoked a tool),
    `:refusal` (declined due to content or safety policy), `:cancelled`
    (the request was cancelled), or `:error`
  * `:error` — error description when `stop_reason` is `:error`, otherwise `nil`
  * `:raw` — list of `{%Req.Request{}, %Req.Response{}}` tuples when `:raw`
    was set (one per generation step)
  * `:usage` — cumulative `%Usage{}` token counts and costs for this generation
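The fields above can be read directly off the struct. A minimal sketch, assuming `model`, `messages`, and a follow-up user message already exist in scope; the exact `Omni.generate_text/3` call shape is an assumption, not taken from this page:

```elixir
# Hypothetical call — check the Omni.generate_text/3 docs for the real arguments.
response = Omni.generate_text(model, messages, [])

response.stop_reason
# => an atom such as :stop or :tool_use

response.usage
# => cumulative %Omni.Usage{} across all generation steps

# Continue a multi-turn conversation by appending the assistant's message
# (and the next user turn) to the running message list:
next_messages = messages ++ [response.message, user_followup]
```

Note that for multi-step tool loops, `response.messages` carries every intermediate assistant and tool-result message, while `response.message` is only the final assistant message.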

# `stop_reason`

```elixir
@type stop_reason() :: :stop | :length | :tool_use | :refusal | :error | :cancelled
```

Why generation ended.
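The atoms in the type map naturally onto a `case`. A hedged sketch of dispatching on the result (the handler bodies are illustrative placeholders, not library API):

```elixir
case response.stop_reason do
  :stop      -> {:ok, response.message}
  :length    -> {:ok, response.message}      # output was truncated at the token limit
  :tool_use  -> handle_tool_calls(response)  # hypothetical helper for the tool loop
  :refusal   -> {:error, :refused}
  :cancelled -> {:error, :cancelled}
  :error     -> {:error, response.error}     # :error field holds the description
end
```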

# `t`

```elixir
@type t() :: %Omni.Response{
  error: String.t() | nil,
  message: Omni.Message.t() | nil,
  messages: [Omni.Message.t()],
  model: Omni.Model.t(),
  output: map() | list() | nil,
  raw: [{Req.Request.t(), Req.Response.t()}] | nil,
  stop_reason: stop_reason(),
  usage: Omni.Usage.t()
}
```

A generation response envelope.

# `new`

```elixir
@spec new(Enumerable.t()) :: t()
```

Creates a new response struct from a keyword list or map.
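Since the spec accepts any `Enumerable.t()`, a keyword list and a map both work. A minimal sketch, assuming `model`, `assistant_message`, and `usage` values already exist in scope:

```elixir
response =
  Omni.Response.new(
    model: model,
    message: assistant_message,
    messages: [assistant_message],
    stop_reason: :stop,
    usage: usage
  )
```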

---

*Consult [api-reference.md](api-reference.md) for complete listing*
