# `Planck.AI.Stream`
[🔗](https://github.com/alexdesousa/planck/blob/v0.1.0/lib/planck/ai/stream.ex#L1)

Streaming event types and normalization from `req_llm` chunks.

`req_llm` emits `%ReqLLM.StreamChunk{}` structs with a `:type` field.
`from_req_llm/1` maps each chunk type to the canonical event tuples consumed
by `Planck.Agent` and callers of `Planck.AI.stream/3`.

Tool call arguments arrive as JSON fragments spread across multiple `:meta`
chunks. This module buffers the fragments and emits a single assembled
`{:tool_call_complete, ...}` per tool call when the stream finishes.
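The assembly step amounts to concatenating the buffered fragments in order and decoding once (a minimal sketch, assuming `Jason` for JSON decoding; the fragment values are illustrative, not real chunk contents):

```elixir
# Illustrative only: two JSON fragments for one tool call's arguments,
# as they might arrive across chunks. Buffer them in arrival order,
# join, and decode once the stream finishes.
fragments = ["{\"city\": \"Par", "is\"}"]

args =
  fragments
  |> Enum.join()
  |> Jason.decode!()

# args is now %{"city" => "Paris"}
```

Decoding eagerly per fragment would fail, since each fragment on its own is invalid JSON; that is why a single decode happens only after the stream ends.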

Exceptions raised during stream consumption (e.g. `ReqLLM.Error.API.Stream`
on HTTP errors) are caught and emitted as `{:error, exception}` events,
halting the stream gracefully.
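That catch-and-emit behaviour can be sketched with a lazy pull loop over the `Enumerable` protocol (a simplified illustration under assumed details, not this module's actual implementation; `SafeStream` is a hypothetical name):

```elixir
defmodule SafeStream do
  # Sketch: pull elements from `source` one at a time via a suspended
  # Enumerable.reduce continuation, rescuing any exception raised
  # mid-stream and emitting it as a final {:error, exception} element.
  def events(source) do
    Stream.resource(
      fn -> &Enumerable.reduce(source, &1, fn x, _acc -> {:suspend, x} end) end,
      fn
        # After an error (or exhaustion) there is nothing left to pull.
        nil ->
          {:halt, nil}

        next_fun ->
          try do
            case next_fun.({:cont, nil}) do
              {:suspended, elem, cont} -> {[elem], cont}
              {:done, _} -> {:halt, nil}
              {:halted, _} -> {:halt, nil}
            end
          rescue
            e -> {[{:error, e}], nil}
          end
      end,
      fn _ -> :ok end
    )
  end
end
```

The effect for callers is that a mid-stream exception surfaces as an ordinary `{:error, reason}` event followed by the end of the stream, so no `try/rescue` is needed around enumeration.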

## Event types

- `{:text_delta, text}` — a chunk of assistant text
- `{:thinking_delta, text}` — a chunk of extended thinking / reasoning
- `{:tool_call_complete, %{id, name, args}}` — a fully assembled tool call
- `{:done, %{stop_reason, usage}}` — stream finished; includes token usage
- `{:error, reason}` — an error occurred during streaming
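A consumer of these events typically folds over the stream with one clause per tuple (a hypothetical sketch; `events`, `handle_tool_call/1`, and the side effects are illustrative, not part of this module):

```elixir
# Hypothetical consumer of Planck.AI.Stream events. `events` is any
# enumerable of t(); each clause handles one event type listed above.
Enum.each(events, fn
  {:text_delta, text} -> IO.write(text)
  {:thinking_delta, _text} -> :ok
  {:tool_call_complete, call} -> handle_tool_call(call)
  {:done, %{stop_reason: reason, usage: usage}} -> IO.puts("\n#{reason}: #{inspect(usage)}")
  {:error, reason} -> IO.warn("stream failed: #{inspect(reason)}")
end)
```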

# `done`

```elixir
@type done() :: %{stop_reason: atom(), usage: usage()}
```

# `t`

```elixir
@type t() ::
  {:text_delta, String.t()}
  | {:thinking_delta, String.t()}
  | {:tool_call_complete, tool_call()}
  | {:done, done()}
  | {:error, term()}
```

# `tool_call`

```elixir
@type tool_call() :: %{id: String.t(), name: String.t(), args: map()}
```

# `usage`

```elixir
@type usage() :: %{input_tokens: non_neg_integer(), output_tokens: non_neg_integer()}
```

# `from_req_llm`

```elixir
@spec from_req_llm(Enumerable.t()) :: Enumerable.t(t())
```

Converts a stream of `ReqLLM.StreamChunk` structs into event tuples. The
conversion dispatches on the chunk's `:type` field, so plain maps with the
same shape (as in the doctest below) are accepted as well.

## Examples

    iex> chunks = [
    ...>   %{type: :content, text: "Hello"},
    ...>   %{type: :meta, metadata: %{finish_reason: :stop, usage: %{input_tokens: 10, output_tokens: 5}}}
    ...> ]
    iex> chunks |> Planck.AI.Stream.from_req_llm() |> Enum.to_list()
    [{:text_delta, "Hello"}, {:done, %{stop_reason: :stop, usage: %{input_tokens: 10, output_tokens: 5}}}]

---

*Consult [api-reference.md](api-reference.md) for complete listing*
