Streaming event types and normalization of req_llm chunks.
req_llm emits %ReqLLM.StreamChunk{} structs with a :type field.
from_req_llm/1 maps each chunk type to the canonical event tuples consumed
by Planck.Agent and callers of Planck.AI.stream/3.
Tool call arguments arrive as JSON fragments spread across multiple :meta
chunks. This module buffers the fragments and emits a single assembled
{:tool_call_complete, ...} per tool call when the stream finishes.
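Because each tool call surfaces as a single event, a caller can collect them without tracking fragments itself. A minimal consumer sketch (chunks stands in for any req_llm chunk stream):

# Each event carries the documented %{id, name, args} map for one tool call.
tool_calls =
  chunks
  |> Planck.AI.Stream.from_req_llm()
  |> Enum.filter(&match?({:tool_call_complete, _}, &1))
  |> Enum.map(fn {:tool_call_complete, call} -> call end)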
Exceptions raised during stream consumption (e.g. ReqLLM.Error.API.Stream
on HTTP errors) are caught and emitted as {:error, exception} events,
halting the stream gracefully.
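A consumer can therefore treat the error event as the terminal case. A sketch (chunks stands in for any req_llm chunk stream; the message formatting is illustrative):

# Because {:error, reason} halts the stream, no events follow it.
chunks
|> Planck.AI.Stream.from_req_llm()
|> Enum.each(fn
  {:text_delta, text} -> IO.write(text)
  {:error, reason} -> IO.puts("stream failed: " <> Exception.message(reason))
  _other -> :ok
end)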
Event types
{:text_delta, text} — a chunk of assistant text
{:thinking_delta, text} — a chunk of extended thinking / reasoning
{:tool_call_complete, %{id, name, args}} — a fully assembled tool call
{:done, %{stop_reason, usage}} — stream finished; includes token usage
{:error, reason} — an error occurred during streaming
Summary
Functions
Converts a stream of ReqLLM.StreamChunk structs into event tuples.
Types
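The event tuple type t() referenced in the spec below is, in outline, the union implied by the Event types list above (a sketch; the exact typespec in the module may differ):

@type t ::
        {:text_delta, String.t()}
        | {:thinking_delta, String.t()}
        | {:tool_call_complete, map()}
        | {:done, map()}
        | {:error, term()}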
Functions
@spec from_req_llm(Enumerable.t()) :: Enumerable.t(t())
Converts a stream of ReqLLM.StreamChunk structs into event tuples.
Examples
iex> chunks = [
...> %{type: :content, text: "Hello"},
...> %{type: :meta, metadata: %{finish_reason: :stop, usage: %{input_tokens: 10, output_tokens: 5}}}
...> ]
iex> chunks |> Planck.AI.Stream.from_req_llm() |> Enum.to_list()
[{:text_delta, "Hello"}, {:done, %{stop_reason: :stop, usage: %{input_tokens: 10, output_tokens: 5}}}]
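Accumulating the events from the same chunks into a final result is a matter of reducing over the stream. A sketch of one possible consumer shape, not part of the API:

{text, done} =
  chunks
  |> Planck.AI.Stream.from_req_llm()
  |> Enum.reduce({"", nil}, fn
    {:text_delta, delta}, {text, done} -> {text <> delta, done}
    {:done, info}, {text, _done} -> {text, info}
    _event, acc -> acc
  end)

# With the chunks above: text == "Hello" and
# done == %{stop_reason: :stop, usage: %{input_tokens: 10, output_tokens: 5}}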