LlmComposer.StreamChunk (llm_composer v0.19.2)
Normalized representation of a streaming chunk emitted by any provider.
:provider identifies the upstream provider (:open_ai | :open_router | :google | :ollama | :bedrock)
:type categorizes the event (:text_delta, :reasoning_delta, :tool_call_delta, :usage, :done, :error, :unknown)
:text is the accumulated text delta (if any)
:reasoning is the accumulated reasoning delta (if any)
:reasoning_details keeps structured reasoning detail fragments when providers emit them
:tool_calls keeps normalized tool/function call fragments
:usage stores the token usage payload when available
:cost_info can surface cost data on the final chunk
:metadata holds provider-specific attributes (finish reason, role, etc.)
:raw retains the original decoded payload for inspection
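A consumer typically dispatches on :type while folding over the stream. The sketch below, a minimal example assuming chunks arrive as an enumerable, uses plain maps with the same keys as %LlmComposer.StreamChunk{} so it runs without the library; the ChunkPrinter module name is hypothetical.

```elixir
# Sketch of a chunk consumer. Plain maps stand in for
# %LlmComposer.StreamChunk{} structs so the example is self-contained.
defmodule ChunkPrinter do
  # Accumulate :text_delta chunks; halt on :done or :error.
  def collect(chunks) do
    Enum.reduce_while(chunks, "", fn chunk, acc ->
      case chunk.type do
        :text_delta -> {:cont, acc <> (chunk.text || "")}
        :done -> {:halt, {:ok, acc}}
        :error -> {:halt, {:error, chunk.raw}}
        # :reasoning_delta, :tool_call_delta, :usage, :unknown are skipped here
        _other -> {:cont, acc}
      end
    end)
  end
end

chunks = [
  %{type: :text_delta, text: "Hel", raw: nil},
  %{type: :text_delta, text: "lo", raw: nil},
  %{type: :usage, text: nil, raw: nil},
  %{type: :done, text: nil, raw: nil}
]

{:ok, "Hello"} = ChunkPrinter.collect(chunks)
```

A real handler would also branch on :tool_call_delta and :usage; the catch-all clause keeps the fold total across every chunk type.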
Summary
Types
@type t() :: %LlmComposer.StreamChunk{
        cost_info: LlmComposer.CostInfo.t() | nil,
        metadata: map(),
        provider: atom(),
        raw: term(),
        reasoning: String.t() | nil,
        reasoning_details: list() | nil,
        text: String.t() | nil,
        tool_calls: [LlmComposer.FunctionCall.t() | map()] | nil,
        type:
          :text_delta
          | :reasoning_delta
          | :tool_call_delta
          | :usage
          | :done
          | :error
          | :unknown,
        usage: usage() | nil
      }
@type usage() :: %{
        input_tokens: non_neg_integer() | nil,
        output_tokens: non_neg_integer() | nil,
        total_tokens: non_neg_integer() | nil,
        cached_tokens: non_neg_integer() | nil,
        reasoning_tokens: non_neg_integer() | nil
      }
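Because every counter in usage() may be nil, summing usage across chunks needs nil-safe arithmetic. A minimal sketch (the UsageMath module name is hypothetical, not part of the library):

```elixir
# Hypothetical helper that adds two usage() maps, treating nil counters as 0.
# Field names follow the usage() type above.
defmodule UsageMath do
  @fields [:input_tokens, :output_tokens, :total_tokens, :cached_tokens, :reasoning_tokens]

  def add(a, b) do
    Map.new(@fields, fn key ->
      {key, (a[key] || 0) + (b[key] || 0)}
    end)
  end
end

u1 = %{input_tokens: 10, output_tokens: 5, total_tokens: 15, cached_tokens: nil, reasoning_tokens: nil}
u2 = %{input_tokens: 3, output_tokens: 7, total_tokens: 10, cached_tokens: 2, reasoning_tokens: nil}

%{input_tokens: 13, output_tokens: 12, total_tokens: 25} = UsageMath.add(u1, u2)
```

Folding UsageMath.add/2 over the :usage chunks of a stream yields a running total even when some providers omit cached or reasoning token counts.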