LlmComposer.StreamChunk (llm_composer v0.19.2)

Normalized representation of a streaming chunk emitted by any provider.

  • :provider identifies the upstream provider (:open_ai | :open_router | :google | :ollama | :bedrock)
  • :type categorizes the event (:text_delta, :reasoning_delta, :tool_call_delta, :usage, :done, :error, :unknown)
  • :text is the text delta carried by this chunk (if any)
  • :reasoning is the reasoning delta carried by this chunk (if any)
  • :reasoning_details keeps structured reasoning detail fragments when providers emit them
  • :tool_calls keeps normalized tool/function call fragments
  • :usage stores the token usage payload when available
  • :cost_info can surface cost data on the final chunk
  • :metadata holds provider-specific attributes (finish reason, role, etc.)
  • :raw retains the original decoded payload for inspection
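
A stream of these chunks can be consumed by pattern matching on :type. The sketch below is illustrative, not part of the library API: it matches on plain maps (which also match %LlmComposer.StreamChunk{} structs), and the ChunkConsumer module name and reduce_chunk/2 function are assumptions.

```elixir
defmodule ChunkConsumer do
  # Accumulate text deltas and capture the usage payload from a chunk stream.
  # Map patterns such as %{type: :text_delta} also match the StreamChunk struct.
  def reduce_chunk(%{type: :text_delta, text: text}, acc) when is_binary(text),
    do: %{acc | text: acc.text <> text}

  def reduce_chunk(%{type: :usage, usage: usage}, acc),
    do: %{acc | usage: usage}

  def reduce_chunk(%{type: :done}, acc),
    do: %{acc | done?: true}

  # Ignore chunk types not handled here (:reasoning_delta, :tool_call_delta, ...).
  def reduce_chunk(_chunk, acc), do: acc

  def run(chunks),
    do: Enum.reduce(chunks, %{text: "", usage: nil, done?: false}, &reduce_chunk/2)
end

chunks = [
  %{type: :text_delta, text: "Hel"},
  %{type: :text_delta, text: "lo"},
  %{type: :usage, usage: %{input_tokens: 3, output_tokens: 2}},
  %{type: :done}
]

ChunkConsumer.run(chunks)
# => %{text: "Hello", usage: %{input_tokens: 3, output_tokens: 2}, done?: true}
```

Matching on :type rather than on which fields are non-nil keeps the consumer robust to providers that populate extra fields on the same chunk.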

Summary

Types

@type t() :: %LlmComposer.StreamChunk{
  cost_info: LlmComposer.CostInfo.t() | nil,
  metadata: map(),
  provider: atom(),
  raw: term(),
  reasoning: String.t() | nil,
  reasoning_details: list() | nil,
  text: String.t() | nil,
  tool_calls: [LlmComposer.FunctionCall.t() | map()] | nil,
  type:
    :text_delta
    | :reasoning_delta
    | :tool_call_delta
    | :usage
    | :done
    | :error
    | :unknown,
  usage: usage() | nil
}
@type usage() :: %{
  input_tokens: non_neg_integer() | nil,
  output_tokens: non_neg_integer() | nil,
  total_tokens: non_neg_integer() | nil,
  cached_tokens: non_neg_integer() | nil,
  reasoning_tokens: non_neg_integer() | nil
}
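
Every count in usage/0 is nullable, so summing usage across chunks needs nil handling. A minimal hedged helper (the UsageAcc module name and merge/2 function are illustrative, not library API; field names follow usage/0 above):

```elixir
defmodule UsageAcc do
  # The five count fields from the usage/0 type.
  @fields [:input_tokens, :output_tokens, :total_tokens, :cached_tokens, :reasoning_tokens]

  # Merge two usage maps field by field, treating nil or missing counts as 0.
  def merge(a, b) do
    Map.new(@fields, fn k -> {k, (a[k] || 0) + (b[k] || 0)} end)
  end
end

UsageAcc.merge(
  %{input_tokens: 10, output_tokens: 5},
  %{input_tokens: 3, output_tokens: 2, reasoning_tokens: 1}
)
# => %{input_tokens: 13, output_tokens: 7, total_tokens: 0,
#      cached_tokens: 0, reasoning_tokens: 1}
```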