ALLM.Event (allm v0.3.0)

Closed tagged-tuple union emitted by stream runners. See spec §8.

Layer A — every event is {tag, payload} where tag is an atom from the closed set returned by tags/0. For every tag except :raw_chunk and :error, payload is a map() with documented keys; :raw_chunk and :error carry opaque payloads per spec §8.

Use event?/1 to test whether a term is a well-shaped event. The variant constructors (text_delta/2, tool_call_completed/4, …) are the canonical way to build events from the stream runner; the spec defines no constructors for :raw_chunk and :error because their payloads are opaque.
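As a hedged sketch of typical consumption (the module and function names here are illustrative, not part of ALLM), a consumer can dispatch on the tag and fall back to event?/1 for terms from untrusted sources:

```elixir
defmodule MyApp.EventPrinter do
  # Illustrative consumer: handle a few variants, ignore the rest.
  def handle({:text_delta, %{delta: delta}}), do: IO.write(delta)
  def handle({:error, reason}), do: IO.warn("stream error: #{inspect(reason)}")

  def handle(other) do
    # event?/1 validates shape only; per-tag payload keys are trusted,
    # per the module doc.
    if ALLM.Event.event?(other), do: :ok, else: raise(ArgumentError, "not an ALLM.Event")
  end
end
```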

Payload extensions

Phase 5 additively extends the :message_completed payload with an optional :finish_reason key of type ALLM.Response.finish_reason/0 or nil. The tag set is unchanged — the union still has 16 tags. Existing consumers that bind {:message_completed, %{message: msg}} continue to match because Elixir map patterns are non-exhaustive; only consumers that want to read :finish_reason opt in. The message_completed/1 constructor is preserved and now produces a payload with :finish_reason => nil; message_completed/2 threads the caller-supplied reason into the payload.
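A sketch of the two consumer styles described above, using only the documented constructors:

```elixir
msg = %ALLM.Message{role: :assistant, content: "ok"}

# Pre-Phase-5 consumers keep matching: Elixir map patterns are
# non-exhaustive, so the extra :finish_reason key is ignored.
{:message_completed, %{message: ^msg}} = ALLM.Event.message_completed(msg)

# Opt-in consumers additionally bind :finish_reason.
{:message_completed, %{finish_reason: :stop}} =
  ALLM.Event.message_completed(msg, :stop)
```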

Phase 10.6 additively extends the :message_completed payload with an optional :metadata map key. Adapters that surface terminal provider-specific completion metadata (e.g., the OpenAI Responses-API reasoning summary) populate this key; ALLM.StreamCollector.apply_event/2 merges it into state.metadata, so the value lands on Response.metadata after collection. The key is OPTIONAL — adapters and existing emitters that don't populate it omit the key entirely, and the StreamCollector treats absence as a no-op merge.
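A sketch of the adapter-side emission (the reasoning-summary string is illustrative; the StreamCollector merge is internal and omitted here):

```elixir
msg = %ALLM.Message{role: :assistant, content: "Done."}

# Terminal provider metadata rides the :message_completed payload and,
# after collection, surfaces on Response.metadata.
{:message_completed, payload} =
  ALLM.Event.message_completed(msg, :stop, %{reasoning: %{summary: "..."}})

payload.metadata
#=> %{reasoning: %{summary: "..."}}
```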

Summary

Functions

Build an :ask_user_requested event signalling that the chat loop halted pending a user answer (spec §12.3).

Build a :chat_completed event wrapping the final ChatResult.

Return true when value is a well-shaped ALLM.Event.

Build a :message_completed event with the finalized message. The payload carries :finish_reason => nil; use message_completed/2 to populate a specific finish reason. See "Payload extensions" in the module doc.

Build a :message_completed event with the finalized message and a finish-reason atom (or nil). The guard accepts any atom — the closed ALLM.Response.finish_reason/0 enum is enforced by @type t and by Response.finish_reason/0, not at the event-construction boundary. This lets adapters preserve provider-specific reasons downstream as Response.raw_finish_reason.

Build a :message_completed event with the finalized message, a finish-reason atom (or nil), and an optional :metadata map carrying terminal provider-specific completion data (Phase 10.6).

Build a :message_started event with the in-progress assistant message.

Build a :step_completed event with the response and the updated thread.

Build a :step_completed event with the response, the updated thread, and the orchestration mode (:auto or :manual) the step ran under.

Build a :step_completed event with the response, the updated thread, the orchestration mode, and the per-tool manual bucket (Phase 18.3).

Return the closed list of 16 event tag atoms — 14 structured variants plus :raw_chunk and :error.

Build a :text_completed event with the accumulated text for a message.

Build a :text_delta event. id may be nil when the adapter doesn't associate deltas with a specific message id.

Build a :tool_call_completed event with the parsed arguments map and the original provider JSON string (raw_arguments).

Build a :tool_call_delta event carrying an incremental argument-string fragment for the tool call with the given id.

Build a :tool_call_started event.

Build a :tool_execution_completed event with the handler's opaque result (pre-encoding).

Build a :tool_execution_started event.

Build a :tool_halt event — the handler returned {:halt, reason, result}.

Build a :tool_halt event with the pre-encoded tool-result content. The payload's :content key lets ALLM.StreamCollector's :tool_halt fold populate state.tool_results with the sentinel :tool-role message without re-running the encoder. See PHASE_7_DESIGN.md Non-obvious Decision #6 + Phase 7.6 cleanup.

Build a :tool_result_encoded event with the provider-ready text content that will become the tool-role message body.

Types

t()

@type t() ::
  {:message_started, %{message: ALLM.Message.t()}}
  | {:text_delta, %{id: String.t() | nil, delta: String.t()}}
  | {:text_completed, %{id: String.t() | nil, text: String.t()}}
  | {:tool_call_started, %{id: String.t(), name: String.t()}}
  | {:tool_call_delta, %{id: String.t(), arguments_delta: String.t()}}
  | {:tool_call_completed,
     %{
       id: String.t(),
       name: String.t(),
       arguments: map(),
       raw_arguments: String.t()
     }}
  | {:tool_execution_started,
     %{id: String.t(), name: String.t(), arguments: map()}}
  | {:tool_execution_completed,
     %{id: String.t(), name: String.t(), result: term()}}
  | {:tool_result_encoded, %{id: String.t(), content: String.t()}}
  | {:ask_user_requested,
     %{
       tool_call_id: String.t(),
       tool_name: String.t(),
       question: String.t(),
       opts: keyword()
     }}
  | {:tool_halt,
     %{
       :tool_call_id => String.t(),
       :reason => atom(),
       :result => term(),
       optional(:content) => String.t()
     }}
  | {:message_completed,
     %{
       :message => ALLM.Message.t(),
       :finish_reason => ALLM.Response.finish_reason() | nil,
       optional(:metadata) => map()
     }}
  | {:step_completed,
     %{
       response: ALLM.Response.t(),
       thread: ALLM.Thread.t(),
       mode: :auto | :manual,
       manual_tool_calls: [ALLM.ToolCall.t()]
     }}
  | {:chat_completed, %{result: ALLM.ChatResult.t()}}
  | {:raw_chunk, term()}
  | {:error, term()}

Functions

ask_user_requested(tool_call_id, tool_name, question, opts)

@spec ask_user_requested(String.t(), String.t(), String.t(), keyword()) :: t()

Build an :ask_user_requested event signalling that the chat loop halted pending a user answer (spec §12.3).

Examples

iex> ALLM.Event.ask_user_requested("call_1", "weather", "Which city?", [])
{:ask_user_requested, %{tool_call_id: "call_1", tool_name: "weather", question: "Which city?", opts: []}}

chat_completed(result)

@spec chat_completed(ALLM.ChatResult.t()) :: t()

Build a :chat_completed event wrapping the final ChatResult.

Examples

iex> result = %ALLM.ChatResult{
...>   thread: %ALLM.Thread{},
...>   final_response: %ALLM.Response{},
...>   halted_reason: :completed
...> }
iex> ALLM.Event.chat_completed(result)
{:chat_completed, %{result: result}}

event?(arg1)

@spec event?(term()) :: boolean()

Return true when value is a well-shaped ALLM.Event.

Accepts any 2-tuple whose first element is an atom in tags/0. For every tag except :raw_chunk and :error, additionally requires the second element to be a map() (the per-tag key contract is documented by each variant constructor, but not enforced here — adapter-emitted events are trusted).

Examples

iex> ALLM.Event.event?({:text_delta, %{id: "a", delta: "b"}})
true

iex> ALLM.Event.event?({:raw_chunk, "any opaque term"})
true

iex> ALLM.Event.event?({:text_delta, "not a map"})
false

iex> ALLM.Event.event?(:not_an_event)
false

message_completed(message)

@spec message_completed(ALLM.Message.t()) :: t()

Build a :message_completed event with the finalized message. The payload carries :finish_reason => nil; use message_completed/2 to populate a specific finish reason. See "Payload extensions" in the module doc.

The payload may also carry an optional :metadata map that ALLM.StreamCollector.apply_event/2 merges into state.metadata — see message_completed/3 and "Payload extensions" in the module doc. This 1-arity helper omits the key (no provider metadata to surface).

Examples

iex> msg = %ALLM.Message{role: :assistant, content: "ok"}
iex> ALLM.Event.message_completed(msg)
{:message_completed, %{message: msg, finish_reason: nil}}

message_completed(message, finish_reason)

@spec message_completed(ALLM.Message.t(), ALLM.Response.finish_reason() | nil) :: t()

Build a :message_completed event with the finalized message and a finish-reason atom (or nil). The guard accepts any atom — the closed ALLM.Response.finish_reason/0 enum is enforced by @type t and by Response.finish_reason/0, not at the event-construction boundary. This lets adapters preserve provider-specific reasons downstream as Response.raw_finish_reason.

See message_completed/3 to additionally attach an optional :metadata map to the payload (folded onto Response.metadata by ALLM.StreamCollector). This 2-arity helper omits the key.

Examples

iex> msg = %ALLM.Message{role: :assistant, content: "ok"}
iex> ALLM.Event.message_completed(msg, :stop)
{:message_completed, %{message: msg, finish_reason: :stop}}

iex> msg = %ALLM.Message{role: :assistant, content: "ok"}
iex> ALLM.Event.message_completed(msg, nil) == ALLM.Event.message_completed(msg)
true

message_completed(message, finish_reason, metadata)

@spec message_completed(ALLM.Message.t(), ALLM.Response.finish_reason() | nil, map()) ::
  t()

Build a :message_completed event with the finalized message, a finish-reason atom (or nil), and an optional :metadata map carrying terminal provider-specific completion data (Phase 10.6).

When metadata is non-empty, ALLM.StreamCollector.apply_event/2 merges it into state.metadata via Map.merge/2, so the values land on Response.metadata after collection. When the map is empty, the key is omitted from the payload to keep the event payload identical to message_completed/2 (no observable difference for downstream consumers).

Worked example: the OpenAI Responses-API streaming adapter accumulates reasoning-summary delta chunks into a string and emits %{reasoning: %{summary: "..."}} here; the value surfaces as Response.metadata.reasoning.summary after stream collection.

Examples

iex> msg = %ALLM.Message{role: :assistant, content: "Final answer."}
iex> ALLM.Event.message_completed(msg, :stop, %{reasoning: %{summary: "Thinking"}})
{:message_completed,
  %{message: msg, finish_reason: :stop, metadata: %{reasoning: %{summary: "Thinking"}}}}

iex> msg = %ALLM.Message{role: :assistant, content: "ok"}
iex> ALLM.Event.message_completed(msg, :stop, %{}) == ALLM.Event.message_completed(msg, :stop)
true

message_started(message)

@spec message_started(ALLM.Message.t()) :: t()

Build a :message_started event with the in-progress assistant message.

Examples

iex> msg = %ALLM.Message{role: :assistant, content: ""}
iex> ALLM.Event.message_started(msg)
{:message_started, %{message: msg}}

step_completed(response, thread)

@spec step_completed(ALLM.Response.t(), ALLM.Thread.t()) :: t()

Build a :step_completed event with the response and the updated thread.

Equivalent to step_completed(response, thread, :auto, []). The payload's :mode key carries the orchestration mode the step ran under so that reducers (StreamCollector's :step_completed fold, multi-turn chat orchestrators) can produce StepResult metadata identical to the non-streaming Chat.step/3 path. See Phase 7 retro F1+F3.

The payload also carries :manual_tool_calls defaulting to [] (Phase 18.3 — per-tool manual partition). When per-tool manual is in effect, the list contains the manual-bucket tool calls; otherwise it is empty. See step_completed/4.

Examples

iex> ALLM.Event.step_completed(%ALLM.Response{output_text: "ok"}, %ALLM.Thread{messages: []})
{:step_completed, %{response: %ALLM.Response{output_text: "ok"}, thread: %ALLM.Thread{messages: []}, mode: :auto, manual_tool_calls: []}}

step_completed(response, thread, mode)

@spec step_completed(ALLM.Response.t(), ALLM.Thread.t(), :auto | :manual) :: t()

Build a :step_completed event with the response, the updated thread, and the orchestration mode (:auto or :manual) the step ran under.

Equivalent to step_completed(response, thread, mode, []) — the payload's :manual_tool_calls key defaults to []. See step_completed/4 to populate the per-tool manual bucket.

Examples

iex> ALLM.Event.step_completed(%ALLM.Response{output_text: "ok"}, %ALLM.Thread{messages: []}, :manual)
{:step_completed, %{response: %ALLM.Response{output_text: "ok"}, thread: %ALLM.Thread{messages: []}, mode: :manual, manual_tool_calls: []}}

step_completed(response, thread, mode, manual_tool_calls)

@spec step_completed(ALLM.Response.t(), ALLM.Thread.t(), :auto | :manual, [
  ALLM.ToolCall.t()
]) :: t()

Build a :step_completed event with the response, the updated thread, the orchestration mode, and the per-tool manual bucket (Phase 18.3).

When mode: :auto and any called tool has manual: true, the chat orchestrator partitions a turn's tool calls into auto + manual buckets; the auto bucket runs eagerly via ToolRunner (with corresponding :tool_execution_* events) and manual_tool_calls carries the manual subset for caller resolution. The list is empty for pure-auto turns and for mode: :manual turns (whole-loop manual surfaces calls via response.tool_calls instead — see Decision #1 in PHASE_18_DESIGN.md).

ALLM.StreamCollector.apply_event/2's :step_completed fold extracts the list and merges it onto state.metadata.manual_tool_calls only when the list is non-empty (an empty list is treated as absence per Decision #12).

Examples

iex> tc = %ALLM.ToolCall{id: "c1", name: "charge", arguments: %{"amount" => 20}}
iex> ALLM.Event.step_completed(%ALLM.Response{output_text: "ok"}, %ALLM.Thread{messages: []}, :auto, [tc])
{:step_completed, %{response: %ALLM.Response{output_text: "ok"}, thread: %ALLM.Thread{messages: []}, mode: :auto, manual_tool_calls: [%ALLM.ToolCall{id: "c1", name: "charge", arguments: %{"amount" => 20}}]}}

tags()

@spec tags() :: [atom()]

Return the closed list of 16 event tag atoms — 14 structured variants plus :raw_chunk and :error.

Examples

iex> tags = ALLM.Event.tags()
iex> length(tags)
16
iex> :raw_chunk in tags and :error in tags
true

text_completed(id, text)

@spec text_completed(String.t() | nil, String.t()) :: t()

Build a :text_completed event with the accumulated text for a message.

Examples

iex> ALLM.Event.text_completed("m_1", "hello")
{:text_completed, %{id: "m_1", text: "hello"}}

text_delta(id, delta)

@spec text_delta(String.t() | nil, String.t()) :: t()

Build a :text_delta event. id may be nil when the adapter doesn't associate deltas with a specific message id.

Examples

iex> ALLM.Event.text_delta("m_1", "hel")
{:text_delta, %{id: "m_1", delta: "hel"}}

tool_call_completed(id, name, arguments, raw_arguments)

@spec tool_call_completed(String.t(), String.t(), map(), String.t()) :: t()

Build a :tool_call_completed event with the parsed arguments map and the original provider JSON string (raw_arguments).

Examples

iex> ALLM.Event.tool_call_completed("call_1", "weather", %{"city" => "SFO"}, ~S({"city":"SFO"}))
{:tool_call_completed, %{id: "call_1", name: "weather", arguments: %{"city" => "SFO"}, raw_arguments: ~S({"city":"SFO"})}}

tool_call_delta(id, arguments_delta)

@spec tool_call_delta(String.t(), String.t()) :: t()

Build a :tool_call_delta event carrying an incremental argument-string fragment for the tool call with the given id.

Examples

iex> ALLM.Event.tool_call_delta("call_1", "{\"city")
{:tool_call_delta, %{id: "call_1", arguments_delta: "{\"city"}}

tool_call_started(id, name)

@spec tool_call_started(String.t(), String.t()) :: t()

Build a :tool_call_started event.

Examples

iex> ALLM.Event.tool_call_started("call_1", "weather")
{:tool_call_started, %{id: "call_1", name: "weather"}}

tool_execution_completed(id, name, result)

@spec tool_execution_completed(String.t(), String.t(), term()) :: t()

Build a :tool_execution_completed event with the handler's opaque result (pre-encoding).

Examples

iex> ALLM.Event.tool_execution_completed("call_1", "weather", "sunny")
{:tool_execution_completed, %{id: "call_1", name: "weather", result: "sunny"}}

tool_execution_started(id, name, arguments)

@spec tool_execution_started(String.t(), String.t(), map()) :: t()

Build a :tool_execution_started event.

Examples

iex> ALLM.Event.tool_execution_started("call_1", "weather", %{"city" => "SFO"})
{:tool_execution_started, %{id: "call_1", name: "weather", arguments: %{"city" => "SFO"}}}

tool_halt(tool_call_id, reason, result)

@spec tool_halt(String.t(), atom(), term()) :: t()

Build a :tool_halt event — the handler returned {:halt, reason, result}.

Examples

iex> ALLM.Event.tool_halt("call_1", :rate_limited, %{retry_after: 30})
{:tool_halt, %{tool_call_id: "call_1", reason: :rate_limited, result: %{retry_after: 30}}}

tool_halt(tool_call_id, reason, result, content)

@spec tool_halt(String.t(), atom(), term(), String.t()) :: t()

Build a :tool_halt event with the pre-encoded tool-result content. The payload's :content key lets ALLM.StreamCollector's :tool_halt fold populate state.tool_results with the sentinel :tool-role message without re-running the encoder. See PHASE_7_DESIGN.md Non-obvious Decision #6 + Phase 7.6 cleanup.

Examples

iex> ALLM.Event.tool_halt("call_1", :rate_limited, %{retry_after: 30}, "encoded-body")
{:tool_halt, %{tool_call_id: "call_1", reason: :rate_limited, result: %{retry_after: 30}, content: "encoded-body"}}

tool_result_encoded(id, content)

@spec tool_result_encoded(String.t(), String.t()) :: t()

Build a :tool_result_encoded event with the provider-ready text content that will become the tool-role message body.

Examples

iex> ALLM.Event.tool_result_encoded("call_1", "sunny")
{:tool_result_encoded, %{id: "call_1", content: "sunny"}}