Builds the messages to append for the next LLM turn.
Constructs:
- An assistant message containing the tool call requests (so the LLM sees its own tool calls in conversation history)
- Tool result messages (one per result) in the provider-neutral format
These messages are stored in ctx.result_messages — the outer loop
reads them and appends to the accumulated message list. This mirrors
how a step-oriented loop executor accumulates step_results outside
the pipeline.
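The outer-loop accumulation described above can be sketched as follows. This is an illustrative assumption, not the project's actual executor: the module name `LoopSketch`, the shape of `step_fun`, and the map keys are hypothetical; only the idea that the loop appends `ctx.result_messages` to the running history comes from the text.

```elixir
defmodule LoopSketch do
  # Hypothetical loop: call the LLM step, append the messages the stage
  # stored in ctx.result_messages, and repeat until no tool calls remain.
  def loop(messages, step_fun) do
    ctx = step_fun.(messages)
    next = messages ++ ctx.result_messages

    if ctx.tool_calls == [] do
      next
    else
      loop(next, step_fun)
    end
  end
end
```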
Summary

Functions

call(ctx, opts) — Builds assistant + tool result messages from the current iteration.

Functions

call(ctx, opts)
@spec call(LlmCore.Agent.Context.t(), keyword()) :: LlmCore.Agent.Context.t()
Builds assistant + tool result messages from the current iteration.
Parameters

ctx — %Context{} with response, tool_calls, and tool_results
opts — ALF stage options (unused)
Returns
Updated %Context{} with result_messages populated.
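A minimal sketch of the message-building step, assuming a provider-neutral map format with an assistant message carrying the tool calls followed by one "tool" message per result. The module name `MessagesSketch`, the map keys (`:role`, `:tool_call_id`, `:content`), and the `%{id: _, output: _}` result shape are assumptions for illustration, not the library's actual structs.

```elixir
defmodule MessagesSketch do
  # Builds the list stored in ctx.result_messages: one assistant message
  # echoing the tool calls, then one tool-result message per result.
  def build_result_messages(tool_calls, tool_results) do
    assistant = %{role: "assistant", content: nil, tool_calls: tool_calls}

    results =
      Enum.map(tool_results, fn %{id: id, output: output} ->
        %{role: "tool", tool_call_id: id, content: output}
      end)

    [assistant | results]
  end
end
```

The real stage would wrap this in `call/2`, reading `tool_calls` and `tool_results` from the `%Context{}` and writing the list back into `result_messages`.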