Render a list of conversation turns into an episode body suitable for ingestion into the knowledge graph.
Each turn that contains a "behaviour" message gets distilled by the
configured LLM into a first-person past-tense summary and rendered as
Assistant: (behaviour: {summary}) before the assistant text. Turns
without behaviour skip the LLM entirely.
Distillation is best-effort per turn: any failure (an LLM error or a raised exception) drops the behaviour line for that turn while preserving the user and assistant text, so the surrounding turns still produce output.
Turns with behaviour are distilled in parallel via Task.async_stream.
See ex-format-transcript in gralkor/TEST_TREES.md.
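A minimal sketch of the per-turn rendering and best-effort distillation described above. The message shape (maps with :role and :content, with behaviour carried as a :behaviour role) and the distill_fn contract (returns {:ok, summary} or an error) are assumptions for illustration, not the actual Gralkor.Message API:

```elixir
defmodule DistillSketch do
  # Render a list of turns in parallel, mirroring the Task.async_stream
  # approach described above. Results come back in order.
  def render(turns, distill_fn) do
    turns
    |> Task.async_stream(&render_turn(&1, distill_fn))
    |> Enum.map_join("\n", fn {:ok, body} -> body end)
  end

  # Render one turn. If distillation succeeds, the assistant line is
  # prefixed with "(behaviour: {summary})"; any failure (error tuple,
  # raised exception, or a nil distill_fn) falls back to the plain
  # assistant text, so the turn still renders.
  def render_turn(turn, distill_fn) do
    behaviour =
      Enum.find_value(turn, fn
        %{role: :behaviour, content: c} -> c
        _ -> nil
      end)

    turn
    |> Enum.reject(&(&1.role == :behaviour))
    |> Enum.map_join("\n", fn
      %{role: :user, content: c} ->
        "User: " <> c

      %{role: :assistant, content: c} ->
        case safe_distill(distill_fn, behaviour) do
          {:ok, summary} -> "Assistant: (behaviour: #{summary}) " <> c
          :skip -> "Assistant: " <> c
        end
    end)
  end

  # Turns without behaviour, or without a distiller, skip the LLM entirely.
  defp safe_distill(nil, _behaviour), do: :skip
  defp safe_distill(_distill_fn, nil), do: :skip

  defp safe_distill(distill_fn, behaviour) do
    case distill_fn.(behaviour) do
      {:ok, summary} -> {:ok, summary}
      _error -> :skip
    end
  rescue
    _exception -> :skip
  end
end
```

The per-turn rescue is what makes the pipeline best-effort: a crash inside one distillation never takes down the rendering of the other turns.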
Summary
Functions
Schema for the structured-output response the LLM returns when distilling a behaviour-containing turn.
Render turns (a list of turns; each turn a list of canonical Messages)
into the episode body string.
Types
@type turn() :: [Gralkor.Message.t()]
Functions
@spec distill_schema() :: keyword()
Schema for the structured-output response the LLM returns when distilling a behaviour-containing turn.
Used by callers that wire format_transcript/2 up to req_llm:
    schema = Gralkor.Distill.distill_schema()
    {:ok, response} = ReqLLM.generate_object(model, prompt, schema)
    ReqLLM.Response.object(response).behaviour
@spec format_transcript([turn()], distill_fn()) :: String.t()
Render turns (a list of turns; each turn a list of canonical Messages)
into the episode body string.
distill_fn is the LLM caller used to summarise behaviour messages. Pass
nil to skip distillation entirely (behaviour lines are silently omitted).
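Putting the two functions together, the wiring might look like the following sketch. It only uses the calls shown in the distill_schema/0 example above; model, turns, and the exact shape distill_fn must return are assumptions, not confirmed by this page:

```elixir
schema = Gralkor.Distill.distill_schema()

# Hypothetical distill_fn: call the LLM with the structured-output schema
# and extract the behaviour summary from the response.
distill_fn = fn behaviour_text ->
  case ReqLLM.generate_object(model, behaviour_text, schema) do
    {:ok, response} -> {:ok, ReqLLM.Response.object(response).behaviour}
    {:error, reason} -> {:error, reason}
  end
end

body = Gralkor.Distill.format_transcript(turns, distill_fn)

# Or, skipping distillation entirely (behaviour lines are omitted):
body_without_behaviour = Gralkor.Distill.format_transcript(turns, nil)
```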