# `Gralkor.Distill`
[🔗](https://github.com/elimydlarz/gralkor/blob/main/lib/gralkor/distill.ex#L1)

Render a list of conversation turns into an episode body suitable for
ingesting into the knowledge graph.

For each turn that contains a `"behaviour"` message, the configured LLM
distils the behaviour into a first-person, past-tense summary, rendered as
`Assistant: (behaviour: {summary})` before the assistant text. Turns
without a behaviour message skip the LLM entirely.

Per-turn distillation is best-effort: on any failure (LLM error or raised
exception) the behaviour line for that turn is dropped and the
user/assistant text is preserved, so the surrounding turns still produce
output.
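As a rough sketch of the rendered shape (only the `Assistant: (behaviour: {summary})` line is specified above; the user/assistant labels and the summary text here are illustrative assumptions), a turn whose behaviour distils to "I searched the changelog" might render as:

```
User: What changed in v2?
Assistant: (behaviour: I searched the changelog)
Assistant: v2 renamed the config keys.
```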

Turns with behaviour are distilled in parallel via `Task.async_stream`.

See `ex-format-transcript` in `gralkor/TEST_TREES.md`.

# `distill_fn`

```elixir
@type distill_fn() :: (String.t() -> {:ok, String.t()} | {:error, term()}) | nil
```
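
Any 1-arity function matching this type will do. As a minimal sketch, the stub below fakes a first-person past-tense summary without calling an LLM (the string transformation is purely hypothetical; a real `distill_fn` would wrap an LLM client and may return `{:error, reason}`):

```elixir
# Hypothetical stub: produces a canned first-person summary instead of
# calling an LLM. Useful for tests or offline rendering.
distill_fn = fn behaviour_text ->
  {:ok, "I " <> String.downcase(behaviour_text)}
end

distill_fn.("Searched the changelog")
# => {:ok, "I searched the changelog"}
```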

# `turn`

```elixir
@type turn() :: [Gralkor.Message.t()]
```

# `distill_schema`

```elixir
@spec distill_schema() :: keyword()
```

Schema for the structured-output response the LLM returns when distilling
a behaviour-containing turn.

Used by callers that wire `format_transcript/2` up to req_llm:

    schema = Gralkor.Distill.distill_schema()
    {:ok, response} = ReqLLM.generate_object(model, prompt, schema)
    ReqLLM.Response.object(response).behaviour

# `format_transcript`

```elixir
@spec format_transcript([turn()], distill_fn()) :: String.t()
```

Render `turns`, where each turn is a list of canonical `Gralkor.Message`
structs, into the episode body string.

`distill_fn` is the LLM caller used to summarise behaviour messages. Pass
`nil` to skip distillation entirely (behaviour lines are silently omitted).
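
Putting the pieces together, a caller might wire this up to req_llm as follows (a sketch, assuming `model` and `turns` are already in scope; the wrapper mirrors the `distill_schema/0` example above):

```elixir
schema = Gralkor.Distill.distill_schema()

# Wrap the LLM call so failures surface as {:error, reason} and are
# handled by the module's best-effort per-turn behaviour.
distill_fn = fn behaviour_text ->
  with {:ok, response} <- ReqLLM.generate_object(model, behaviour_text, schema) do
    {:ok, ReqLLM.Response.object(response).behaviour}
  end
end

body = Gralkor.Distill.format_transcript(turns, distill_fn)

# Or skip distillation entirely; behaviour lines are silently omitted:
body_without_behaviour = Gralkor.Distill.format_transcript(turns, nil)
```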

---

*Consult [api-reference.md](api-reference.md) for complete listing*
