# `Gralkor.Distill`
[🔗](https://github.com/elimydlarz/gralkor/blob/main/lib/gralkor/distill.ex#L1)

Render a list of conversation turns into an episode body suitable for
ingestion into the knowledge graph.

Each turn that contains a `"behaviour"` message gets distilled by the
configured LLM into a first-person past-tense summary and rendered as
`{agent_name}: (behaviour: {summary})` before the assistant text. Turns
without behaviour skip the LLM entirely.

Distillation is best-effort per turn: any failure (LLM error or raised
exception) drops the behaviour line for that turn while preserving its
user/assistant text, and the surrounding turns still produce output.

Turns with behaviour are distilled in parallel via `Task.async_stream`.
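The best-effort, parallel behaviour above can be sketched as follows. This is a standalone illustration, not the module's actual code; `fake_distill` is a stand-in for the configured LLM caller:

```elixir
# Stand-in for the LLM caller (hypothetical): fails on one input so
# the best-effort fallback is visible.
fake_distill = fn
  "broken" -> raise "LLM unavailable"
  text -> {:ok, "I " <> text}
end

behaviours = ["planned the trip", "broken", "booked the flight"]

summaries =
  behaviours
  |> Task.async_stream(
    fn text ->
      # Rescue inside the task so one failure cannot take down the
      # stream (Task.async_stream links tasks to the caller).
      try do
        fake_distill.(text)
      rescue
        _ -> {:error, :exception}
      end
    end,
    ordered: true
  )
  |> Enum.map(fn
    {:ok, {:ok, summary}} -> summary
    # {:error, _} result or task failure: drop the behaviour line.
    _other -> nil
  end)

# summaries == ["I planned the trip", nil, "I booked the flight"]
```

Each turn fails or succeeds independently, so a single flaky LLM call only costs that turn its behaviour line.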

See `ex-format-transcript` in `gralkor/TEST_TREES.md`.

# `distill_fn`

```elixir
@type distill_fn() :: (String.t() -> {:ok, String.t()} | {:error, term()}) | nil
```
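
Any one-arity function matching this type works. For example, a hypothetical stub that frames the behaviour text without calling an LLM at all (handy in tests):

```elixir
# Hypothetical stub conforming to distill_fn(): no LLM involved.
stub_distill = fn behaviour_text ->
  case String.trim(behaviour_text) do
    "" -> {:error, :empty_behaviour}
    text -> {:ok, "I noted: " <> text}
  end
end

stub_distill.("checking the calendar")
# => {:ok, "I noted: checking the calendar"}
```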

# `turn`

```elixir
@type turn() :: [Gralkor.Message.t()]
```

# `distill_schema`

```elixir
@spec distill_schema() :: keyword()
```

Schema for the structured-output response the LLM returns when distilling
a behaviour-containing turn.
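The actual keys are defined by the module and are not documented here. Purely as an illustration of the shape such a structured-output keyword schema might take (none of these keys or values are confirmed by the source):

```elixir
# Hypothetical keyword-list schema for a single-string response.
# The real distill_schema/0 keys may differ entirely.
schema = [
  name: "behaviour_summary",
  schema: %{
    type: "object",
    properties: %{summary: %{type: "string"}},
    required: ["summary"]
  }
]
```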

# `format_transcript`

```elixir
@spec format_transcript([turn()], distill_fn(), String.t()) :: String.t()
```

Render `turns` (each element a list of canonical `Gralkor.Message` structs)
into the episode body string.

`distill_fn` is the LLM caller used to summarise behaviour messages. Pass
`nil` to skip distillation entirely (behaviour lines are silently omitted).

`agent_name` is required and must be non-blank; it labels assistant and
behaviour lines (e.g. `"Susu: hello"`, `"Susu: (behaviour: thought)"`).
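
To make the line format concrete, here is a minimal standalone rendering sketch for a single turn. The message shape (`%{role: ..., content: ...}`) and the helper itself are assumptions for illustration; they are not `Gralkor.Message`'s actual fields or the module's real code:

```elixir
# Hypothetical renderer: one turn plus an (optional) distilled
# behaviour summary into episode-body lines.
render_turn = fn turn, agent_name, behaviour_summary ->
  lines =
    Enum.flat_map(turn, fn
      %{role: :user, content: content} ->
        ["user: " <> content]

      %{role: :assistant, content: content} ->
        # Behaviour line goes immediately before the assistant text.
        behaviour =
          if behaviour_summary,
            do: ["#{agent_name}: (behaviour: #{behaviour_summary})"],
            else: []

        behaviour ++ ["#{agent_name}: " <> content]
    end)

  Enum.join(lines, "\n")
end

render_turn.(
  [%{role: :user, content: "hi"}, %{role: :assistant, content: "hello"}],
  "Susu",
  "I greeted the user"
)
# => "user: hi\nSusu: (behaviour: I greeted the user)\nSusu: hello"
```

Passing `nil` for the summary mirrors the `distill_fn = nil` case: the behaviour line is simply omitted.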

---

*Consult [api-reference.md](api-reference.md) for complete listing*
