# `Gralkor.Interpret`
[🔗](https://github.com/elimydlarz/gralkor/blob/main/lib/gralkor/interpret.ex#L1)

Filter retrieved graph facts down to those relevant to the conversation,
using the configured LLM.

Two responsibilities, each its own tree:

  * `build_interpretation_context/3` — pure: assemble the LLM prompt from
    conversation messages and a formatted facts string, dropping oldest
    messages until the prompt fits the configured char budget.
  * `interpret_facts/4` — call the LLM with that prompt and a
    structured-output schema; return the list of relevant facts the LLM
    selected.

See `ex-interpret` and `ex-interpret-context` in `gralkor/TEST_TREES.md`.

# `interpret_fn`

```elixir
@type interpret_fn() :: (String.t() -> {:ok, [String.t()]} | {:error, term()})
```

# `build_interpretation_context`

```elixir
@spec build_interpretation_context([Gralkor.Message.t()], String.t(), keyword()) ::
  String.t()
```

Assemble the LLM prompt from conversation messages and the formatted facts.

Drops oldest messages until the assembled prompt fits the char budget
(`opts[:budget]`, default 8000).

# `interpret_facts`

```elixir
@spec interpret_facts([Gralkor.Message.t()], String.t(), interpret_fn(), keyword()) ::
  [String.t()]
```

Run the LLM over the conversation context + facts text, returning the
filtered list of relevant facts.

Raises if the LLM call returns `{:error, _}` or a non-list response.
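
Because `interpret_fn` is just a one-arity function from prompt to
`{:ok, facts}`, a stub makes the contract easy to see. A sketch, not the
production wiring (message fields and fact strings are assumed):

```elixir
messages = [%Gralkor.Message{role: :user, content: "Where does Ada work?"}]
facts = "Ada -[works_at]-> Initech\nAda -[lives_in]-> Perth"

# Stub interpreter that pretends only the first fact is relevant.
stub = fn _prompt -> {:ok, ["Ada -[works_at]-> Initech"]} end

relevant = Gralkor.Interpret.interpret_facts(messages, facts, stub, [])

# A failing call raises rather than returning an error tuple:
# bad = fn _prompt -> {:error, :timeout} end
# Gralkor.Interpret.interpret_facts(messages, facts, bad, [])  # raises
```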

# `interpret_schema`

```elixir
@spec interpret_schema() :: keyword()
```

Schema for the structured-output response the LLM returns.

Wired up by callers that drive `interpret_facts/4` via req_llm:

    schema = Gralkor.Interpret.interpret_schema()
    {:ok, response} = ReqLLM.generate_object(model, prompt, schema)
    ReqLLM.Response.object(response).relevantFacts

---

*Consult [api-reference.md](api-reference.md) for a complete listing*
