Filter retrieved graph facts down to those relevant to the conversation, using the configured LLM.
Two responsibilities, each in its own tree:

- `build_interpretation_context/3` — pure: assemble the LLM prompt from conversation messages and a formatted facts string, dropping the oldest messages until the prompt fits the configured char budget. Renders role labels using `agent_name`.
- `interpret_facts/4` — call the LLM with that prompt and a structured-output schema; return the list of relevant facts the LLM selected.
See ex-interpret and ex-interpret-context in gralkor/TEST_TREES.md.
## Summary

- `build_interpretation_context/3` — assemble the LLM prompt from conversation messages and the formatted facts.
- `interpret_facts/4` — run the LLM over the conversation context plus facts text, returning the filtered list of relevant facts.
- `interpret_schema/0` — schema for the structured-output response the LLM returns.

## Functions
@spec build_interpretation_context([Gralkor.Message.t()], String.t(), String.t(), keyword()) :: String.t()
Assemble the LLM prompt from conversation messages and the formatted facts.
Drops the oldest messages until the assembled prompt fits the char budget (`opts[:budget]`, default 8000). Raises on a blank `agent_name`.
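The trimming loop described above can be sketched as follows. This is a minimal illustration, not the real implementation: the `ContextSketch` module name, the message-map shape (`:role`/`:content`), and the prompt template are all assumptions; only the budget default, the oldest-first dropping, the `agent_name` role label, and the blank-`agent_name` raise come from the docs above.

```elixir
defmodule ContextSketch do
  @default_budget 8000

  # Hypothetical sketch: messages are plain maps with :role and
  # :content; the real Gralkor.Message struct may carry more fields.
  def build_interpretation_context(messages, facts_text, agent_name, opts \\ []) do
    if agent_name in [nil, ""], do: raise(ArgumentError, "agent_name is blank")
    budget = Keyword.get(opts, :budget, @default_budget)
    prompt = render_prompt(messages, facts_text, agent_name)

    cond do
      String.length(prompt) <= budget -> prompt
      # Nothing left to drop: return the over-budget prompt as-is.
      messages == [] -> prompt
      # Drop the oldest message and try again.
      true -> build_interpretation_context(tl(messages), facts_text, agent_name, opts)
    end
  end

  defp render_prompt(messages, facts_text, agent_name) do
    lines =
      Enum.map(messages, fn
        # Assistant turns are labeled with the agent's name.
        %{role: :assistant, content: c} -> "#{agent_name}: #{c}"
        %{role: role, content: c} -> "#{role}: #{c}"
      end)

    Enum.join(lines ++ ["", "Facts:", facts_text], "\n")
  end
end
```

The recursion makes the budget check re-run after every drop, so the result is the longest suffix of the conversation that fits.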
@spec interpret_facts([Gralkor.Message.t()], String.t(), interpret_fn(), String.t(), keyword()) :: [String.t()]
Run the LLM over the conversation context + facts text, returning the filtered list of relevant facts.
Raises if the LLM call returns `{:error, _}` or a non-list response, or if `agent_name` is missing or blank.
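The raise-on-bad-response behavior might look like this sketch. The `interpret_fn` calling convention assumed here (prompt and schema in, `{:ok, facts} | {:error, reason}` out) is a guess — the spec only names the `interpret_fn()` type — and the inline prompt line is a trivial stand-in for `build_interpretation_context/3`.

```elixir
defmodule InterpretSketch do
  # Stand-in schema; see interpret_schema/0 in the real module.
  @schema [type: :object]

  def interpret_facts(messages, facts_text, interpret_fn, agent_name, _opts \\ []) do
    if agent_name in [nil, ""], do: raise(ArgumentError, "agent_name is blank")

    # Trivial stand-in for build_interpretation_context/3.
    prompt = Enum.map_join(messages, "\n", & &1.content) <> "\n\nFacts:\n" <> facts_text

    case interpret_fn.(prompt, @schema) do
      {:ok, facts} when is_list(facts) -> facts
      {:ok, other} -> raise "expected a list of facts, got: #{inspect(other)}"
      {:error, reason} -> raise "LLM call failed: #{inspect(reason)}"
    end
  end
end
```

Taking the LLM call as a function argument keeps the module testable: a test can pass a fake `interpret_fn` instead of hitting a real model.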
@spec interpret_schema() :: keyword()
Schema for the structured-output response the LLM returns.
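The docs above don't show the schema's keys, so the following is purely a hypothetical shape: a JSON-Schema-like keyword list declaring a single array of strings. The real keys depend on whatever structured-output library the configured LLM client uses.

```elixir
defmodule SchemaSketch do
  # Hypothetical shape only — the actual keys and nesting are
  # determined by the structured-output library in use.
  def interpret_schema do
    [
      type: :object,
      required: [:relevant_facts],
      properties: [
        relevant_facts: [type: :array, items: [type: :string]]
      ]
    ]
  end
end
```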