Gralkor.Interpret (gralkor_ex v2.1.2)


Filter retrieved graph facts down to those relevant to the conversation, using the configured LLM.

Two responsibilities, each its own tree:

  • build_interpretation_context/3 — pure: assemble the LLM prompt from conversation messages and a formatted facts string, dropping oldest messages until the prompt fits the configured char budget.
  • interpret_facts/3 — call the LLM with that prompt and a structured-output schema; return the list of relevant facts the LLM selected.

See ex-interpret and ex-interpret-context in gralkor/TEST_TREES.md.

Summary

Functions

build_interpretation_context(messages, facts_text, opts \\ [])

  Assemble the LLM prompt from conversation messages and the formatted facts.

interpret_facts(messages, facts_text, interpret_fn, opts \\ [])

  Run the LLM over the conversation context + facts text, returning the filtered list of relevant facts.

interpret_schema()

  Schema for the structured-output response the LLM returns.

Types

interpret_fn()

@type interpret_fn() :: (String.t() -> {:ok, [String.t()]} | {:error, term()})
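Any function that maps a prompt string to a fact list (or an error) satisfies this type. A minimal stub, useful for testing without a real model — the fact strings and the `"Facts:"` marker below are illustrative, not part of the library:

```elixir
# A stub conforming to interpret_fn(): prompt string in,
# {:ok, [fact]} or {:error, reason} out. Purely illustrative.
interpret = fn prompt ->
  if String.contains?(prompt, "Facts:") do
    {:ok, ["alice -> knows -> bob"]}
  else
    {:error, :no_facts_section}
  end
end
```

A stub like this can be passed as the `interpret_fn` argument so `interpret_facts` exercises its filtering path deterministically in tests.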

Functions

build_interpretation_context(messages, facts_text, opts \\ [])

@spec build_interpretation_context([Gralkor.Message.t()], String.t(), keyword()) ::
  String.t()

Assemble the LLM prompt from conversation messages and the formatted facts.

Drops oldest messages until the assembled prompt fits the char budget (opts[:budget], default 8000).
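The drop-oldest strategy can be sketched in isolation. This is a simplified standalone reimplementation under assumed shapes — `{role, text}` tuples instead of Gralkor.Message structs, and a made-up prompt layout — not the module's actual code:

```elixir
# Sketch of the drop-oldest-until-it-fits loop (illustrative only).
defmodule TrimSketch do
  @default_budget 8000

  def build(messages, facts_text, opts \\ []) do
    budget = Keyword.get(opts, :budget, @default_budget)
    do_build(messages, facts_text, budget)
  end

  # Re-render after each drop; stop when the prompt fits the budget
  # or there is nothing left to drop.
  defp do_build(messages, facts_text, budget) do
    prompt = render(messages, facts_text)

    cond do
      String.length(prompt) <= budget -> prompt
      messages == [] -> prompt
      true -> do_build(tl(messages), facts_text, budget)
    end
  end

  defp render(messages, facts_text) do
    convo =
      messages
      |> Enum.map(fn {role, text} -> "#{role}: #{text}" end)
      |> Enum.join("\n")

    "Facts:\n#{facts_text}\n\nConversation:\n#{convo}"
  end
end
```

The key property: messages are dropped from the head of the list (oldest first), so the most recent turns survive trimming.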

interpret_facts(messages, facts_text, interpret_fn, opts \\ [])

@spec interpret_facts([Gralkor.Message.t()], String.t(), interpret_fn(), keyword()) ::
  [String.t()]

Run the LLM over the conversation context + facts text, returning the filtered list of relevant facts.

Raises if the LLM call returns {:error, _} or a non-list response.
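That raise-on-failure behavior amounts to unwrapping the `interpret_fn` result: pass lists through, raise on anything else. A hedged sketch of the contract (illustrative, not the module's actual code):

```elixir
# Unwrap-or-raise on the interpret_fn() result shape.
unwrap = fn
  {:ok, facts} when is_list(facts) -> facts
  other -> raise "interpretation failed: #{inspect(other)}"
end

unwrap.({:ok, ["fact a", "fact b"]})
```

An `{:error, reason}` tuple, or an `{:ok, value}` where `value` is not a list, hits the second clause and raises.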

interpret_schema()

@spec interpret_schema() :: keyword()

Schema for the structured-output response the LLM returns.

Wired up by callers that drive interpret_facts/3 via req_llm:

schema = Gralkor.Interpret.interpret_schema()
{:ok, response} = ReqLLM.generate_object(model, prompt, schema)
ReqLLM.Response.object(response).relevantFacts