Arcana.Ask (Arcana v1.3.3)

RAG (Retrieval-Augmented Generation) question answering.

This module handles the core ask workflow:

  1. Search for relevant context chunks
  2. Build a prompt with the context
  3. Call the LLM for an answer
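
Conceptually, ask/2 behaves like the pipeline below. This is an illustrative sketch only; search/2, build_prompt/2, and generate/2 are hypothetical helpers, not Arcana's actual internals:

# Illustrative sketch of the three-step workflow, not Arcana's private API.
def ask(question, opts) do
  with {:ok, context} <- search(question, opts),        # 1. retrieve chunks
       prompt = build_prompt(question, context),        # 2. assemble the prompt
       {:ok, answer} <- generate(opts[:llm], prompt) do # 3. call the LLM
    {:ok, answer, context}
  end
end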

Usage

{:ok, answer, context} = Arcana.ask("What is X?",
  repo: MyApp.Repo,
  llm: "openai:gpt-4o-mini"
)

Functions

ask(question, opts)

Asks a question using retrieved context from the knowledge base.

Performs a search to find relevant chunks, then passes them along with the question to an LLM for answer generation.

Options

  • :repo - The Ecto repo to use (required)
  • :llm - Any term implementing the Arcana.LLM protocol, such as the string "openai:gpt-4o-mini" (required)
  • :limit - Maximum number of context chunks to retrieve (default: 5)
  • :source_id - Filter context to a specific source
  • :threshold - Minimum similarity score for context (default: 0.0)
  • :mode - Search mode: :semantic (default), :fulltext, or :hybrid
  • :collection - Filter to a specific collection
  • :collections - Filter to multiple collections
  • :prompt - Custom prompt function of the form fn question, context -> system_prompt_string end, used to build the system prompt from the question and the retrieved context

Examples

# Basic usage
{:ok, answer, context} = Arcana.ask("What is Elixir?",
  repo: MyApp.Repo,
  llm: "openai:gpt-4o-mini"
)
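
The retrieval options compose freely. A sketch under assumptions (the question and the "docs" collection name are made up for illustration):

# Hybrid search with tighter retrieval settings
{:ok, answer, context} = Arcana.ask("How do I configure search?",
  repo: MyApp.Repo,
  llm: "openai:gpt-4o-mini",
  mode: :hybrid,       # combine semantic and full-text search
  limit: 3,            # retrieve at most 3 chunks
  threshold: 0.7,      # drop low-similarity matches
  collection: "docs"   # illustrative collection name
)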

# With custom prompt
{:ok, answer, _} = Arcana.ask("Summarize the docs",
  repo: MyApp.Repo,
  llm: my_llm,
  prompt: fn question, _context ->
    "Be concise. Question: #{question}"
  end
)
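
A prompt function can also fold the retrieved context into the system prompt. The sketch below assumes each context entry exposes a :text field; verify this against Arcana's actual chunk shape before relying on it:

# Custom prompt that inlines the retrieved chunks
# (assumes each chunk has a :text field -- an assumption, not confirmed API)
{:ok, answer, _} = Arcana.ask("Summarize the docs",
  repo: MyApp.Repo,
  llm: my_llm,
  prompt: fn question, context ->
    sources = Enum.map_join(context, "\n---\n", & &1.text)

    """
    Answer using only the sources below.

    Sources:
    #{sources}

    Question: #{question}
    """
  end
)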