Delegating provider that runs requests through the req_llm library.

req_llm ships 18+ providers (OpenAI, Anthropic, Ollama, LM Studio,
OpenRouter, Groq, Together, DeepInfra, Vercel, Mistral, Gemini, Cohere,
Bedrock, llama.cpp, vLLM, …) with a canonical data model and
cost/context metadata from models.dev. ExAthena delegates to req_llm
instead of maintaining its own per-provider modules.

Usage

Callers identify a model via req_llm's two-part spec, either a
"provider:model-id" string or a {provider, model_id} tuple:
ExAthena.query("hi",
provider: :req_llm,
model: "ollama:llama3.1",
base_url: "http://localhost:11434"
)
ExAthena.query("hi",
provider: :req_llm,
model: "anthropic:claude-opus-4-5",
api_key: System.get_env("ANTHROPIC_API_KEY")
)The provider atoms :ollama, :openai, :openai_compatible, :llamacpp,
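
The tuple form of the spec works the same way; a minimal sketch (the
model id below is illustrative, not one documented above):

    ExAthena.query("hi",
      provider: :req_llm,
      # illustrative model id, not taken from the examples above
      model: {:openai, "gpt-4o-mini"}
    )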

The provider atoms :ollama, :openai, :openai_compatible, :llamacpp,
:claude, and :mock continue to route here via ExAthena.Config and are
translated to the appropriate req_llm model spec.
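
A minimal sketch of what that translation could look like; the helper
name, clause set, and the req_llm provider ids it emits are
assumptions, not ExAthena's actual code:

    # Hypothetical mapping from legacy provider atoms to req_llm specs.
    defp to_req_llm_spec(:claude, model_id), do: "anthropic:#{model_id}"
    defp to_req_llm_spec(:ollama, model_id), do: "ollama:#{model_id}"
    defp to_req_llm_spec(:openai, model_id), do: "openai:#{model_id}"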

Capabilities

Reported statically as :native_tool_calls, :streaming, and :json_mode
all true, reflecting req_llm's feature superset. The loop's
auto-fallback handles individual-model quirks (e.g. Ollama models
without native tool calls).
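
A sketch of how such a static report might look, assuming a
capabilities/1 callback (the callback name and return shape are
assumptions, not ExAthena's actual provider contract):

    # Hypothetical callback: always advertise the superset and let the
    # loop's auto-fallback compensate for models missing a feature.
    def capabilities(_model) do
      %{native_tool_calls: true, streaming: true, json_mode: true}
    end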