OpenAI-compatible API provider implementing the Provider behaviour.
Works with OpenAI, OpenRouter, Together, Groq, and local vLLM: any endpoint that speaks the OpenAI chat completions format.
Configuration
Defaults to OpenAI. Override per-call via opts or globally via app config:
# Per-call
OpenAI.send(prompt,
  base_url: "https://openrouter.ai/api/v1",
  api_key: System.get_env("OPENROUTER_API_KEY"),
  model: "anthropic/claude-sonnet-4-20250514"
)

# Global (application config)
config :llm_core, :openai_base_url, "https://openrouter.ai/api/v1"
config :llm_core, :openai_api_key, System.get_env("OPENROUTER_API_KEY")

Auth Resolution Order
1. opts[:api_key] (per-call)
2. Application.get_env(:llm_core, :openai_api_key)
3. System.get_env("OPENAI_API_KEY")
URL Resolution Order
1. opts[:base_url] (per-call)
2. Application.get_env(:llm_core, :openai_base_url)
3. "https://api.openai.com/v1" (default)

A sketch of both resolution chains appears below.
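As a rough illustration of how both chains fall through (the helper names resolve_api_key/1 and resolve_base_url/1 are hypothetical, not part of this module's public API):

# Hypothetical helpers mirroring the documented resolution order.
# || skips nil values, so each source falls through to the next.
defp resolve_api_key(opts) do
  opts[:api_key] ||
    Application.get_env(:llm_core, :openai_api_key) ||
    System.get_env("OPENAI_API_KEY")
end

defp resolve_base_url(opts) do
  opts[:base_url] ||
    Application.get_env(:llm_core, :openai_base_url) ||
    "https://api.openai.com/v1"
end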
Summary
Functions
Checks if an OpenAI-compatible API key is configured.
Returns the OpenAI capability map including streaming, structured output, tool use, vision, and supported models.
Returns :api — OpenAI is a cloud API provider.
Sends a prompt to the OpenAI-compatible chat completions endpoint.
Streams a response from the OpenAI-compatible chat completions endpoint.
Functions
@spec available?() :: boolean()
Checks if an OpenAI-compatible API key is configured.
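For example, a caller might guard on availability before sending; here prompt is assumed to be built elsewhere, and the :no_api_key atom is illustrative rather than this library's error type:

if OpenAI.available?() do
  OpenAI.send(prompt, [])
else
  # Illustrative fallback; not the library's LlmCore.LLM.Error.t()
  {:error, :no_api_key}
end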
@spec capabilities() :: LlmCore.LLM.Provider.capabilities()
Returns the OpenAI capability map including streaming, structured output, tool use, vision, and supported models.
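A hypothetical capability check, assuming the map carries a boolean under a :streaming key (the key name is inferred from the summary above, not confirmed by this page):

caps = OpenAI.capabilities()

# :streaming is an assumed key based on the description above
if caps.streaming do
  {:ok, chunks} = OpenAI.stream(prompt, [])
end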
@spec provider_type() :: :api
Returns :api — OpenAI is a cloud API provider.
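One plausible use is dispatching provider-agnostic code on this value, where provider is any module implementing the Provider behaviour; require_network!/0 is a hypothetical application helper:

case provider.provider_type() do
  :api -> require_network!()  # hypothetical helper, not part of this library
  _other -> :ok
end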
@spec send(LlmCore.LLM.Provider.prompt(), keyword()) :: {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}
Sends a prompt to the OpenAI-compatible chat completions endpoint.
When opts[:tools] contains a list of LlmToolkit.Tool structs, tool
definitions are encoded into the request body. If the model responds
with finish_reason: "tool_calls", the returned Response.tool_calls
will contain decoded LlmToolkit.Tool.Call structs.
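A sketch of a tool-use round trip, assuming weather_tool is an LlmToolkit.Tool built elsewhere and handle_tool_call/1 is a hypothetical application callback:

case OpenAI.send(prompt, tools: [weather_tool]) do
  # Model chose to call tools; calls are decoded LlmToolkit.Tool.Call structs
  {:ok, %{tool_calls: calls}} when is_list(calls) and calls != [] ->
    Enum.map(calls, &handle_tool_call/1)

  # Plain completion, no tool calls
  {:ok, response} ->
    response

  {:error, error} ->
    error
end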
@spec stream(LlmCore.LLM.Provider.prompt(), keyword()) :: {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}
Streams a response from the OpenAI-compatible chat completions endpoint.
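On success the value is an Enumerable, so it can be consumed lazily. A minimal sketch, assuming each chunk is printable iodata (the chunk shape is not specified on this page):

{:ok, chunks} = OpenAI.stream(prompt, [])

chunks
|> Stream.each(&IO.write/1)  # print each chunk as it arrives
|> Stream.run()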