Providers
OpenResponses routes each request to a provider adapter based on the model name. Adapters translate between the Open Responses spec and the provider's native API.
Configuration
API keys and per-provider options live in config/runtime.exs:
```elixir
config :open_responses, :provider_config, %{
  openai: [
    api_key: System.fetch_env!("OPENAI_API_KEY")
  ],
  anthropic: [
    api_key: System.fetch_env!("ANTHROPIC_API_KEY")
  ],
  gemini: [
    api_key: System.fetch_env!("GEMINI_API_KEY")
  ]
}
```

Routing
The routing table maps model name patterns to adapter modules. The default routing:
```elixir
config :open_responses, :routing, %{
  ~r/^gpt-/ => OpenResponses.Adapters.OpenAI,
  ~r/^claude-/ => OpenResponses.Adapters.Anthropic,
  ~r/^gemini-/ => OpenResponses.Adapters.Gemini,
  ~r/^llama|^mistral|^phi|^qwen/ => OpenResponses.Adapters.Ollama,
  "default" => OpenResponses.Adapters.Mock
}
```

Patterns are evaluated in insertion order; the first match wins. The "default" key is the fallback for any model that doesn't match a pattern.
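The first-match lookup described here can be sketched in plain Elixir. The route list and adapter atoms below are placeholders for illustration, not the library's internals:

```elixir
# Illustrative first-match routing over regex patterns.
# The adapter atoms are placeholders, not real adapter modules.
routes = [
  {~r/^gpt-/, :openai},
  {~r/^claude-/, :anthropic},
  {~r/^llama|^mistral/, :ollama}
]

resolve = fn model ->
  # Walk the routes in order; return the first adapter whose
  # pattern matches, or :default when nothing matches.
  Enum.find_value(routes, :default, fn {pattern, adapter} ->
    if Regex.match?(pattern, model), do: adapter
  end)
end

:anthropic = resolve.("claude-sonnet-4-6")
:default = resolve.("totally-unknown-model")
```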
To add your own routing, override this in config:
```elixir
config :open_responses, :routing, %{
  ~r/^gpt-4/ => OpenResponses.Adapters.OpenAI,
  ~r/^gpt-3/ => MyApp.Adapters.OpenAILegacy,
  ~r/^claude-/ => OpenResponses.Adapters.Anthropic,
  "default" => OpenResponses.Adapters.OpenAI
}
```

OpenAI
The Open Responses spec is derived from OpenAI's Responses API, so this adapter is nearly a direct pass-through.
Supported models: gpt-4o, gpt-4o-mini, gpt-4-turbo, o1, o3-mini, and any future gpt-* models.
```elixir
config :open_responses, :provider_config, %{
  openai: [
    api_key: System.fetch_env!("OPENAI_API_KEY")
  ]
}
```

To use a custom endpoint (Azure OpenAI or an OpenAI-compatible proxy):
```elixir
openai: [
  api_key: System.fetch_env!("AZURE_OPENAI_KEY"),
  base_url: "https://your-resource.openai.azure.com/openai/deployments/gpt-4o"
]
```

Anthropic
The Anthropic adapter translates between the Open Responses spec and the Anthropic Messages API. Key translations handled automatically:
- System prompts extracted from input and sent as the top-level system field
- Tool definitions converted from parameters to Anthropic's input_schema
- content_block_delta streaming events mapped to response.output_text.delta
- tool_use blocks mapped to function_call items
- thinking blocks mapped to reasoning items
- stop_reason: "end_turn" mapped to response.completed
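As a sketch of the streaming translations listed here (the Anthropic event shapes are abbreviated, and this is not the adapter's actual code):

```elixir
# Abbreviated sketch: translate two Anthropic streaming events into
# Open Responses spec events. Shapes are simplified for illustration.
normalize = fn
  %{"type" => "content_block_delta", "delta" => %{"text" => text}} ->
    %{"type" => "response.output_text.delta", "delta" => text}

  %{"type" => "message_delta", "delta" => %{"stop_reason" => "end_turn"}} ->
    %{"type" => "response.completed"}

  other ->
    # Pass through anything this sketch doesn't cover.
    other
end

%{"type" => "response.output_text.delta", "delta" => "Hi"} =
  normalize.(%{"type" => "content_block_delta", "delta" => %{"text" => "Hi"}})
```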
Supported models: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5, and any future claude-* models.
```elixir
config :open_responses, :provider_config, %{
  anthropic: [
    api_key: System.fetch_env!("ANTHROPIC_API_KEY")
  ]
}
```

z.ai
z.ai exposes an Anthropic-compatible API, so you can reuse the Anthropic adapter with a custom base_url and a routing rule for z.ai model names:
```elixir
config :open_responses, :routing, %{
  ~r/^zai-/ => OpenResponses.Adapters.Anthropic,
  # ... other routes
}

config :open_responses, :provider_config, %{
  anthropic: [
    api_key: System.fetch_env!("ZAI_API_KEY"),
    base_url: "https://api.z.ai/v1"
  ]
}
```

Google Gemini
The Gemini adapter translates messages to the contents/parts format expected by the Gemini API.
Key translations:
- assistant role mapped to model role
- System messages extracted as system_instruction
- finishReason: "STOP" mapped to response.completed
- finishReason: "MAX_TOKENS" mapped to response.incomplete
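The role mapping above can be sketched as follows; the message shapes are simplified and this is illustrative only, not the adapter's code:

```elixir
# Illustrative mapping of a chat message into Gemini's contents/parts
# shape, renaming the assistant role to model.
to_gemini = fn %{"role" => role, "content" => text} ->
  gemini_role = if role == "assistant", do: "model", else: role
  %{"role" => gemini_role, "parts" => [%{"text" => text}]}
end

%{"role" => "model", "parts" => [%{"text" => "Hello"}]} =
  to_gemini.(%{"role" => "assistant", "content" => "Hello"})
```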
Supported models: gemini-2.0-flash, gemini-1.5-pro, gemini-1.5-flash, and any future gemini-* models.
```elixir
config :open_responses, :provider_config, %{
  gemini: [
    api_key: System.fetch_env!("GEMINI_API_KEY")
  ]
}
```

Ollama (local models)
Ollama runs models locally. No API key is required. OpenResponses defaults to http://localhost:11434.
Supported models: Any model pulled into your Ollama installation — llama3.2, mistral, phi4, qwen2.5, deepseek-r1, and more.
```shell
ollama pull llama3.2
```
```elixir
config :open_responses, :routing, %{
  ~r/^llama|^mistral|^phi|^qwen|^deepseek/ => OpenResponses.Adapters.Ollama
}
```

To point at a remote Ollama instance:
```elixir
config :open_responses, :provider_config, %{
  # Ollama uses the adapter name :ollama. Add it to provider_config by
  # overriding the config key in your Loop opts, or set a custom base_url
  # via per-request provider config (see below).
}
```

Per-request provider config
Any request can override the provider config by including a provider key:
```json
{
  "model": "gpt-4o",
  "provider": {
    "api_key": "sk-project-specific-key",
    "base_url": "https://my-proxy.example.com/v1"
  },
  "input": [...]
}
```

This is useful for multi-tenant applications where each user brings their own API key.
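One way to picture the override: per-request provider options win over the configured defaults. A minimal merge sketch, assuming string keys from the JSON body and keyword-list defaults (not the library's actual code):

```elixir
# Illustrative merge: per-request provider options override app config.
defaults = [api_key: "app-level-key", base_url: "https://api.openai.com/v1"]
request_provider = %{"api_key" => "sk-project-specific-key"}

# Convert string keys to existing atoms, then merge over the defaults.
overrides =
  for {key, value} <- request_provider, do: {String.to_existing_atom(key), value}

merged = Keyword.merge(defaults, overrides)

"sk-project-specific-key" = merged[:api_key]
"https://api.openai.com/v1" = merged[:base_url]
```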
Writing a custom adapter
Implement the OpenResponses.Adapter behaviour in any module:
```elixir
defmodule MyApp.Adapters.MyProvider do
  @behaviour OpenResponses.Adapter

  @impl OpenResponses.Adapter
  def build_request(%{response: response, input: input}) do
    %{
      model: response.model,
      messages: input
      # ... provider-specific fields
    }
  end

  @impl OpenResponses.Adapter
  def stream(request, config) do
    api_key = Keyword.fetch!(config, :api_key)
    # Return {:ok, stream} where stream yields raw provider events
    {:ok, my_streaming_function(request, api_key)}
  end

  @impl OpenResponses.Adapter
  def normalize_event(raw_event) do
    # Translate provider-native events to Open Responses spec events
    case raw_event["type"] do
      "my_provider.text_delta" ->
        %{"type" => "response.output_text.delta", "delta" => raw_event["text"]}

      "my_provider.done" ->
        %{"type" => "response.completed"}

      _other ->
        raw_event
    end
  end
end
```

Then add it to your routing config:
```elixir
config :open_responses, :routing, %{
  ~r/^myprovider-/ => MyApp.Adapters.MyProvider
}
```