OpenAI.Responses (OpenAI.Responses v0.8.2)
Client for the OpenAI Responses API.
This module provides a simple interface for creating AI responses with support for:
- Text and structured output generation
- Streaming responses with Server-Sent Events (SSE)
- Automatic cost calculation for all API calls
- JSON Schema-based structured outputs
Available Functions
- create/1 and create/2 - Create AI responses (synchronous or streaming)
- create!/1 and create!/2 - Same as create but raises on error
- run/2 and run!/2 - Run conversations with automatic function calling
- stream/1 - Stream responses as an Enumerable
- list_models/0 and list_models/1 - List available OpenAI models
- request/1 - Low-level API request function
Configuration
Set your OpenAI API key via:
- Environment variable: OPENAI_API_KEY
- Application config: config :openai_responses, :openai_api_key, "your-key"
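If you set the key at runtime (for example, in a release), a minimal sketch using System.fetch_env!/1 with the config key above; the file location is the usual Elixir convention, not something this library prescribes:
# config/runtime.exs (conventional location, assumed)
import Config

# Fail fast at boot if the key is missing (System.fetch_env!/1 raises)
config :openai_responses, :openai_api_key, System.fetch_env!("OPENAI_API_KEY")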
Examples
See the tutorial for comprehensive examples and usage patterns.
Summary
Types
options_input() - User-facing options accepted by create/1, stream/1, and run/2
result() - Result from a low-level request or create/1
Functions
create/1 - Create a new response.
create/2 - Create a response based on a previous response.
create!/1 - Same as create/1 but raises an error on failure.
create!/2 - Same as create/2 but raises an error on failure.
list_models/0, list_models/1 - List available models.
request/1 - Request a response from the OpenAI API.
run/2 - Run a conversation with automatic function calling.
run!/2 - Same as run/2 but raises an error on failure.
stream/1 - Stream a response from the OpenAI API as an Enumerable.
Types
options_input() - User-facing options accepted by create/1, stream/1, and run/2
result()
@type result() :: {:ok, OpenAI.Responses.Response.t()} | {:error, term()}
Result from a low-level request or create/1.
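Since result() is an ordinary tagged tuple, it pattern-matches directly; a small sketch:
case Responses.create("Hello") do
  {:ok, response} -> IO.puts(response.text)
  {:error, reason} -> IO.inspect(reason, label: "API error")
end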
Functions
@spec create(options_input()) :: result()
Create a new response.
When the argument is a string, it is used as the input text. Otherwise, the argument is expected to be a keyword list or map of options that OpenAI expects, such as input, model, temperature, max_tokens, etc.
LLM Options Preservation with previous_response_id
The OpenAI API always requires a model parameter, even when using previous_response_id.
When using create/1 with a manual previous_response_id:
- If no model is specified, the default model is used
- LLM options (model, text, reasoning) from the previous response are NOT automatically inherited
When using create/2 with a Response object:
- Only these options are preserved from the previous response: model, reasoning.effort, and text.verbosity
- Text format/schema (the text.format or schema: option) is never preserved; specify a new schema: if needed
- You can override any preserved option by explicitly providing a value
# Manual previous_response_id - uses defaults if not specified
Responses.create(input: "Hello", previous_response_id: "resp_123")

# Manual previous_response_id - with explicit options
Responses.create(input: "Hello", previous_response_id: "resp_123", model: "gpt-4.1")

# Using create/2 - automatically inherits LLM options from previous response
Responses.create(previous_response, input: "Hello")

# Using create/2 - with reasoning effort preserved (requires model that supports reasoning)
first = Responses.create!(input: "Question", model: "gpt-5-mini", reasoning: %{effort: "high"})
followup = Responses.create!(first, input: "Follow-up")  # Inherits gpt-5-mini and high reasoning
Examples
# Using a keyword list
Responses.create(input: "Hello", model: "gpt-4.1", temperature: 0.7)
# Using a map
Responses.create(%{input: "Hello", model: "gpt-4.1", temperature: 0.7})
# String shorthand
Responses.create("Hello")
Structured Output with :schema
Pass a schema: option to get structured JSON output from the model.
The schema is defined using a simple Elixir syntax that is converted to JSON Schema format.
Both maps and keyword lists with atom or string keys are accepted for all options:
# Using a map with atom keys
Responses.create(%{
input: "Extract user info from: John Doe, username @johndoe, john@example.com",
schema: %{
name: :string,
username: {:string, pattern: "^@[a-zA-Z0-9_]+$"},
email: {:string, format: "email"}
}
})
# Using a keyword list
Responses.create(
input: "Extract product details",
schema: [
product_name: :string,
price: :number,
in_stock: :boolean,
tags: {:array, :string}
]
)
# Arrays at the root level (new in 0.6.0)
Responses.create(
input: "List 3 US presidents with facts",
schema: {:array, %{
name: :string,
birth_year: :integer,
achievements: {:array, :string}
}}
)
# Returns an array directly in response.parsed
# Mixed keys (atoms and strings) are supported
Responses.create(%{
"input" => "Analyze this data",
:schema => %{
"result" => :string,
:confidence => :number
}
})
The response will include a parsed field with the extracted structured data.
See OpenAI.Responses.Schema for the full schema syntax documentation.
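As a sketch, reading the parsed data back from the user-info example above; string keys are assumed here, as with decoded JSON:
{:ok, response} =
  Responses.create(
    input: "Extract user info from: John Doe, username @johndoe, john@example.com",
    schema: %{name: :string, email: {:string, format: "email"}}
  )

# String keys assumed (decoded JSON)
response.parsed["email"]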
Streaming
Pass a stream: option with a callback function to stream the response. The callback receives results wrapped in {:ok, chunk} or {:error, reason} tuples:
Responses.create(
input: "Write a story",
stream: fn
{:ok, %{event: "response.output_text.delta", data: %{"delta" => text}}} ->
IO.write(text)
:ok
{:error, reason} ->
IO.puts("Stream error: #{inspect(reason)}")
:ok # Continue despite errors
_ ->
:ok
end
)
The callback should return :ok to continue or {:error, reason} to stop the stream.
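For example, a sketch of stopping early by returning {:error, reason} from the callback; the sentence-end cutoff is purely illustrative:
Responses.create(
  input: "Write a story",
  stream: fn
    {:ok, %{event: "response.output_text.delta", data: %{"delta" => text}}} ->
      IO.write(text)
      # Illustrative cutoff: halt the stream once a chunk ends a sentence
      if String.ends_with?(text, "."), do: {:error, :halted}, else: :ok
    _ ->
      :ok
  end
)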
For simpler text streaming, use the delta/1 helper:
Responses.create(
input: "Write a story",
stream: Responses.Stream.delta(&IO.write/1)
)
If no model is specified, the default model is used.
@spec create(OpenAI.Responses.Response.t(), options_input()) :: result()
Create a response based on a previous response.
This allows creating follow-up responses that maintain context from a previous response. The previous response's ID is automatically included in the request.
Options can be provided as either a keyword list or a map.
Preserved Options
The following options are automatically preserved from the previous response unless explicitly overridden:
- model - The model used for generation
- text - Text generation settings (including verbosity)
- reasoning - Reasoning settings (including effort level)
Examples
{:ok, first} = Responses.create("What is Elixir?")
# Using keyword list
{:ok, followup} = Responses.create(first, input: "Tell me more about its concurrency model")
# Using map
{:ok, followup} = Responses.create(first, %{input: "Tell me more about its concurrency model"})
# With reasoning effort preserved (requires model that supports reasoning)
{:ok, first} = Responses.create(input: "Complex question", model: "gpt-5-mini", reasoning: %{effort: "high"})
{:ok, followup} = Responses.create(first, input: "Follow-up") # Inherits gpt-5-mini and high reasoning effort
@spec create!(options_input()) :: OpenAI.Responses.Response.t()
Same as create/1 but raises an error on failure.
Returns the response directly instead of an {:ok, response} tuple.
Examples
response = Responses.create!("Hello, world!")
IO.puts(response.text)
@spec create!(OpenAI.Responses.Response.t(), options_input()) :: OpenAI.Responses.Response.t()
Same as create/2 but raises an error on failure.
Returns the response directly instead of an {:ok, response} tuple.
Examples
first = Responses.create!("What is Elixir?")
followup = Responses.create!(first, input: "Tell me more")
List available models.
Accepts an optional match string to filter by model ID.
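For example:
# All available models
Responses.list_models()

# Only models whose ID matches "gpt-4"
Responses.list_models("gpt-4")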
Request a response from the OpenAI API.
Used as a building block by other functions in this module.
Accepts the same arguments as Req.request/1. You should provide url, json, method, and other options as needed.
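A minimal sketch; the /responses path and the JSON body follow the OpenAI API, and how this function resolves them is an assumption, not documented here:
{:ok, response} = Responses.request(
  url: "/responses",
  method: :post,
  json: %{model: "gpt-4.1", input: "Hello"}
)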
@spec run(options_input(), map() | keyword()) :: [OpenAI.Responses.Response.t()] | {:error, term()}
Run a conversation with automatic function calling.
This function automates the process of handling function calls by repeatedly calling the provided functions and feeding their results back to the model until a final response without function calls is received.
Parameters
- options - Keyword list or map of options to pass to create/1
- functions - A map or keyword list where:
  - Keys are function names (as atoms or strings)
  - Values are functions that accept the parsed arguments and return the result
Returns
Returns a list of all responses generated during the conversation, in chronological order. The last response in the list will be the final answer without function calls.
Examples
# Define available functions
functions = %{
"get_weather" => fn %{"location" => location} ->
# Simulate weather API call
"The weather in #{location} is 72°F and sunny"
end,
"get_time" => fn %{} ->
DateTime.utc_now() |> to_string()
end
}
# Create function tools
weather_tool = Responses.Schema.build_function(
"get_weather",
"Get current weather for a location",
%{location: :string}
)
time_tool = Responses.Schema.build_function(
"get_time",
"Get the current UTC time",
%{}
)
# Run the conversation (with keyword list)
responses = Responses.run(
[input: "What's the weather in Paris and what time is it?",
tools: [weather_tool, time_tool]],
functions
)
# Or with map
responses = Responses.run(
%{input: "What's the weather in Paris and what time is it?",
tools: [weather_tool, time_tool]},
functions
)
# The last response contains the final answer
final_response = List.last(responses)
IO.puts(final_response.text)
@spec run!(options_input(), map() | keyword()) :: [OpenAI.Responses.Response.t()]
Same as run/2 but raises an error on failure.
Returns the list of responses directly, raising instead of returning an {:error, reason} tuple.
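A sketch reusing the weather_tool and functions from the run/2 example above:
responses = Responses.run!(
  [input: "What's the weather in Paris?", tools: [weather_tool]],
  functions
)

# The last response is the final answer after the function-call round trip
IO.puts(List.last(responses).text)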
@spec stream(options_input() | String.t()) :: Enumerable.t()
Stream a response from the OpenAI API as an Enumerable.
Returns a Stream that yields chunks with event and data keys.
Options can be provided as either a keyword list or a map.
Examples
# Stream and handle all results
for result <- Responses.stream("Tell me a story") do
case result do
{:ok, chunk} -> IO.inspect(chunk)
{:error, reason} -> IO.puts("Error: #{inspect(reason)}")
end
end
# Process only text deltas, ignoring errors
Responses.stream("Write a poem")
|> Stream.filter(fn
{:ok, %{event: "response.output_text.delta"}} -> true
_ -> false
end)
|> Stream.map(fn {:ok, chunk} -> chunk.data["delta"] end)
|> Enum.each(&IO.write/1)
# Accumulate all text with error handling (using map)
result = Responses.stream(%{input: "Explain quantum physics"})
|> Enum.reduce(%{text: "", errors: []}, fn
{:ok, %{event: "response.output_text.delta", data: %{"delta" => delta}}}, acc ->
%{acc | text: acc.text <> delta}
{:error, reason}, acc ->
%{acc | errors: [reason | acc.errors]}
_, acc ->
acc
end)