OpenAI.Responses (OpenAI.Responses v0.5.0)
Client for the OpenAI Responses API.
This module provides a simple interface for creating AI responses with support for:
- Text and structured output generation
- Streaming responses with Server-Sent Events (SSE)
- Automatic cost calculation for all API calls
- JSON Schema-based structured outputs
Available Functions
- create/1 and create/2 - Create AI responses (synchronous or streaming)
- create!/1 and create!/2 - Same as create but raises on error
- run/2 and run!/2 - Run conversations with automatic function calling
- call_functions/2 - Execute function calls and format results for the API
- stream/1 - Stream responses as an Enumerable
- list_models/0 and list_models/1 - List available OpenAI models
- request/1 - Low-level API request function
Configuration
Set your OpenAI API key via:
- Environment variable: OPENAI_API_KEY
- Application config: config :openai_responses, :openai_api_key, "your-key"
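If you prefer runtime configuration, here is a minimal sketch that reads the key from the environment at boot (config/runtime.exs is the conventional location in a standard Mix project; adapt as needed):

# config/runtime.exs
import Config

config :openai_responses,
  openai_api_key: System.get_env("OPENAI_API_KEY")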
Examples
See the tutorial for comprehensive examples and usage patterns.
Summary
Functions
Execute function calls from a response and format the results for the API.
Create a new response.
Create a response based on a previous response.
Same as create/1 but raises an error on failure.
Same as create/2 but raises an error on failure.
List available models.
Request a response from the OpenAI API.
Run a conversation with automatic function calling.
Same as run/2 but raises an error on failure.
Stream a response from the OpenAI API as an Enumerable.
Functions
Execute function calls from a response and format the results for the API.
Takes the function_calls from a response and a map/keyword list of functions, executes each function with its arguments, and returns the formatted results ready to be used as input for the next API call.
Parameters
- function_calls - The function_calls array from a Response struct
- functions - A map or keyword list where:
  - Keys are function names (as atoms or strings)
  - Values are functions that accept the parsed arguments and return the result
Returns
Returns a list of formatted function outputs suitable for use as input to create/2.
Important: Function return values must be JSON-encodable. This means they should only contain basic types (strings, numbers, booleans, nil), lists, and maps. Tuples, atoms (except true, false, and nil), and other Elixir-specific types are not supported by default unless they implement the Jason.Encoder protocol.
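To illustrate the distinction, a short sketch (the Weather struct and its fields are hypothetical):

# JSON-encodable returns: maps, lists, and basic types
%{temperature: 22, conditions: ["sunny", "dry"], humid: false}

# Not encodable by default: tuples and arbitrary atoms
{:ok, :sunny}

# A struct becomes encodable by deriving Jason.Encoder
defmodule Weather do
  @derive Jason.Encoder
  defstruct [:temperature, :unit, :location]
end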
Examples
# Get a response with function calls
{:ok, response} = Responses.create(
  input: "What's the weather in Paris and what time is it?",
  tools: [weather_tool, time_tool]
)

# Define the actual function implementations
functions = %{
  "get_weather" => fn %{"location" => location} ->
    # Returns a map (JSON-encodable)
    %{temperature: 22, unit: "C", location: location}
  end,
  "get_time" => fn %{} ->
    # Returns a string (JSON-encodable)
    DateTime.utc_now() |> to_string()
  end
}

# Execute the functions and get formatted output
outputs = Responses.call_functions(response.function_calls, functions)

# Continue the conversation with the function results
{:ok, final_response} = Responses.create(response, input: outputs)
Error Handling
If a function is not found or raises an error, the output will contain an error message instead of the function result.
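For instance, a sketch with a deliberately failing function (the exact error wording is library-defined):

functions = %{
  "get_weather" => fn _args -> raise "weather service unavailable" end
}

# Each call still yields an output entry; the failing one carries an
# error message rather than a result
outputs = Responses.call_functions(response.function_calls, functions)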
Create a new response.
When the argument is a string, it is used as the input text.
Otherwise, the argument is expected to be a keyword list of options that OpenAI expects, such as input, model, temperature, max_tokens, etc.
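For example, both call shapes side by side (the model name and temperature below are illustrative, not defaults):

# String shorthand: the string becomes the input text
{:ok, response} = Responses.create("Hello, world!")

# Equivalent keyword form with extra options
{:ok, response} = Responses.create(
  input: "Hello, world!",
  model: "gpt-4.1-mini",
  temperature: 0.7
)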
Streaming
Pass a stream: option with a callback function to stream the response. The callback receives results wrapped in {:ok, chunk} or {:error, reason} tuples:
Responses.create(
  input: "Write a story",
  stream: fn
    {:ok, %{event: "response.output_text.delta", data: %{"delta" => text}}} ->
      IO.write(text)
      :ok

    {:error, reason} ->
      IO.puts("Stream error: #{inspect(reason)}")
      :ok  # Continue despite errors

    _ ->
      :ok
  end
)
The callback should return :ok to continue or {:error, reason} to stop the stream.
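As a sketch, a callback that halts the stream as soon as the first text delta arrives (the stop reason :enough is arbitrary):

Responses.create(
  input: "Write a story",
  stream: fn
    # Returning {:error, reason} stops the stream early
    {:ok, %{event: "response.output_text.delta"}} -> {:error, :enough}
    _ -> :ok
  end
)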
For simpler text streaming, use the delta/1 helper:
Responses.create(
  input: "Write a story",
  stream: Responses.Stream.delta(&IO.write/1)
)
If no model is specified, the default model is used.
Create a response based on a previous response.
This allows creating follow-up responses that maintain context from a previous response. The previous response's ID is automatically included in the request.
Examples
{:ok, first} = Responses.create("What is Elixir?")
{:ok, followup} = Responses.create(first, input: "Tell me more about its concurrency model")
Same as create/1 but raises an error on failure.
Returns the response directly instead of an {:ok, response} tuple.
Examples
response = Responses.create!("Hello, world!")
IO.puts(response.text)
Same as create/2 but raises an error on failure.
Returns the response directly instead of an {:ok, response} tuple.
Examples
first = Responses.create!("What is Elixir?")
followup = Responses.create!(first, input: "Tell me more")
List available models.
Accepts an optional match string to filter by model ID.
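A minimal sketch (the bare-list return shape shown here is an assumption; check the function's typespec):

# All available models
models = Responses.list_models()

# Only models whose ID matches "gpt"
gpt_models = Responses.list_models("gpt")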
Request a response from the OpenAI API.
Used as a building block by other functions in this module.
Accepts the same arguments as Req.request/1. You should provide url, json, method, and other options as needed.
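For example, a hypothetical low-level call (the URL and body fields follow the public Responses API but are assumptions here, as is the {:ok, response} return shape):

{:ok, response} = Responses.request(
  url: "/responses",
  method: :post,
  json: %{model: "gpt-4.1-mini", input: "Hello"}
)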
Run a conversation with automatic function calling.
This function automates the process of handling function calls by repeatedly calling the provided functions and feeding their results back to the model until a final response without function calls is received.
Parameters
- options - Keyword list of options to pass to create/1
- functions - A map or keyword list where:
  - Keys are function names (as atoms or strings)
  - Values are functions that accept the parsed arguments and return the result
Returns
Returns a list of all responses generated during the conversation, in chronological order. The last response in the list will be the final answer without function calls.
Examples
# Define available functions
functions = %{
  "get_weather" => fn %{"location" => location} ->
    # Simulate weather API call
    "The weather in #{location} is 72°F and sunny"
  end,
  "get_time" => fn %{} ->
    DateTime.utc_now() |> to_string()
  end
}

# Create function tools
weather_tool = Responses.Schema.build_function(
  "get_weather",
  "Get current weather for a location",
  %{location: :string}
)

time_tool = Responses.Schema.build_function(
  "get_time",
  "Get the current UTC time",
  %{}
)

# Run the conversation
responses = Responses.run(
  [input: "What's the weather in Paris and what time is it?",
   tools: [weather_tool, time_tool]],
  functions
)

# The last response contains the final answer
final_response = List.last(responses)
IO.puts(final_response.text)
Same as run/2 but raises an error on failure.
Returns the list of responses directly instead of an {:ok, responses} tuple.
Stream a response from the OpenAI API as an Enumerable.
Returns a Stream that yields chunks with event and data keys.
Examples
# Stream and handle all results
for result <- Responses.stream("Tell me a story") do
  case result do
    {:ok, chunk} -> IO.inspect(chunk)
    {:error, reason} -> IO.puts("Error: #{inspect(reason)}")
  end
end

# Process only text deltas, ignoring errors
Responses.stream("Write a poem")
|> Stream.filter(fn
  {:ok, %{event: "response.output_text.delta"}} -> true
  _ -> false
end)
|> Stream.map(fn {:ok, chunk} -> chunk.data["delta"] end)
|> Enum.each(&IO.write/1)

# Accumulate all text with error handling
result =
  Responses.stream(input: "Explain quantum physics")
  |> Enum.reduce(%{text: "", errors: []}, fn
    {:ok, %{event: "response.output_text.delta", data: %{"delta" => delta}}}, acc ->
      %{acc | text: acc.text <> delta}

    {:error, reason}, acc ->
      %{acc | errors: [reason | acc.errors]}

    _, acc ->
      acc
  end)