OpenAI.Responses (openai_responses v0.3.1)
Client for the OpenAI Responses API.
This module provides functions to interact with OpenAI's Responses API, allowing you to create, retrieve, and manage AI-generated responses.
Examples
# Create a simple text response
{:ok, response} = OpenAI.Responses.create(
model: "gpt-4.1",
input: "Write a haiku about programming"
)
# Extract the text from the response
text = OpenAI.Responses.Helpers.output_text(response)
# Create a response with tools and options
{:ok, response} = OpenAI.Responses.create(
model: "gpt-4.1",
input: "What's the weather like in Paris?",
tools: [%{type: "web_search_preview"}],
temperature: 0.7
)
# Stream a response
stream = OpenAI.Responses.stream(
model: "gpt-4.1",
input: "Tell me a story"
)
Enum.each(stream, fn event -> IO.inspect(event) end)
Summary
Functions
Collects a complete response from a streaming response.
Creates a new response.
Deletes a specific response by ID.
Retrieves a specific response by ID.
Lists input items for a specific response.
Creates a response with structured output.
Creates a streaming response and returns a proper Enumerable stream of events.
Extracts text deltas from a streaming response.
Functions
@spec collect_stream(Enumerable.t()) :: map()
Collects a complete response from a streaming response.
This is a convenience function that consumes a stream and returns a complete response, similar to what would be returned by the non-streaming API. All events are processed and combined into a final response object.
Parameters
- stream - The stream from OpenAI.Responses.stream/1
Returns
- The complete response map
Examples
# Get a streaming response
stream = OpenAI.Responses.stream(model: "gpt-4.1", input: "Tell me a story")
# Collect all events into a single response object
response = OpenAI.Responses.collect_stream(stream)
# Process the complete response
text = OpenAI.Responses.Helpers.output_text(response)
IO.puts(text)
Creates a new response.
Parameters
- opts - Keyword list containing the request parameters:
  - :client - The parameters to initialize a custom client. See Client.new/1 for more details.
  - :model - The model ID to use (e.g., "gpt-4.1"). This option is required.
  - :input - The text prompt or structured input message. This option is required.
  - :tools - List of tools to make available to the model
  - :instructions - System instructions for the model
  - :temperature - Sampling temperature (0.0 to 2.0)
  - :max_output_tokens - Maximum number of tokens to generate
  - :stream - Whether to stream the response (use stream/1 for proper streaming)
  - :previous_response_id - ID of a previous response for continuation
  - All other parameters supported by the API
Returns
- {:ok, response} - On success, returns the response
- {:error, error} - On failure, potentially including KeyError if :model or :input are missing
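Because create/1 accepts :previous_response_id, a follow-up request can continue an earlier exchange. A minimal sketch; accessing the ID via response["id"] assumes the API's standard JSON field name:

```elixir
# First turn
{:ok, first} = OpenAI.Responses.create(
  model: "gpt-4.1",
  input: "Name three prime numbers."
)

# Second turn, continuing from the previous response
{:ok, followup} = OpenAI.Responses.create(
  model: "gpt-4.1",
  input: "Now double each of them.",
  previous_response_id: first["id"]
)

IO.puts(OpenAI.Responses.Helpers.output_text(followup))
```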
Deletes a specific response by ID.
Parameters
- response_id - The ID of the response to delete
- opts - Optional parameters for the request
Returns
- {:ok, result} - On success, returns deletion confirmation
- {:error, error} - On failure
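A typical pattern is to create a response, use it, then delete it once it is no longer needed. A sketch, assuming the function is named delete/2 to match the parameter list above:

```elixir
{:ok, response} = OpenAI.Responses.create(
  model: "gpt-4.1",
  input: "A short fact about Elixir."
)

# Remove the stored response by its ID
{:ok, result} = OpenAI.Responses.delete(response["id"])
IO.inspect(result, label: "Deletion result")
```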
Retrieves a specific response by ID.
Parameters
- response_id - The ID of the response to retrieve
- opts - Optional parameters for the request:
  - :include - Additional data to include in the response
Returns
- {:ok, response} - On success, returns the response
- {:error, error} - On failure
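Retrieval is useful when only the response ID was persisted. A sketch, assuming the function is named get/2 to match the parameter list above; the ID is a placeholder:

```elixir
# Fetch a previously created response by its ID
{:ok, response} = OpenAI.Responses.get("resp_abc123")

# The same helpers work on retrieved responses
IO.puts(OpenAI.Responses.Helpers.output_text(response))
```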
Lists input items for a specific response.
Parameters
- response_id - The ID of the response
- opts - Optional parameters for the request:
  - :before - List input items before this ID
  - :after - List input items after this ID
  - :limit - Number of objects to return (1-100)
  - :order - Sort order ("asc" or "desc")
Returns
- {:ok, items} - On success, returns the input items
- {:error, error} - On failure
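The pagination options can be combined to walk a response's input items in order. A sketch, assuming the function is named list_input_items/2 to match the parameter list above; the response ID is a placeholder, and reading items["data"] assumes the API's standard list envelope:

```elixir
{:ok, items} = OpenAI.Responses.list_input_items(
  "resp_abc123",
  limit: 20,
  order: "asc"
)

Enum.each(items["data"], &IO.inspect/1)
```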
@spec parse(map(), keyword()) :: {:ok, %{parsed: map() | list(), raw_response: map(), token_usage: map() | nil}} | {:error, any()}
Creates a response with structured output.
This function is similar to create/1 but automatically parses the response according to the provided schema and returns the parsed data.
Parameters
- schema - The schema definition for structured output
- opts - Keyword list containing the request parameters:
  - :model - The model ID to use (e.g., "gpt-4.1"). This option is required.
  - :input - The text prompt or structured input message. This option is required.
  - :schema_name - Optional name for the schema (default: "data")
  - :strict - Whether the output must conform strictly to the schema (default: true)
  - All other options supported by create/1
Returns
- {:ok, result_map} - On success, returns a map containing:
  - :parsed - The parsed data according to the schema
  - :raw_response - The complete, raw response map received from the API
  - :token_usage - The token usage map from the API response (e.g., %{"input_tokens" => 10, "output_tokens" => 50}), or nil if not present
- {:error, error} - On failure, potentially including KeyError if :model or :input are missing
Examples
# Define a schema
calendar_event_schema = OpenAI.Responses.Schema.object(%{
name: :string,
date: :string,
participants: {:array, :string}
})
# Create a response with structured output
{:ok, result} = OpenAI.Responses.parse(
calendar_event_schema,
model: "gpt-4.1",
input: "Alice and Bob are going to a science fair on Friday.",
schema_name: "event"
)
# Access the parsed data and metadata
IO.puts("Event: #{result.parsed["name"]} on #{result.parsed["date"]}")
IO.puts("Participants: #{Enum.join(result.parsed["participants"], ", ")}")
IO.inspect(result.token_usage, label: "Token Usage")
@spec stream(keyword()) :: Enumerable.t()
Creates a streaming response and returns a proper Enumerable stream of events.
This function returns a stream that yields individual events as they arrive from the API, making it suitable for real-time processing of responses.
Parameters
- opts - Keyword list containing the request parameters:
  - :model - The model ID to use (e.g., "gpt-4.1"). This option is required.
  - :input - The text prompt or structured input message. This option is required.
  - Other options supported by create/1
Examples
# Print each event as it arrives
stream = OpenAI.Responses.stream(model: "gpt-4.1", input: "Tell me a story")
Enum.each(stream, &IO.inspect/1)
# Process text deltas in real-time
stream = OpenAI.Responses.stream(model: "gpt-4.1", input: "Tell me a story")
text_stream = OpenAI.Responses.Stream.text_deltas(stream)
# This preserves streaming behavior (one chunk at a time)
text_stream
|> Stream.each(fn delta ->
IO.write(delta)
end)
|> Stream.run()
Returns
- An Enumerable stream that yields events as they arrive. Raises KeyError during enumeration if :model or :input are missing.
@spec text_deltas(Enumerable.t()) :: Enumerable.t(String.t())
Extracts text deltas from a streaming response.
This is a convenience function that returns a stream of text chunks as they arrive, useful for real-time display of model outputs. The function ensures text is not duplicated in the final output.
Parameters
- stream - The stream from OpenAI.Responses.stream/1
Returns
- A stream of text deltas
Examples
stream = OpenAI.Responses.stream(model: "gpt-4.1", input: "Tell me a story")
text_stream = OpenAI.Responses.text_deltas(stream)
# Print text deltas as they arrive (real-time output)
text_stream
|> Stream.each(fn delta ->
IO.write(delta)
end)
|> Stream.run()
IO.puts("") # Add a newline at the end
# Create a typing effect
stream = OpenAI.Responses.stream(model: "gpt-4.1", input: "Tell me a story")
text_stream = OpenAI.Responses.text_deltas(stream)
text_stream
|> Stream.each(fn delta ->
IO.write(delta)
Process.sleep(10) # Add delay for typing effect
end)
|> Stream.run()