Tutorial
Mix.install([
  {:openai_responses, "~> 0.1.0"},
  {:kino, "~> 0.11.0"}
])
Introduction
The only setup you need to use the library is an OpenAI API key. If you already have the OPENAI_API_KEY
environment variable set, you can start right away.
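If the variable is not yet set, you can set it for the current session with the standard System.put_env/2 call (shown here with a placeholder value, not a real key):
# Set the API key for this session only; replace the placeholder with your actual key
System.put_env("OPENAI_API_KEY", "sk-...")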
alias OpenAI.Responses
alias OpenAI.Responses.Helpers
Basic usage
create/2 requires just two arguments: the name of the model and the input text:
{:ok, response} = Responses.create("gpt-4o", "Write a haiku about programming")
The response is just a map, and you can use helper functions to extract information from it:
Helpers.has_refusal?(response)
Helpers.output_text(response)
Helpers.token_usage(response)
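If the request fails (for example, because the API key is missing or a network error occurs), create/2 returns an error tuple instead of {:ok, response}. A minimal sketch of handling both outcomes, assuming the error carries a reason term:
case Responses.create("gpt-4o", "Write a haiku about programming") do
  # Success: print the generated text
  {:ok, response} -> IO.puts(Helpers.output_text(response))
  # Failure: inspect the error reason (assumed shape of the error tuple)
  {:error, reason} -> IO.inspect(reason, label: "Request failed")
end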
You can also supply additional parameters to the API call:
{:ok, response} =
  Responses.create(
    "gpt-4o",
    "Do you need semicolons in Elixir?",
    instructions: "Talk like a pirate"
  )
IO.puts Helpers.output_text(response)
A structured input can be manually constructed and passed to create/2:
{:ok, response} =
  Responses.create(
    "gpt-4o",
    [
      %{role: "user", content: "knock knock."},
      %{role: "assistant", content: "Who's there?"},
      %{role: "user", content: "Orange."}
    ]
  )
IO.puts Helpers.output_text(response)
Structured inputs can also combine text and images in a single message:
input = [
  %{
    "role" => "user",
    "content" => [
      %{"type" => "input_text", "text" => "What is in this image?"},
      %{
        "type" => "input_image",
        "image_url" => "https://upload.wikimedia.org/wikipedia/commons/d/d2/Three_early_medicine_bottles.jpg"
      }
    ]
  }
]
{:ok, response} = OpenAI.Responses.create("gpt-4o", input)
IO.puts Helpers.output_text(response)
Image helpers
As we saw in the previous section, you can manually create a structured input with images, but this requires writing verbose JSON-like structures. The library provides helper functions to make this process more ergonomic.
# Using the helper function to create a message with an image
input_message = Helpers.create_message_with_images(
"What is in this image?",
"https://upload.wikimedia.org/wikipedia/commons/d/d2/Three_early_medicine_bottles.jpg"
)
# The helper creates the same structure as the manual approach, but with less code
input_message
You can also specify multiple images with different detail levels:
multi_image_message = Helpers.create_message_with_images(
"Compare these two images",
[
{"https://upload.wikimedia.org/wikipedia/commons/d/d2/Three_early_medicine_bottles.jpg", "high"},
"https://upload.wikimedia.org/wikipedia/commons/4/48/Cocacolacollection.JPG"
],
detail: "low" # Default detail level for images without a specific level
)
# And then use it with the API
{:ok, response} = OpenAI.Responses.create("gpt-4o", [multi_image_message])
IO.puts Helpers.output_text(response)
Local image files are also supported and will be automatically encoded as base64 data URLs:
# This would work if you have these image files locally
# local_image_message = Helpers.create_message_with_images(
# "Describe these local images",
# ["path/to/image1.jpg", "path/to/image2.png"]
# )
The helper function eliminates boilerplate code, handles encoding of local images, and provides a more intuitive interface for working with images in your prompts.
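For reference, the data URL the helper builds for a local JPEG is roughly equivalent to the following sketch using the standard Base module (an illustration of the idea, not the library's internal code):
# Sketch only: read a local file and encode it as a base64 data URL by hand.
# create_message_with_images/2 does this for you automatically.
path = "path/to/image1.jpg"
data_url = "data:image/jpeg;base64," <> Base.encode64(File.read!(path))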
Using built-in tools
Built-in tools are enabled by passing them via the tools option. The following example asks the same question first without tools and then with the web search tool:
{:ok, response_no_tools} = Responses.create("gpt-4o", "What's the weather in San Francisco?")
IO.puts(Helpers.output_text(response_no_tools))
{:ok, response_with_search} =
  Responses.create("gpt-4o", "What's the weather in San Francisco?",
    tools: [%{type: "web_search_preview"}],
    temperature: 0.7
  )
IO.puts(Helpers.output_text(response_with_search))
Streaming responses
OpenAI.Responses supports true streaming, where you can process chunks as they arrive without waiting for the entire response to complete.
Real-time text streaming
This example demonstrates how to display text as it arrives in real-time using Kino.Frame:
frame = Kino.Frame.new()
Kino.render(frame)
# Create a stream from OpenAI
stream = Responses.stream("gpt-4o", "Write a short poem about coding in Elixir")
# Extract text deltas - no need to initialize a stream handler
text_stream = Responses.text_deltas(stream)
Kino.Frame.append(frame, Kino.Markdown.new("## Poem about coding\n"))
# Process the stream
text_stream
|> Stream.each(fn delta ->
  Kino.Frame.append(frame, Kino.Markdown.new(delta, chunk: true))
end)
|> Stream.run()
Kino.Frame.append(frame, Kino.Markdown.new("\n\n*Generation complete* ✨"))
:done
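If you don't need to render output as it arrives, the same delta stream can simply be collected into a single string with Enum.join/1:
# Collect all text deltas into one string instead of rendering them incrementally
stream = Responses.stream("gpt-4o", "Write a short poem about coding in Elixir")

poem =
  stream
  |> Responses.text_deltas()
  |> Enum.join()

IO.puts(poem)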