OpenaiEx User Guide
Mix.install([
{:openai_ex, "~> 0.8.6"},
# {:openai_ex, path: Path.join(__DIR__, "..")},
{:kino, "~> 0.14.2"}
])
Introduction
OpenaiEx is an Elixir library that provides a community-maintained OpenAI API client.
Portions of this project were developed with assistance from ChatGPT 3.5 and 4, as well as Claude 3 Opus and Claude 3.5 Sonnet. However, every line of code is human curated (by me 😇).
At this point, all API endpoints and features (as of May 1, 2024) are supported, including the Assistants API Beta 2 with Run streaming, DALL-E-3, Text-to-Speech, the tools support in chat completions, and the streaming version of the chat completion endpoint. Streaming request cancellation is also supported.
Configuration of Finch pools and the API base URL is supported.
There are some differences compared to other Elixir OpenAI wrappers.
- I tried to faithfully mirror the naming/structure of the official python api. For example, content that is already in memory can be uploaded as part of a request, it doesn't have to be read from a file at a local path.
- I was developing for a livebook use-case, so I don't have any config, only environment variables.
- Streaming API versions, with request cancellation, are supported.
- The underlying transport is Finch, rather than HTTPoison.
- 3rd Party (including local) LLMs with an OpenAI proxy, as well as the Azure OpenAI API, are considered legitimate use cases.
To learn how to use OpenaiEx, you can refer to the relevant parts of the official OpenAI API reference documentation, which we link to throughout this document.
This file is an executable Livebook, which means you can interactively run and modify the code samples provided. We encourage you to open it in Livebook and try out the code for yourself!
Installation
You can install OpenaiEx using Mix:
In Livebook
Add the following code to the first connection cell:
Mix.install(
[
{:openai_ex, "~> 0.8.6"}
]
)
In a Mix Project
Add the following to your mix.exs file:
def deps do
[
{:openai_ex, "~> 0.8.6"}
]
end
Authentication
To authenticate with the OpenAI API, you will need an API key. We recommend storing your API key in an environment variable. Since we are using Livebook, we can store this and other environment variables as Livebook Hub Secrets.
apikey = System.fetch_env!("LB_OPENAI_API_KEY")
openai = OpenaiEx.new(apikey)
You can also specify an organization if you are a member of more than one:
# organization = System.fetch_env!("LB_OPENAI_ORGANIZATION")
# openai = OpenaiEx.new(apikey, organization)
For more information on authentication, see the OpenAI API Authentication reference.
Configuration
There are a few places where configuration seemed necessary.
Receive Timeout
The default receive timeout is 15 seconds. If you are seeing longer latencies, you can override the default with
# set receive timeout to 45 seconds
openai = OpenaiEx.new(apikey) |> OpenaiEx.with_receive_timeout(45_000)
Finch Instance
In production scenarios where you want to explicitly tweak the Finch pool, you can create a new Finch instance using
Finch.start_link(
name: MyConfiguredFinch,
pools: ...
)
You can use this instance of Finch (instead of the default OpenaiEx.Finch) by setting the finch name
openai_with_custom_finch = openai |> OpenaiEx.with_finch_name(MyConfiguredFinch)
Base Url
There are times, such as when using a local LLM (like Ollama) with an OpenAI proxy, when you need to reset the base URL of the API. This is generally only applicable for the chat and chat completion endpoints and can be accomplished by
# in this example, our development livebook server is running in a docker dev container while
# the local llm is running on the host machine
proxy_openai =
OpenaiEx.new(apikey) |> OpenaiEx.with_base_url("http://host.docker.internal:8000/v1")
Using an LLM gateway (e.g. Portkey)
LLM gateways are used to provide a virtual interface to multiple LLM providers behind a single API endpoint.
Generally they work on the basis of additional HTTP headers being added that specify the model to use, the provider to use, and possibly other parameters.
For example, to configure your client for OpenAI using the Portkey gateway, you would do this:
openai_api_key = "an-openai-api-key"
portkey_api_key = "a-portkey-api-key"
OpenaiEx.new(openai_api_key)
|> OpenaiEx.with_base_url("https://api.portkey.ai/v1")
|> OpenaiEx.with_additional_headers(%{"x-portkey-api-key" => portkey_api_key, "x-portkey-provider" => "openai"})
Similarly, for Anthropic, you would do this:
anthropic_api_key = "some-anthropic-api-key"
OpenaiEx.new(anthropic_api_key)
|> OpenaiEx.with_base_url("https://api.portkey.ai/v1")
|> OpenaiEx.with_additional_headers(%{"x-portkey-api-key" => portkey_api_key, "x-portkey-provider" => "anthropic"})
Azure OpenAI
The Azure OpenAI API replicates the Completion, Chat Completion and Embeddings endpoints from OpenAI.
However, it modifies the base URL as well as the endpoint path, and adds a parameter to the URL query. These modifications are accommodated with the following calls:
For non-Entra ID:
openai = OpenaiEx._for_azure(azure_api_id, resource_name, deployment_id, api_version)
and for Entra ID:
openai = OpenaiEx.new(entraId) |> OpenaiEx._for_azure(resource_name, deployment_id, api_version)
These methods will be supported as long as the Azure version does not deviate too far from the base OpenAI API.
Error Handling
OpenaiEx provides robust error handling to support both interactive and non-interactive usage. There are two main ways to handle errors:
Error Tuples
Most functions in OpenaiEx return :ok and :error tuples. This allows for pattern matching and explicit error handling:
case OpenaiEx.Chat.Completions.create(openai, chat_req) do
{:ok, response} -> # Handle successful response
{:error, error} -> # Handle error
end
Exceptions
For scenarios where you prefer exceptions, OpenaiEx provides bang (!) versions of functions that raise exceptions on errors:
try do
response = OpenaiEx.Chat.Completions.create!(openai, chat_req)
# Handle successful response
rescue
e in OpenaiEx.Error -> # Handle exception
end
Error Types
OpenaiEx closely follows the error types defined in the official OpenAI Python library. For a comprehensive list and description of these error types, please refer to the OpenAI API Error Types documentation.
In addition to these standard error types, OpenaiEx defines two specific error types for handling streaming operations:
- SSETimeoutError: Raised when a streaming response times out
- SSECancellationError: Raised when a user initiates a stream cancellation
For more details on specific error types and their attributes, refer to the OpenaiEx.Error module documentation.
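For example, you can branch on an error's kind when a call fails. This is a sketch; it assumes the :error tuple carries the same error struct, with kind and message fields, that the streaming example later in this guide pattern matches on.
# a sketch: branch on the error kind and message fields, as the streaming
# exception-handling example below does
case OpenaiEx.Chat.Completions.create(openai, chat_req) do
  {:ok, response} ->
    response

  {:error, %{kind: kind, message: message}} ->
    IO.puts("API error (#{kind}): #{message}")
end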
Model
List Models
To list all available models, use the Models.list() function:
alias OpenaiEx.Models
openai |> Models.list()
Retrieve Models
To retrieve information about a specific model, use the Models.retrieve() function:
openai |> Models.retrieve("gpt-4o-mini")
For more information on using models, see the OpenAI API Models reference.
Chat Completion
To generate a chat completion, you need to define a chat completion request structure using the Chat.Completions.new() function. This function takes several parameters, such as the model ID and a list of chat messages. We have a module ChatMessage which helps create messages in the chat format.
alias OpenaiEx.Chat
alias OpenaiEx.ChatMessage
alias OpenaiEx.MsgContent
chat_req =
Chat.Completions.new(
model: "gpt-4o-mini",
messages: [
ChatMessage.user(
"Give me some background on the elixir language. Why was it created? What is it used for? What distinguishes it from other languages? How popular is it?"
)
]
)
You can also pass images to the API by creating a message with image content.
ChatMessage.user(
MsgContent.image_url(
"https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/images/starmask.png"
)
)
You can generate a chat completion using the Chat.Completions.create() function:
{:ok, chat_response} = openai |> Chat.Completions.create(chat_req)
For a more in-depth example of chat completion, check out the Deeplearning.AI OrderBot Livebook.
You can also call the endpoint and have it stream the response. This returns the result as a series of tokens, which have to be put together in code.
To use the stream option, call the Chat.Completions.create() function with stream: true (and stream_options set to %{include_usage: true} to receive usage information):
{:ok, chat_stream} =
  openai
  |> Chat.Completions.create(
    chat_req |> Map.put(:stream_options, %{include_usage: true}),
    stream: true
  )
IO.puts(inspect(chat_stream))
IO.puts(inspect(chat_stream.task_pid))
chat_stream.body_stream |> Stream.flat_map(& &1) |> Enum.each(fn x -> IO.puts(inspect(x)) end)
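Here is a minimal sketch of putting the tokens together into the full reply. It assumes each event arrives as %{data: chunk} with chunk already JSON-decoded (the shape you can see in the inspect output above), and skips events without a content delta (such as the final usage chunk).
# a minimal sketch: accumulate the delta tokens into one string.
# assumes each event is %{data: chunk} with chunk already decoded
{:ok, chat_stream} = openai |> Chat.Completions.create(chat_req, stream: true)

full_reply =
  chat_stream.body_stream
  |> Stream.flat_map(& &1)
  |> Stream.map(fn %{data: chunk} ->
    chunk |> Map.get("choices", []) |> Enum.at(0, %{}) |> get_in(["delta", "content"])
  end)
  |> Stream.reject(&is_nil/1)
  |> Enum.join()

IO.puts(full_reply)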
Canceling a streaming request
The chat_stream.task_pid can be used in conjunction with OpenaiEx.HttpSse.cancel_request/1 to cancel an ongoing request.
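For example, you can consume the stream in a separate task and cancel the request from the outside. This is a sketch; after cancellation the consumer sees the sse_cancellation error described in the Error Types section.
# a sketch: consume the stream in a separate task, then cancel the
# underlying request mid-stream via its task_pid
{:ok, chat_stream} = openai |> Chat.Completions.create(chat_req, stream: true)

consumer =
  Task.async(fn ->
    chat_stream.body_stream |> Stream.flat_map(& &1) |> Enum.each(&IO.inspect/1)
  end)

# ... once we decide to stop the generation
OpenaiEx.HttpSse.cancel_request(chat_stream.task_pid)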
You need to check the returned chat_stream.status field. In case the status is not 2XX, the body_stream and task_pid fields are not available. Instead, an error field will be returned.
For example
bad_req = Chat.Completions.new(model: "code-llama", messages: [])
{:error, err_resp} = openai |> Chat.Completions.create(bad_req, stream: true)
For a detailed example of the use of the streaming chat completion API, including how to cancel an ongoing request, check out Streaming Orderbot, the streaming equivalent of the prior example.
Stream Timeout
While OpenAI's official API implementation typically doesn't require explicit timeout handling for streams, some third-party implementations of the OpenAI API may benefit from custom timeout settings. OpenaiEx provides a way to set a stream-specific timeout to handle these cases.
You can set a stream-specific timeout using the with_stream_timeout function:
# Set a stream timeout of 30 seconds
openai_with_timeout = openai |> OpenaiEx.with_stream_timeout(30_000)
This is particularly useful when working with third-party OpenAI API implementations that may have different performance characteristics than the official API.
Exception Handling for Streams
When working with streams, it's important to handle potential exceptions that may occur during stream processing. OpenaiEx uses a custom exception type for stream-related errors. Here's how you can handle these exceptions:
alias OpenaiEx.Error
process_stream = fn openai, request ->
response = Chat.Completions.create!(openai, request, stream: true)
try do
response.body_stream
|> Stream.flat_map(& &1)
|> Enum.each(fn chunk ->
# Process each chunk here
IO.inspect(chunk)
end)
rescue
e in OpenaiEx.Error ->
case e do
%{kind: :sse_cancellation} ->
IO.puts("Stream was canceled")
{:error, :canceled, e.message}
%{kind: :sse_timeout_error} ->
IO.puts("Timeout on SSE stream")
{:error, :timeout, e.message}
_ ->
IO.puts("Unknown error occurred")
{:error, :unknown, e.message}
end
e ->
IO.puts("An unexpected error occurred")
{:error, :unexpected, Exception.message(e)}
end
end
# Usage
chat_req = Chat.Completions.new(
model: "gpt-4o-mini",
messages: [ChatMessage.user("Tell me a short story about a brave knight")],
max_tokens: 500
)
# Use the OpenaiEx struct with custom stream timeout
result = process_stream.(openai_with_timeout, chat_req)
case result do
{:error, type, message} ->
IO.puts("Error type: #{type}")
IO.puts("Error message: #{message}")
_ ->
IO.puts("Stream processed successfully")
end
In this example, we define a process_stream function that handles different types of stream exceptions:
- :canceled: The stream was canceled. We return an error tuple.
- :timeout: The stream timed out. We return an error tuple.
- Any other OpenaiEx.Error: We treat it as an unknown error.
- Any other exception: We treat it as an unexpected error.
This approach allows you to gracefully handle different types of stream-related errors and take appropriate actions.
For more information on generating chat completions, see the OpenAI API Chat Completions reference.
Function(Tool) Calling
In OpenAI's ChatCompletion endpoint, you can use the function calling feature to call a custom function and pass its result as part of the conversation. Here's an example of how to use the function calling feature:
First, we set up the function specification and completion request. The function specification defines the name, description, and parameters of the function we want to call. In this example, we define a function called get_current_weather that takes a location parameter and an optional unit parameter. The completion request includes the function specification, the conversation history, and the model we want to use.
tool_spec =
Jason.decode!("""
{"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
""")
rev_msgs = [
ChatMessage.user("What's the weather like in Boston today?")
]
fn_req =
Chat.Completions.new(
model: "gpt-4o-mini",
messages: rev_msgs |> Enum.reverse(),
tools: [tool_spec],
tool_choice: "auto"
)
Next, we call the OpenAI endpoint to get a response that includes the function call.
{:ok, fn_response} = openai |> Chat.Completions.create(fn_req)
We extract the function call from the response and call the appropriate function with the given parameters. In this example, we define a map of functions that maps function names to their implementations. We then use the function name and arguments from the function call to look up the appropriate function and call it with the given parameters.
fn_message = fn_response["choices"] |> Enum.at(0) |> Map.get("message")
tool_call = fn_message |> Map.get("tool_calls") |> List.first()
tool_id = tool_call |> Map.get("id")
fn_call = tool_call |> Map.get("function")
functions = %{
"get_current_weather" => fn location, unit ->
%{
"location" => location,
"temperature" => "72",
"unit" => unit,
"forecast" => ["sunny", "windy"]
}
|> Jason.encode!()
end
}
fn_name = fn_call["name"]
fn_args = fn_call["arguments"] |> Jason.decode!()
location = fn_args["location"]
unit = unless is_nil(fn_args["unit"]), do: fn_args["unit"], else: "fahrenheit"
fn_value = functions[fn_name].(location, unit)
We then pass the returned value back to the ChatCompletion endpoint with the conversation history to that point to get the final response.
latest_msgs = [ChatMessage.tool(tool_id, fn_name, fn_value) | [fn_message | rev_msgs]]
fn_req_2 =
Chat.Completions.new(
model: "gpt-4o-mini",
messages: latest_msgs |> Enum.reverse()
)
{:ok, fn_response_2} = openai |> Chat.Completions.create(fn_req_2)
The final response includes the result of the function call integrated into the conversation.
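For instance, the assistant's reply text can be extracted the same way as before:
fn_response_2["choices"] |> Enum.at(0) |> Map.get("message") |> Map.get("content")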
Image
Generate Image
We define the image generation request structure using the Images.Generate.new() function
alias OpenaiEx.Images
img_req = Images.Generate.new(prompt: "An adorable baby sea otter", size: "256x256", n: 1)
Then call the Images.generate() function to generate the images.
{:ok, img_response} = openai |> Images.generate(img_req)
For more information on generating images, see the OpenAI API Image reference.
Fetch the generated images
With the information in the image response, we can fetch the images from their URLs
fetch_blob = fn url ->
Finch.build(:get, url) |> Finch.request!(OpenaiEx.Finch) |> Map.get(:body)
end
fetched_images = img_response["data"] |> Enum.map(fn i -> i["url"] |> fetch_blob.() end)
View the generated images
Finally, we can render the images using Kino
fetched_images
|> Enum.map(fn r -> r |> Kino.Image.new("image/png") |> Kino.render() end)
img_to_expmt = fetched_images |> List.first()
Edit Image
We define an image edit request structure using the Images.Edit.new() function. This function requires an image and a mask. For the image, we will use the one that we received. Let's load the mask from a URL.
# if you're having problems downloading raw github content, you may need to manually set your DNS server to "8.8.8.8" (google)
star_mask =
fetch_blob.(
"https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/images/starmask.png"
)
# star_mask = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/images/starmask.png"))
Set up the image edit request with image, mask and prompt.
img_edit_req =
Images.Edit.new(
image: img_to_expmt,
mask: star_mask,
size: "256x256",
prompt: "Image shows a smiling Otter"
)
We then call the Images.edit() function
{:ok, img_edit_response} = openai |> Images.edit(img_edit_req)
and view the result
img_edit_response["data"]
|> Enum.map(fn i -> i["url"] |> fetch_blob.() |> Kino.Image.new("image/png") |> Kino.render() end)
Image Variations
We define an image variation request structure using the Images.Variation.new()
function. This function requires an image.
img_var_req = Images.Variation.new(image: img_to_expmt, size: "256x256")
Then call the Images.create_variation() function to generate the images.
{:ok, img_var_response} = openai |> Images.create_variation(img_var_req)
img_var_response["data"]
|> Enum.map(fn i -> i["url"] |> fetch_blob.() |> Kino.Image.new("image/png") |> Kino.render() end)
For more information on image variations, see the OpenAI API Image Variations reference.
Embedding
Define the embedding request structure using Embeddings.new().
alias OpenaiEx.Embeddings
emb_req =
Embeddings.new(
model: "text-embedding-ada-002",
input: "The food was delicious and the waiter..."
)
Then call the Embeddings.create() function.
{:ok, emb_response} = openai |> Embeddings.create(emb_req)
For more information on generating embeddings, see the OpenAI API Embedding reference
Audio
alias OpenaiEx.Audio
Create speech
For text to speech, we create an Audio.Speech request structure as follows
speech_req =
Audio.Speech.new(
model: "tts-1",
voice: "alloy",
input: "The quick brown fox jumped over the lazy dog",
response_format: "mp3"
)
We then call the Audio.Speech.create() function to create the audio response
{:ok, speech_response} = openai |> Audio.Speech.create(speech_req)
We can play the response using the Kino Audio widget.
speech_response |> Kino.Audio.new(:mp3)
Create transcription
To define an audio transcription request structure, we need to create a file parameter using OpenaiEx.new_file().
# if you're having problems downloading raw github content, you may need to manually set your DNS server to "8.8.8.8" (google)
audio_url = "https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/transcribe.mp3"
audio_file = OpenaiEx.new_file(name: audio_url, content: fetch_blob.(audio_url))
# audio_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/transcribe.mp3"))
The file parameter is used to create the Audio.Transcription request structure
transcription_req = Audio.Transcription.new(file: audio_file, model: "whisper-1")
We then call the Audio.Transcription.create() function to create a transcription.
{:ok, transcription_response} = openai |> Audio.Transcription.create(transcription_req)
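The transcribed text is in the response's "text" field:
transcription_response["text"]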
Create translation
The translation call uses practically the same request structure, but calls the Audio.Translation.create() endpoint.
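For example (a sketch; it assumes Audio.Translation.new() takes the same file and model parameters as Audio.Transcription.new()):
# a sketch, reusing the audio file from above
translation_req = Audio.Translation.new(file: audio_file, model: "whisper-1")
{:ok, translation_response} = openai |> Audio.Translation.create(translation_req)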
For more information on the audio endpoints, see the OpenAI API Audio Reference.
File
List files
To request all files that belong to the user organization, call the Files.list() function
alias OpenaiEx.Files
openai |> Files.list()
Upload files
To upload a file, we need to create a file parameter, and then the upload request
# if you're having problems downloading raw github content, you may need to manually set your DNS server to "8.8.8.8" (google)
ftf_url = "https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/fine-tune.jsonl"
fine_tune_file = OpenaiEx.new_file(name: ftf_url, content: fetch_blob.(ftf_url))
# fine_tune_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/fine-tune.jsonl"))
upload_req = Files.new_upload(file: fine_tune_file, purpose: "fine-tune")
Then we call the Files.create() function to upload the file
{:ok, upload_res} = openai |> Files.create(upload_req)
We can verify that the file has been uploaded by calling
openai |> Files.list()
We grab the file id from the previous response value to use in the following samples
file_id = upload_res["id"]
Retrieve files
In order to retrieve meta information on a file, we simply call the Files.retrieve() function with the given id
openai |> Files.retrieve(file_id)
Retrieve file content
Similarly, to download the file contents, we call Files.content()
openai |> Files.content(file_id)
Delete file
Finally, we can delete the file by calling Files.delete()
openai |> Files.delete(file_id)
Verify that the file has been deleted by listing files again
openai |> Files.list()
FineTuning Job
To run a fine-tuning job, we minimally need a training file. We will re-run the file creation request above.
{:ok, upload_res} = openai |> Files.create(upload_req)
Next we call FineTuning.Jobs.new() to create a new request structure
alias OpenaiEx.FineTuning
ft_req = FineTuning.Jobs.new(model: "gpt-4o-mini-2024-07-18", training_file: upload_res["id"])
To begin the fine tune, we call the FineTuning.Jobs.create() function
{:ok, ft_res} = openai |> FineTuning.Jobs.create(ft_req)
We can list all fine tunes by calling FineTuning.Jobs.list()
openai |> FineTuning.Jobs.list()
The function FineTuning.Jobs.retrieve() gets the details of a particular fine tune.
ft_id = ft_res["id"]
openai |> FineTuning.Jobs.retrieve(fine_tuning_job_id: ft_id)
and FineTuning.Jobs.list_events() can be called to get the events
openai |> FineTuning.Jobs.list_events(fine_tuning_job_id: ft_id)
To cancel a Fine Tune job, call FineTuning.Jobs.cancel()
openai |> FineTuning.Jobs.cancel(fine_tuning_job_id: ft_id)
A fine tuned model can be deleted by calling Models.delete()
ft_model = ft_res["fine_tuned_model"]
unless is_nil(ft_model) do
openai |> Models.delete(ft_model)
end
For more information on the fine tuning endpoints see the OpenAI API Fine-tuning Reference
Batch
alias OpenaiEx.Batches
Create batch
Use the Batches.create() function to create and execute a batch from an uploaded file of requests.
First, we need to upload a file containing the batch requests using the Files API.
batch_url =
"https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/batch-requests.jsonl"
batch_file = OpenaiEx.new_file(name: batch_url, content: fetch_blob.(batch_url))
# batch_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/batch-requests.jsonl"))
batch_upload_req = Files.new_upload(file: batch_file, purpose: "batch")
{:ok, batch_upload_res} = openai |> Files.create(batch_upload_req)
Then, we create the batch request using Batches.new() and specify the necessary parameters.
batch_req =
Batches.new(
input_file_id: batch_upload_res["id"],
endpoint: "/v1/chat/completions",
completion_window: "24h"
)
Finally, we call the Batches.create() function to create and execute the batch.
{:ok, batch} = openai |> Batches.create(batch_req)
Retrieve batch
Use the Batches.retrieve() function to retrieve information about a specific batch.
batch_id = batch["id"]
{:ok, batch_job} = openai |> Batches.retrieve(batch_id: batch_id)
batch_job_output_file_id = batch_job["output_file_id"]
{:ok, batch_job_output_file} = openai |> Files.retrieve(batch_job_output_file_id)
{status, batch_result} = openai |> Files.content(batch_job_output_file["id"])
Note that the string is not valid json (it's a sequence of json objects without the commas or the array '[' ']' delimiters), so it cannot be parsed as such.
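It can be decoded line by line instead:
# the batch output is JSONL: one JSON object per line
batch_result
|> String.split("\n", trim: true)
|> Enum.map(&Jason.decode!/1)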
Cancel batch
Use the Batches.cancel() function to cancel an in-progress batch.
{status, cancel_result} = openai |> Batches.cancel(batch_id: batch_id)
List batches
Use the Batches.list() function to list your organization's batches.
openai |> Batches.list()
For more information on the Batch API, see the OpenAI API Batch Reference.
Moderation
We use the moderation API by calling Moderations.new() to create a new request
alias OpenaiEx.Moderations
mod_req = Moderations.new(input: "I want to kill people")
Then call the Moderations.create() function
{:ok, mod_res} = openai |> Moderations.create(mod_req)
For more information on the moderation endpoints, see the OpenAI API Moderation Reference.
Assistant
alias OpenaiEx.Beta.Assistants
Create Assistant
To create an assistant with model and instructions, call the Assistants.create() function.
First, we set up the create request parameters. This request sets up an Assistant with a file search tool.
hr_assistant_req =
Assistants.new(
instructions:
"You are an HR bot, and you have access to files to answer employee questions about company policies.",
name: "HR Helper",
tools: [%{type: "file_search"}],
model: "gpt-4o-mini"
)
Then we call the create function
{:ok, asst} = openai |> Assistants.create(hr_assistant_req)
Retrieve Assistant
Extract the id field for the assistant
assistant_id = asst["id"]
which can then be used to retrieve the Assistant fields, using the Assistants.retrieve() function.
openai |> Assistants.retrieve(assistant_id)
Modify Assistant
Once created, an assistant can be modified using the Assistants.update() function.
Now we will show an example assistant request using the code interpreter tool with a set of files. First we set up the files (in this case a sample HR document) by uploading using the Files API.
alias OpenaiEx.Files
# hr_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/cyberdyne.txt"))
hrf_url = "https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/cyberdyne.txt"
hr_file = OpenaiEx.new_file(name: hrf_url, content: fetch_blob.(hrf_url))
hr_upload_req = Files.new_upload(file: hr_file, purpose: "assistants")
{:ok, hr_upload_res} = openai |> Files.create(hr_upload_req)
file_id = hr_upload_res["id"]
Next we create the update request
math_assistant_req =
Assistants.new(
instructions:
"You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
name: "Math Tutor",
tools: [%{type: "code_interpreter"}],
model: "gpt-4o-mini",
tool_resources: %{code_interpreter: %{file_ids: [file_id]}}
)
Finally we call the endpoint to modify the Assistant
{:ok, asst} = openai |> Assistants.update(assistant_id, math_assistant_req)
Delete Assistant
We can delete assistants using the Assistants.delete() function
openai |> Assistants.delete(assistant_id)
List Assistants
We use Assistants.list() to get a list of assistants
assts = openai |> Assistants.list()
Vector Stores
alias OpenaiEx.Beta.VectorStores
alias OpenaiEx.Beta.VectorStores.Files
alias OpenaiEx.Beta.VectorStores.File.Batches
Create vector store
Use VectorStores.create() to create a vector store.
vector_store_req = VectorStores.new(name: "HR Documents")
{:ok, vector_store} = openai |> VectorStores.create(vector_store_req)
vector_store_id = vector_store["id"]
Retrieve vector store
Use VectorStores.retrieve() to retrieve a vector store.
openai |> VectorStores.retrieve(vector_store_id)
Update vector store
Use VectorStores.update() to modify the vector store.
openai |> VectorStores.update(vector_store_id, %{name: "HR Documents 2"})
Delete vector store
VectorStores.delete() can be used to delete a vector store.
openai |> VectorStores.delete(vector_store_id)
List vector stores
We use VectorStores.list() to get a list of vector stores.
openai |> VectorStores.list()
Vector Store Files
Create vector store file
We can create a vector store file by attaching a file to a vector store using VectorStores.Files.create().
First we recreate the vector store above
{:ok, vector_store} = openai |> VectorStores.create(VectorStores.new(name: "HR Documents"))
then attach the file id from earlier
vector_store_id = vector_store["id"]
{:ok, vs_file} = openai |> VectorStores.Files.create(vector_store_id, file_id)
Retrieve vector store file
Retrieve a vector store file using the VectorStores.Files.retrieve() function
openai |> VectorStores.Files.retrieve(vector_store_id, file_id)
Delete vector store file
Detach a file from the vector store using VectorStores.Files.delete()
openai |> VectorStores.Files.delete(vector_store_id, file_id)
List vector store files
List vector store files using VectorStores.Files.list()
openai |> VectorStores.Files.list(vector_store_id)
Vector Store File Batches
File batches allow addition of multiple files to a vector store in a single operation.
Create VS file batch
Use VectorStores.File.Batches.create() to attach a list of file ids to a vector store.
{:ok, vsf_batch} = openai |> VectorStores.File.Batches.create(vector_store_id, [file_id])
Retrieve VS file batch
Use VectorStores.File.Batches.retrieve() to retrieve the batch.
vsf_batch_id = vsf_batch["id"]
openai |> VectorStores.File.Batches.retrieve(vector_store_id, vsf_batch_id)
Cancel VS file batch
Use VectorStores.File.Batches.cancel() to cancel a batch.
openai |> VectorStores.File.Batches.cancel(vector_store_id, vsf_batch_id)
List VS file batch
Use VectorStores.File.Batches.list() to list the files in a batch
openai |> VectorStores.File.Batches.list(vector_store_id, vsf_batch_id)
Thread
alias OpenaiEx.Beta.Threads
alias OpenaiEx.Beta.Threads.Messages
Create thread
Use the Threads.create() function to create threads. A thread can be created empty or with messages.
{:ok, empty_thread} = openai |> Threads.create()
msg_hr =
Messages.new(
role: "user",
content: "What company do we work at?",
attachments: [%{file_id: file_id, tools: [%{type: "file_search"}]}]
)
msg_ai = Messages.new(role: "user", content: "How does AI work? Explain it in simple terms.")
thrd_req = Threads.new(messages: [msg_hr, msg_ai])
{:ok, thread} = openai |> Threads.create(thrd_req)
Retrieve thread
Threads.retrieve() can be used to get the thread parameters given the id.
thread_id = thread["id"]
openai |> Threads.retrieve(thread_id)
Modify thread
The metadata for a thread can be modified using Threads.update()
openai |> Threads.update(thread_id, %{metadata: %{modified: "true", user: "abc123"}})
Delete thread
Use Threads.delete() to delete a thread
openai |> Threads.delete(thread_id)
Verify deletion
openai |> Threads.retrieve(thread_id)
Messages
Create message
You can create a single message for a thread using Threads.Messages.create()
thread_id = empty_thread["id"]
{:ok, message} = openai |> Threads.Messages.create(thread_id, msg_hr)
Retrieve message
Use Threads.Messages.retrieve() to retrieve a message
message_id = message["id"]
openai |> Threads.Messages.retrieve(%{thread_id: thread_id, message_id: message_id})
Modify message
The metadata for a message can be modified by Threads.Messages.update()
metadata = %{modified: "true", user: "abc123"}
upd_msg_req =
Threads.Messages.new(thread_id: thread_id, message_id: message_id, metadata: metadata)
{:ok, message} = openai |> Threads.Messages.update(upd_msg_req)
List messages
Use Threads.Messages.list() to get all the messages for a given thread
openai |> Threads.Messages.list(thread_id)
Runs
alias OpenaiEx.Beta.Threads.Runs
Create run
A run represents an execution on a thread. Use Runs.create() to run an assistant on a thread
{:ok, math_assistant} = openai |> Assistants.create(math_assistant_req)
math_assistant_id = math_assistant["id"]
run_req = Runs.new(thread_id: thread_id, assistant_id: math_assistant_id)
{:ok, run} = openai |> Runs.create(run_req)
Streaming
It is possible to stream the result of executing a run or resuming a run after submitting tool outputs. To accomplish this, pass stream: true to the create, create_thread_and_run and submit_tool_outputs functions.
{:ok, run_stream} = openai |> Runs.create(run_req, stream: true)
IO.puts(inspect(run_stream))
IO.puts(inspect(run_stream.task_pid))
run_stream.body_stream |> Stream.flat_map(& &1) |> Enum.each(fn x -> IO.puts(inspect(x)) end)
Retrieve run
Retrieve a run using Runs.retrieve()
run_id = run["id"]
openai |> Runs.retrieve(%{thread_id: thread_id, run_id: run_id})
Modify run
The run metadata can be modified using the Runs.update() function
openai
|> Runs.update(%{
thread_id: thread_id,
run_id: run_id,
metadata: %{user_id: "user_zmVY6FvuBDDwIqM4KgH"}
})
List runs
List the runs belonging to a thread using Runs.list()
openai |> Runs.list(thread_id)
Submit tool outputs to a run
When a run has status: "requires_action" and required_action.type is submit_tool_outputs, Runs.submit_tool_outputs() can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.
openai
|> Runs.submit_tool_outputs(%{
thread_id: thread_id,
run_id: run_id,
tool_outputs: [%{tool_call_id: "foobar", output: "28C"}]
})
Cancel a run
You can cancel a run that is in_progress using Runs.cancel()
openai |> Runs.cancel(%{thread_id: thread_id, run_id: run_id})
Create thread and run
Use Runs.create_and_run() to create a thread and run.
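For example (a sketch; the inline thread parameter mirrors the OpenAI "create thread and run" endpoint, and passing it through Runs.new() is an assumption here):
# a sketch: create a thread and start a run on it in one call.
# the :thread parameter with inline messages is assumed to pass
# through to the underlying endpoint
ctr_req =
  Runs.new(
    assistant_id: math_assistant_id,
    thread: %{messages: [%{role: "user", content: "What is a binomial distribution?"}]}
  )

{:ok, thread_run} = openai |> Runs.create_and_run(ctr_req)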