API Reference ex_openai.ex v2.0.0-beta2
Modules
The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.
The conversation that this response belongs to.
An error that occurred while generating the response.
Module for representing the OpenAI schema ImageRefParam-2.
Module for representing the OpenAI schema MessagePhase-2.
ExOpenAI SDK - Auto-generated Elixir client for the OpenAI API.
Generates Elixir modules from parsed OpenAPI Schema structs.
Parser for OpenAPI YAML documentation.
Represents the parsed OpenAPI documentation.
Represents an OpenAPI operation (HTTP method handler).
Represents an OpenAPI parameter.
Represents an OpenAPI path with its operations.
Represents an OpenAPI request body.
Represents an OpenAPI response.
Represents an OpenAPI schema component.
Generates function body AST for OpenAPI operations.
Generates @doc and @spec attributes for OpenAPI operation functions.
Generates Elixir modules from parsed OpenAPI Path structs.
Converts raw JSON API responses into Elixir structs based on the OpenAPI response schema and the generated component modules.
Shared utilities for resolving OpenAPI schemas.
Writes generated SDK modules to source files under lib/ex_openai/generated.
Generates Elixir typespecs from OpenAPI Schema structs.
Indicates that a thread is active.
Module for representing the OpenAI schema AddUploadPartRequest.
Represents an individual Admin API key in an org.
An annotation that applies to a span of output text.
Module for representing the OpenAI schema ApiKeyList.
Module for representing the OpenAI schema ApplyPatchCallOutputStatus.
Outcome values reported for apply_patch tool call outputs.
Module for representing the OpenAI schema ApplyPatchCallStatus.
Status values reported for apply_patch tool calls.
Instruction describing how to create a file via the apply_patch tool.
Instruction for creating a new file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
Instruction for deleting an existing file via the apply_patch tool.
One of the create_file, delete_file, or update_file operations supplied to the apply_patch tool.
A tool call that applies file diffs by creating, deleting, or updating files.
A tool call representing a request to create, delete, or update files using diff patches.
The output emitted by an apply patch tool call.
The streamed output emitted by an apply patch tool call.
Allows the assistant to create, delete, or update files using unified diffs.
Instruction describing how to update a file via the apply_patch tool.
Instruction for updating an existing file via the apply_patch tool.
Module for representing the OpenAI schema ApproximateLocation.
Detailed information about a role assignment entry returned when listing assignments.
Assistant-authored message within a thread.
Represents an assistant that can call the model and use tools.
Represents an event emitted when streaming a Run.
Module for representing the OpenAI schema AssistantSupportedModels.
Module for representing the OpenAI schema AssistantToolsCode.
Module for representing the OpenAI schema AssistantToolsFileSearch.
Module for representing the OpenAI schema AssistantToolsFileSearchTypeOnly.
Module for representing the OpenAI schema AssistantToolsFunction.
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
Specifies a tool the model should use. Use to force the model to call a specific tool.
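The tool_choice values described above can be sketched as plain Elixir terms when assembling a request body (field names follow the OpenAI API; the exact interface of the generated request functions may differ):

```elixir
# tool_choice accepts either a string mode or a map forcing one tool.
tool_choice_none = "none"          # never call a tool; just answer
tool_choice_auto = "auto"          # default: model decides
tool_choice_required = "required"  # must call at least one tool

# Force a specific built-in tool or a named function:
force_file_search = %{"type" => "file_search"}

force_function = %{
  "type" => "function",
  "function" => %{"name" => "my_function"}
}
```

The string forms and the map forms are interchangeable wherever a request accepts tool_choice.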
Attachment metadata included on thread items.
Module for representing the OpenAI schema AttachmentType.
The format of the output, in one of these options: json, text, srt, verbose_json, vtt, or diarized_json. For gpt-4o-transcribe and gpt-4o-mini-transcribe, the only supported format is json. For gpt-4o-transcribe-diarize, the supported formats are json, text, and diarized_json, with diarized_json required to receive speaker annotations.
Module for representing the OpenAI schema AudioTranscription.
A log of a user action or configuration change within this organization.
The actor who performed the audit logged action.
The API Key used to perform the audit logged action.
The service account that performed the audit logged action.
The session in which the audit logged action was performed.
The user who performed the audit logged action.
The event type.
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
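The documented defaults of the auto strategy can be written out as an explicit static chunking strategy (a sketch; the values come straight from the description above):

```elixir
# Equivalent "static" configuration mirroring the current "auto" defaults.
chunking_strategy = %{
  "type" => "static",
  "static" => %{
    "max_chunk_size_tokens" => 800,
    "chunk_overlap_tokens" => 400
  }
}
```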
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
Controls whether ChatKit automatically generates thread titles.
Module for representing the OpenAI schema Batch.
The expiration policy for the output and/or error files that are generated for a batch.
Represents an individual certificate uploaded to the organization.
Constrains the tools available to the model to a pre-defined set.
Constrains the tools available to the model to a pre-defined set.
Module for representing the OpenAI schema ChatCompletionDeleted.
Specifying a particular function via {"name": "my_function"} forces the model to call that function.
Module for representing the OpenAI schema ChatCompletionFunctions.
An object representing a list of Chat Completions.
A call to a custom tool created by the model.
An object representing a list of chat completion messages.
A call to a function tool created by the model.
Module for representing the OpenAI schema ChatCompletionMessageToolCallChunk.
The tool calls generated by the model, such as function calls.
Module for representing the OpenAI schema ChatCompletionModalities.
Specifies a tool the model should use. Use to force the model to call a specific function.
Specifies a tool the model should use. Use to force the model to call a specific custom tool.
Messages sent by the model in response to user messages.
Module for representing the OpenAI schema ChatCompletionRequestAssistantMessageContentPart.
Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, developer messages replace the previous system messages.
Module for representing the OpenAI schema ChatCompletionRequestFunctionMessage.
Module for representing the OpenAI schema ChatCompletionRequestMessage.
Learn about file inputs for text generation.
Module for representing the OpenAI schema ChatCompletionRequestMessageContentPartRefusal.
Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, use developer messages for this purpose instead.
Module for representing the OpenAI schema ChatCompletionRequestSystemMessageContentPart.
Module for representing the OpenAI schema ChatCompletionRequestToolMessage.
Module for representing the OpenAI schema ChatCompletionRequestToolMessageContentPart.
Messages sent by an end user, containing prompts or additional context information.
Module for representing the OpenAI schema ChatCompletionRequestUserMessageContentPart.
A chat completion message generated by the model.
The role of the author of a message.
Module for representing the OpenAI schema ChatCompletionStreamOptions.
A chat completion delta generated by streamed model responses.
Module for representing the OpenAI schema ChatCompletionTokenLogprob.
A function tool that can be used to generate a response.
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
Automatic thread title preferences for the session.
ChatKit configuration for the session.
Upload permissions and limits applied to the session.
History retention preferences returned for the session.
Active per-minute request limit for the session.
Represents a ChatKit session and its resolved configuration.
Module for representing the OpenAI schema ChatSessionStatus.
Optional per-session configuration settings for ChatKit behavior.
Workflow metadata and state returned for the session.
Controls diagnostic tracing during the session.
The chunking strategy used to chunk the file(s). If not set, the auto strategy will be used.
Module for representing the OpenAI schema ClickButtonType.
A click action.
Record of a client side tool invocation initiated by the assistant.
Module for representing the OpenAI schema ClientToolCallStatus.
Indicates that a thread has been closed.
The output of a code interpreter tool call that is a file.
The image output from the code interpreter.
The logs output from the code interpreter.
The output of a code interpreter tool call that is text.
A tool that runs Python code to help generate a response to a prompt.
A tool call to run code.
Module for representing the OpenAI schema CompactResource.
Module for representing the OpenAI schema CompactResponseMethodPublicBody.
A compaction item generated by the v1/responses/compact API.
A compaction item generated by the v1/responses/compact API.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Module for representing the OpenAI schema CompleteUploadRequest.
Usage statistics for the completion request.
Combine multiple filters using and or or.
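A comparison filter and a compound filter can be combined like this (a sketch; the attribute keys region and date are hypothetical, while the eq/gte/and operator names follow the OpenAI filter schema):

```elixir
# Compare an attribute key to a value with a defined operation.
region_filter = %{"type" => "eq", "key" => "region", "value" => "us"}
date_filter = %{"type" => "gte", "key" => "date", "value" => 1_704_067_200}

# Combine multiple filters with "and" (or "or").
compound_filter = %{"type" => "and", "filters" => [region_filter, date_filter]}
```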
Module for representing the OpenAI schema ComputerAction.
Flattened batched actions for computer_use. Each action includes a type discriminator and action-specific fields.
The output of a computer tool call.
Module for representing the OpenAI schema ComputerCallOutputStatus.
A pending safety check for the computer call.
Module for representing the OpenAI schema ComputerEnvironment.
A screenshot of a computer.
A computer screenshot image used with the computer use tool.
A tool that controls a virtual computer. Learn more about the computer tool.
A tool call to a computer use tool. See the computer use guide for more information.
The output of a computer tool call.
Module for representing the OpenAI schema ComputerToolCallOutputResource.
A tool that controls a virtual computer. Learn more about the computer tool.
Module for representing the OpenAI schema ContainerAutoParam.
A citation for a container file used to generate a model response.
Module for representing the OpenAI schema ContainerFileListResource.
Module for representing the OpenAI schema ContainerFileResource.
Module for representing the OpenAI schema ContainerListResource.
Module for representing the OpenAI schema ContainerMemoryLimit.
Module for representing the OpenAI schema ContainerNetworkPolicyAllowlistParam.
Module for representing the OpenAI schema ContainerNetworkPolicyDisabledParam.
Module for representing the OpenAI schema ContainerNetworkPolicyDomainSecretParam.
Module for representing the OpenAI schema ContainerReferenceParam.
Represents a container created with /v1/containers.
Module for representing the OpenAI schema ContainerResource.
Multi-modal input and output contents.
Module for representing the OpenAI schema ContextManagementParam.
A single item within a conversation. The set of possible types are the same as the output type of a Response object.
A list of Conversation items.
The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request.
Input items and output items from this response are automatically added to this conversation after this response completes.
Module for representing the OpenAI schema ConversationResource.
An x/y coordinate pair, e.g. { x: 100, y: 200 }.
The aggregated costs details of the specific time bucket.
Module for representing the OpenAI schema CreateAssistantRequest.
Module for representing the OpenAI schema CreateChatCompletionRequest.
Represents a chat completion response returned by the model, based on the provided input.
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
Parameters for provisioning a new ChatKit session.
Module for representing the OpenAI schema CreateCompletionRequest.
Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
Module for representing the OpenAI schema CreateContainerBody.
Module for representing the OpenAI schema CreateContainerFileBody.
Module for representing the OpenAI schema CreateConversationBody.
Module for representing the OpenAI schema CreateEmbeddingRequest.
Module for representing the OpenAI schema CreateEmbeddingResponse.
A CompletionsRunDataSource object describing a model sampling configuration.
A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema defines the shape of the data used by your testing criteria and required when creating a run.
A chat message that makes up the prompt or context. May include variable references to the item namespace, i.e. {{item.name}}.
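A prompt message with an item-namespace template variable might look like this (the item.ticket_text field is hypothetical; the {{...}} reference syntax is from the description above):

```elixir
# A templated chat message; {{item.ticket_text}} is resolved per eval item.
template_message = %{
  "role" => "user",
  "content" => "Classify this support ticket: {{item.ticket_text}}"
}
```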
A JsonlRunDataSource object that specifies a JSONL file that matches the eval.
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
A data source config which specifies the metadata property of your logs query. This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
Module for representing the OpenAI schema CreateEvalRequest.
A ResponsesRunDataSource object describing a model sampling configuration.
Module for representing the OpenAI schema CreateEvalRunRequest.
Deprecated in favor of LogsDataSourceConfig.
Module for representing the OpenAI schema CreateFileRequest.
Module for representing the OpenAI schema CreateFineTuningCheckpointPermissionRequest.
Module for representing the OpenAI schema CreateFineTuningJobRequest.
Request payload for creating a new group in the organization.
Request payload for adding a user to a group.
Module for representing the OpenAI schema CreateImageEditRequest.
Module for representing the OpenAI schema CreateImageRequest.
Module for representing the OpenAI schema CreateImageVariationRequest.
Module for representing the OpenAI schema CreateMessageRequest.
Module for representing the OpenAI schema CreateModelResponseProperties.
Module for representing the OpenAI schema CreateModerationRequest.
Represents if a given text input is potentially harmful.
Module for representing the OpenAI schema CreateResponse.
Module for representing the OpenAI schema CreateRunRequest.
Uploads a skill either as a directory (multipart files[]) or as a single zip file.
Uploads a new immutable version of a skill.
Module for representing the OpenAI schema CreateSpeechRequest.
Module for representing the OpenAI schema CreateSpeechResponseStreamEvent.
Module for representing the OpenAI schema CreateThreadAndRunRequest.
Options to create a new thread. If no thread is provided when running a request, an empty thread will be created.
Module for representing the OpenAI schema CreateTranscriptionRequest.
Represents a diarized transcription response returned by the model, including the combined transcript and speaker-segment annotations.
Represents a transcription response returned by the model, based on the provided input.
Module for representing the OpenAI schema CreateTranscriptionResponseStreamEvent.
Represents a verbose JSON transcription response returned by the model, based on the provided input.
Module for representing the OpenAI schema CreateTranslationRequest.
Module for representing the OpenAI schema CreateTranslationResponseJson.
Module for representing the OpenAI schema CreateTranslationResponseVerboseJson.
Module for representing the OpenAI schema CreateUploadRequest.
Module for representing the OpenAI schema CreateVectorStoreFileBatchRequest.
Module for representing the OpenAI schema CreateVectorStoreFileRequest.
Module for representing the OpenAI schema CreateVectorStoreRequest.
Parameters for creating a character from an uploaded video.
JSON parameters for editing an existing generated video.
Parameters for editing an existing generated video.
JSON parameters for extending an existing generated video.
Multipart parameters for extending an existing generated video.
JSON parameters for creating a new video generation job.
Multipart parameters for creating a new video generation job.
Parameters for remixing an existing generated video.
Module for representing the OpenAI schema CreateVoiceConsentRequest.
Module for representing the OpenAI schema CreateVoiceRequest.
A grammar defined by the user.
Unconstrained free-form text.
A call to a custom tool created by the model.
The output of a custom tool call from your code, being sent back to the model.
Module for representing the OpenAI schema CustomToolCallOutputResource.
Module for representing the OpenAI schema CustomToolCallResource.
A custom tool that processes input using a specified format.
A custom tool that processes input using a specified format. Learn more about custom tools.
Module for representing the OpenAI schema DeleteAssistantResponse.
Module for representing the OpenAI schema DeleteCertificateResponse.
Module for representing the OpenAI schema DeleteFileResponse.
Module for representing the OpenAI schema DeleteFineTuningCheckpointPermissionResponse.
Module for representing the OpenAI schema DeleteMessageResponse.
Module for representing the OpenAI schema DeleteModelResponse.
Module for representing the OpenAI schema DeleteThreadResponse.
Module for representing the OpenAI schema DeleteVectorStoreFileResponse.
Module for representing the OpenAI schema DeleteVectorStoreResponse.
Module for representing the OpenAI schema DeletedConversation.
Module for representing the OpenAI schema DeletedConversationResource.
Confirmation payload returned after unassigning a role.
Module for representing the OpenAI schema DeletedSkillResource.
Module for representing the OpenAI schema DeletedSkillVersionResource.
Confirmation payload returned after deleting a thread.
Confirmation payload returned after deleting a video.
Module for representing the OpenAI schema DetailEnum.
Occurs when a stream ends.
A double click action.
A drag action.
An x/y coordinate pair, e.g. { x: 100, y: 200 }.
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
JSON request body for image edits.
Represents an embedding vector returned by embedding endpoint.
Module for representing the OpenAI schema EmptyModelParam.
Module for representing the OpenAI schema Error.
Occurs when an error occurs. This can happen due to an internal server error or a timeout.
Module for representing the OpenAI schema ErrorResponse.
An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration.
An object representing an error response from the Eval API.
A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be used in your evals.
Module for representing the OpenAI schema EvalGraderLabelModel.
Module for representing the OpenAI schema EvalGraderPython.
Module for representing the OpenAI schema EvalGraderScoreModel.
Module for representing the OpenAI schema EvalGraderStringCheck.
Module for representing the OpenAI schema EvalGraderTextSimilarity.
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
A list of inputs, each of which may be either an input text, output text, input image, or input audio object.
A single content item: input text, output text, input image, or input audio.
A text output from the model.
A text input to the model.
An image input block used within EvalItem content arrays.
Module for representing the OpenAI schema EvalJsonlFileContentSource.
Module for representing the OpenAI schema EvalJsonlFileIdSource.
An object representing a list of evals.
A LogsDataSourceConfig which specifies the metadata property of your logs query. This is usually metadata like usecase=chatbot or prompt-version=v2, etc. The schema returned by this data source config is used to define what variables are available in your evals. item and sample are both defined when using this data source config.
An EvalResponsesSource object describing a run data source configuration.
A schema representing an evaluation run.
An object representing a list of runs for an evaluation.
A schema representing an evaluation run output item.
An object representing a list of output items for an evaluation run.
A single grader result for an evaluation run output item.
Deprecated in favor of LogsDataSourceConfig.
A StoredCompletionsRunDataSource configuration describing a set of filters.
Controls when the session expires relative to an anchor timestamp.
Annotation that references an uploaded file.
Attachment source referenced by an annotation.
A citation to a file.
The expiration policy for a file. By default, files with purpose=batch expire after 30 days and all other files are persisted until they are manually deleted.
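The 30-day batch default can be written as an explicit expires_after policy (a sketch; the anchor/seconds shape follows the OpenAI file expiration schema):

```elixir
# Expire 30 days after creation (30 * 24 * 60 * 60 seconds).
expires_after = %{"anchor" => "created_at", "seconds" => 30 * 24 * 60 * 60}
```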
A path to a file.
The ranker to use for the file search. If not specified, the auto ranker will be used.
The ranking options for the file search. If not specified, the file search tool will use the auto ranker and a score_threshold of 0.
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
The results of a file search tool call. See the file search guide for more information.
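Put together, a file_search tool definition with explicit ranking options might look like this (a sketch; the vector store ID is a placeholder, and the defaults mirror the description above):

```elixir
# file_search tool with the documented defaults spelled out.
file_search_tool = %{
  "type" => "file_search",
  "vector_store_ids" => ["vs_placeholder123"],  # hypothetical store ID
  "ranking_options" => %{"ranker" => "auto", "score_threshold" => 0}
}
```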
Controls whether users can upload files.
Module for representing the OpenAI schema Filters.
Module for representing the OpenAI schema FineTuneChatCompletionRequestAssistantMessage.
The hyperparameters used for the DPO fine-tuning job.
Configuration for the DPO fine-tuning method.
The method used for fine-tuning.
The hyperparameters used for the reinforcement fine-tuning job.
Configuration for the reinforcement fine-tuning method.
The hyperparameters used for the fine-tuning job.
Configuration for the supervised fine-tuning method.
The checkpoint.permission object represents a permission for a fine-tuned model checkpoint.
Module for representing the OpenAI schema FineTuningIntegration.
The fine_tuning.job object represents a fine-tuning job that has been created through the API.
The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
Fine-tuning job event object.
Module for representing the OpenAI schema FunctionAndCustomToolCallOutput.
Module for representing the OpenAI schema FunctionCallItemStatus.
The output of a function tool call.
Module for representing the OpenAI schema FunctionCallOutputStatusEnum.
Module for representing the OpenAI schema FunctionCallStatus.
Module for representing the OpenAI schema FunctionObject.
The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
Execute a shell command.
Commands and limits describing how to run the shell tool call.
A tool call that executes one or more shell commands in a managed environment.
A tool representing a request to execute one or more shell commands.
Status values reported for shell tool calls.
The output of a shell tool call that was emitted.
The content of a shell tool call output that was emitted.
Captured stdout and stderr for a portion of a shell tool call output.
Indicates that the shell commands finished and returned an exit code.
Indicates that the shell commands finished and returned an exit code.
The streamed output items emitted by a shell tool call.
The exit or timeout outcome associated with this shell call.
Indicates that the shell call exceeded its configured time limit.
Indicates that the shell call exceeded its configured time limit.
A tool that allows the model to execute shell commands.
Defines a function in your own code the model can choose to call. Learn more about function calling.
A tool call to run a function. See the function calling guide for more information.
The output of a function tool call.
Module for representing the OpenAI schema FunctionToolCallOutputResource.
Module for representing the OpenAI schema FunctionToolCallResource.
Module for representing the OpenAI schema FunctionToolParam.
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
A MultiGrader object combines the output of multiple graders to produce a single score.
A PythonGrader object that runs a python script on the input.
A ScoreModelGrader object that uses a model to assign a score to the input.
A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
A TextSimilarityGrader object which grades text based on similarity metrics.
Module for representing the OpenAI schema GrammarSyntax1.
Summary information about a group returned in role assignment responses.
Confirmation payload returned after deleting a group.
Paginated list of organization groups.
Response returned after updating a group.
Details about an organization group.
Role assignment linking a group to a role.
Confirmation payload returned after adding a user to a group.
Confirmation payload returned after removing a user from a group.
Controls how much historical context is retained for the session.
Module for representing the OpenAI schema HybridSearchOptions.
Represents the content or the URL of an image generated by the OpenAI API.
Module for representing the OpenAI schema ImageDetail.
Emitted when image editing has completed and the final image is available.
Emitted when a partial image is available during image editing streaming.
Module for representing the OpenAI schema ImageEditStreamEvent.
Module for representing the OpenAI schema ImageGenActionEnum.
Emitted when image generation has completed and the final image is available.
The input tokens detailed information for the image generation.
The output token details for the image generation.
Emitted when a partial image is available during image generation streaming.
Module for representing the OpenAI schema ImageGenStreamEvent.
A tool that generates images using the GPT image models.
An image generation request made by the model.
For gpt-image-1 only, the token usage information for the image generation.
Reference an input image by either URL or uploaded file ID. Provide exactly one of image_url or file_id.
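An input-image content part, then, carries either an image_url or a file_id, never both (a sketch; the URL and file ID are placeholders):

```elixir
# By URL: image_url set, file_id absent.
image_by_url = %{
  "type" => "input_image",
  "image_url" => "https://example.com/cat.png",
  "detail" => "auto"
}

# By uploaded file: file_id set, image_url absent.
image_by_file = %{
  "type" => "input_image",
  "file_id" => "file-abc123",  # hypothetical file ID
  "detail" => "auto"
}
```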
The response from the image generation endpoint.
For the GPT image models only, the token usage information for the image generation.
Specify additional output data to include in the model response.
Model and tool overrides applied when generating the assistant response.
Module for representing the OpenAI schema InlineSkillParam.
Inline skill payload.
An audio input to the model.
Module for representing the OpenAI schema InputContent.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1 and gpt-image-1.5 and later models, unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
A file input to the model.
A file input to the model.
An image input to the model. Learn about image inputs.
An image input to the model. Learn about image inputs
Module for representing the OpenAI schema InputItem.
A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role.
A list of one or many input items to the model, containing different content types.
Module for representing the OpenAI schema InputMessageResource.
Text, image, or file inputs to the model, used to generate a response.
A text input to the model.
A text input to the model.
Represents an individual invite to the organization.
Module for representing the OpenAI schema InviteDeleteResponse.
Module for representing the OpenAI schema InviteListResponse.
Request payload for granting a group access to a project.
Module for representing the OpenAI schema InviteRequest.
Content item used to generate a response.
An item representing a message, tool call, tool output, reasoning, or other response element.
An internal identifier for an item to reference.
Content item used to generate a response.
A collection of keypresses the model would like to perform.
Module for representing the OpenAI schema ListAssistantsResponse.
Module for representing the OpenAI schema ListAuditLogsResponse.
Module for representing the OpenAI schema ListBatchesResponse.
Module for representing the OpenAI schema ListCertificatesResponse.
Module for representing the OpenAI schema ListFilesResponse.
Module for representing the OpenAI schema ListFineTuningCheckpointPermissionResponse.
Module for representing the OpenAI schema ListFineTuningJobCheckpointsResponse.
Module for representing the OpenAI schema ListFineTuningJobEventsResponse.
Module for representing the OpenAI schema ListMessagesResponse.
Module for representing the OpenAI schema ListModelsResponse.
Module for representing the OpenAI schema ListPaginatedFineTuningJobsResponse.
Module for representing the OpenAI schema ListRunStepsResponse.
Module for representing the OpenAI schema ListRunsResponse.
Module for representing the OpenAI schema ListVectorStoreFilesResponse.
Module for representing the OpenAI schema ListVectorStoresResponse.
Module for representing the OpenAI schema LocalEnvironmentParam.
Represents the use of a local environment to perform shell actions.
Module for representing the OpenAI schema LocalShellCallOutputStatusEnum.
Module for representing the OpenAI schema LocalShellCallStatus.
Execute a shell command on the server.
A tool call to run a command on the local shell.
The output of a local shell tool call.
A tool that allows the model to execute shell commands in a local environment.
Module for representing the OpenAI schema LocalSkillParam.
Indicates that a thread is locked and cannot accept new input.
The log probability of a token.
A log probability object.
A request for human approval of a tool invocation.
A response to an MCP approval request.
A response to an MCP approval request.
A list of tools available on an MCP server.
A tool available on an MCP server.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
An invocation of a tool on an MCP server.
Module for representing the OpenAI schema MCPToolCallStatus.
A filter object to specify which tools are allowed.
A message to or from the model.
References an image File in the content of a message.
References an image URL in the content of a message.
The refusal content generated by the assistant.
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the "file_search" tool to search files.
A URL for the file that's generated when the assistant used the code_interpreter tool to generate a file.
The text content that is part of a message.
References an image File in the content of a message.
References an image URL in the content of a message.
The refusal content that is part of a message.
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the "file_search" tool to search files.
A URL for the file that's generated when the assistant used the code_interpreter tool to generate a file.
The text content that is part of a message.
Represents a message delta, i.e. any changed fields on a message during streaming.
Represents a message within a thread.
Labels an assistant message as intermediate commentary (commentary) or the final answer (final_answer).
For models like gpt-5.3-codex and beyond, when sending follow-up requests, preserve and resend
phase on all assistant messages; dropping it can degrade performance. Not used for user messages.
The text content that is part of a message.
Module for representing the OpenAI schema MessageRole.
Module for representing the OpenAI schema MessageStatus.
Module for representing the OpenAI schema MessageStreamEvent.
Module for representing the OpenAI schema Metadata.
Describes an OpenAI model offering that can be used with the API.
Module for representing the OpenAI schema ModelIds.
Model ID used to generate the response, like gpt-5 or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
Module for representing the OpenAI schema ModelIdsResponses.
Module for representing the OpenAI schema ModelIdsShared.
Module for representing the OpenAI schema ModelResponseProperties.
Module for representing the OpenAI schema ModifyAssistantRequest.
Module for representing the OpenAI schema ModifyCertificateRequest.
Module for representing the OpenAI schema ModifyMessageRequest.
Module for representing the OpenAI schema ModifyRunRequest.
Module for representing the OpenAI schema ModifyThreadRequest.
A mouse move action.
Groups function/custom tools under a shared namespace.
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
The File object represents a document that has been uploaded to OpenAI.
Module for representing the OpenAI schema OrderEnum.
This is returned when the chunking strategy is unknown. Typically, this is because the file was indexed before the chunking_strategy concept was introduced in the API.
An audio output from the model.
Module for representing the OpenAI schema OutputContent.
Module for representing the OpenAI schema OutputItem.
An output message from the model.
Module for representing the OpenAI schema OutputMessageContent.
A text output from the model.
Whether to enable parallel function calling during tool use.
Module for representing the OpenAI schema PartialImages.
Static predicted output content, such as the content of a text file that is being regenerated.
Represents an individual project.
Represents an individual API key in a project.
Module for representing the OpenAI schema ProjectApiKeyDeleteResponse.
Module for representing the OpenAI schema ProjectApiKeyListResponse.
Module for representing the OpenAI schema ProjectCreateRequest.
Details about a group's membership in a project.
Confirmation payload returned after removing a group from a project.
Paginated list of groups that have access to a project.
Module for representing the OpenAI schema ProjectListResponse.
Represents a project rate limit config.
Module for representing the OpenAI schema ProjectRateLimitListResponse.
Module for representing the OpenAI schema ProjectRateLimitUpdateRequest.
Represents an individual service account in a project.
Module for representing the OpenAI schema ProjectServiceAccountApiKey.
Module for representing the OpenAI schema ProjectServiceAccountCreateRequest.
Module for representing the OpenAI schema ProjectServiceAccountCreateResponse.
Module for representing the OpenAI schema ProjectServiceAccountDeleteResponse.
Module for representing the OpenAI schema ProjectServiceAccountListResponse.
Module for representing the OpenAI schema ProjectUpdateRequest.
Represents an individual user in a project.
Module for representing the OpenAI schema ProjectUserCreateRequest.
Module for representing the OpenAI schema ProjectUserDeleteResponse.
Module for representing the OpenAI schema ProjectUserListResponse.
Module for representing the OpenAI schema ProjectUserUpdateRequest.
Module for representing the OpenAI schema Prompt.
Request payload for assigning a role to a group or user.
Request payload for creating a custom role.
Paginated list of roles available on an organization or project.
Request payload for updating an existing role.
Module for representing the OpenAI schema RankerVersionType.
Module for representing the OpenAI schema RankingOptions.
Controls request rate limits for the session.
Module for representing the OpenAI schema RealtimeAudioFormats.
Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.
Send this event when you want to remove any item from the conversation
history. The server will respond with a conversation.item.deleted event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD.
The server will respond with a conversation.item.retrieved event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.
Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. In Server VAD mode, the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually.
Send this event to clear the audio bytes in the buffer. The server will
respond with an input_audio_buffer.cleared event.
Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event, the server will commit the audio buffer automatically.
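The append/commit/clear lifecycle described above can be sketched as client event payloads. The event type names come from these descriptions; the base64 `audio` field name is an assumption for illustration, not taken from this index.

```python
import base64
import json

# Placeholder PCM16 bytes standing in for microphone input.
pcm_chunk = b"\x00\x01" * 8

# Append audio bytes to the temporary input audio buffer.
# (The "audio" field name is assumed; audio is base64-encoded.)
append_event = {
    "type": "input_audio_buffer.append",
    "audio": base64.b64encode(pcm_chunk).decode("ascii"),
}

# Manual commit creates a new user message item from the buffer.
# Only needed when Server VAD is disabled; in Server VAD mode the
# server commits automatically.
commit_event = {"type": "input_audio_buffer.commit"}

# Clearing discards buffered audio; the server replies with an
# input_audio_buffer.cleared event.
clear_event = {"type": "input_audio_buffer.clear"}

for event in (append_event, commit_event, clear_event):
    print(json.dumps(event))
```

In practice each payload would be serialized and sent over the session's WebSocket connection.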
WebRTC/SIP Only: Emit to cut off the current audio response. This will trigger the server to
stop generating audio and emit an output_audio_buffer.cleared event. This
event should be preceded by a response.cancel client event to stop the
generation of the current response.
Learn more.
Send this event to cancel an in-progress response. The server will respond
with a response.done event with a status of response.status=cancelled. If
there is no response to cancel, the server will respond with an error.
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
Send this event to update the session’s default configuration.
The client may send this event at any time to update any field,
except for voice. However, note that once a session has been
initialized with a particular model, it can’t be changed to
another model using session.update.
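A session.update payload consistent with the constraint above might look like the following sketch. The nested "session" wrapper and the specific fields shown are assumptions for illustration; per the description, the model field is deliberately omitted because it cannot be changed after initialization.

```python
import json

# Sketch of a session.update client event. Fields inside "session"
# are illustrative assumptions; note that no "model" key is sent,
# since the model cannot be changed once the session is initialized.
session_update = {
    "type": "session.update",
    "session": {
        "instructions": "Answer concisely.",
        "turn_detection": {"type": "server_vad"},
    },
}

print(json.dumps(session_update))
```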
Send this event to update a transcription session.
The response resource.
Create a new Realtime response with these parameters
Returned when a conversation item is created. There are several scenarios that produce this event.
Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server's understanding of the conversation history with the client's view.
This event is the output of audio transcription for user audio written to the
user audio buffer. Transcription begins when the input audio buffer is
committed by the client or server (in server_vad mode). Transcription runs
asynchronously with Response creation, so this event may come before or after
the Response events.
Returned when the text value of an input audio transcription content part is updated.
Returned when input audio transcription is configured, and a transcription
request for a user message failed. These events are separate from other
error events so that the client can identify the related Item.
Returned when an input audio transcription segment is identified for an item.
Returned when a conversation item is retrieved with conversation.item.retrieve.
Returned when an earlier assistant audio message item is truncated by the
client with a conversation.item.truncate event. This event is used to
synchronize the server's understanding of the audio with the client's playback.
Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementors monitor and log error messages by default.
Returned when the input audio buffer is cleared by the client with an
input_audio_buffer.clear event.
Returned when an input audio buffer is committed, either by the client or
automatically in server VAD mode. The item_id property is the ID of the user
message item that will be created, thus a conversation.item.created event
will also be sent to the client.
Sent by the server when in server_vad mode to indicate that speech has been
detected in the audio buffer. This can happen any time audio is added to the
buffer (unless speech is already detected). The client may want to use this
event to interrupt audio playback or provide visual feedback to the user.
Returned in server_vad mode when the server detects the end of speech in
the audio buffer. The server will also send a conversation.item.created
event with the user message item that is created from the audio buffer.
Returned when listing MCP tools has completed for an item.
Returned when listing MCP tools has failed for an item.
Returned when listing MCP tools is in progress for an item.
Emitted at the beginning of a Response to indicate the updated rate limits. When a Response is created, some tokens are "reserved" for the output tokens; the rate limits shown here reflect that reservation, which is then adjusted once the Response is completed.
Returned when the model-generated audio is updated.
Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the model-generated transcription of audio output is updated.
Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a new content part is added to an assistant message item during response generation.
Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a new Response is created. The first event of response creation,
where the response is in an initial state of in_progress.
Returned when a Response is done streaming. Always emitted, no matter the
final state. The Response object included in the response.done event will
include all output Items in the Response but will omit the raw audio data.
Returned when the model-generated function call arguments are updated.
Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when MCP tool call arguments are updated during response generation.
Returned when MCP tool call arguments are finalized during response generation.
Returned when an MCP tool call has completed successfully.
Returned when an MCP tool call has failed.
Returned when an MCP tool call has started and is in progress.
Returned when a new Item is created during Response generation.
Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the text value of an "output_text" content part is updated.
Returned when the text value of an "output_text" content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.
Returned when a session is updated with a session.update event, unless
there is an error.
Returned when a transcription session is created.
Returned when a transcription session is updated with a transcription_session.update event, unless
there is an error.
Parameters required to initiate a realtime call and receive the SDP answer needed to complete a WebRTC peer connection. Provide an SDP offer generated by your client and optionally configure the session that will answer the call.
Parameters required to transfer a SIP call to a new destination using the Realtime API.
Parameters used to decline an incoming SIP call handled by the Realtime API.
A realtime client event.
Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.
Send this event when you want to remove any item from the conversation
history. The server will respond with a conversation.item.deleted event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD.
The server will respond with a conversation.item.retrieved event,
unless the item does not exist in the conversation history, in which case the
server will respond with an error.
Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.
Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. A "commit" will create a new user message item in the conversation history from the buffer content and clear the buffer. Input audio transcription (if enabled) will be generated when the buffer is committed.
Send this event to clear the audio bytes in the buffer. The server will
respond with an input_audio_buffer.cleared event.
Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event, the server will commit the audio buffer automatically.
WebRTC/SIP Only: Emit to cut off the current audio response. This will trigger the server to
stop generating audio and emit an output_audio_buffer.cleared event. This
event should be preceded by a response.cancel client event to stop the
generation of the current response.
Learn more.
Send this event to cancel an in-progress response. The server will respond
with a response.done event with a status of response.status=cancelled. If
there is no response to cancel, the server will respond with an error. It's safe
to call response.cancel even when no response is in progress; an error will be
returned and the session will remain unaffected.
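The cancel flow described above can be sketched as follows; `is_cancelled` is a hypothetical helper for checking the response.done server event, not part of any SDK.

```python
# Sketch of the response.cancel client event. Per the description
# above, the server answers with a response.done event carrying
# status == "cancelled", or with an error if there is no response
# to cancel; the session stays open either way.
cancel_event = {"type": "response.cancel"}


def is_cancelled(done_event: dict) -> bool:
    # Hypothetical helper: detect the cancelled status on a
    # response.done server event.
    return (
        done_event.get("type") == "response.done"
        and done_event.get("response", {}).get("status") == "cancelled"
    )


print(is_cancelled({"type": "response.done", "response": {"status": "cancelled"}}))
```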
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
Send this event to update the session’s configuration.
The client may send this event at any time to update any field
except for voice and model. voice can be updated only if there have been no other audio outputs yet.
Send this event to update a transcription session.
A single item within a Realtime conversation.
A function call item in a Realtime conversation.
A function call output item in a Realtime conversation.
An assistant message item in a Realtime conversation.
A system message in a Realtime conversation can be used to provide additional context or instructions to the model. This is similar but distinct from the instruction prompt provided at the start of a conversation, as system messages can be added at any point in the conversation. For major changes to the conversation's behavior, use instructions, but for smaller updates (e.g. "the user is now asking about a different topic"), use system messages.
A user message item in a Realtime conversation.
The item to add to the conversation.
Create a session and client secret for the Realtime API. The request can specify either a realtime or a transcription session configuration. Learn more about the Realtime API.
Response from creating a session and client secret for the Realtime API.
Module for representing the OpenAI schema RealtimeFunctionTool.
A Realtime item requesting human approval of a tool invocation.
A Realtime item responding to an MCP approval request.
Module for representing the OpenAI schema RealtimeMCPHTTPError.
A Realtime item listing tools available on an MCP server.
Module for representing the OpenAI schema RealtimeMCPProtocolError.
A Realtime item representing an invocation of a tool on an MCP server.
Module for representing the OpenAI schema RealtimeMCPToolExecutionError.
The response resource.
Create a new Realtime response with these parameters
A realtime server event.
Returned when a conversation is created. Emitted right after session creation.
Sent by the server when an Item is added to the default Conversation. This can happen in several cases.
Returned when a conversation item is created. There are several scenarios that produce this event.
Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server's understanding of the conversation history with the client's view.
Returned when a conversation item is finalized.
This event is the output of audio transcription for user audio written to the user audio buffer. Transcription begins when the input audio buffer is committed by the client or server (when VAD is enabled). Transcription runs asynchronously with Response creation, so this event may come before or after the Response events.
Returned when the text value of an input audio transcription content part is updated with incremental transcription results.
Returned when input audio transcription is configured, and a transcription
request for a user message failed. These events are separate from other
error events so that the client can identify the related Item.
Returned when an input audio transcription segment is identified for an item.
Returned when a conversation item is retrieved with conversation.item.retrieve. This is provided as a way to fetch the server's representation of an item, for example to get access to the post-processed audio data after noise cancellation and VAD. It includes the full content of the Item, including audio data.
Returned when an earlier assistant audio message item is truncated by the
client with a conversation.item.truncate event. This event is used to
synchronize the server's understanding of the audio with the client's playback.
Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementors monitor and log error messages by default.
Returned when the input audio buffer is cleared by the client with an
input_audio_buffer.clear event.
Returned when an input audio buffer is committed, either by the client or
automatically in server VAD mode. The item_id property is the ID of the user
message item that will be created, thus a conversation.item.created event
will also be sent to the client.
SIP Only: Returned when a DTMF event is received. A DTMF event is a message that
represents a telephone keypad press (0–9, *, #, A–D). The event property
is the key that the user pressed. The received_at is the UTC Unix timestamp
at which the server received the event.
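Reading the two DTMF fields named above might look like this sketch; the surrounding event envelope is not shown because its exact type name is not given in this index, and the sample values are invented.

```python
from datetime import datetime, timezone

# Sketch of consuming the documented DTMF fields: "event" holds the
# keypad press (0-9, *, #, A-D) and "received_at" is the UTC Unix
# timestamp at which the server received it. Sample values only.
dtmf = {"event": "#", "received_at": 1_700_000_000}

key = dtmf["event"]
received = datetime.fromtimestamp(dtmf["received_at"], tz=timezone.utc)

print(key, received.isoformat())
```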
Sent by the server when in server_vad mode to indicate that speech has been
detected in the audio buffer. This can happen any time audio is added to the
buffer (unless speech is already detected). The client may want to use this
event to interrupt audio playback or provide visual feedback to the user.
Returned in server_vad mode when the server detects the end of speech in
the audio buffer. The server will also send a conversation.item.created
event with the user message item that is created from the audio buffer.
Returned when the Server VAD timeout is triggered for the input audio buffer. This is configured
with idle_timeout_ms in the turn_detection settings of the session, and it indicates that
there hasn't been any speech detected for the configured duration.
Returned when listing MCP tools has completed for an item.
Returned when listing MCP tools has failed for an item.
Returned when listing MCP tools is in progress for an item.
WebRTC/SIP Only: Emitted when the output audio buffer is cleared. This happens either in VAD
mode when the user has interrupted (input_audio_buffer.speech_started),
or when the client has emitted the output_audio_buffer.clear event to manually
cut off the current audio response.
Learn more.
WebRTC/SIP Only: Emitted when the server begins streaming audio to the client. This event is
emitted after an audio content part has been added (response.content_part.added)
to the response.
Learn more.
WebRTC/SIP Only: Emitted when the output audio buffer has been completely drained on the server,
and no more audio is forthcoming. This event is emitted after the full response
data has been sent to the client (response.done).
Learn more.
Emitted at the beginning of a Response to indicate the updated rate limits. When a Response is created, some tokens are "reserved" for the output tokens; the rate limits shown here reflect that reservation, which is then adjusted once the Response is completed.
Returned when the model-generated audio is updated.
Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the model-generated transcription of audio output is updated.
Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a new content part is added to an assistant message item during response generation.
Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a new Response is created. The first event of response creation,
where the response is in an initial state of in_progress.
Returned when a Response is done streaming. Always emitted, no matter the
final state. The Response object included in the response.done event will
include all output Items in the Response but will omit the raw audio data.
Returned when the model-generated function call arguments are updated.
Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when MCP tool call arguments are updated during response generation.
Returned when MCP tool call arguments are finalized during response generation.
Returned when an MCP tool call has completed successfully.
Returned when an MCP tool call has failed.
Returned when an MCP tool call has started and is in progress.
Returned when a new Item is created during Response generation.
Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when the text value of an "output_text" content part is updated.
Returned when the text value of an "output_text" content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.
Returned when a session is updated with a session.update event, unless
there is an error.
Returned when a transcription session is updated with a transcription_session.update event, unless
there is an error.
Realtime session object for the beta interface.
A new Realtime session configuration, with an ephemeral key. Default TTL for keys is one minute.
Realtime session object configuration.
A Realtime session configuration object.
A new Realtime session configuration, with an ephemeral key. Default TTL for keys is one minute.
Realtime transcription session object configuration.
Realtime transcription session object configuration.
A new Realtime transcription session configuration.
A Realtime transcription session configuration object.
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs.
Module for representing the OpenAI schema RealtimeTurnDetection.
gpt-5 and o-series models only
Module for representing the OpenAI schema ReasoningEffort.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning text from the model.
A refusal from the model.
Module for representing the OpenAI schema Response.
Emitted when there is a partial audio response.
Emitted when the audio response is complete.
Emitted when there is a partial transcript of audio.
Emitted when the full audio transcript is completed.
Emitted when a partial code snippet is streamed by the code interpreter.
Emitted when the code snippet is finalized by the code interpreter.
Emitted when the code interpreter call is completed.
Emitted when a code interpreter call is in progress.
Emitted when the code interpreter is actively interpreting the code snippet.
Emitted when the model response is complete.
Emitted when a new content part is added.
Emitted when a content part is done.
An event that is emitted when a response is created.
Event representing a delta (partial update) to the input of a custom tool call.
Event indicating that input for a custom tool call is complete.
Module for representing the OpenAI schema ResponseError.
The error code for the response.
Emitted when an error occurs.
An event that is emitted when a response fails.
Emitted when a file search call is completed (results found).
Emitted when a file search call is initiated.
Emitted when a file search is currently searching.
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Default response format. Used to generate text responses.
A custom grammar for the model to follow when generating text. Learn more in the custom grammars guide.
Configure the model to generate valid Python code. See the custom grammars guide for more details.
Emitted when there is a partial function-call arguments delta.
Emitted when function-call arguments are finalized.
Emitted when an image generation tool call has completed and the final image is available.
Emitted when an image generation tool call is actively generating an image (intermediate state).
Emitted when an image generation tool call is in progress.
Emitted when a partial image is available during image generation streaming.
Emitted when the response is in progress.
An event that is emitted when a response finishes as incomplete.
A list of Response items.
A logprob is the logarithmic probability that the model assigns to producing a particular token at a given position in the sequence. Less-negative (higher) logprob values indicate greater model confidence in that token choice.
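Since a logprob is the natural log of the token's probability, exponentiating it recovers the probability itself, which makes the "less-negative means more confident" reading concrete:

```python
import math

# A logprob of -0.25 corresponds to exp(-0.25) ~= 0.78 probability;
# a more negative logprob like -2.0 corresponds to a much lower
# probability, i.e. lower model confidence in that token choice.
logprob = -0.25
probability = math.exp(logprob)

print(round(probability, 4))
```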
Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
Emitted when the arguments for an MCP tool call are finalized.
Emitted when an MCP tool call has completed successfully.
Emitted when an MCP tool call has failed.
Emitted when an MCP tool call is in progress.
Emitted when the list of available MCP tools has been successfully retrieved.
Emitted when the attempt to list available MCP tools has failed.
Emitted when the system is in the process of retrieving the list of available MCP tools.
Module for representing the OpenAI schema ResponseModalities.
Emitted when a new output item is added.
Emitted when an output item is marked done.
Assistant response text accompanied by optional annotations.
Emitted when an annotation is added to output text content.
Module for representing the OpenAI schema ResponsePromptVariables.
Module for representing the OpenAI schema ResponseProperties.
Emitted when a response is queued and waiting to be processed.
Emitted when a new reasoning summary part is added.
Emitted when a reasoning summary part is completed.
Emitted when a delta is added to a reasoning summary text.
Emitted when a reasoning summary text is completed.
Emitted when a delta is added to a reasoning text.
Emitted when a reasoning text is completed.
Emitted when there is a partial refusal text.
Emitted when refusal text is finalized.
Module for representing the OpenAI schema ResponseStreamEvent.
Module for representing the OpenAI schema ResponseStreamOptions.
Emitted when there is an additional text delta.
Emitted when text content is finalized.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
Emitted when a web search call is completed.
Emitted when a web search call is initiated.
Emitted when a web search call is executing.
Client events accepted by the Responses WebSocket server.
Client event for creating a response over a persistent WebSocket connection.
This payload uses the same top-level fields as POST /v1/responses.
Server events emitted by the Responses WebSocket server.
Details about a role that can be assigned through the public Roles API.
Confirmation payload returned after deleting a role.
Paginated list of roles assigned to a principal.
Module for representing the OpenAI schema RunCompletionUsage.
Module for representing the OpenAI schema RunGraderRequest.
Module for representing the OpenAI schema RunGraderResponse.
Represents an execution run on a thread.
Module for representing the OpenAI schema RunStepCompletionUsage.
Represents a run step delta, i.e. any changed fields on a run step during streaming.
Details of the message creation by the run step.
Details of the Code Interpreter tool call the run step was involved in.
Module for representing the OpenAI schema RunStepDeltaStepDetailsToolCallsCodeOutputImageObject.
Text output from the Code Interpreter tool call as part of a run step.
Module for representing the OpenAI schema RunStepDeltaStepDetailsToolCallsFileSearchObject.
Module for representing the OpenAI schema RunStepDeltaStepDetailsToolCallsFunctionObject.
Details of the tool call.
Details of the message creation by the run step.
Details of the Code Interpreter tool call the run step was involved in.
Module for representing the OpenAI schema RunStepDetailsToolCallsCodeOutputImageObject.
Text output from the Code Interpreter tool call as part of a run step.
Module for representing the OpenAI schema RunStepDetailsToolCallsFileSearchObject.
The ranking options for the file search.
A result instance of the file search.
Module for representing the OpenAI schema RunStepDetailsToolCallsFunctionObject.
Details of the tool call.
Represents a step in execution of a run.
Module for representing the OpenAI schema RunStepStreamEvent.
Module for representing the OpenAI schema RunStreamEvent.
Tool call objects.
A screenshot action.
A scroll action.
Module for representing the OpenAI schema SearchContentType.
Module for representing the OpenAI schema SearchContextSize.
Module for representing the OpenAI schema ServiceTier.
Updates the default version pointer for a skill.
Module for representing the OpenAI schema SkillListResource.
Module for representing the OpenAI schema SkillReferenceParam.
Module for representing the OpenAI schema SkillResource.
Module for representing the OpenAI schema SkillVersionListResource.
Module for representing the OpenAI schema SkillVersionResource.
Forces the model to call the apply_patch tool when executing a tool call.
Forces the model to call the shell tool when a tool call is required.
Emitted for each chunk of audio data generated during speech synthesis.
Emitted when the speech synthesis is complete and all audio has been streamed.
Module for representing the OpenAI schema StaticChunkingStrategy.
Customize your own chunking strategy by setting chunk size and chunk overlap.
Module for representing the OpenAI schema StaticChunkingStrategyResponseParam.
Not supported with latest reasoning models o3 and o4-mini.
Module for representing the OpenAI schema SubmitToolOutputsRunRequest.
A summary text from the model.
Collection of workflow tasks grouped together in the thread.
Task entry that appears within a TaskGroup.
Task emitted by the workflow to show progress and status updates.
Module for representing the OpenAI schema TaskType.
A text content.
An object specifying the format that the model must output.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
Module for representing the OpenAI schema ThreadItem.
A paginated list of thread items rendered for the ChatKit API.
A paginated list of ChatKit threads.
Represents a thread that contains messages.
Represents a ChatKit thread and its current status.
Module for representing the OpenAI schema ThreadStreamEvent.
Module for representing the OpenAI schema ToggleCertificatesRequest.
Module for representing the OpenAI schema TokenCountsBody.
Module for representing the OpenAI schema TokenCountsResource.
A tool that can be used to generate a response.
Tool selection that the assistant should honor when executing the item.
Constrains the tools available to the model to a pre-defined set.
Use this option to force the model to call a specific custom tool.
Use this option to force the model to call a specific function.
Use this option to force the model to call a specific tool on a remote MCP server.
Controls which (if any) tool is called by the model.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
Module for representing the OpenAI schema ToolSearchCall.
Module for representing the OpenAI schema ToolSearchCallItemParam.
Module for representing the OpenAI schema ToolSearchExecutionType.
Module for representing the OpenAI schema ToolSearchOutput.
Module for representing the OpenAI schema ToolSearchOutputItemParam.
Hosted or BYOT tool search configuration for deferred tools.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
The top log probability of a token.
Emitted when there is an additional text delta. This is also the first event emitted when the transcription starts. Only emitted when you create a transcription with the stream parameter set to true.
Emitted when the transcription is complete. Contains the complete transcription text. Only emitted when you create a transcription with the stream parameter set to true.
Emitted when a diarized transcription returns a completed segment with speaker information. Only emitted when you create a transcription with stream set to true and response_format set to diarized_json.
Usage statistics for models billed by audio input duration.
Usage statistics for models billed by token usage.
Controls how the audio is cut into chunks. When set to "auto", the
server first normalizes loudness and then uses voice activity detection (VAD) to
choose boundaries. A server_vad object can be provided to tweak VAD detection
parameters manually. If unset, the audio is transcribed as a single block.
A segment of diarized transcript text with speaker metadata.
Module for representing the OpenAI schema TranscriptionInclude.
Module for representing the OpenAI schema TranscriptionSegment.
Module for representing the OpenAI schema TranscriptionWord.
Module for representing the OpenAI schema TruncationEnum.
Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
An action to type in text.
Module for representing the OpenAI schema UpdateConversationBody.
Request payload for updating the details of an existing group.
Module for representing the OpenAI schema UpdateVectorStoreFileAttributesRequest.
Module for representing the OpenAI schema UpdateVectorStoreRequest.
Module for representing the OpenAI schema UpdateVoiceConsentRequest.
The Upload object can accept byte chunks in the form of Parts.
Module for representing the OpenAI schema UploadCertificateRequest.
The upload Part represents a chunk of bytes we can add to an Upload object.
Annotation that references a URL.
URL backing an annotation entry.
A citation for a web resource used to generate a model response.
The aggregated audio speeches usage details of the specific time bucket.
The aggregated audio transcriptions usage details of the specific time bucket.
The aggregated code interpreter sessions usage details of the specific time bucket.
The aggregated completions usage details of the specific time bucket.
The aggregated embeddings usage details of the specific time bucket.
The aggregated images usage details of the specific time bucket.
The aggregated moderations usage details of the specific time bucket.
Module for representing the OpenAI schema UsageResponse.
Module for representing the OpenAI schema UsageTimeBucket.
The aggregated vector stores usage details of the specific time bucket.
Represents an individual user within an organization.
Module for representing the OpenAI schema UserDeleteResponse.
Paginated list of user objects returned when inspecting group membership.
Module for representing the OpenAI schema UserListResponse.
Text block that a user contributed to the thread.
User-authored messages within a thread.
Quoted snippet that the user referenced in their message.
Role assignment linking a user to a role.
Module for representing the OpenAI schema UserRoleUpdateRequest.
Module for representing the OpenAI schema VadConfig.
Module for representing the OpenAI schema ValidateGraderRequest.
Module for representing the OpenAI schema ValidateGraderResponse.
The expiration policy for a vector store.
Module for representing the OpenAI schema VectorStoreFileAttributes.
A batch of files attached to a vector store.
Represents the parsed content of a vector store file.
A list of files attached to a vector store.
A vector store is a collection of processed files that can be used by the file_search tool.
Module for representing the OpenAI schema VectorStoreSearchRequest.
Module for representing the OpenAI schema VectorStoreSearchResultContentObject.
Module for representing the OpenAI schema VectorStoreSearchResultItem.
Module for representing the OpenAI schema VectorStoreSearchResultsPage.
Module for representing the OpenAI schema Verbosity.
Module for representing the OpenAI schema VideoCharacterResource.
Module for representing the OpenAI schema VideoContentVariant.
Module for representing the OpenAI schema VideoListResource.
Module for representing the OpenAI schema VideoModel.
Reference to the completed video.
Structured information describing a generated video job.
Module for representing the OpenAI schema VideoSeconds.
Module for representing the OpenAI schema VideoSize.
Module for representing the OpenAI schema VideoStatus.
Module for representing the OpenAI schema VoiceConsentDeletedResource.
Module for representing the OpenAI schema VoiceConsentListResource.
A consent recording used to authorize creation of a custom voice.
A built-in voice name or a custom voice reference.
Module for representing the OpenAI schema VoiceIdsShared.
A custom voice that can be used for audio output.
A wait action.
Action type "find_in_page": Searches for a pattern within a loaded page.
Action type "open_page": Opens a specific URL from search results.
Action type "search": Performs a web search query.
Module for representing the OpenAI schema WebSearchApproximateLocation.
High level guidance for the amount of context window space to use for the
search. One of low, medium, or high. medium is the default.
Approximate location parameters for the search.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
The results of a web search tool call. See the web search guide for more information.
Sent when a batch API request has been cancelled.
Sent when a batch API request has been completed.
Sent when a batch API request has expired.
Sent when a batch API request has failed.
Sent when an eval run has been canceled.
Sent when an eval run has failed.
Sent when an eval run has succeeded.
Sent when a fine-tuning job has been cancelled.
Sent when a fine-tuning job has failed.
Sent when a fine-tuning job has succeeded.
Sent when the Realtime API receives an incoming SIP call.
Sent when a background response has been cancelled.
Sent when a background response has been completed.
Sent when a background response has failed.
Sent when a background response has been interrupted.
Thread item that renders a widget payload.
Workflow reference and overrides applied to the chat session.
Controls diagnostic tracing during the session.
Reads configuration on application start, parses all environment variables (if any) and caches the final config in memory to avoid parsing on each read afterwards.
Mix Tasks
Writes all OpenAPI components as Elixir files to disk.
Generate OpenAI SDK source files under lib/ex_openai/generated.
Updates OpenAI API documentation files.