ExOpenAI.Components.Response (ex_openai.ex v2.0.0-beta2)

Module for representing the OpenAI schema Response.

Fields

  • :background - optional - boolean() | any()

  • :completed_at - optional - number() | any()

  • :conversation - optional - :"Elixir.ExOpenAI.Components.Conversation-2".t() | any()

  • :created_at - required - number()
    Unix timestamp (in seconds) of when this Response was created.

  • :error - required - ExOpenAI.Components.ResponseError.t()

  • :id - required - String.t()
    Unique identifier for this Response.

  • :incomplete_details - required - %{optional(:reason) => :max_output_tokens | :content_filter} | any()

  • :instructions - required - String.t() | [ExOpenAI.Components.InputItem.t()] | any()

  • :max_output_tokens - optional - integer() | any()

  • :max_tool_calls - optional - integer() | any()

  • :metadata - required - ExOpenAI.Components.Metadata.t()

  • :model - required - ExOpenAI.Components.ModelIdsResponses.t()
    Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.

  • :object - required - :response
    The object type of this resource - always set to response.
    Allowed values: "response"

  • :output - required - [ExOpenAI.Components.OutputItem.t()]
    An array of content items generated by the model.

    The length and order of items in the output array is dependent on the model's response.
    Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
  • :output_text - optional - String.t() | any()

  • :parallel_tool_calls - required - boolean()
    Whether to allow the model to run tool calls in parallel.
    Default: true

  • :previous_response_id - optional - String.t() | any()

  • :prompt - optional - ExOpenAI.Components.Prompt.t()

  • :prompt_cache_key - optional - String.t()
    Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field.

  • :prompt_cache_retention - optional - :"in-memory" | :"24h" | any()

  • :reasoning - optional - ExOpenAI.Components.Reasoning.t() | any()

  • :safety_identifier - optional - String.t()
    A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The ID should be a string that uniquely identifies each user, with a maximum length of 64 characters. We recommend hashing the username or email address to avoid sending any identifying information.
    Constraints: maxLength: 64

  • :service_tier - optional - ExOpenAI.Components.ServiceTier.t()

  • :status - optional - :completed | :failed | :in_progress | :cancelled | :queued | :incomplete
    The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.
    Allowed values: "completed", "failed", "in_progress", "cancelled", "queued", "incomplete"

  • :temperature - required - number() | any()

  • :text - optional - ExOpenAI.Components.ResponseTextParam.t()

  • :tool_choice - required - ExOpenAI.Components.ToolChoiceParam.t()

  • :tools - required - ExOpenAI.Components.ToolsArray.t()

  • :top_logprobs - optional - integer() | any()

  • :top_p - required - number() | any()

  • :truncation - optional - :auto | :disabled | any()

  • :usage - optional - ExOpenAI.Components.ResponseUsage.t()

  • :user - optional - String.t()
    This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse.
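
The note on the :output field above suggests preferring output_text over indexing into the output array. A minimal sketch of that fallback logic, written against plain maps with the field names from this schema — the nested item shape (:type, :content, :text keys) and the module/function names are illustrative assumptions, not part of this library:

    defmodule ResponseText do
      # Prefer the aggregated :output_text when it is present as a string;
      # otherwise walk the :output list and join the text of message items.
      def text_of(%{output_text: text}) when is_binary(text), do: text

      def text_of(%{output: output}) when is_list(output) do
        output
        |> Enum.flat_map(fn
          %{type: :message, content: content} when is_list(content) -> content
          _other -> []
        end)
        |> Enum.map(fn
          %{type: :output_text, text: text} -> text
          _other -> ""
        end)
        |> Enum.join()
      end
    end

Because Elixir structs are maps, the same clauses also match a decoded %ExOpenAI.Components.Response{} as long as the nested items carry those keys.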

Summary

Types

@type t() :: %ExOpenAI.Components.Response{
  background: (boolean() | any()) | nil,
  completed_at: (number() | any()) | nil,
  conversation:
    (:"Elixir.ExOpenAI.Components.Conversation-2".t() | any()) | nil,
  created_at: number(),
  error: ExOpenAI.Components.ResponseError.t(),
  id: String.t(),
  incomplete_details:
    %{optional(:reason) => :max_output_tokens | :content_filter} | any(),
  instructions: (String.t() | [ExOpenAI.Components.InputItem.t()]) | any(),
  max_output_tokens: (integer() | any()) | nil,
  max_tool_calls: (integer() | any()) | nil,
  metadata: ExOpenAI.Components.Metadata.t(),
  model: ExOpenAI.Components.ModelIdsResponses.t(),
  object: :response,
  output: [ExOpenAI.Components.OutputItem.t()],
  output_text: (String.t() | any()) | nil,
  parallel_tool_calls: boolean(),
  previous_response_id: (String.t() | any()) | nil,
  prompt: ExOpenAI.Components.Prompt.t() | nil,
  prompt_cache_key: String.t() | nil,
  prompt_cache_retention: ((:"in-memory" | :"24h") | any()) | nil,
  reasoning: (ExOpenAI.Components.Reasoning.t() | any()) | nil,
  safety_identifier: String.t() | nil,
  service_tier: ExOpenAI.Components.ServiceTier.t() | nil,
  status:
    (:completed | :failed | :in_progress | :cancelled | :queued | :incomplete)
    | nil,
  temperature: number() | any(),
  text: ExOpenAI.Components.ResponseTextParam.t() | nil,
  tool_choice: ExOpenAI.Components.ToolChoiceParam.t(),
  tools: ExOpenAI.Components.ToolsArray.t(),
  top_logprobs: (integer() | any()) | nil,
  top_p: number() | any(),
  truncation: ((:auto | :disabled) | any()) | nil,
  usage: ExOpenAI.Components.ResponseUsage.t() | nil,
  user: String.t() | nil
}
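
The :status field enumerates six atoms, and pattern matching on them is the natural way to branch on a decoded response. A sketch under the assumption that the caller has a struct (or map) with those atoms — the module and function names here are hypothetical:

    defmodule ResponseStatus do
      # :completed is the only terminal success state.
      def handle(%{status: :completed} = resp), do: {:ok, resp}

      # :in_progress and :queued mean the response may still change; poll again.
      def handle(%{status: s} = resp) when s in [:in_progress, :queued], do: {:pending, resp}

      # :failed, :cancelled, and :incomplete are terminal non-success states.
      def handle(%{status: s} = resp) when s in [:failed, :cancelled, :incomplete], do: {:halted, resp}
    end

Using `when s in [...]` guards keeps the clause heads exhaustive over the allowed values, so an unexpected status raises a FunctionClauseError instead of being silently treated as success.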