LangChain.ChatModels.ChatOpenAIResponses (LangChain v0.4.0)

Represents the OpenAI Responses API.

Parses and validates inputs for making requests to the OpenAI Responses API.

Converts responses into more specialized LangChain data structures.

ContentPart Types

OpenAI's Responses API supports several types of content parts that can be combined in a single message:

Text Content

Basic text content is the default and most common type:

Message.new_user!("Hello, how are you?")

Image Content

OpenAI supports both base64-encoded images and image URLs:

# Using a base64 encoded image
Message.new_user!([
  ContentPart.text!("What's in this image?"),
  ContentPart.image!("base64_encoded_image_data", media: :jpg)
])

# Using an image URL
Message.new_user!([
  ContentPart.text!("Describe this image:"),
  ContentPart.image_url!("https://example.com/image.jpg")
])

For images, you can specify the detail level which affects token usage:

  • detail: "low" - Lower resolution, fewer tokens
  • detail: "high" - Higher resolution, more tokens
  • detail: "auto" - Let the model decide
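
For example, a sketch of setting the detail level (assuming ContentPart.image!/2 accepts a detail: option alongside media:, per the ContentPart documentation):

# Request low-resolution processing to keep token usage down
ContentPart.image!("base64_encoded_image_data", media: :jpg, detail: "low")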

File Content

OpenAI supports both base64-encoded files and file IDs:

# Using a base64 encoded file
Message.new_user!([
  ContentPart.text!("Process this file:"),
  ContentPart.file!("base64_encoded_file_data",
    type: :base64,
    filename: "document.pdf"
  )
])

# Using a file ID (after uploading to OpenAI)
Message.new_user!([
  ContentPart.text!("Process this file:"),
  ContentPart.file!("file-1234", type: :file_id)
])

Callbacks

See the set of available callbacks: LangChain.Chains.ChainCallbacks

Rate Limit API Response Headers

OpenAI returns rate limit information in the response headers. Those can be accessed using the LLM callback on_llm_ratelimit_info like this:

handlers = %{
  on_llm_ratelimit_info: fn _model, headers ->
    IO.inspect(headers)
  end
}

{:ok, chat} = ChatOpenAIResponses.new(%{callbacks: [handlers]})

When a response is received, something similar to the following will be output to the console.

%{
  "x-ratelimit-limit-requests" => ["5000"],
  "x-ratelimit-limit-tokens" => ["160000"],
  "x-ratelimit-remaining-requests" => ["4999"],
  "x-ratelimit-remaining-tokens" => ["159973"],
  "x-ratelimit-reset-requests" => ["12ms"],
  "x-ratelimit-reset-tokens" => ["10ms"],
  "x-request-id" => ["req_1234"]
}

Token Usage

OpenAI returns token usage information as part of the response body. The LangChain.TokenUsage is added under the :usage key to the metadata of the LangChain.Message and LangChain.MessageDelta structs as they are processed.

The OpenAI documentation instructs that stream_options be provided with include_usage: true for the usage information to be included.

The TokenUsage data is accumulated for MessageDelta structs and the final usage information will be on the LangChain.Message.

NOTE: The TokenUsage information is returned once for all "choices" in a response. The LangChain.TokenUsage data is added to each message, so if your request generates multiple choices, each choice carries the same duplicated usage information and only one copy is meaningful.
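
As a sketch, the accumulated usage can be read from a completed message's metadata (field names follow LangChain.TokenUsage; this assumes a successful run):

# Read the usage attached to the final assistant message
usage = message.metadata[:usage]
usage.input   # tokens consumed by the prompt
usage.output  # tokens generated in the completion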

Built-in Tools

OpenAI's Responses API also supports built-in tools. Of those, Web Search is currently supported.

Example

To optionally permit the model to use web search:

native_web_tool = NativeTool.new!(%{name: "web_search_preview", configuration: %{}})

%{llm: ChatOpenAIResponses.new!(%{model: "gpt-4o"})}
|> LLMChain.new!()
|> LLMChain.add_message(Message.new_user!("Can you tell me something that happened today in Texas?"))
|> LLMChain.add_tools(native_web_tool)
|> LLMChain.run()

You may provide additional configuration per the OpenAI documentation:

web_config = %{
  search_context_size: "medium",
  user_location: %{
    type: "approximate",
    city: "Humble",
    country: "US",
    region: "Texas",
    timezone: "America/Chicago"
  }
}
native_web_tool = NativeTool.new!(%{name: "web_search_preview", configuration: web_config})

You may reference a prior web_search_call in subsequent runs as:

Message.new_assistant!([
  ContentPart.new!(%{
    type: :unsupported,
    options: %{
      id: "ws_123456789", # ID as provided by OpenAI
      status: "completed",
      type: "web_search_call"
    }
  }),
  ContentPart.text!("The Astros won today 5-4...")
])

Note: Not all OpenAI models support web_search_preview. OpenAI will return an error if you request web_search_preview when using a model that doesn't support it.

Tool Choice

OpenAI's Responses API supports forcing a tool to be used.

This is supported through the tool_choice option. It takes a plain Elixir map to provide the configuration.

By default, the LLM will choose a tool call if a tool is available and it determines it is needed. That's the "auto" mode.

Example

To force the LLM's response to make a tool call to the "get_weather" function:

ChatOpenAIResponses.new(%{
  model: "...",
  tool_choice: %{"type" => "function", "function" => %{"name" => "get_weather"}}
})

...or to force a native tool (such as web search):

ChatOpenAIResponses.new(%{
  model: "...",
  tool_choice: "web_search_preview"
})

Summary

Functions

Convert a ContentPart to the expected map of data for the OpenAI API.

Convert a list of ContentParts to the expected map of data for the OpenAI API.

Return the params formatted for an API request.

Set up a ChatOpenAIResponses client configuration.

Set up a ChatOpenAIResponses client configuration and return it or raise an error if invalid.

Restores the model from the config.

Determine if an error should be retried with a fallback model. Aligns with other providers.

Generate a config map that can later restore the model's configuration.

Types

@type t() :: %LangChain.ChatModels.ChatOpenAIResponses{
  api_key: term(),
  callbacks: term(),
  endpoint: term(),
  include: term(),
  json_response: term(),
  json_schema: term(),
  json_schema_name: term(),
  max_output_tokens: term(),
  model: term(),
  reasoning: term(),
  receive_timeout: term(),
  stream: term(),
  temperature: term(),
  tool_choice: term(),
  top_p: term(),
  truncation: term(),
  user: term(),
  verbose_api: term()
}

Functions

content_part_for_api(model, part)

Convert a ContentPart to the expected map of data for the OpenAI API.

content_parts_for_api(model, content_parts)

Convert a list of ContentParts to the expected map of data for the OpenAI API.

decode_stream(arg, done \\ [])

do_api_request(openai, messages, tools, retry_count \\ 3)

@spec do_api_request(
  t(),
  [LangChain.Message.t()],
  LangChain.ChatModels.ChatModel.tools(),
  integer()
) ::
  list() | struct() | {:error, LangChain.LangChainError.t()}
for_api(openai, messages, tools)

@spec for_api(
  t() | LangChain.Message.t() | LangChain.Function.t(),
  message :: [map()],
  LangChain.ChatModels.ChatModel.tools()
) :: %{required(atom()) => any()}

Return the params formatted for an API request.

native_tool_call_for_api(model, arg2)

@spec native_tool_call_for_api(any(), any()) ::
  nil | %{id: any(), status: any(), type: <<_::120>>}
native_tool_calls_for_api(model, content_parts)

@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}

Set up a ChatOpenAIResponses client configuration.

@spec new!(attrs :: map()) :: t() | no_return()

Set up a ChatOpenAIResponses client configuration and return it or raise an error if invalid.
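
A minimal sketch of both constructors (the model name is illustrative):

# new/1 returns an {:ok, struct} or {:error, changeset} tuple;
# new!/1 returns the struct directly or raises if the attrs are invalid
{:ok, chat} = ChatOpenAIResponses.new(%{model: "gpt-4o"})
chat = ChatOpenAIResponses.new!(%{model: "gpt-4o", temperature: 0})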

Restores the model from the config.

retry_on_fallback?(arg1)

@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()

Determine if an error should be retried with a fallback model. Aligns with other providers.
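
For context, a hedged sketch of supplying a fallback model when running a chain (the with_fallbacks: option belongs to LangChain.Chains.LLMChain.run/2; treat the exact option shape as an assumption for your installed version):

# Hypothetical: a retryable error on the primary model triggers the fallback
%{llm: ChatOpenAIResponses.new!(%{model: "gpt-4o"})}
|> LLMChain.new!()
|> LLMChain.add_message(Message.new_user!("Hello!"))
|> LLMChain.run(with_fallbacks: [ChatOpenAIResponses.new!(%{model: "gpt-4o-mini"})])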

@spec serialize_config(t()) :: %{required(String.t()) => any()}

Generate a config map that can later restore the model's configuration.
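
Together with the restore function above, this supports a serialize/restore roundtrip. A minimal sketch (assuming restore_from_map/1 is the inverse, per the "Restores the model from the config" entry):

# Hypothetical roundtrip: serialize to a plain map, then rebuild the model
config = ChatOpenAIResponses.serialize_config(model)
{:ok, restored} = ChatOpenAIResponses.restore_from_map(config)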