LangChain.ChatModels.ChatOpenAI (LangChain v0.3.0-rc.0)

Represents the OpenAI ChatModel.

Parses and validates inputs for making requests to the OpenAI Chat API.

Converts responses into more specialized LangChain data structures.

Callbacks

See the set of available callbacks: LangChain.ChatModels.LLMCallbacks

Rate Limit API Response Headers

OpenAI returns rate limit information in the response headers. Those can be accessed using the LLM callback on_llm_ratelimit_info like this:

handlers = %{
  on_llm_ratelimit_info: fn _model, headers ->
    IO.inspect(headers)
  end
}

{:ok, chat} = ChatOpenAI.new(%{callbacks: [handlers]})

When a response is received, something similar to the following will be output to the console.

%{
  "x-ratelimit-limit-requests" => ["5000"],
  "x-ratelimit-limit-tokens" => ["160000"],
  "x-ratelimit-remaining-requests" => ["4999"],
  "x-ratelimit-remaining-tokens" => ["159973"],
  "x-ratelimit-reset-requests" => ["12ms"],
  "x-ratelimit-reset-tokens" => ["10ms"],
  "x-request-id" => ["req_1234"]
}

Token Usage

OpenAI returns token usage information as part of the response body. That data can be accessed using the LLM callback on_llm_token_usage like this:

handlers = %{
  on_llm_token_usage: fn _model, usage ->
    IO.inspect(usage)
  end
}

{:ok, chat} = ChatOpenAI.new(%{
  callbacks: [handlers],
  stream: true,
  stream_options: %{include_usage: true}
})

When a response is received, something similar to the following will be output to the console.

%LangChain.TokenUsage{input: 15, output: 3}

When streaming, the OpenAI documentation instructs that stream_options with include_usage: true must be provided for the usage information to be returned.

Summary

Functions

call/3 - Calls the OpenAI API passing the ChatOpenAI struct with configuration, plus either a simple message or the list of messages to act as the prompt.

decode_stream/1 - Decode a streamed response from an OpenAI-compatible server. Parses a string of received content into an Elixir map data structure using string keys.

for_api/1 - Convert a LangChain structure to the expected map of data for the OpenAI API.

for_api/3 - Return the params formatted for an API request.

new/1 - Setup a ChatOpenAI client configuration.

new!/1 - Setup a ChatOpenAI client configuration and return it or raise an error if invalid.

restore_from_map/1 - Restores the model from the config.

serialize_config/1 - Generate a config map that can later restore the model's configuration.

Types

@type t() :: %LangChain.ChatModels.ChatOpenAI{
  api_key: term(),
  callbacks: term(),
  endpoint: term(),
  frequency_penalty: term(),
  json_response: term(),
  max_tokens: term(),
  model: term(),
  n: term(),
  receive_timeout: term(),
  seed: term(),
  stream: term(),
  stream_options: term(),
  temperature: term(),
  user: term()
}

Functions

call(openai, prompt, tools \\ [])

Calls the OpenAI API passing the ChatOpenAI struct with configuration, plus either a simple message or the list of messages to act as the prompt.

Optionally pass in a list of tools available to the LLM for requesting execution in response.

Optionally pass in a callback function that can be executed as data is received from the API.

NOTE: This function can be used directly, but the primary interface should be through LangChain.Chains.LLMChain. The ChatOpenAI module is more focused on translating the LangChain data structures to and from the OpenAI API.

Another benefit of using LangChain.Chains.LLMChain is that it combines the storage of messages, adding tools, adding custom context that should be passed to tools, and automatically applying LangChain.MessageDelta structs as they are received, then converting those to the full LangChain.Message once fully complete.
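For illustration, a minimal direct call might look like this (a sketch: the model name is only an example, and a valid OPENAI_API_KEY is assumed to be configured in the environment):

alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

{:ok, chat} = ChatOpenAI.new(%{model: "gpt-4o"})

# The prompt can be a simple string or a list of LangChain.Message structs.
{:ok, response} = ChatOpenAI.call(chat, [Message.new_user!("Say hello.")])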

@spec decode_stream({String.t(), String.t()}) :: {%{required(String.t()) => any()}}

Decode a streamed response from an OpenAI-compatible server. Parses a string of received content into an Elixir map data structure using string keys.

If a partial response was received, meaning the JSON text is split across multiple data frames, the incomplete portion is returned as-is in the buffer. The function is called successively, receiving the incomplete buffer from the previous call and assembling it with the new data before parsing.
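As a sketch of threading the buffer between calls (first_chunk and second_chunk are placeholder variables for raw data received from the server, and the {parsed_frames, buffer} return shape is taken from the description above):

alias LangChain.ChatModels.ChatOpenAI

# The first chunk ends mid-JSON: complete frames are parsed and the
# incomplete tail comes back as the buffer.
{_parsed, buffer} = ChatOpenAI.decode_stream({first_chunk, ""})

# The next call receives the leftover buffer, assembles it with the
# new data, and parses the completed JSON.
{parsed, _buffer} = ChatOpenAI.decode_stream({second_chunk, buffer})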

@spec for_api(
  LangChain.Message.t()
  | LangChain.Message.ContentPart.t()
  | LangChain.Function.t()
) ::
  %{required(String.t()) => any()} | [%{required(String.t()) => any()}]

Convert a LangChain structure to the expected map of data for the OpenAI API.
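For example (a sketch; the exact keys follow OpenAI's chat API wire format):

alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

ChatOpenAI.for_api(Message.new_user!("Hello"))
# => a map carrying the message's role and content in the shape the API expects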

for_api(openai, messages, tools)
@spec for_api(
  t() | LangChain.Message.t() | LangChain.Function.t(),
  message :: [map()],
  LangChain.ChatModels.ChatModel.tools()
) :: %{required(atom()) => any()}

Return the params formatted for an API request.
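A sketch of assembling a full request body (assuming the messages list is passed as LangChain.Message structs, as in call/3 above):

chat = ChatOpenAI.new!(%{model: "gpt-4o"})

ChatOpenAI.for_api(chat, [Message.new_user!("Hello")], [])
# => a map with the model, messages, and other configured params,
#    ready to be JSON-encoded for the request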

@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}

Setup a ChatOpenAI client configuration.

@spec new!(attrs :: map()) :: t() | no_return()

Setup a ChatOpenAI client configuration and return it or raise an error if invalid.
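For example, using fields from the struct type above (the model name is illustrative):

{:ok, chat} = ChatOpenAI.new(%{model: "gpt-4o", temperature: 0.0})

# new!/1 returns the struct directly and raises when the attributes
# are invalid.
chat = ChatOpenAI.new!(%{model: "gpt-4o", temperature: 0.0})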

restore_from_map(data)

Restores the model from the config.

@spec serialize_config(t()) :: %{required(String.t()) => any()}

Generate a config map that can later restore the model's configuration.
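A sketch of the round trip with restore_from_map/1 above, given a chat struct from new!/1 (return shapes assumed):

config = ChatOpenAI.serialize_config(chat)
# config is a plain map that can be persisted (for example, as JSON)

{:ok, restored} = ChatOpenAI.restore_from_map(config)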