LangChain.ChatModels.ChatOrq (LangChain v0.4.0)

Chat adapter for orq.ai Deployments API.

Non-streaming: requests are sent to the configured endpoint.

Streaming: responses arrive from the configured stream_endpoint as server-sent
events (SSE), terminated by the "[DONE]" sentinel.

Security:

  • HTTP Bearer token (Authorization: Bearer ...). Configure the token via the
    application env key :orq_key under the :langchain app.
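
A minimal configuration sketch, assuming the token is read from an environment
variable (the variable name below is a placeholder):

# config/runtime.exs
import Config

config :langchain, orq_key: System.fetch_env!("ORQ_API_KEY")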

Required body field:

  • key: Deployment key to invoke.

Messages:

  • Accepts roles: developer | system | user | assistant | tool

  • Content supports text, image_url, file, and input_audio parts (see the
    sketch below)
  • Assistant messages may include tool_calls
  • Tool results are sent as role: tool with a tool_call_id
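
As a sketch, a multi-part user message might be built like this
(ContentPart.text!/1 and ContentPart.image_url!/2 are the LangChain content
part constructors; the URL is a placeholder):

alias LangChain.Message
alias LangChain.Message.ContentPart

Message.new_user!([
  ContentPart.text!("What is in this picture?"),
  ContentPart.image_url!("https://example.com/photo.png")
])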

Notes:

  • Azure is not supported.

Summary

Functions

Calls the orq.ai API, passing the ChatOrq struct with its configuration plus either a single message or a list of messages to act as the prompt.

Convert a ContentPart to the expected map of data for the API.

Convert a list of ContentParts to the expected map of data for the API.

Convert a list of ContentParts to a string for tool results. The orq.ai API expects tool result content to be a string, not an array.

Convert content to a list of ContentParts. Content may be a string or already a list of ContentParts.

Convert content to a single ContentPart for MessageDelta. Content may be a string or already a ContentPart.

Decode a streamed response (SSE). Delegates to the ChatOpenAI-compatible decoder.

Return the params formatted for an API request.

Setup a ChatOrq client configuration.

Setup a ChatOrq client configuration and return it or raise an error if invalid.

Restores the model from the config.

Determine if an error should be retried. If true, a fallback LLM may be used. If false, the error is understood to be a problem with the request itself rather than a service issue, and it should not be retried or fall back to another service.

Generate a config map that can later restore the model's configuration.

Types

@type t() :: %LangChain.ChatModels.ChatOrq{
  api_key: term(),
  callbacks: term(),
  context: term(),
  documents: term(),
  endpoint: term(),
  extra_params: term(),
  file_ids: term(),
  inputs: term(),
  invoke_options: term(),
  key: term(),
  knowledge_filter: term(),
  messages_passthrough: term(),
  metadata: term(),
  model: term(),
  prefix_messages: term(),
  receive_timeout: term(),
  stream: term(),
  stream_endpoint: term(),
  thread: term(),
  verbose_api: term()
}

Functions

call(orq, prompt, tools \\ [])

Calls the orq.ai API, passing the ChatOrq struct with its configuration plus
either a single message or a list of messages to act as the prompt.

Optionally pass in a list of tools available to the LLM for requesting
execution in response (the tools schema is not sent to orq.ai; tool messages
are included in the messages list).
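
A usage sketch, assuming a placeholder deployment key:

alias LangChain.ChatModels.ChatOrq

{:ok, chat} = ChatOrq.new(%{key: "my-deployment-key"})
{:ok, response} = ChatOrq.call(chat, "Hello, what can you do?")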

content_part_for_api(part)

Convert a ContentPart to the expected map of data for the API.
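
For example (the resulting map shape is an assumption, mirroring OpenAI-style
content parts):

alias LangChain.ChatModels.ChatOrq
alias LangChain.Message.ContentPart

ContentPart.text!("Hello")
|> ChatOrq.content_part_for_api()
# => e.g. %{"type" => "text", "text" => "Hello"}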

content_parts_for_api(content_parts)

Convert a list of ContentParts to the expected map of data for the API.

content_parts_to_string(content_parts)

Convert a list of ContentParts to a string for tool results. The orq.ai API expects tool result content to be a string, not an array.
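
For example (the exact joining behavior is an assumption):

alias LangChain.ChatModels.ChatOrq
alias LangChain.Message.ContentPart

[ContentPart.text!("first"), ContentPart.text!("second")]
|> ChatOrq.content_parts_to_string()
# => a single binary, e.g. "first\nsecond"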

content_to_parts(content)

Convert content to a list of ContentParts. Content may be a string or already a list of ContentParts.
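
For example (the returned part's contents are an assumption):

alias LangChain.ChatModels.ChatOrq

ChatOrq.content_to_parts("plain text")
# => [%LangChain.Message.ContentPart{type: :text, content: "plain text"}]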

content_to_single_part(content)

Convert content to a single ContentPart for MessageDelta. Content may be a string or already a ContentPart.

decode_stream(data, done \\ [])

@spec decode_stream(
  {String.t(), String.t()},
  list()
) :: {%{required(String.t()) => any()}}

Decode a streamed response (SSE). Delegates to the ChatOpenAI-compatible decoder.

for_api(orq, messages, tools)

Return the params formatted for an API request.
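
A sketch of building the request params (the deployment key is a placeholder):

alias LangChain.ChatModels.ChatOrq

chat = ChatOrq.new!(%{key: "my-deployment-key"})
messages = [LangChain.Message.new_user!("Hi!")]

data = ChatOrq.for_api(chat, messages, [])
# data is the request body map and includes the required deployment key.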

@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}

Setup a ChatOrq client configuration.

@spec new!(attrs :: map()) :: t() | no_return()

Setup a ChatOrq client configuration and return it or raise an error if invalid.
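
For example, with a placeholder deployment key:

chat = ChatOrq.new!(%{key: "my-deployment-key", stream: true})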

Restores the model from the config.

retry_on_fallback?(arg1)

@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()

Determine if an error should be retried. If true, a fallback LLM may be used. If false, the error is understood to be a problem with the request itself rather than a service issue, and it should not be retried or fall back to another service.

@spec serialize_config(t()) :: %{required(String.t()) => any()}

Generate a config map that can later restore the model's configuration.
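
A sketch of serializing a model's configuration (the deployment key is a
placeholder):

alias LangChain.ChatModels.ChatOrq

chat = ChatOrq.new!(%{key: "my-deployment-key"})
config = ChatOrq.serialize_config(chat)
# config is a string-keyed map that can be persisted and later used to
# restore the model.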