LangChain.ChatModels.ChatOllamaAI (LangChain v0.4.0)

Represents the Ollama AI Chat model

Parses and validates inputs for making requests to the Ollama Chat API.

Converts responses into more specialized LangChain data structures.

The module's functionalities include:

  • Initializing a new ChatOllamaAI struct with defaults or specific attributes.
  • Validating and casting input data to fit the expected schema.
  • Preparing and sending requests to the Ollama AI service API.
  • Managing both streaming and non-streaming API responses.
  • Processing API responses to convert them into suitable message formats.

The ChatOllamaAI struct has fields to configure the AI, including but not limited to:

  • endpoint: URL of the Ollama AI service.
  • model: The AI model used, e.g., "llama2:latest".
  • receive_timeout: Max wait time for AI service responses.
  • temperature: Influences the AI's response creativity.
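
For example, a minimal sketch of creating a struct with these fields (the values shown are illustrative, not requirements):

alias LangChain.ChatModels.ChatOllamaAI

{:ok, chat_model} =
  ChatOllamaAI.new(%{
    # Ollama's default local endpoint
    endpoint: "http://localhost:11434/api/chat",
    model: "llama2:latest",
    # milliseconds to wait for a response
    receive_timeout: 60_000,
    temperature: 0.8
  })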

For detailed info on all other parameters, see the documentation here: https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values

This module is for use within LangChain and follows the ChatModel behavior, outlining callbacks AI chat models must implement.

Usage examples and more details are in the LangChain documentation or the module's function docs.

Tool Support

Currently, ChatOllamaAI supports tool calls when not streaming the responses. Streaming tool calls is not yet supported.
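
Below is a hedged sketch of a non-streaming tool call; the get_weather tool, its callback, and the prompt are invented for illustration and are not part of this API:

alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.Function

# A hypothetical tool, defined here only for illustration.
get_weather =
  Function.new!(%{
    name: "get_weather",
    description: "Returns the current weather for a city.",
    function: fn %{"city" => city}, _context ->
      {:ok, "It is sunny in #{city}."}
    end
  })

# Tool calls currently require non-streaming responses.
model = ChatOllamaAI.new!(%{model: "llama2:latest", stream: false})

# The tool is passed as the third argument to call/3.
{:ok, _response} = ChatOllamaAI.call(model, "What is the weather in Paris?", [get_weather])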

Summary

Functions

call(ollama_ai, prompt, tools \\ [])
Calls the Ollama Chat Completion API using the configured struct, plus either a simple message or a list of messages to act as the prompt.

for_api(model, messages, tools)
Return the params formatted for an API request.

new(attrs \\ %{})
Creates a new ChatOllamaAI struct with the given attributes.

new!(attrs \\ %{})
Creates a new ChatOllamaAI struct with the given attributes. Will raise an error if the changeset is invalid.

restore_from_map(data)
Restores the model from the config.

retry_on_fallback?(arg1)
Determine if an error should be retried. If true, a fallback LLM may be used. If false, the error is understood to be more fundamental to the request, rather than a service issue, and it should not be retried or fall back to another service.

serialize_config(model)
Generate a config map that can later restore the model's configuration.

Types

@type t() :: %LangChain.ChatModels.ChatOllamaAI{
  callbacks: term(),
  endpoint: term(),
  keep_alive: term(),
  mirostat: term(),
  mirostat_eta: term(),
  mirostat_tau: term(),
  model: term(),
  num_ctx: term(),
  num_gpu: term(),
  num_gqa: term(),
  num_predict: term(),
  num_thread: term(),
  receive_timeout: term(),
  repeat_last_n: term(),
  repeat_penalty: term(),
  seed: term(),
  stop: term(),
  stream: term(),
  temperature: term(),
  tfs_z: term(),
  top_k: term(),
  top_p: term(),
  verbose_api: term()
}

Functions

call(ollama_ai, prompt, tools \\ [])

Calls the Ollama Chat Completion API using the configured struct, plus either a simple message or a list of messages to act as the prompt.

NOTE: This function can be used directly, but the primary interface should be through LangChain.Chains.LLMChain. The ChatOllamaAI module is more focused on translating the LangChain data structures to and from the Ollama API.

Another benefit of using LangChain.Chains.LLMChain is that it combines the storage of messages, adding functions, adding custom context that should be passed to functions, and automatically applying LangChain.MessageDelta structs as they are received, then converting those into a full LangChain.Message once complete.
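
For reference, a minimal sketch of that flow (the model name and prompt are placeholders; in recent LangChain versions, LLMChain.run returns {:ok, updated_chain}):

alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.Message

{:ok, chain} =
  %{llm: ChatOllamaAI.new!(%{model: "llama2:latest"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Say hello!"))
  |> LLMChain.run()

# The assistant's reply is available on the updated chain
# (the exact content shape may vary by version).
chain.last_message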

do_process_response(model, response)

for_api(model, messages, tools)

Return the params formatted for an API request.
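
An illustrative sketch (the exact keys in the returned map depend on the struct's settings):

model = ChatOllamaAI.new!(%{model: "llama2:latest", temperature: 0.5})
params = ChatOllamaAI.for_api(model, [LangChain.Message.new_user!("Hi!")], [])
# `params` is a plain map, ready to be JSON-encoded and sent to the endpoint.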

new(attrs \\ %{})

@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}

Creates a new ChatOllamaAI struct with the given attributes.

new!(attrs \\ %{})

@spec new!(attrs :: map()) :: t() | no_return()

Creates a new ChatOllamaAI struct with the given attributes. Will raise an error if the changeset is invalid.
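
A short sketch of the difference between new/1 and new!/1 (the invalid value is contrived for illustration):

# new/1 returns an error tuple with the changeset when validation fails...
{:error, %Ecto.Changeset{}} = ChatOllamaAI.new(%{temperature: "warm"})

# ...while new!/1 raises for the same input.
ChatOllamaAI.new!(%{temperature: "warm"})
#=> ** (LangChain.LangChainError) ...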

restore_from_map(data)

Restores the model from the config.

retry_on_fallback?(arg1)
@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()

Determine if an error should be retried. If true, a fallback LLM may be used. If false, the error is understood to be more fundamental to the request, rather than a service issue, and it should not be retried or fall back to another service.
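
A hedged example (the specific type value is an assumption for illustration; the actual retriable error types are defined by the library):

# A timeout from the service is the sort of transient error that may
# warrant retrying on a fallback LLM.
error = %LangChain.LangChainError{type: "timeout", message: "connection timed out"}
ChatOllamaAI.retry_on_fallback?(error)
#=> true or false, depending on the error's type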

serialize_config(model)

@spec serialize_config(t()) :: %{required(String.t()) => any()}

Generate a config map that can later restore the model's configuration.
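
Paired with restore_from_map/1 above, this supports a simple round trip. A sketch:

model = ChatOllamaAI.new!(%{model: "llama2:latest"})
config = ChatOllamaAI.serialize_config(model)
# `config` is a map with string keys that can be persisted,
# then later turned back into a struct:
{:ok, restored} = ChatOllamaAI.restore_from_map(config)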