# `LangChain.ChatModels.ChatOllamaAI`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L1)

Represents the [Ollama AI Chat model](https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-chat-completion).

Parses and validates inputs for making requests to the Ollama Chat API.

Converts responses into more specialized `LangChain` data structures.

The module's functionalities include:

- Initializing a new `ChatOllamaAI` struct with defaults or specific attributes.
- Validating and casting input data to fit the expected schema.
- Preparing and sending requests to the Ollama AI service API.
- Managing both streaming and non-streaming API responses.
- Processing API responses to convert them into suitable message formats.

The `ChatOllamaAI` struct has fields to configure the AI, including but not limited to:

- `endpoint`: URL of the Ollama AI service.
- `model`: The AI model used, e.g., "llama2:latest".
- `receive_timeout`: Max wait time for AI service responses.
- `temperature`: Influences the AI's response creativity.

For detailed info on all other parameters, see the documentation here:
https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values
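
For example, a struct with these fields might be configured as follows (a minimal sketch, assuming a local Ollama install listening on its default port):

```elixir
alias LangChain.ChatModels.ChatOllamaAI

# Assumes Ollama is running locally on its default port (11434).
model =
  ChatOllamaAI.new!(%{
    endpoint: "http://localhost:11434/api/chat",
    model: "llama2:latest",
    temperature: 0.5,
    receive_timeout: 60_000
  })
```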

This module is for use within LangChain and follows the `ChatModel` behavior,
outlining callbacks AI chat models must implement.

Usage examples and more details are in the LangChain documentation or the
module's function docs.

## Tool Support

Currently, `ChatOllamaAI` supports tool calls only when not streaming responses.
Streaming of tool calls is not yet supported.
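
A sketch of configuring for tool use. The `get_weather` tool here is hypothetical, and attaching tools is assumed to go through `LangChain.Chains.LLMChain.add_tools/2`:

```elixir
alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.Function

# Tool calls require non-streamed responses, so `stream` is set to `false`.
model = ChatOllamaAI.new!(%{model: "llama2:latest", stream: false})

# A hypothetical tool for illustration.
weather_tool =
  Function.new!(%{
    name: "get_weather",
    description: "Get the current weather for a city.",
    function: fn %{"city" => city}, _context ->
      {:ok, "Sunny and 25°C in #{city}"}
    end
  })
```

The tool would then typically be attached to a chain with `LangChain.Chains.LLMChain.add_tools/2` before running it.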

# `t`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L59)

```elixir
@type t() :: %LangChain.ChatModels.ChatOllamaAI{
  callbacks: term(),
  endpoint: term(),
  keep_alive: term(),
  mirostat: term(),
  mirostat_eta: term(),
  mirostat_tau: term(),
  model: term(),
  num_ctx: term(),
  num_gpu: term(),
  num_gqa: term(),
  num_predict: term(),
  num_thread: term(),
  receive_timeout: term(),
  repeat_last_n: term(),
  repeat_penalty: term(),
  seed: term(),
  stop: term(),
  stream: term(),
  temperature: term(),
  tfs_z: term(),
  top_k: term(),
  top_p: term(),
  verbose_api: term()
}
```

# `call`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L372)

Calls the Ollama Chat Completion API, passing the `ChatOllamaAI` struct with
configuration, plus either a simple message or a list of messages to act as the prompt.

**NOTE:** This function *can* be used directly, but the primary interface
should be through `LangChain.Chains.LLMChain`. The `ChatOllamaAI` module is more focused on
translating the `LangChain` data structures to and from the Ollama API.

Another benefit of using `LangChain.Chains.LLMChain` is that it combines the
storage of messages, adding functions, adding custom context that should be
passed to functions, and automatically applying `LangChain.MessageDelta`
structs as they are received, then converting those to a full
`LangChain.Message` once complete.
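
A usage sketch through `LLMChain` (assuming `LLMChain.run/1` returns the updated chain in an `{:ok, chain}` tuple, with the assistant's reply on `last_message`):

```elixir
alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.Chains.LLMChain
alias LangChain.Message

{:ok, updated_chain} =
  %{llm: ChatOllamaAI.new!(%{model: "llama2:latest"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Why is the sky blue?"))
  |> LLMChain.run()

# The assistant's reply is the chain's last message.
updated_chain.last_message
```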

# `do_process_response`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L564)

# `for_api`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L254)

# `for_api`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L224)

Return the params formatted for an API request.
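
A sketch of building the request params. An assumption here: the three-argument clause takes the model struct, the message list, and a list of tools, mirroring the library's other chat models:

```elixir
alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.Message

model = ChatOllamaAI.new!(%{model: "llama2:latest", temperature: 0.8})
messages = [Message.new_user!("Hello!")]

# Assumed arity: (model, messages, tools). Returns a plain map of params.
params = ChatOllamaAI.for_api(model, messages, [])
```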

# `new`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L183)

```elixir
@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}
```

Creates a new `ChatOllamaAI` struct with the given attributes.
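
For example (hedged: `temperature` expects a number, so a non-numeric value should produce an invalid changeset):

```elixir
alias LangChain.ChatModels.ChatOllamaAI

{:ok, model} = ChatOllamaAI.new(%{model: "llama2:latest", temperature: 0.7})

# Invalid attributes return the changeset instead.
{:error, _changeset} = ChatOllamaAI.new(%{temperature: "not a number"})
```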

# `new!`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L194)

```elixir
@spec new!(attrs :: map()) :: t() | no_return()
```

Creates a new `ChatOllamaAI` struct with the given attributes. Will raise an error if the changeset is invalid.
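
For example (a sketch; the exact exception message shown is assumed, not verified):

```elixir
alias LangChain.ChatModels.ChatOllamaAI

model = ChatOllamaAI.new!(%{model: "llama2:latest"})

# Invalid attributes raise rather than returning an error tuple.
# ChatOllamaAI.new!(%{temperature: "not a number"})
# => ** (LangChain.LangChainError) temperature: is invalid
```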

# `restore_from_map`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L679)

Restores the model from the config.
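
A round-trip sketch using `serialize_config/1` (documented below), which avoids assumptions about the exact shape of the config map; it assumes `restore_from_map/1` returns an `{:ok, struct}` tuple:

```elixir
alias LangChain.ChatModels.ChatOllamaAI

model = ChatOllamaAI.new!(%{model: "llama2:latest"})
config = ChatOllamaAI.serialize_config(model)

# Restore an equivalent model struct from the serialized config.
{:ok, restored} = ChatOllamaAI.restore_from_map(config)
```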

# `retry_on_fallback?`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L633)

```elixir
@spec retry_on_fallback?(LangChain.LangChainError.t()) :: boolean()
```

Determine if an error should be retried. If `true`, a fallback LLM may be
used. If `false`, the error is understood to be a more fundamental problem
with the request rather than a service issue, and it should not be retried
or fall back to another service.
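
A sketch of the decision (hedged: it assumes a timeout-style `LangChainError` represents the kind of transient failure that warrants a retry, and that the error struct carries `type` and `message` fields):

```elixir
alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.LangChainError

# Hypothetical error value for illustration.
error = LangChainError.exception(type: "timeout", message: "Request timed out")

if ChatOllamaAI.retry_on_fallback?(error) do
  IO.puts("transient failure; a fallback LLM may retry the request")
end
```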

# `serialize_config`
[🔗](https://github.com/brainlid/langchain/blob/v0.6.2/lib/chat_models/chat_ollama_ai.ex#L644)

```elixir
@spec serialize_config(t()) :: %{required(String.t()) => any()}
```

Generate a config map that can later restore the model's configuration.
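
For example (hedged: the JSON step assumes `Jason` is available, as it commonly is in apps using this library):

```elixir
alias LangChain.ChatModels.ChatOllamaAI

model = ChatOllamaAI.new!(%{model: "llama2:latest", temperature: 0.5})

# A string-keyed map, suitable for storage and later restoration
# via `restore_from_map/1`.
config = ChatOllamaAI.serialize_config(model)
json = Jason.encode!(config)
```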

---

*Consult [api-reference.md](api-reference.md) for a complete listing*
