Ollixir (Ollixir v0.1.1)

Elixir client for the Ollama API.

Quick Start

client = Ollixir.init()
{:ok, response} = Ollixir.chat(client,
  model: "llama3.2",
  messages: [%{role: "user", content: "Hello!"}]
)

Client Configuration

# Default (localhost:11434)
client = Ollixir.init()

# Custom host
client = Ollixir.init("http://ollama.example.com:11434")

# With options
client = Ollixir.init(
  base_url: "http://localhost:11434/api",
  receive_timeout: 120_000,
  headers: [{"authorization", "Bearer token"}]
)

Streaming

Two modes are available:

Enumerable Mode

{:ok, stream} = Ollixir.chat(client, model: "llama3.2", messages: msgs, stream: true)
Enum.each(stream, &IO.inspect/1)

Process Mode (for GenServer/LiveView)

{:ok, task} = Ollixir.chat(client, model: "llama3.2", messages: msgs, stream: self())
# Receive messages with handle_info/2
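In process mode the streamed chunks are delivered as messages to the given pid. The message shape below is an assumption (a `{pid, {:data, chunk}}` tuple is one common convention), so treat this GenServer sketch as illustrative only and check the Streaming Guide for the actual format:

```elixir
defmodule ChatConsumer do
  use GenServer

  # Start the consumer and kick off a streaming chat.
  def start_link(client, messages) do
    GenServer.start_link(__MODULE__, {client, messages})
  end

  @impl true
  def init({client, messages}) do
    # stream: self() routes chunks to this process as messages.
    {:ok, _task} =
      Ollixir.chat(client, model: "llama3.2", messages: messages, stream: self())

    {:ok, %{buffer: ""}}
  end

  # Assumed message shape for streamed chunks.
  @impl true
  def handle_info({_pid, {:data, %{"message" => %{"content" => content}}}}, state) do
    {:noreply, %{state | buffer: state.buffer <> content}}
  end

  def handle_info(_other, state), do: {:noreply, state}
end
```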

See the Streaming Guide for details.

Error Handling

All functions return {:ok, result} or {:error, reason}.

case Ollixir.chat(client, opts) do
  {:ok, response} -> handle_success(response)
  {:error, %Ollixir.ConnectionError{} = error} -> handle_connection(error)
  {:error, %Ollixir.ResponseError{status: 404}} -> handle_not_found()
  {:error, %Ollixir.ResponseError{status: status}} -> handle_error(status)
end

Summary

Types

Client struct

Chat message

Client response

Tool definition

Functions

Generates the next message in a chat using the specified model. Optionally streamable.

Checks whether a blob exists in Ollama, given its digest or binary data.

Generates a completion for the given prompt using the specified model. Optionally streamable.

Alias for copy_model/2 to match the Python client's copy.

Creates a model with another name from an existing model.

Alias for create_model/2 to match the Python client's create.

Uploads a blob and returns its digest.

Creates a model using the given name and model file. Optionally streamable.

Alias for delete_model/2 to match the Python client's delete.

Deletes a model and its data.

Generates embeddings from a model for the given input.

Generates embeddings from a model for the given prompt. Deprecated; superseded by embed/2.

Alias for completion/2 to match the Python client's generate.

Initializes a new Ollama client.

Alias for list_models/2 to match the Python client's list.

Lists all models that Ollama has available.

Lists currently running models, their memory footprint, and process details.

Loads a model into memory without generating a completion. Optionally specify a keep-alive value (defaults to 5 minutes; set to -1 to keep the model loaded permanently).

Alias for list_running/2 to match the Python client's ps.

Alias for pull_model/2 to match the Python client's pull.

Downloads a model from the Ollama library. Optionally streamable.

Alias for push_model/2 to match the Python client's push.

Uploads a model to a model library. Requires an Ollama account and a public key from https://ollama.com/settings/keys. Optionally streamable.

Alias for show_model/2 to match the Python client's show.

Shows all information for a specific model.

Stops a running model and unloads it from memory.

Fetches content from a URL using Ollama's cloud fetch API.

Fetches content from a URL using Ollama's cloud fetch API, raising on error.

Searches the web using Ollama's cloud search API.

Searches the web using Ollama's cloud search API, raising on error.

Types

client()

@type client() :: %Ollixir{req: Req.Request.t()}

Client struct

message()

@type message() ::
  {:role, term()}
  | {:content, binary() | nil}
  | {:images, [binary()]}
  | {:tool_name, binary()}
  | {:tool_calls, [%{optional(atom() | binary()) => term()}]}

Chat message

A chat message is a map/0 with the following fields:

  • :role - Required. The role of the message, either system, user, assistant or tool.
  • :content - The content of the message. Optional for tool calls.
  • :images (list of String.t/0) - (optional) List of Base64 encoded images (for multimodal models only).
  • :tool_name (String.t/0) - (optional) Tool name for tool responses.
  • :tool_calls - (optional) List of tools the model wants to use.

response()

@type response() ::
  {:ok, map() | boolean() | binary() | Enumerable.t() | Task.t()}
  | {:error, term()}

Client response

tool()

@type tool() :: {:type, binary()} | {:function, map()}

Tool definition

A tool definition is a map/0 with the following fields:

  • :type (String.t/0) - Type of tool. The default value is "function".
  • :function (map/0) - Required.
    • :name (String.t/0) - Required. The name of the function to be called.
    • :description (String.t/0) - A description of what the function does.
    • :parameters - Required. The parameters the function accepts.

Functions

chat(client, params)

@spec chat(
  client(),
  keyword()
) :: response()

Generates the next message in a chat using the specified model. Optionally streamable.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list of chat options (see below)

Options

  • :model (String.t/0) - Required. The Ollama model name.
  • :messages (list of map/0) - Required. List of messages - used to keep a chat memory.
  • :tools (list of map/0) - Tools for the model to use if supported (requires stream to be false)
  • :format - Set the expected format of the response (json or JSON schema map).
  • :stream - See section on streaming. The default value is false.
  • :think - Enable thinking mode. Can be true/false or a level: "low", "medium", or "high". The default value is false.
  • :logprobs (boolean/0) - Return log probabilities for generated tokens
  • :top_logprobs (integer/0) - Number of alternative tokens to return (0-20)
  • :keep_alive - How long to keep the model loaded.
  • :options - Additional advanced model parameters.

Message structure

Each message is a map with the following fields:

  • :role - Required. The role of the message, either system, user, assistant or tool.
  • :content - The content of the message. Optional for tool calls.
  • :images (list of String.t/0) - (optional) List of Base64 encoded images (for multimodal models only).
  • :tool_name (String.t/0) - (optional) Tool name for tool responses.
  • :tool_calls - (optional) List of tools the model wants to use.

Tool definitions

  • :type (String.t/0) - Type of tool. The default value is "function".
  • :function (map/0) - Required.
    • :name (String.t/0) - Required. The name of the function to be called.
    • :description (String.t/0) - A description of what the function does.
    • :parameters - Required. The parameters the function accepts.
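
A tool definition built from the fields above looks like the following sketch. The get_weather function, its parameter schema, and the assumption that :parameters takes a JSON-schema-style map are all hypothetical:

```elixir
# Hypothetical weather tool following the field layout documented above.
weather_tool = %{
  type: "function",
  function: %{
    name: "get_weather",
    description: "Returns the current weather for a city.",
    parameters: %{
      type: "object",
      properties: %{
        city: %{type: "string", description: "City name, e.g. \"Paris\""}
      },
      required: ["city"]
    }
  }
}

# Tools require a non-streaming request (stream must be false).
{:ok, response} =
  Ollixir.chat(client,
    model: "llama3.2",
    messages: [%{role: "user", content: "What's the weather in Paris?"}],
    tools: [weather_tool]
  )

# The model's requested calls, if any, arrive under "tool_calls".
get_in(response, ["message", "tool_calls"])
```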

Examples

iex> messages = [
...>   %{role: "system", content: "You are a helpful assistant."},
...>   %{role: "user", content: "Why is the sky blue?"},
...>   %{role: "assistant", content: "Due to Rayleigh scattering."},
...>   %{role: "user", content: "How is that different than mie scattering?"}
...> ]

iex> Ollixir.chat(client, [
...>   model: "llama2",
...>   messages: messages
...> ])
{:ok, %{"message" => %{
  "role" => "assistant",
  "content" => "Mie scattering affects all wavelengths similarly, while Rayleigh favors shorter ones."
}, ...}}

# Passing true to the :stream option initiates an async streaming request.
iex> Ollixir.chat(client, [
...>   model: "llama2",
...>   messages: messages,
...>   stream: true
...> ])
{:ok, %Ollixir.Streaming{}}

Returns

  • {:ok, map()} - Success with response data
  • {:ok, Stream.t()} - When stream: true
  • {:ok, Task.t()} - When stream: pid
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

check_blob(client, digest)

@spec check_blob(client(), Ollixir.Blob.digest() | binary()) :: response()

Checks whether a blob exists in Ollama, given its digest or binary data.

Parameters

  • client - Ollama client from init/1
  • digest_or_blob - Digest string or raw binary data

Examples

iex> Ollixir.check_blob(client, "sha256:fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e")
{:ok, true}

iex> Ollixir.check_blob(client, "this should not exist")
{:ok, false}

Returns

  • {:ok, true} - When the blob exists
  • {:ok, false} - When the blob does not exist
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

completion(client, params)

@spec completion(
  client(),
  keyword()
) :: response()

Generates a completion for the given prompt using the specified model. Optionally streamable.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list of completion options (see below)

Options

  • :model (String.t/0) - Required. The Ollama model name.
  • :prompt (String.t/0) - Required. Prompt to generate a response for.
  • :suffix (String.t/0) - Text to append after generated content (for code completion)
  • :images (list of String.t/0) - A list of Base64 encoded images to be included with the prompt (for multimodal models only).
  • :system (String.t/0) - System prompt, overriding the model default.
  • :template (String.t/0) - Prompt template, overriding the model default.
  • :context - The context parameter returned from a previous completion/2 call (enabling short conversational memory).
  • :format - Set the expected format of the response (json or JSON schema map).
  • :raw (boolean/0) - Set true if specifying a fully templated prompt (:template is ignored).
  • :stream - See section on streaming. The default value is false.
  • :think - Enable thinking mode. Can be true/false or a level: "low", "medium", or "high". The default value is false.
  • :logprobs (boolean/0) - Return log probabilities for generated tokens
  • :top_logprobs (integer/0) - Number of alternative tokens to return (0-20)
  • :keep_alive - How long to keep the model loaded.
  • :options - Additional advanced model parameters.
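
The :context option above allows chaining completions into a short conversation. A sketch, assuming a reachable Ollama server with the model pulled:

```elixir
# First turn: plain completion.
{:ok, first} =
  Ollixir.completion(client,
    model: "llama2",
    prompt: "Why is the sky blue?"
  )

# Second turn: pass the returned context back in to keep short-term memory.
{:ok, followup} =
  Ollixir.completion(client,
    model: "llama2",
    prompt: "Summarise that in one sentence.",
    context: first["context"]
  )

IO.puts(followup["response"])
```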

Examples

iex> Ollixir.completion(client, [
...>   model: "llama2",
...>   prompt: "Why is the sky blue?"
...> ])
{:ok, %{"response" => "The sky is blue because it is the color of the sky.", ...}}

# Passing true to the :stream option initiates an async streaming request.
iex> Ollixir.completion(client, [
...>   model: "llama2",
...>   prompt: "Why is the sky blue?",
...>   stream: true
...> ])
{:ok, %Ollixir.Streaming{}}

Returns

  • {:ok, map()} - Success with response data
  • {:ok, Stream.t()} - When stream: true
  • {:ok, Task.t()} - When stream: pid
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

copy(client, params)

@spec copy(
  client(),
  keyword()
) :: response()

Alias for copy_model/2 to match the Python client's copy.

copy_model(client, params)

@spec copy_model(
  client(),
  keyword()
) :: response()

Creates a model with another name from an existing model.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list with :source and :destination

Options

  • :source (String.t/0) - Required. Name of the model to copy from.
  • :destination (String.t/0) - Required. Name of the model to copy to.

Example

iex> Ollixir.copy_model(client, [
...>   source: "llama2",
...>   destination: "llama2-backup"
...> ])
{:ok, true}

Returns

  • {:ok, true} - When the copy succeeded
  • {:ok, false} - When the model was not found
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

create(client, params)

@spec create(
  client(),
  keyword()
) :: response()

Alias for create_model/2 to match the Python client's create.

create_blob(client, blob)

@spec create_blob(client(), binary()) :: response()

Uploads a blob and returns its digest.

Parameters

  • client - Ollama client from init/1
  • blob - File path or raw binary data

Examples

iex> Ollixir.create_blob(client, "adapter.bin")
{:ok, "sha256:..."}

iex> data = File.read!("adapter.bin")
iex> Ollixir.create_blob(client, data)
{:ok, "sha256:..."}

Returns

  • {:ok, digest} - When the blob was created or already exists
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

create_model(client, params)

@spec create_model(
  client(),
  keyword()
) :: response()

Creates a model using the given name and model file. Optionally streamable.

Any dependent blobs referenced in the modelfile, such as in FROM and ADAPTER instructions, must exist first. See check_blob/2 and create_blob/2.
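
One way to satisfy that requirement is to check for the blob first and upload it only when missing. A sketch; the adapter path, model name, and the "sha256:<hex>" digest convention are assumptions here:

```elixir
# Hypothetical adapter file referenced by an ADAPTER instruction.
data = File.read!("adapter.bin")
digest = "sha256:" <> Base.encode16(:crypto.hash(:sha256, data), case: :lower)

# Upload the blob only if Ollama does not already have it.
case Ollixir.check_blob(client, digest) do
  {:ok, true} -> {:ok, digest}
  {:ok, false} -> Ollixir.create_blob(client, data)
end

# Now the modelfile may safely reference the blob.
{:ok, _} =
  Ollixir.create_model(client,
    name: "my-model",
    modelfile: "FROM llama2\nADAPTER #{digest}"
  )
```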

Parameters

  • client - Ollama client from init/1
  • params - Keyword list of model creation options (see below)

Options

  • :name (String.t/0) - Required. Name for the new model
  • :modelfile (String.t/0) - Modelfile contents
  • :from (String.t/0) - Base model to create from
  • :files (map of String.t/0 keys and String.t/0 values) - Custom files to include
  • :adapters (map of String.t/0 keys and String.t/0 values) - LoRA adapter files
  • :template (String.t/0) - Custom prompt template
  • :license - License declaration
  • :system (String.t/0) - System prompt
  • :parameters - Model parameters
  • :messages (list of map/0) - Sample conversation messages
  • :quantize (String.t/0) - Quantization level (f16, f32, etc.)
  • :stream - Enable streaming

Example

iex> modelfile = "FROM llama2\nSYSTEM \"You are mario from Super Mario Bros.\""
iex> Ollixir.create_model(client, [
...>   name: "mario",
...>   modelfile: modelfile,
...>   stream: true
...> ])
{:ok, %Ollixir.Streaming{}}

Returns

  • {:ok, map()} - Success with response data
  • {:ok, Stream.t()} - When stream: true
  • {:ok, Task.t()} - When stream: pid
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

delete(client, params)

@spec delete(
  client(),
  keyword()
) :: response()

Alias for delete_model/2 to match the Python client's delete.

delete_model(client, params)

@spec delete_model(
  client(),
  keyword()
) :: response()

Deletes a model and its data.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list with :name

Options

  • :name (String.t/0) - Required. Name of the model to delete.

Example

iex> Ollixir.delete_model(client, name: "llama2")
{:ok, true}

Returns

  • {:ok, true} - When the delete succeeded
  • {:ok, false} - When the model was not found
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

embed(client, params)

@spec embed(
  client(),
  keyword()
) :: response()

Generates embeddings from a model for the given input.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list of embed options (see below)

Options

  • :model (String.t/0) - Required. The name of the model used to generate the embeddings.
  • :input - Required. Text or list of text to generate embeddings for.
  • :truncate (boolean/0) - Truncates the end of each input to fit within context length.
  • :dimensions (integer/0) - Output embedding dimensions (model-specific)
  • :keep_alive - How long to keep the model loaded.
  • :options - Additional advanced model parameters.

Example

iex> Ollixir.embed(client, [
...>   model: "nomic-embed-text",
...>   input: ["Why is the sky blue?", "Why is the grass green?"],
...> ])
{:ok, %{"embeddings" => [
  [ 0.009724553, 0.04449892, -0.14063916, 0.0013168337, 0.032128844,
    0.10730086, -0.008447222, 0.010106917, 5.2289694e-4, -0.03554127, ...],
  [ 0.028196355, 0.043162502, -0.18592504, 0.035034444, 0.055619627,
    0.12082449, -0.0090096295, 0.047170386, -0.032078084, 0.0047163847, ...]
]}}

Returns

  • {:ok, map()} - Embedding response data
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

embeddings(client, params)

This function is deprecated. Superseded by embed/2.
@spec embeddings(
  client(),
  keyword()
) :: response()

Generates embeddings from a model for the given prompt.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list of embedding options (see below)

Options

  • :model (String.t/0) - Required. The name of the model used to generate the embeddings.
  • :prompt (String.t/0) - Required. The prompt used to generate the embedding.
  • :keep_alive - How long to keep the model loaded.
  • :options - Additional advanced model parameters.

Example

iex> Ollixir.embeddings(client, [
...>   model: "llama2",
...>   prompt: "Here is an article about llamas..."
...> ])
{:ok, %{"embedding" => [
  0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
  0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
]}}

Returns

  • {:ok, map()} - Embedding response data
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

generate(client, params)

@spec generate(
  client(),
  keyword()
) :: response()

Alias for completion/2 to match the Python client's generate.

init(opts \\ [])

@spec init(Req.url() | keyword() | Req.Request.t()) :: client()

Initializes a new Ollama client.

Parameters

  • opts - Base URL string, keyword list of options, or an existing Req.Request.t/0

Environment Variables

  • OLLAMA_HOST - Default Ollama server URL (default: http://localhost:11434)
  • OLLAMA_API_KEY - Bearer token for API authentication

Examples

# Uses OLLAMA_HOST or defaults to localhost:11434
client = Ollixir.init()

# Explicit URL (overrides OLLAMA_HOST)
client = Ollixir.init("http://ollama.example.com:11434")

# Host strings without a scheme use http:// and default port 11434
client = Ollixir.init("ollama.example.com")
client = Ollixir.init(":11434")

# With host option
client = Ollixir.init(host: "ollama.example.com:11434")

# With custom options
client = Ollixir.init(receive_timeout: 120_000)

Returns

  • client() - Client struct wrapping the configured Req request

list(client, opts \\ [])

@spec list(
  client(),
  keyword()
) :: response()

Alias for list_models/2 to match the Python client's list.

list_models(client, opts \\ [])

@spec list_models(
  client(),
  keyword()
) :: response()

Lists all models that Ollama has available.

Parameters

  • client - Ollama client from init/1

Example

iex> Ollixir.list_models(client)
{:ok, %{"models" => [
  %{"name" => "codellama:13b", ...},
  %{"name" => "llama2:latest", ...},
]}}

Returns

  • {:ok, map()} - Map containing available models
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

list_running(client, opts \\ [])

@spec list_running(
  client(),
  keyword()
) :: response()

Lists currently running models, their memory footprint, and process details.

Parameters

  • client - Ollama client from init/1

Example

iex> Ollixir.list_running(client)
{:ok, %{"models" => [
  %{"name" => "nomic-embed-text:latest", ...},
]}}

Returns

  • {:ok, map()} - Map containing running models
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

preload(client, params)

@spec preload(
  client(),
  keyword()
) :: response()

Loads a model into memory without generating a completion. Optionally specify a keep-alive value (defaults to 5 minutes; set to -1 to keep the model loaded permanently).

Parameters

  • client - Ollama client from init/1
  • params - Keyword list with :model and optional :keep_alive

Options

  • :model (String.t/0) - Required. Name of the model to load.
  • :keep_alive - How long to keep the model loaded.

Example

iex> Ollixir.preload(client, model: "llama3.1", keep_alive: "1h")
{:ok, true}

Returns

  • {:ok, true} - When the model was loaded
  • {:ok, false} - When the model was not found
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

ps(client, opts \\ [])

@spec ps(
  client(),
  keyword()
) :: response()

Alias for list_running/2 to match the Python client's ps.

pull(client, params)

@spec pull(
  client(),
  keyword()
) :: response()

Alias for pull_model/2 to match the Python client's pull.

pull_model(client, params)

@spec pull_model(
  client(),
  keyword()
) :: response()

Downloads a model from the Ollama library. Optionally streamable.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list with :name and optional :stream

Options

  • :name (String.t/0) - Required. Name of the model to pull.
  • :stream - See section on streaming. The default value is false.

Example

iex> Ollixir.pull_model(client, name: "llama2")
{:ok, %{"status" => "success"}}

# Passing true to the :stream option initiates an async streaming request.
iex> Ollixir.pull_model(client, name: "llama2", stream: true)
{:ok, %Ollixir.Streaming{}}

Returns

  • {:ok, map()} - Status updates or completion
  • {:ok, Stream.t()} - When stream: true
  • {:ok, Task.t()} - When stream: pid
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

push(client, params)

@spec push(
  client(),
  keyword()
) :: response()

Alias for push_model/2 to match the Python client's push.

push_model(client, params)

@spec push_model(
  client(),
  keyword()
) :: response()

Uploads a model to a model library. Requires an Ollama account and a public key from https://ollama.com/settings/keys. Optionally streamable.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list with :name and optional :stream

Options

  • :name (String.t/0) - Required. Name of the model to push.
  • :stream - See section on streaming. The default value is false.

Example

iex> Ollixir.push_model(client, name: "mattw/pygmalion:latest")
{:ok, %{"status" => "success"}}

# Passing true to the :stream option initiates an async streaming request.
iex> Ollixir.push_model(client, name: "mattw/pygmalion:latest", stream: true)
{:ok, %Ollixir.Streaming{}}

Returns

  • {:ok, map()} - Status updates or completion
  • {:ok, Stream.t()} - When stream: true
  • {:ok, Task.t()} - When stream: pid
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

show(client, params)

@spec show(
  client(),
  keyword()
) :: response()

Alias for show_model/2 to match the Python client's show.

show_model(client, params)

@spec show_model(
  client(),
  keyword()
) :: response()

Shows all information for a specific model.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list with :name

Options

  • :name (String.t/0) - Required. Name of the model to show.

Example

iex> Ollixir.show_model(client, name: "llama2")
{:ok, %{
  "details" => %{
    "families" => ["llama", "clip"],
    "family" => "llama",
    "format" => "gguf",
    "parameter_size" => "7B",
    "quantization_level" => "Q4_0"
  },
  "modelfile" => "...",
  "parameters" => "...",
  "template" => "..."
}}

Returns

  • {:ok, map()} - Model details
  • {:error, Ollixir.RequestError.t()} - On validation errors
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

unload(client, params)

@spec unload(
  client(),
  keyword()
) :: response()

Stops a running model and unloads it from memory.

Parameters

  • client - Ollama client from init/1
  • params - Keyword list with :model

Options

  • :model (String.t/0) - Required. Name of the model to unload.

Example

iex> Ollixir.unload(client, model: "llama3.1")
{:ok, true}

Returns

  • {:ok, true} - When the model was unloaded
  • {:ok, false} - When the model was not found
  • {:error, Ollixir.ResponseError.t()} - On HTTP errors

web_fetch(client, params)

@spec web_fetch(
  client(),
  keyword()
) :: response()

Fetches content from a URL using Ollama's cloud fetch API.

Delegates to Ollixir.Web.fetch/2.
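
No options are documented for this function here, so the parameter name below (:url) is an assumption; a usage sketch:

```elixir
# Hypothetical call shape -- :url is an assumed parameter name.
case Ollixir.web_fetch(client, url: "https://example.com") do
  {:ok, fetched} -> IO.inspect(fetched)
  {:error, reason} -> IO.inspect(reason, label: "fetch failed")
end
```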

web_fetch!(client, params)

@spec web_fetch!(
  client(),
  keyword()
) :: Ollixir.Web.FetchResponse.t()

Fetches content from a URL using Ollama's cloud fetch API, raising on error.

Delegates to Ollixir.Web.fetch!/2.

web_search(client, params)

@spec web_search(
  client(),
  keyword()
) :: response()

Searches the web using Ollama's cloud search API.

Delegates to Ollixir.Web.search/2.
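
As with web_fetch/2, no options are documented here; the :query parameter name below is an assumption:

```elixir
# Hypothetical call shape -- :query is an assumed parameter name.
case Ollixir.web_search(client, query: "elixir ollama client") do
  {:ok, results} -> IO.inspect(results)
  {:error, reason} -> IO.inspect(reason, label: "search failed")
end
```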

web_search!(client, params)

@spec web_search!(
  client(),
  keyword()
) :: Ollixir.Web.SearchResponse.t()

Searches the web using Ollama's cloud search API, raising on error.

Delegates to Ollixir.Web.search!/2.