Ollama.API (Ollama v0.2.0)

Client module for interacting with the Ollama API.

Currently supports all Ollama API endpoints except pushing models (/api/push), which is coming soon.

Usage

Assuming you have Ollama running on localhost and have installed a model, use completion/2 or chat/2 to interact with the model.

iex> api = Ollama.API.new

iex> Ollama.API.completion(api, [
...>   model: "llama2",
...>   prompt: "Why is the sky blue?"
...> ])
{:ok, %{"response" => "The sky is blue because it is the color of the sky.", ...}}

iex> Ollama.API.chat(api, [
...>   model: "llama2",
...>   messages: [
...>     %{role: "system", content: "You are a helpful assistant."},
...>     %{role: "user", content: "Why is the sky blue?"},
...>     %{role: "assistant", content: "Due to rayleigh scattering."},
...>     %{role: "user", content: "How is that different than mie scattering?"},
...>   ],
...> ])
{:ok, %{"message" => %{
  "role" => "assistant",
  "content" => "Mie scattering affects all wavelengths similarly, while Rayleigh favors shorter ones."
}, ...}}

Streaming

By default, all endpoints are called with streaming disabled, blocking until the HTTP request completes and the response body is returned. For endpoints where streaming is supported, the :stream option can be set to true or a pid/0. When streaming is enabled, the function returns a Task.t/0, which asynchronously sends messages back to either the calling process or the process associated with the given pid/0.

Messages will be sent in the following format, allowing the receiving process to pattern match against the pid of the async task if known:

{request_pid, {:data, data}}

The data is a map from the Ollama JSON message. See Ollama API docs.

The following example shows how a LiveView process may be constructed to both create the streaming request and receive the streaming messages.

defmodule Ollama.ChatLive do
  use Phoenix.LiveView

  # When the client invokes the "prompt" event, create a streaming request
  # and optionally store the request task into the assigns
  def handle_event("prompt", %{"message" => prompt}, socket) do
    api = Ollama.API.new()
    {:ok, task} = Ollama.API.completion(api, [
      model: "llama2",
      prompt: prompt,
      stream: true
    ])

    {:noreply, assign(socket, current_request: task)}
  end

  # The request task streams messages back to the LiveView process
  def handle_info({_request_pid, {:data, _data}} = message, socket) do
    pid = socket.assigns.current_request.pid

    case message do
      {^pid, {:data, %{"done" => false} = _data}} ->
        # handle each streaming chunk
        {:noreply, socket}

      {^pid, {:data, %{"done" => true} = _data}} ->
        # handle the final streaming chunk
        {:noreply, socket}

      {_pid, _data} ->
        # this message was not expected!
        {:noreply, socket}
    end
  end
end
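
Outside of LiveView, the same messages can be consumed with a plain receive loop. The following module is an illustrative sketch (it is not part of the library): the model name and prompt are examples only, and it assumes each streamed completion chunk is a map with "response" and "done" keys, as described in the Ollama API docs.

defmodule Ollama.StreamingExample do
  # Start a streaming completion request and print each chunk as it arrives
  # in the calling process.
  def print_completion(prompt) do
    api = Ollama.API.new()

    {:ok, task} = Ollama.API.completion(api, [
      model: "llama2",
      prompt: prompt,
      stream: true
    ])

    print_chunks(task.pid)

    # Wait for the request task itself to finish.
    Task.await(task)
  end

  defp print_chunks(pid) do
    receive do
      {^pid, {:data, %{"done" => false} = data}} ->
        # Intermediate chunk: print the partial response and keep listening.
        IO.write(data["response"])
        print_chunks(pid)

      {^pid, {:data, %{"done" => true}}} ->
        # Final chunk: stop waiting for messages.
        IO.puts("")
    end
  end
end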

Summary

Types

message()

Chat message

response()

API function response

t()

Client struct

Functions

chat/2

Generates the next message in a chat using the specified model. Optionally streamable.

check_blob/2

Checks whether a blob exists in Ollama by its digest or binary data.

completion/2

Generates a completion for the given prompt using the specified model. Optionally streamable.

copy_model/2

Creates a model with another name from an existing model.

create_blob/2

Creates a blob from its binary data.

create_model/2

Creates a model using the given name and model file. Optionally streamable.

delete_model/2

Deletes a model and its data.

embeddings/2

Generates embeddings from a model for the given prompt.

list_models/1

Lists all models that Ollama has available.

new/1

Creates a new API client with the provided URL. If no URL is given, it defaults to "http://localhost:11434/api".

pull_model/2

Downloads a model from the Ollama library. Optionally streamable.

show_model/2

Shows all information for a specific model.

Types

@type message() :: map()

Chat message

A chat message is a map/0 with the following fields:

  • role - The role of the message, either system, user or assistant.
  • content - The content of the message.
  • images - (optional) List of Base64 encoded images (for multimodal models only).
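
For example, a user message with an (optional) image attachment might look like the map below; the Base64 string is a truncated placeholder.

%{
  role: "user",
  content: "What is in this image?",
  images: ["iVBORw0KGgoAAAANSUhEUg..."]
}
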
@type response() :: {:ok, Task.t() | map() | boolean()} | {:error, term()}

API function response

@type t() :: %Ollama.API{req: Req.Request.t()}

Client struct

Functions

@spec chat(
  t(),
  keyword()
) :: response()

Generates the next message in a chat using the specified model. Optionally streamable.

Options

Required options:

  • :model - The ollama model name.
  • :messages - List of messages - used to keep a chat memory.

Accepted options:

  • :stream - Defaults to false. See section on streaming.

Message structure

Each message is a map with the following fields:

  • role - The role of the message, either system, user or assistant.
  • content - The content of the message.
  • images - (optional) List of Base64 encoded images (for multimodal models only).

Examples

iex> messages = [
...>   %{role: "system", content: "You are a helpful assistant."},
...>   %{role: "user", content: "Why is the sky blue?"},
...>   %{role: "assistant", content: "Due to rayleigh scattering."},
...>   %{role: "user", content: "How is that different than mie scattering?"},
...> ]

iex> Ollama.API.chat(api, [
...>   model: "llama2",
...>   messages: messages
...> ])
{:ok, %{"message" => %{
  "role" => "assistant",
  "content" => "Mie scattering affects all wavelengths similarly, while Rayleigh favors shorter ones."
}, ...}}

# Passing true to the :stream option initiates an async streaming request.
iex> Ollama.API.chat(api, [
...>   model: "llama2",
...>   messages: messages,
...>   stream: true
...> ])
{:ok, %Task{}}
chat(api, model, messages, opts \\ [])
This function is deprecated. Use Ollama.API.chat/2.
@spec chat(t(), String.t(), [message()], keyword()) :: response()
@spec check_blob(t(), Ollama.Blob.digest() | binary()) :: response()

Checks whether a blob exists in Ollama by its digest or binary data.

Examples

iex> Ollama.API.check_blob(api, "sha256:fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e")
{:ok, true}

iex> Ollama.API.check_blob(api, "this should not exist")
{:ok, false}
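
If you have the binary data locally, a digest in the accepted "sha256:<hex>" format can be derived with Erlang's :crypto module. A minimal sketch (the file path is hypothetical):

iex> data = File.read!("./llama2.gguf")
iex> digest = "sha256:" <> Base.encode16(:crypto.hash(:sha256, data), case: :lower)
iex> Ollama.API.check_blob(api, digest)
{:ok, true}
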
@spec completion(
  t(),
  keyword()
) :: response()

Generates a completion for the given prompt using the specified model. Optionally streamable.

Options

Required options:

  • :model - The ollama model name.
  • :prompt - Prompt to generate a response for.

Accepted options:

  • :images - A list of Base64 encoded images to be included with the prompt (for multimodal models only).
  • :options - Additional advanced model parameters.
  • :system - System prompt, overriding the model default.
  • :template - Prompt template, overriding the model default.
  • :context - The context parameter returned from a previous completion/2 call (enabling short conversational memory).
  • :stream - Defaults to false. See section on streaming.

Examples

iex> Ollama.API.completion(api, [
...>   model: "llama2",
...>   prompt: "Why is the sky blue?",
...> ])
{:ok, %{"response": "The sky is blue because it is the color of the sky.", ...}}

# Passing true to the :stream option initiates an async streaming request.
iex> Ollama.API.completion(api, [
...>   model: "llama2",
...>   prompt: "Why is the sky blue?",
...>   stream: true
...> ])
{:ok, %Task{}}
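
The :context option gives the model short conversational memory across calls. A minimal sketch, assuming the non-streaming response map includes the "context" key described in the Ollama API docs:

iex> {:ok, res} = Ollama.API.completion(api, [
...>   model: "llama2",
...>   prompt: "Why is the sky blue?"
...> ])

iex> Ollama.API.completion(api, [
...>   model: "llama2",
...>   prompt: "Explain that to a five year old.",
...>   context: res["context"]
...> ])
{:ok, %{"response" => "...", ...}}
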
completion(api, model, prompt, opts \\ [])
This function is deprecated. Use Ollama.API.completion/2.
@spec completion(t(), String.t(), String.t(), keyword()) :: response()
@spec copy_model(
  t(),
  keyword()
) :: response()

Creates a model with another name from an existing model.

Options

Required options:

  • :source - Name of the model to copy from.
  • :destination - Name of the model to copy to.

Example

iex> Ollama.API.copy_model(api, [
...>   source: "llama2",
...>   destination: "llama2-backup"
...> ])
{:ok, true}
copy_model(api, from, to)
This function is deprecated. Use Ollama.API.copy_model/2.
@spec copy_model(t(), String.t(), String.t()) :: response()
@spec create_blob(t(), binary()) :: response()

Creates a blob from its binary data.

Example

iex> Ollama.API.create_blob(api, modelfile)
{:ok, true}
create_model(api, params)
@spec create_model(
  t(),
  keyword()
) :: response()

Creates a model using the given name and model file. Optionally streamable.

Any dependent blobs referenced in the modelfile, such as in FROM and ADAPTER instructions, must exist first. See check_blob/2 and create_blob/2.
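
The sketch below shows one way to make sure a dependent blob exists before creating the model; the file path is hypothetical, and it only covers the blob upload step.

api = Ollama.API.new()
blob = File.read!("./mario.gguf")

# Upload the blob only if Ollama doesn't already have it.
with {:ok, false} <- Ollama.API.check_blob(api, blob) do
  {:ok, true} = Ollama.API.create_blob(api, blob)
end

# The model can then be created with a modelfile that references the blob.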

Options

Required options:

  • :name - Name of the model to create.
  • :modelfile - Contents of the Modelfile.

Accepted options:

  • :stream - Defaults to false. See section on streaming.

Example

iex> modelfile = "FROM llama2\nSYSTEM \"You are mario from Super Mario Bros.\""
iex> Ollama.API.create_model(api, [
...>   name: "mario",
...>   modelfile: modelfile,
...>   stream: true
...> ])
{:ok, %Task{}}
create_model(api, model, modelfile, opts \\ [])
This function is deprecated. Use Ollama.API.create_model/2.
@spec create_model(t(), String.t(), String.t(), keyword()) :: response()
delete_model(api, params)
@spec delete_model(
  t(),
  keyword()
) :: response()

Deletes a model and its data.

Options

Required options:

  • :name - Name of the model to delete.

Example

iex> Ollama.API.delete_model(api, name: "llama2")
{:ok, true}
@spec embeddings(
  t(),
  keyword()
) :: response()

Generates embeddings from a model for the given prompt.

Example

iex> Ollama.API.embeddings(api, [
...>   model: "llama2",
...>   prompt: "Here is an article about llamas..."
...> ])
{:ok, %{"embedding" => [
  0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
  0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
]}}
embeddings(api, model, prompt, opts \\ [])
This function is deprecated. Use Ollama.API.embeddings/2.
@spec embeddings(t(), String.t(), String.t(), keyword()) :: response()
@spec list_models(t()) :: response()

Lists all models that Ollama has available.

Example

iex> Ollama.API.list_models(api)
{:ok, %{"models" => [
  %{"name" => "codellama:13b", ...},
  %{"name" => "llama2:latest", ...},
]}}
new(url \\ "http://localhost:11434/api")
@spec new(Req.url() | Req.Request.t()) :: t()

Creates a new API client with the provided URL. If no URL is given, it defaults to "http://localhost:11434/api".

Examples

iex> api = Ollama.API.new("https://ollama.service.ai:11434")
%Ollama.API{}
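
Since the spec also accepts a Req.Request.t/0, a pre-configured Req request can be supplied instead of a URL. A minimal sketch (the timeout value is illustrative, and it assumes the given struct is used as the underlying HTTP client):

iex> req = Req.new(base_url: "http://localhost:11434/api", receive_timeout: 60_000)
iex> Ollama.API.new(req)
%Ollama.API{}
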
@spec pull_model(
  t(),
  keyword()
) :: response()

Downloads a model from the Ollama library. Optionally streamable.

Options

Required options:

  • :name - Name of the model to pull.

The following options are accepted:

  • :stream - Defaults to false. See section on streaming.

Example

# Passing true to the :stream option initiates an async streaming request. Each
# streamed message contains a status map, e.g. %{"status" => "pulling manifest"}.
iex> Ollama.API.pull_model(api, [
...>   name: "llama2",
...>   stream: true
...> ])
{:ok, %Task{}}
pull_model(api, model, opts)
This function is deprecated. Use Ollama.API.pull_model/2.
@spec pull_model(t(), String.t(), keyword()) :: response()
@spec show_model(
  t(),
  keyword()
) :: response()

Shows all information for a specific model.

Options

Required options:

  • :name - Name of the model to show.

Example

iex> Ollama.API.show_model(api, name: "llama2")
{:ok, %{
  "details" => %{
    "families" => ["llama", "clip"],
    "family" => "llama",
    "format" => "gguf",
    "parameter_size" => "7B",
    "quantization_level" => "Q4_0"
  },
  "modelfile" => "...",
  "parameters" => "...",
  "template" => "..."
}}