# `Ollixir`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1)

Elixir client for the Ollama API.

## Quick Start

    client = Ollixir.init()
    {:ok, response} = Ollixir.chat(client,
      model: "llama3.2",
      messages: [%{role: "user", content: "Hello!"}]
    )

## Client Configuration

    # Default (localhost:11434)
    client = Ollixir.init()

    # Custom host
    client = Ollixir.init("http://ollama.example.com:11434")

    # With options
    client = Ollixir.init(
      base_url: "http://localhost:11434/api",
      receive_timeout: 120_000,
      headers: [{"authorization", "Bearer token"}]
    )

## Streaming

Two modes are available:

### Enumerable Mode

    {:ok, stream} = Ollixir.chat(client, model: "llama3.2", messages: msgs, stream: true)
    Enum.each(stream, &IO.inspect/1)

### Process Mode (for GenServer/LiveView)

    {:ok, task} = Ollixir.chat(client, model: "llama3.2", messages: msgs, stream: self())
    # Receive messages with handle_info/2
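
In process mode the request runs in a `Task` that sends results back to the given pid. The sketch below shows how a GenServer (the same pattern applies to a LiveView) might accumulate streamed chunks; the `{pid, {:data, data}}` message shape is an assumption here, so check the Streaming Guide for the actual protocol:

```elixir
defmodule ChatConsumer do
  use GenServer

  @impl true
  def init(_opts), do: {:ok, %{reply: ""}}

  # Assumed shape: each streamed chunk arrives as {task_pid, {:data, data}}
  @impl true
  def handle_info({_pid, {:data, %{"message" => %{"content" => chunk}}}}, state) do
    # Append the chunk to the accumulated reply
    {:noreply, %{state | reply: state.reply <> chunk}}
  end

  # Ignore anything else (e.g. the Task's completion messages)
  def handle_info(_other, state), do: {:noreply, state}
end
```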

See the [Streaming Guide](guides/streaming.md) for details.

## Error Handling

All functions return `{:ok, result}` or `{:error, reason}`.

    case Ollixir.chat(client, opts) do
      {:ok, response} -> handle_success(response)
      {:error, %Ollixir.ConnectionError{} = error} -> handle_connection(error)
      {:error, %Ollixir.ResponseError{status: 404}} -> handle_not_found()
      {:error, %Ollixir.ResponseError{status: status}} -> handle_error(status)
    end

## Links

- [GitHub](https://github.com/nshkrdotcom/ollixir)
- [Ollama API Docs](https://github.com/ollama/ollama/blob/main/docs/api.md)

# `client`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L66)

```elixir
@type client() :: %Ollixir{req: Req.Request.t()}
```

Client struct

# `message`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L103)

```elixir
@type message() ::
  {:role, term()}
  | {:content, binary() | nil}
  | {:images, [binary()]}
  | {:tool_name, binary()}
  | {:tool_calls, [%{optional(atom() | binary()) => term()}]}
```

Chat message

A chat message is a `t:map/0` with the following fields:

* `:role` - Required. The role of the message, either `system`, `user`, `assistant` or `tool`.
* `:content` - The content of the message. Optional for tool calls.
* `:images` (list of `t:String.t/0`) - *(optional)* List of Base64 encoded images (for multimodal models only).
* `:tool_name` (`t:String.t/0`) - *(optional)* Tool name for tool responses.
* `:tool_calls` - *(optional)* List of tools the model wants to use.
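
Putting the fields together, a tool round-trip uses two of these messages. The tool name and payload below are illustrative, not part of the API:

```elixir
# The assistant requests a tool call; :content may be nil for tool calls
tool_call_msg = %{
  role: "assistant",
  content: nil,
  tool_calls: [%{"function" => %{"name" => "get_weather", "arguments" => %{"city" => "Paris"}}}]
}

# The tool's reply is sent back with role "tool" and the matching :tool_name
tool_reply_msg = %{
  role: "tool",
  tool_name: "get_weather",
  content: ~s({"temperature": 22})
}
```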

# `response`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L143)

```elixir
@type response() ::
  {:ok, map() | boolean() | binary() | Enumerable.t() | Task.t()}
  | {:error, term()}
```

Client response

# `tool`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L140)

```elixir
@type tool() :: {:type, binary()} | {:function, map()}
```

Tool definition

A tool definition is a `t:map/0` with the following fields:

* `:type` (`t:String.t/0`) - Type of tool. The default value is `"function"`.
* `:function` (`t:map/0`) - Required.
  * `:name` (`t:String.t/0`) - Required. The name of the function to be called.
  * `:description` (`t:String.t/0`) - A description of what the function does.
  * `:parameters` - Required. The parameters the function accepts.
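
A complete tool definition assembled from the fields above might look like this (the function name and JSON-schema parameters are illustrative):

```elixir
tool = %{
  type: "function",
  function: %{
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: %{
      # JSON Schema describing the function's arguments
      type: "object",
      properties: %{city: %{type: "string", description: "City name"}},
      required: ["city"]
    }
  }
}
```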

# `chat`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L618)

```elixir
@spec chat(
  client(),
  keyword()
) :: response()
```

Generates the next message in a chat using the specified model. Optionally
streamable.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list of chat options (see below)

## Options

* `:model` (`t:String.t/0`) - Required. The Ollama model name.
* `:messages` (list of `t:map/0`) - Required. List of messages - used to keep a chat memory.
* `:tools` (list of `t:map/0`) - Tools for the model to use if supported (requires `stream` to be `false`)
* `:format` - Set the expected format of the response (`json` or JSON schema map).
* `:stream` - See [section on streaming](#module-streaming). The default value is `false`.
* `:think` - Enable thinking mode. Can be `true`/`false`, or a level: `"low"`, `"medium"`, or `"high"`. The default value is `false`.
* `:logprobs` (`t:boolean/0`) - Return log probabilities for generated tokens
* `:top_logprobs` (`t:integer/0`) - Number of alternative tokens to return (0-20)
* `:keep_alive` - How long to keep the model loaded.
* `:options` - Additional advanced [model parameters](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values).

## Message structure

Each message is a map with the following fields:

* `:role` - Required. The role of the message, either `system`, `user`, `assistant` or `tool`.
* `:content` - The content of the message. Optional for tool calls.
* `:images` (list of `t:String.t/0`) - *(optional)* List of Base64 encoded images (for multimodal models only).
* `:tool_name` (`t:String.t/0`) - *(optional)* Tool name for tool responses.
* `:tool_calls` - *(optional)* List of tools the model wants to use.

## Tool definitions

* `:type` (`t:String.t/0`) - Type of tool. The default value is `"function"`.
* `:function` (`t:map/0`) - Required.
  * `:name` (`t:String.t/0`) - Required. The name of the function to be called.
  * `:description` (`t:String.t/0`) - A description of what the function does.
  * `:parameters` - Required. The parameters the function accepts.

## Examples

    iex> messages = [
    ...>   %{role: "system", content: "You are a helpful assistant."},
    ...>   %{role: "user", content: "Why is the sky blue?"},
    ...>   %{role: "assistant", content: "Due to rayleigh scattering."},
    ...>   %{role: "user", content: "How is that different than mie scattering?"}
    ...> ]

    iex> Ollixir.chat(client, [
    ...>   model: "llama2",
    ...>   messages: messages
    ...> ])
    {:ok, %{"message" => %{
      "role" => "assistant",
      "content" => "Mie scattering affects all wavelengths similarly, while Rayleigh favors shorter ones."
    }, ...}}

    # Passing true to the :stream option initiates an async streaming request.
    iex> Ollixir.chat(client, [
    ...>   model: "llama2",
    ...>   messages: messages,
    ...>   stream: true
    ...> ])
    {:ok, %Ollixir.Streaming{}}

## Returns

- `{:ok, map()}` - Success with response data
- `{:ok, Stream.t()}` - When `stream: true`
- `{:ok, Task.t()}` - When `stream: pid`
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `completion/2` - For single-turn generation
- `embed/2` - For embeddings

# `check_blob`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1433)

```elixir
@spec check_blob(client(), Ollixir.Blob.digest() | binary()) :: response()
```

Checks whether a blob exists in Ollama, by its digest or its raw binary data.

## Parameters

- `client` - Ollama client from `init/1`
- `digest_or_blob` - Digest string or raw binary data

## Examples

    iex> Ollixir.check_blob(client, "sha256:fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e")
    {:ok, true}

    iex> Ollixir.check_blob(client, "this should not exist")
    {:ok, false}

## Returns

- `{:ok, true}` - When the blob exists
- `{:ok, false}` - When the blob does not exist
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `create_blob/2` - Create a blob

# `completion`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L748)

```elixir
@spec completion(
  client(),
  keyword()
) :: response()
```

Generates a completion for the given prompt using the specified model.
Optionally streamable.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list of completion options (see below)

## Options

* `:model` (`t:String.t/0`) - Required. The Ollama model name.
* `:prompt` (`t:String.t/0`) - Required. Prompt to generate a response for.
* `:suffix` (`t:String.t/0`) - Text to append after generated content (for code completion)
* `:images` (list of `t:String.t/0`) - A list of Base64 encoded images to be included with the prompt (for multimodal models only).
* `:system` (`t:String.t/0`) - System prompt, overriding the model default.
* `:template` (`t:String.t/0`) - Prompt template, overriding the model default.
* `:context` - The context parameter returned from a previous `completion/2` call (enabling short conversational memory).
* `:format` - Set the expected format of the response (`json` or JSON schema map).
* `:raw` (`t:boolean/0`) - Set `true` if specifying a fully templated prompt. (`:template` is ignored)
* `:stream` - See [section on streaming](#module-streaming). The default value is `false`.
* `:think` - Enable thinking mode. Can be `true`/`false`, or a level: `"low"`, `"medium"`, or `"high"`. The default value is `false`.
* `:logprobs` (`t:boolean/0`) - Return log probabilities for generated tokens
* `:top_logprobs` (`t:integer/0`) - Number of alternative tokens to return (0-20)
* `:keep_alive` - How long to keep the model loaded.
* `:options` - Additional advanced [model parameters](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values).

## Examples

    iex> Ollixir.completion(client, [
    ...>   model: "llama2",
    ...>   prompt: "Why is the sky blue?"
    ...> ])
    {:ok, %{"response" => "The sky is blue because it is the color of the sky.", ...}}

    # Passing true to the :stream option initiates an async streaming request.
    iex> Ollixir.completion(client, [
    ...>   model: "llama2",
    ...>   prompt: "Why is the sky blue?",
    ...>   stream: true
    ...> ])
    {:ok, %Ollixir.Streaming{}}

## Returns

- `{:ok, map()}` - Success with response data
- `{:ok, Stream.t()}` - When `stream: true`
- `{:ok, Task.t()}` - When `stream: pid`
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `chat/2` - For multi-turn conversations
- `embed/2` - For embeddings

# `copy`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1186)

```elixir
@spec copy(
  client(),
  keyword()
) :: response()
```

Alias for `copy_model/2` to match the Python client's `copy`.

# `copy_model`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1160)

```elixir
@spec copy_model(
  client(),
  keyword()
) :: response()
```

Creates a model with another name from an existing model.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list with `:source` and `:destination`

## Options

* `:source` (`t:String.t/0`) - Required. Name of the model to copy from.
* `:destination` (`t:String.t/0`) - Required. Name of the model to copy to.

## Example

    iex> Ollixir.copy_model(client, [
    ...>   source: "llama2",
    ...>   destination: "llama2-backup"
    ...> ])
    {:ok, true}

## Returns

- `{:ok, true}` - When the copy succeeded
- `{:ok, false}` - When the model was not found
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `delete_model/2` - Delete a model
- `show_model/2` - Inspect a model

# `create`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L867)

```elixir
@spec create(
  client(),
  keyword()
) :: response()
```

Alias for `create_model/2` to match the Python client's `create`.

# `create_blob`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1467)

```elixir
@spec create_blob(client(), binary()) :: response()
```

Uploads a blob and returns its digest.

## Parameters

- `client` - Ollama client from `init/1`
- `blob` - File path or raw binary data

## Examples

    iex> Ollixir.create_blob(client, "adapter.bin")
    {:ok, "sha256:..."}

    iex> data = File.read!("adapter.bin")
    iex> Ollixir.create_blob(client, data)
    {:ok, "sha256:..."}

## Returns

- `{:ok, digest}` - When the blob was created or already exists
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `check_blob/2` - Verify blob existence

# `create_model`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L827)

```elixir
@spec create_model(
  client(),
  keyword()
) :: response()
```

Creates a model using the given name and model file. Optionally
streamable.

Any dependent blobs referenced in the modelfile, such as `FROM` and `ADAPTER`
instructions, must exist first. See `check_blob/2` and `create_blob/2`.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list of model creation options (see below)

## Options

* `:name` (`t:String.t/0`) - Required. Name for the new model
* `:modelfile` (`t:String.t/0`) - Modelfile contents
* `:from` (`t:String.t/0`) - Base model to create from
* `:files` (map of `t:String.t/0` keys and `t:String.t/0` values) - Custom files to include
* `:adapters` (map of `t:String.t/0` keys and `t:String.t/0` values) - LoRA adapter files
* `:template` (`t:String.t/0`) - Custom prompt template
* `:license` - License declaration
* `:system` (`t:String.t/0`) - System prompt
* `:parameters` - Model parameters
* `:messages` (list of `t:map/0`) - Sample conversation messages
* `:quantize` (`t:String.t/0`) - Quantization level (f16, f32, etc.)
* `:stream` - Enable streaming

## Example

    iex> modelfile = "FROM llama2\nSYSTEM \"You are mario from Super Mario Bros.\""
    iex> Ollixir.create_model(client, [
    ...>   name: "mario",
    ...>   modelfile: modelfile,
    ...>   stream: true
    ...> ])
    {:ok, %Ollixir.Streaming{}}

## Returns

- `{:ok, map()}` - Success with response data
- `{:ok, Stream.t()}` - When `stream: true`
- `{:ok, Task.t()}` - When `stream: pid`
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `check_blob/2` - Verify dependent blobs
- `create_blob/2` - Create blob dependencies

# `delete`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1254)

```elixir
@spec delete(
  client(),
  keyword()
) :: response()
```

Alias for `delete_model/2` to match the Python client's `delete`.

# `delete_model`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1228)

```elixir
@spec delete_model(
  client(),
  keyword()
) :: response()
```

Deletes a model and its data.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list with `:name`

## Options

* `:name` (`t:String.t/0`) - Required. Name of the model to delete.

## Example

    iex> Ollixir.delete_model(client, name: "llama2")
    {:ok, true}

## Returns

- `{:ok, true}` - When the delete succeeded
- `{:ok, false}` - When the model was not found
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `copy_model/2` - Copy a model
- `show_model/2` - Inspect a model

# `embed`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1574)

```elixir
@spec embed(
  client(),
  keyword()
) :: response()
```

Generates embeddings from a model for the given input.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list of embed options (see below)

## Options

* `:model` (`t:String.t/0`) - Required. The name of the model used to generate the embeddings.
* `:input` - Required. Text or list of text to generate embeddings for.
* `:truncate` (`t:boolean/0`) - Truncates the end of each input to fit within context length.
* `:dimensions` (`t:integer/0`) - Output embedding dimensions (model-specific)
* `:keep_alive` - How long to keep the model loaded.
* `:options` - Additional advanced [model parameters](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values).

## Example

    iex> Ollixir.embed(client, [
    ...>   model: "nomic-embed-text",
    ...>   input: ["Why is the sky blue?", "Why is the grass green?"]
    ...> ])
    {:ok, %{"embeddings" => [
      [ 0.009724553, 0.04449892, -0.14063916, 0.0013168337, 0.032128844,
        0.10730086, -0.008447222, 0.010106917, 5.2289694e-4, -0.03554127, ...],
      [ 0.028196355, 0.043162502, -0.18592504, 0.035034444, 0.055619627,
        0.12082449, -0.0090096295, 0.047170386, -0.032078084, 0.0047163847, ...]
    ]}}

## Returns

- `{:ok, map()}` - Embedding response data
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `embeddings/2` - Deprecated embedding API

# `embeddings`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1644)

> This function is deprecated. Superseded by embed/2.

```elixir
@spec embeddings(
  client(),
  keyword()
) :: response()
```

Generates embeddings from a model for the given prompt.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list of embedding options (see below)

## Options

* `:model` (`t:String.t/0`) - Required. The name of the model used to generate the embeddings.
* `:prompt` (`t:String.t/0`) - Required. The prompt used to generate the embedding.
* `:keep_alive` - How long to keep the model loaded.
* `:options` - Additional advanced [model parameters](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values).

## Example

    iex> Ollixir.embeddings(client, [
    ...>   model: "llama2",
    ...>   prompt: "Here is an article about llamas..."
    ...> ])
    {:ok, %{"embedding" => [
      0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
      0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
    ]}}

## Returns

- `{:ok, map()}` - Embedding response data
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `embed/2` - Preferred embedding API

# `generate`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L768)

```elixir
@spec generate(
  client(),
  keyword()
) :: response()
```

Alias for `completion/2` to match the Python client's `generate`.

# `init`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L192)

```elixir
@spec init(Req.url() | keyword() | Req.Request.t()) :: client()
```

Initializes a new Ollama client.

## Parameters

- `opts` - Base URL, host string, `%URI{}`, `Req.Request`, or keyword options for `Req.new/1`.

## Environment Variables

- `OLLAMA_HOST` - Default Ollama server URL (default: http://localhost:11434)
- `OLLAMA_API_KEY` - Bearer token for API authentication

## Examples

    # Uses OLLAMA_HOST or defaults to localhost:11434
    client = Ollixir.init()

    # Explicit URL (overrides OLLAMA_HOST)
    client = Ollixir.init("http://ollama.example.com:11434")

    # Host strings without a scheme use http:// and default port 11434
    client = Ollixir.init("ollama.example.com")
    client = Ollixir.init(":11434")

    # With host option
    client = Ollixir.init(host: "ollama.example.com:11434")

    # With custom options
    client = Ollixir.init(receive_timeout: 120_000)

## Returns

- `t:client/0` - Configured Ollama client

## See Also

- `chat/2` - Chat API requests
- `completion/2` - Completion API requests

# `list`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L911)

```elixir
@spec list(
  client(),
  keyword()
) :: response()
```

Alias for `list_models/2` to match the Python client's `list`.

# `list_models`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L897)

```elixir
@spec list_models(
  client(),
  keyword()
) :: response()
```

Lists all models that Ollama has available.

## Parameters

- `client` - Ollama client from `init/1`

## Example

    iex> Ollixir.list_models(client)
    {:ok, %{"models" => [
      %{"name" => "codellama:13b", ...},
      %{"name" => "llama2:latest", ...},
    ]}}

## Returns

- `{:ok, map()}` - Map containing available models
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `show_model/2` - Fetch model details
- `list_running/1` - List running models

# `list_running`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L940)

```elixir
@spec list_running(
  client(),
  keyword()
) :: response()
```

Lists currently running models, their memory footprint, and process details.

## Parameters

- `client` - Ollama client from `init/1`

## Example

    iex> Ollixir.list_running(client)
    {:ok, %{"models" => [
      %{"name" => "nomic-embed-text:latest", ...},
    ]}}

## Returns

- `{:ok, map()}` - Map containing running models
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `list_models/1` - List available models
- `show_model/2` - Fetch model details

# `preload`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1000)

```elixir
@spec preload(
  client(),
  keyword()
) :: response()
```

Loads a model into memory without generating a completion. Optionally specify a
`:keep_alive` value (defaults to 5 minutes; set `-1` to keep the model loaded permanently).

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list with `:model` and optional `:keep_alive`

## Options

* `:model` (`t:String.t/0`) - Required. Name of the model to load.
* `:keep_alive` - How long to keep the model loaded.

## Example

    iex> Ollixir.preload(client, model: "llama3.1", timeout: 3_600_000)
    {:ok, true}

## Returns

- `{:ok, true}` - When the model was loaded
- `{:ok, false}` - When the model was not found
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `unload/2` - Unload a model
- `list_running/1` - Check running models

# `ps`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L954)

```elixir
@spec ps(
  client(),
  keyword()
) :: response()
```

Alias for `list_running/2` to match the Python client's `ps`.

# `pull`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1327)

```elixir
@spec pull(
  client(),
  keyword()
) :: response()
```

Alias for `pull_model/2` to match the Python client's `pull`.

# `pull_model`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1309)

```elixir
@spec pull_model(
  client(),
  keyword()
) :: response()
```

Downloads a model from the Ollama library. Optionally streamable.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list with `:name` and optional `:stream`

## Options

* `:name` (`t:String.t/0`) - Required. Name of the model to pull.
* `:insecure` (`t:boolean/0`) - Allow insecure (HTTP) connections.
* `:stream` - See [section on streaming](#module-streaming). The default value is `false`.

## Example

    iex> Ollixir.pull_model(client, name: "llama2")
    {:ok, %{"status" => "success"}}

    # Passing true to the :stream option initiates an async streaming request.
    iex> Ollixir.pull_model(client, name: "llama2", stream: true)
    {:ok, %Ollixir.Streaming{}}

## Returns

- `{:ok, map()}` - Status updates or completion
- `{:ok, Stream.t()}` - When `stream: true`
- `{:ok, Task.t()}` - When `stream: pid`
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `push_model/2` - Upload a model

# `push`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1401)

```elixir
@spec push(
  client(),
  keyword()
) :: response()
```

Alias for `push_model/2` to match the Python client's `push`.

# `push_model`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1383)

```elixir
@spec push_model(
  client(),
  keyword()
) :: response()
```

Upload a model to a model library. Requires an Ollama account and a public
key from https://ollama.com/settings/keys. Optionally streamable.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list with `:name` and optional `:stream`

## Options

* `:name` (`t:String.t/0`) - Required. Name of the model to push.
* `:insecure` (`t:boolean/0`) - Allow insecure (HTTP) connections.
* `:stream` - See [section on streaming](#module-streaming). The default value is `false`.

## Example

    iex> Ollixir.push_model(client, name: "mattw/pygmalion:latest")
    {:ok, %{"status" => "success"}}

    # Passing true to the :stream option initiates an async streaming request.
    iex> Ollixir.push_model(client, name: "mattw/pygmalion:latest", stream: true)
    {:ok, %Ollixir.Streaming{}}

## Returns

- `{:ok, map()}` - Status updates or completion
- `{:ok, Stream.t()}` - When `stream: true`
- `{:ok, Task.t()}` - When `stream: pid`
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `pull_model/2` - Download a model

# `show`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1110)

```elixir
@spec show(
  client(),
  keyword()
) :: response()
```

Alias for `show_model/2` to match the Python client's `show`.

# `show_model`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1094)

```elixir
@spec show_model(
  client(),
  keyword()
) :: response()
```

Shows all information for a specific model.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list with `:name`

## Options

* `:name` (`t:String.t/0`) - Required. Name of the model to show.

## Example

    iex> Ollixir.show_model(client, name: "llama2")
    {:ok, %{
      "details" => %{
        "families" => ["llama", "clip"],
        "family" => "llama",
        "format" => "gguf",
        "parameter_size" => "7B",
        "quantization_level" => "Q4_0"
      },
      "modelfile" => "...",
      "parameters" => "...",
      "template" => "..."
    }}

## Returns

- `{:ok, map()}` - Model details
- `{:error, Ollixir.RequestError.t()}` - On validation errors
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `list_models/1` - List available models

# `unload`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1037)

```elixir
@spec unload(
  client(),
  keyword()
) :: response()
```

Stops a running model and unloads it from memory.

## Parameters

- `client` - Ollama client from `init/1`
- `params` - Keyword list with `:model`

## Options

- `:model` (`t:String.t/0`) - Required. Name of the model to unload.

## Example

    iex> Ollixir.unload(client, model: "llama3.1")
    {:ok, true}

## Returns

- `{:ok, true}` - When the model was unloaded
- `{:ok, false}` - When the model was not found
- `{:error, Ollixir.ResponseError.t()}` - On HTTP errors

## See Also

- `preload/2` - Load a model
- `list_running/1` - Check running models

# `web_fetch`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1683)

```elixir
@spec web_fetch(
  client(),
  keyword()
) :: response()
```

Fetch content from a URL using Ollama's cloud fetch API.

Delegates to `Ollixir.Web.fetch/2`.
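
A minimal sketch of a call; the `:url` option name is an assumption based on the endpoint's purpose, so consult `Ollixir.Web.fetch/2` for the actual parameters:

```elixir
# Assumed option name :url; requires network access and cloud API credentials
{:ok, page} = Ollixir.web_fetch(client, url: "https://example.com")
```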

# `web_fetch!`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1693)

```elixir
@spec web_fetch!(
  client(),
  keyword()
) :: Ollixir.Web.FetchResponse.t()
```

Fetch content from a URL using Ollama's cloud fetch API, raising on error.

Delegates to `Ollixir.Web.fetch!/2`.

# `web_search`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1663)

```elixir
@spec web_search(
  client(),
  keyword()
) :: response()
```

Search the web using Ollama's cloud search API.

Delegates to `Ollixir.Web.search/2`.
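
A minimal sketch of a call; the `:query` option name is an assumption based on the endpoint's purpose, so consult `Ollixir.Web.search/2` for the actual parameters:

```elixir
# Assumed option name :query; requires network access and cloud API credentials
{:ok, results} = Ollixir.web_search(client, query: "elixir ollama client")
```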

# `web_search!`
[🔗](https://github.com/nshkrdotcom/ollixir/blob/main/lib/ollixir.ex#L1673)

```elixir
@spec web_search!(
  client(),
  keyword()
) :: Ollixir.Web.SearchResponse.t()
```

Search the web using Ollama's cloud search API, raising on error.

Delegates to `Ollixir.Web.search!/2`.

---

*Consult [api-reference.md](api-reference.md) for the complete listing*
