# `Condukt.Providers.Ollama`
[🔗](https://github.com/tuist/condukt/blob/0.16.5/lib/condukt/providers/ollama.ex#L1)

Ollama provider – a self-hosted, OpenAI-compatible Chat Completions API.

## Implementation

Uses built-in OpenAI-style encoding/decoding defaults.
Ollama exposes an OpenAI-compatible API at `/v1`, so no custom
request/response handling is needed.
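
Because the provider delegates to the OpenAI-style defaults, the body sent to `/v1/chat/completions` is plain OpenAI Chat Completions JSON. A representative chat request (field names follow the OpenAI-compatible format; the model name is illustrative):

    {
      "model": "llama3.2",
      "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
      ],
      "stream": false
    }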

## Self-Hosted Configuration

Ollama is a self-hosted inference server. Before using this provider:

1. Install and run [Ollama](https://ollama.com)
2. Pull a model (e.g., `ollama pull llama3.2`)
3. Optionally, set a custom `base_url` if Ollama is not running on localhost
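
The steps above, sketched for a local install (standard Ollama CLI commands; the `curl` check hits the OpenAI-compatible endpoint this provider talks to and requires the server to be running):

    # Start the Ollama server (listens on localhost:11434 by default)
    ollama serve

    # In another terminal, pull a model
    ollama pull llama3.2

    # Sanity check: list available models via the OpenAI-compatible API
    curl http://localhost:11434/v1/models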

## Authentication

Ollama does not require authentication by default.
Set `OLLAMA_API_KEY` to any non-empty value (it is required by the
provider interface but not validated by Ollama).

## Configuration

    # Add to .env file (automatically loaded)
    OLLAMA_API_KEY=ollama

## Examples

    # Basic usage with default localhost
    Condukt.run(agent, "Hello!",
      model: "ollama:llama3.2"
    )

    # With custom base_url for a remote Ollama instance
    MyAgent.start_link(
      model: "ollama:llama3.2",
      base_url: "http://my-server:11434/v1"
    )

# `attach`

Default implementation of `attach/3`.

Sets up Bearer token authentication and standard pipeline steps.
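
With the `.env` value from the configuration section, the resulting header is a standard Bearer token (Ollama accepts any non-empty value):

    Authorization: Bearer ollama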

# `attach_stream`

Default implementation of `attach_stream/4`.

Builds complete streaming requests using OpenAI-compatible format.

# `base_url`

# `build_body`

Default implementation of `build_body/1`.

Builds request body using OpenAI-compatible format for chat and embedding operations.

# `decode_response`

Default implementation of `decode_response/1`.

Handles success/error responses with standard `ReqLLM.Response` creation.

# `decode_stream_event`

Default implementation of `decode_stream_event/2`.

Decodes server-sent events (SSE) using the OpenAI-compatible format.
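
The events being decoded are standard OpenAI-style SSE chunks, terminated by a `[DONE]` sentinel (content is illustrative):

    data: {"choices":[{"index":0,"delta":{"content":"Hel"}}]}

    data: {"choices":[{"index":0,"delta":{"content":"lo!"}}]}

    data: [DONE]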

# `default_base_url`

# `default_env_key`

# `encode_body`

Default implementation of `encode_body/1`.

Encodes request body using OpenAI-compatible format for chat and embedding operations.

# `extract_usage`

Default implementation of `extract_usage/2`.

Extracts usage data from the standard `usage` field in the response body.
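
The `usage` field follows the standard OpenAI shape (token counts are illustrative):

    {
      "usage": {
        "prompt_tokens": 12,
        "completion_tokens": 34,
        "total_tokens": 46
      }
    }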

# `prepare_request`

Default implementation of `prepare_request/4`.

Handles `:chat`, `:object`, and `:embedding` operations using OpenAI-compatible patterns.

# `provider_extended_generation_schema`

# `provider_id`

# `provider_schema`

# `supported_provider_options`

# `translate_options`

Default implementation of `translate_options/3`.

Pass-through implementation that returns options unchanged.

---

*Consult [api-reference.md](api-reference.md) for the complete listing*
