Condukt.Providers.Ollama (Condukt v0.16.5)

Ollama provider – self-hosted OpenAI-compatible Chat Completions API.

Implementation

Uses built-in OpenAI-style encoding/decoding defaults. Ollama exposes an OpenAI-compatible API at /v1, so no custom request/response handling is needed.
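Because the API is OpenAI-compatible, you can sanity-check a local Ollama server directly, bypassing Condukt. A minimal sketch using the Req HTTP client, assuming Ollama is running on its default port 11434 with llama3.2 already pulled:

```elixir
# Direct call to Ollama's OpenAI-compatible Chat Completions endpoint.
# Assumes a local Ollama server on the default port with llama3.2 pulled.
resp =
  Req.post!("http://localhost:11434/v1/chat/completions",
    json: %{
      model: "llama3.2",
      messages: [%{role: "user", content: "Hello!"}]
    }
  )

# The body follows the OpenAI chat-completion shape.
resp.body["choices"] |> List.first() |> get_in(["message", "content"])
```

If this call succeeds, Condukt's default OpenAI-style encoding/decoding will work against the same endpoint unchanged.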

Self-Hosted Configuration

Ollama is a self-hosted inference server. Users must:

  1. Install and run Ollama (https://ollama.com)
  2. Pull a model (e.g., ollama pull llama3.2)
  3. Optionally set a custom base_url if not running on localhost

Authentication

Ollama does not require authentication by default. Set OLLAMA_API_KEY to any non-empty value (it is required by the provider interface but not validated by Ollama).

Configuration

# Add to .env file (automatically loaded)
OLLAMA_API_KEY=ollama

Examples

# Basic usage with default localhost
Condukt.run(agent, "Hello!",
  model: "ollama:llama3.2"
)

# With custom base_url for a remote Ollama instance
MyAgent.start_link(
  model: "ollama:llama3.2",
  base_url: "http://my-server:11434/v1"
)


Functions

attach(request, model_input, user_opts)

Default implementation of attach/3.

Sets up Bearer token authentication and standard pipeline steps.

attach_stream(model, context, opts, finch_name)

Default implementation of attach_stream/4.

Builds complete streaming requests using OpenAI-compatible format.

base_url()

build_body(request)

Default implementation of build_body/1.

Builds request body using OpenAI-compatible format for chat and embedding operations.

decode_response(request_response)

Default implementation of decode_response/1.

Handles success/error responses with standard ReqLLM.Response creation.

decode_stream_event(event, model)

Default implementation of decode_stream_event/2.

Decodes SSE events using OpenAI-compatible format.
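For orientation, a hand-rolled sketch of decoding one OpenAI-style SSE data line (the `data: ` prefix, `[DONE]` sentinel, and `delta` shape are OpenAI's streaming format; this is illustrative, not Condukt's internal implementation):

```elixir
# Hypothetical sketch: decode a single OpenAI-style SSE chat chunk by hand.
line = ~s(data: {"choices":[{"delta":{"content":"Hello"},"index":0}]})

"data: " <> payload = line

case payload do
  # Streams terminate with a literal [DONE] sentinel rather than JSON.
  "[DONE]" ->
    :done

  json ->
    json
    |> Jason.decode!()
    |> get_in(["choices", Access.at(0), "delta", "content"])
end
```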

default_base_url()

default_env_key()

Callback implementation for ReqLLM.Provider.default_env_key/0.

encode_body(request)

Default implementation of encode_body/1.

Encodes request body using OpenAI-compatible format for chat and embedding operations.

extract_usage(body, model)

Default implementation of extract_usage/2.

Extracts usage data from standard usage field in response body.
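The `usage` field follows the OpenAI response format; the field names below are OpenAI's, while the token counts are made up for illustration:

```elixir
# OpenAI-style usage block as it appears in the response body;
# extract_usage/2 reads these fields. Counts are illustrative.
%{
  "usage" => %{
    "prompt_tokens" => 12,
    "completion_tokens" => 34,
    "total_tokens" => 46
  }
}
```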

prepare_request(operation, model_spec, input, opts)

Default implementation of prepare_request/4.

Handles :chat, :object, and :embedding operations using OpenAI-compatible patterns.

provider_extended_generation_schema()

provider_id()

provider_schema()

supported_provider_options()

translate_options(operation, model, opts)

Default implementation of translate_options/3.

Pass-through implementation that returns options unchanged.