Gateway for Ollama local LLM service.
This gateway provides access to local LLM models through Ollama, supporting text generation, structured output, tool calling, and embeddings.
Configuration
Set environment variables to configure the gateway:
export OLLAMA_HOST=http://localhost:11434
export OLLAMA_TIMEOUT=300000 # 5 minutes in milliseconds (default)
The timeout is especially important for larger models, which may take longer to generate responses.
Examples
alias Mojentic.LLM.{Broker, Message}
alias Mojentic.LLM.Gateways.Ollama
broker = Broker.new("qwen3:32b", Ollama)
messages = [Message.user("Hello!")]
{:ok, response} = Broker.generate(broker, messages)
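Matching only on the success tuple will crash if generation fails (for example, if the Ollama host is unreachable or the model has not been pulled). A minimal sketch of explicit error handling, assuming that on failure the broker returns an {:error, reason} tuple, mirroring the {:ok, _}/{:error, _} convention documented for pull_model below (the exact error shape is an assumption, not documented here):

```elixir
# ASSUMPTION: Broker.generate/2 returns {:error, reason} on failure;
# only the {:ok, response} shape is shown in the docs above.
handle_result = fn
  {:ok, response} -> {:reply, response}
  {:error, reason} -> {:failed, "Generation failed: #{inspect(reason)}"}
end

# Hypothetical usage:
# broker |> Broker.generate(messages) |> handle_result.()
```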
Summary
Functions
Pulls a model from the Ollama library with progress tracking.
Functions
Pulls a model from the Ollama library with progress tracking.
This function streams the model download progress, calling the optional progress callback function with status updates.
Parameters
model - The name of the model to pull (e.g., "qwen3:32b")
progress_callback - Optional function that receives progress updates. The callback is called with a map containing:
  :status - Status message (e.g., "downloading", "verifying", "success")
  :completed - Bytes downloaded (if available)
  :total - Total bytes to download (if available)
  :digest - Layer digest being processed (if available)
Returns
{:ok, model_name} - Model successfully pulled
{:error, reason} - Pull failed
Examples
# Pull without progress tracking
iex> Ollama.pull_model("qwen3:32b")
{:ok, "qwen3:32b"}
# Pull with progress tracking
iex> callback = fn status -> IO.puts("Status: #{status.status}") end
iex> Ollama.pull_model("qwen3:32b", callback)
{:ok, "qwen3:32b"}
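The progress map described under Parameters can drive a percentage display. A minimal sketch of such a callback, assuming :completed and :total are present during download; since the docs note those keys may be absent, the callback matches on the map's shape and falls back to the bare status:

```elixir
# Turn a progress update map into a one-line status string.
# :completed and :total may be absent on some updates, so match on shape.
format_progress = fn
  %{completed: completed, total: total} when is_integer(total) and total > 0 ->
    percent = Float.round(completed / total * 100, 1)
    "Downloading: #{percent}%"

  %{status: status} ->
    # Updates like "verifying" or "success" carry no byte counts.
    "Status: #{status}"
end

# Hypothetical usage with pull_model/2:
# Ollama.pull_model("qwen3:32b", fn update -> IO.puts(format_progress.(update)) end)
```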