LlmCore.LLM.Ollama (llm_core v0.3.0)

Ollama provider implementing the LlmCore.LLM.Provider behaviour.

Supports both synchronous and streaming chat completions, including optional structured-output mode using Ollama's JSON formatting.
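A minimal usage sketch follows. The option keys shown (:model, :format) and the plain-string prompt are assumptions for illustration only; this page does not document the exact option names, and the accepted prompt shape is whatever LlmCore.LLM.Provider.prompt() permits.

# Sketch only: :model and :format are assumed option keys, and the prompt
# is assumed to be accepted as a plain string.
{:ok, response} =
  LlmCore.LLM.Ollama.send("List three Elixir web frameworks as JSON",
    model: "llama3",
    format: :json
  )

# response is an LlmCore.LLM.Response struct carrying the model output.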

Summary

Functions

available?()
Checks if the Ollama server is reachable at the configured base URL.

capabilities()
Returns the Ollama capability map including streaming, structured output, and tool use support.

provider_type()
Returns :local — Ollama is a local inference provider.

send(prompt, opts \\ [])
Sends a prompt to the Ollama chat API and returns the response.

stream(prompt, opts \\ [])
Streams a response from the Ollama chat API.

Functions

available?()

@spec available?() :: boolean()

Checks if the Ollama server is reachable at the configured base URL.
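
For example, a quick reachability check before issuing requests:

LlmCore.LLM.Ollama.available?()
#=> true when the configured Ollama server responds, false otherwise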

capabilities()

@spec capabilities() :: LlmCore.LLM.Provider.capabilities()

Returns the Ollama capability map including streaming, structured output, and tool use support.
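
The exact shape is defined by the LlmCore.LLM.Provider.capabilities() type; the keys below are illustrative only.

LlmCore.LLM.Ollama.capabilities()
#=> a map advertising streaming, structured output, and tool use support,
#   e.g. %{streaming: true, structured_output: true, tool_use: true}
#   (key names assumed for illustration)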

provider_type()

@spec provider_type() :: :local

Returns :local — Ollama is a local inference provider.
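
For example:

LlmCore.LLM.Ollama.provider_type()
#=> :local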

send(prompt, opts \\ [])

@spec send(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}

Sends a prompt to the Ollama chat API and returns the response.

When opts[:tools] contains a list of LlmToolkit.Tool structs, tool definitions are encoded into the request body (OpenAI-compatible format). If the model responds with tool calls, the returned Response.tool_calls will contain decoded LlmToolkit.Tool.Call structs.
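
A sketch of a tool-assisted call is shown below. The plain-string prompt and the construction of the tool itself are assumptions; only the :tools option and the shape of tool_calls on the response are documented here.

# Sketch: assumes the prompt may be passed as a plain string and that
# weather_tool is a hypothetical LlmToolkit.Tool struct built elsewhere.
tools = [weather_tool]

case LlmCore.LLM.Ollama.send("What's the weather in Lisbon?", tools: tools) do
  {:ok, %LlmCore.LLM.Response{tool_calls: calls}} when is_list(calls) and calls != [] ->
    # each element is a decoded LlmToolkit.Tool.Call struct
    calls

  {:ok, response} ->
    response

  {:error, %LlmCore.LLM.Error{} = error} ->
    error
end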

stream(prompt, opts \\ [])

@spec stream(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}

Streams a response from the Ollama chat API.
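
A consumption sketch follows. The element shape of the returned enumerable is not documented on this page, so the example assumes each chunk can be written directly as iodata.

# Sketch: assumes each streamed element is printable via IO.write/1.
with {:ok, stream} <- LlmCore.LLM.Ollama.stream("Tell me about OTP supervisors") do
  Enum.each(stream, &IO.write/1)
end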