# `LlmCore.LLM.Ollama`
[🔗](https://github.com/fosferon/llm_core/blob/v0.3.0/lib/llm_core/llm/ollama.ex#L1)

Ollama provider implementing the `LlmCore.LLM.Provider` behaviour.

Supports synchronous and streaming chat completions, with an optional
structured-output mode that uses Ollama's JSON formatting.

# `available?`

```elixir
@spec available?() :: boolean()
```

Checks if the Ollama server is reachable at the configured base URL.
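
For example, a caller might gate requests on server reachability. A minimal sketch; `MyApp.FallbackProvider` is a hypothetical module, not part of `llm_core`:

```elixir
# Prefer local inference when the Ollama server is up; otherwise fall back.
# `MyApp.FallbackProvider` is a hypothetical stand-in for another provider.
provider =
  if LlmCore.LLM.Ollama.available?() do
    LlmCore.LLM.Ollama
  else
    MyApp.FallbackProvider
  end
```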

# `capabilities`

```elixir
@spec capabilities() :: LlmCore.LLM.Provider.capabilities()
```

Returns the Ollama capability map including streaming, structured output,
and tool use support.
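
A sketch of inspecting the map before enabling a feature. The key names used here (`:streaming`, `:structured_output`, `:tool_use`) are assumptions inferred from the capabilities listed above; consult `LlmCore.LLM.Provider.capabilities()` for the actual type:

```elixir
caps = LlmCore.LLM.Ollama.capabilities()

# Key names are assumed from the description above, not confirmed by the spec.
if caps[:streaming] do
  IO.puts("Streaming is supported")
end
```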

# `provider_type`

```elixir
@spec provider_type() :: :local
```

Returns `:local` — Ollama is a local inference provider.
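
Callers holding a generic provider module can branch on this, for example to skip credential checks for local providers. A hedged sketch; the remote branch is a placeholder:

```elixir
provider = LlmCore.LLM.Ollama

case provider.provider_type() do
  # Local inference needs no API credentials.
  :local -> :ok
  # Placeholder for remote providers; real handling is application-specific.
  _ -> raise "remote providers need credentials"
end
```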

# `send`

```elixir
@spec send(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}
```

Sends a prompt to the Ollama chat API and returns the response.
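
A minimal sketch of a synchronous call. The message-map prompt shape and the `:model` option are assumptions about `LlmCore.LLM.Provider.prompt()` and the accepted options, not confirmed by this page:

```elixir
# Prompt shape (list of role/content maps) is an assumption here.
prompt = [%{role: "user", content: "Summarize Elixir supervision trees."}]

# The `:model` option is likewise an assumption; see the provider's docs.
case LlmCore.LLM.Ollama.send(prompt, model: "llama3") do
  {:ok, %LlmCore.LLM.Response{} = response} -> response
  {:error, %LlmCore.LLM.Error{} = error} -> {:error, error}
end
```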

When `opts[:tools]` contains a list of `LlmToolkit.Tool` structs, tool
definitions are encoded into the request body (OpenAI-compatible format).
If the model responds with tool calls, the returned `Response.tool_calls`
will contain decoded `LlmToolkit.Tool.Call` structs.
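
A sketch of the tool-call round trip. It assumes `prompt` and a list of `LlmToolkit.Tool` structs named `tools` are built elsewhere (their construction is not documented here), and `handle_tool_call/1` is a hypothetical application-side dispatcher:

```elixir
{:ok, response} = LlmCore.LLM.Ollama.send(prompt, tools: tools)

# Per the docs above, each element is a decoded LlmToolkit.Tool.Call struct.
# The `|| []` guards against a nil field, which is an assumption about the
# Response struct; `handle_tool_call/1` is hypothetical.
for %LlmToolkit.Tool.Call{} = call <- response.tool_calls || [] do
  handle_tool_call(call)
end
```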

# `stream`

```elixir
@spec stream(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}
```

Streams a response from the Ollama chat API, returning a lazy `Enumerable.t()`
of response chunks on success.
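
A sketch of consuming the stream. It assumes the enumerable yields iodata text chunks; the actual element type is not documented on this page:

```elixir
# Prompt shape and `:model` option are assumptions, as in `send/2` above.
prompt = [%{role: "user", content: "Tell me about OTP."}]

with {:ok, stream} <- LlmCore.LLM.Ollama.stream(prompt, model: "llama3") do
  # Assumes each element is printable iodata; adjust if chunks are structs.
  stream
  |> Stream.each(&IO.write/1)
  |> Stream.run()
end
```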

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
