# `Arcanum.Provider`
[🔗](https://github.com/kakilangit/arcanum/blob/v0.1.0/lib/arcanum/provider.ex#L1)

Behaviour for LLM inference providers.

Each adapter (Ollama, OpenAI, Anthropic, etc.) implements this behaviour
to provide a uniform interface for chat completion and model listing.

Adapters receive a `ModelProfile` that declares model capabilities upfront.
All provider/model-specific serialization decisions are driven by the profile —
no runtime detection, no retry-on-error branching.
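
As a sketch of the shape an adapter takes, the skeleton below stubs every required callback. The module name `MyApp.NullProvider` is hypothetical; the real adapters (Ollama, OpenAI, Anthropic) are of course more involved.

```elixir
defmodule MyApp.NullProvider do
  @moduledoc """
  A do-nothing adapter skeleton, useful as a starting point or a test double.
  """

  @behaviour Arcanum.Provider

  @impl true
  def chat(_provider, _intent, _profile), do: {:error, :not_implemented}

  @impl true
  def stream(_provider, _intent, _profile), do: {:error, :not_implemented}

  @impl true
  def list_models(_provider), do: {:ok, []}

  # embed/3 is an optional callback, so it can be omitted entirely.
end
```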

# `stream_event`

```elixir
@type stream_event() :: {:data, Arcanum.Response.t()} | {:error, term()} | :done
```

A single event emitted by the stream returned from `stream/3`: a response chunk, an error, or the `:done` terminator.

# `chat`

```elixir
@callback chat(
  provider :: map(),
  intent :: Arcanum.Intent.t(),
  profile :: Arcanum.ModelProfile.t()
) ::
  {:ok, Arcanum.Response.t()} | {:error, term()}
```

Sends a chat completion request and returns the full response.
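
A hedged sketch of dispatching to this callback, assuming `adapter` is a module implementing this behaviour, `provider`, `intent`, and `profile` were built elsewhere, and `Arcanum.Response` is a struct:

```elixir
case adapter.chat(provider, intent, profile) do
  {:ok, %Arcanum.Response{} = response} ->
    # The full, non-streamed response.
    response

  {:error, reason} ->
    raise "chat completion failed: #{inspect(reason)}"
end
```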

# `embed`

*(optional)*

```elixir
@callback embed(provider :: map(), model :: String.t(), input :: String.t()) ::
  {:ok, [float()]} | {:error, term()}
```

Generates embeddings for the given text input.
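
Because `embed/3` is optional, a caller can guard on its presence before use. In this sketch the `adapter` and `provider` bindings are hypothetical, and the model name is chosen purely for illustration:

```elixir
if Code.ensure_loaded?(adapter) and function_exported?(adapter, :embed, 3) do
  {:ok, vector} = adapter.embed(provider, "nomic-embed-text", "hello world")
  # `vector` is a flat list of floats whose length is the embedding dimension.
  length(vector)
else
  {:error, :embeddings_not_supported}
end
```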

# `list_models`

```elixir
@callback list_models(provider :: map()) :: {:ok, [String.t()]} | {:error, term()}
```

Lists available models from the provider.
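
For example, a caller might check that a model is available before dispatching to it (bindings hypothetical, model name illustrative):

```elixir
with {:ok, models} <- adapter.list_models(provider) do
  if "llama3.2" in models do
    :ok
  else
    {:error, :model_not_available}
  end
end
```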

# `stream`

```elixir
@callback stream(
  provider :: map(),
  intent :: Arcanum.Intent.t(),
  profile :: Arcanum.ModelProfile.t()
) ::
  {:ok, Enumerable.t()} | {:error, term()}
```

Sends a streaming chat completion request.

Returns a stream of `stream_event()` values: `{:data, response}` chunks, possibly an `{:error, reason}` event on failure, terminated by `:done`.
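
A sketch of consuming the stream, matching on the `stream_event()` shapes defined above (the `adapter`, `provider`, `intent`, and `profile` bindings are hypothetical):

```elixir
{:ok, events} = adapter.stream(provider, intent, profile)

Enum.each(events, fn
  {:data, %Arcanum.Response{} = chunk} ->
    # Each chunk is a partial response; accumulate or print as it arrives.
    IO.inspect(chunk)

  {:error, reason} ->
    IO.warn("stream failed: #{inspect(reason)}")

  :done ->
    IO.puts("stream complete")
end)
```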

---

*Consult [api-reference.md](api-reference.md) for a complete listing.*
