# `LlmCore.LLM.Provider`
[🔗](https://github.com/fosferon/llm_core/blob/v0.3.0/lib/llm_core/llm/provider.ex#L1)

Behaviour module defining the contract for LLM providers.

Providers can be API-based, local appliance-based, or CLI-based. Each
provider is responsible for exposing capability metadata so the router and
inference pipeline can make informed decisions (streaming, structured output,
tool use, etc.).

## Provider Types

  * `:api` - Remote HTTP APIs (OpenAI, Anthropic, Z.ai)
  * `:local` - Local GPU / appliance endpoints (Ollama, DGX Spark)
  * `:cli` - Command-line tools (Claude Code CLI, Gemini CLI)

## Prompt Shape

Providers must accept either a string prompt or a list of chat-style
`t:message/0` maps (`%{role: role(), content: String.t()}`).
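
For illustration (assuming a provider module named `MyProvider`, like the
one defined below), both of these are valid calls:

    # Plain string prompt
    MyProvider.send("Summarize this module")

    # Chat-style message list
    MyProvider.send([
      %{role: :system, content: "You are terse."},
      %{role: :user, content: "Summarize this module"}
    ])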

## Capability Metadata

`capabilities/0` must return a map containing, at minimum, the keys defined
in `t:capabilities/0`. This allows llm_core to enforce requirements such as
streaming or structured output before dispatching to a provider.
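
As a rough sketch of that enforcement (the `require_capability!/2` helper
below is hypothetical, not part of llm_core):

    # Hypothetical pre-dispatch check against a provider's capability map.
    defp require_capability!(provider, key) do
      if Map.get(provider.capabilities(), key, false) do
        :ok
      else
        raise ArgumentError,
              "#{inspect(provider)} does not support #{inspect(key)}"
      end
    end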

## Implementing a Provider

    defmodule MyProvider do
      @behaviour LlmCore.LLM.Provider

      @impl true
      def send(prompt, opts \\ []), do: {:ok, %LlmCore.LLM.Response{content: "hi"}}

      @impl true
      def stream(prompt, opts \\ []), do: {:ok, Stream.map(["h", "i"], & &1)}

      @impl true
      def available?, do: true

      @impl true
      def capabilities do
        %{
          streaming: true,
          structured_output: false,
          tool_use: false,
          vision: false,
          models: ["demo"],
          max_context: 16_384
        }
      end

      @impl true
      def provider_type, do: :api
    end
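
Once implemented, the provider can be exercised directly (the result shown
matches the stub above):

    {:ok, response} = MyProvider.send("ping")
    response.content
    #=> "hi"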

# `capabilities`

```elixir
@type capabilities() :: %{
  streaming: boolean(),
  structured_output: boolean(),
  tool_use: boolean(),
  vision: boolean(),
  models: [String.t()],
  max_context: pos_integer() | nil
}
```

# `message`

```elixir
@type message() :: %{role: role(), content: String.t()}
```

# `opts`

```elixir
@type opts() :: keyword()
```

# `prompt`

```elixir
@type prompt() :: String.t() | [message()]
```

# `role`

```elixir
@type role() :: :system | :user | :assistant | :tool
```

# `available?`

```elixir
@callback available?() :: boolean()
```

Checks if the provider is available and can accept requests.

For CLI providers, this typically checks if the CLI executable exists
(e.g., `which claude`). For API providers, this checks if the required
API key environment variable is set.
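
As hedged sketches of each approach (two alternative implementations; the
executable and environment-variable names are illustrative):

    # CLI provider: available only if the executable is on PATH.
    @impl true
    def available?, do: System.find_executable("claude") != nil

    # API provider: available only if the key is set (variable name assumed).
    @impl true
    def available?, do: System.get_env("MY_PROVIDER_API_KEY") not in [nil, ""]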

## Returns

  * `true` - Provider is available
  * `false` - Provider is not available

## Examples

    if MyProvider.available?() do
      MyProvider.send("Hello")
    end

# `capabilities`

```elixir
@callback capabilities() :: capabilities()
```

Returns a map describing the provider's capabilities.

This allows the system to make intelligent decisions about
which provider to use for specific tasks.

## Expected Keys

At minimum, the keys defined in `t:capabilities/0`:

  * `:streaming` - Boolean indicating streaming support
  * `:structured_output` - Boolean indicating structured output support
  * `:tool_use` - Boolean indicating tool-use support
  * `:vision` - Boolean indicating vision (image input) support
  * `:models` - List of supported model names
  * `:max_context` - Maximum context window, or `nil` if unknown

Providers may also include extra keys, such as `:passthrough` for CLI
providers that support pass-through mode.

## Returns

  * `map()` - Capability map

## Examples

    MyProvider.capabilities()
    #=> %{
    #=>   streaming: true,
    #=>   structured_output: false,
    #=>   tool_use: false,
    #=>   vision: false,
    #=>   models: ["gpt-4", "gpt-3.5-turbo"],
    #=>   max_context: 16_384,
    #=>   passthrough: false
    #=> }

# `provider_type`

```elixir
@callback provider_type() :: :cli | :api | :local | :workflow
```

Returns the provider type.

## Returns

  * `:cli` - CLI-based provider (e.g., Claude Code CLI, Gemini CLI)
  * `:api` - API-based provider (e.g., OpenAI, Z.ai)
  * `:local` - Local GPU / appliance provider (e.g., Ollama, DGX Spark)
  * `:workflow` - Workflow-based provider

CLI providers support pass-through mode where raw commands can be
forwarded directly to the underlying CLI.

## Examples

    MyProvider.provider_type()
    #=> :api

# `send`

```elixir
@callback send(prompt(), opts()) ::
  {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}
```

Sends a prompt to the LLM provider and returns the response.

## Parameters

  * `prompt` - The prompt to send: a string or a list of `t:message/0` maps
  * `opts` - Provider-specific options (e.g., model, temperature, max_tokens)

## Returns

  * `{:ok, Response.t()}` - Successful response
  * `{:error, Error.t()}` - Error occurred

## Examples

    {:ok, response} = MyProvider.send("Explain this code", model: "gpt-4")
    response.content
    #=> "This code does..."

# `stream`

```elixir
@callback stream(prompt(), opts()) ::
  {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}
```

Sends a prompt and returns a stream of response chunks.

Streaming is essential for real-time user feedback, especially
for long-running completions. The returned enumerable yields
response chunks as they arrive from the provider.
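
As a hedged sketch of how an implementation might produce such a stream with
`Stream.resource/3` (the `start_request/2`, `next_chunk/1`, and `close/1`
transport helpers are hypothetical):

    @impl true
    def stream(prompt, opts) do
      stream =
        Stream.resource(
          # Open the underlying connection lazily, on first demand.
          fn -> start_request(prompt, opts) end,
          # Emit one chunk per step until the provider signals completion.
          fn conn ->
            case next_chunk(conn) do
              {:chunk, text} -> {[text], conn}
              :done -> {:halt, conn}
            end
          end,
          # Always release the connection, even if the consumer halts early.
          fn conn -> close(conn) end
        )

      {:ok, stream}
    end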

## Parameters

  * `prompt` - The prompt to send: a string or a list of `t:message/0` maps
  * `opts` - Provider-specific options

## Returns

  * `{:ok, Enumerable.t()}` - Stream of response chunks
  * `{:error, Error.t()}` - Error occurred before streaming started

## Examples

    {:ok, stream} = MyProvider.stream("Write a story")
    Enum.each(stream, fn chunk -> IO.write(chunk) end)

# `dispatch`

```elixir
@spec dispatch(module() | struct(), prompt(), keyword()) ::
  {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}
```

Dispatches a prompt to the provider, handling both modules and structs.

  * Module-based providers: calls `provider.send(prompt, opts)`
  * Struct-based providers: calls `module.send(struct, prompt, opts)`,
    where `module` is the struct's module
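
A minimal sketch of that branching (not the library's actual source):

    def dispatch(provider, prompt, opts \\ [])

    # Struct-based provider: route through the struct's module.
    def dispatch(%module{} = provider, prompt, opts),
      do: module.send(provider, prompt, opts)

    # Module-based provider: call the behaviour callback directly.
    def dispatch(provider, prompt, opts) when is_atom(provider),
      do: provider.send(prompt, opts)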

# `dispatch_available?`

```elixir
@spec dispatch_available?(module() | struct()) :: boolean()
```

Checks provider availability, handling both modules and structs.

# `dispatch_stream`

```elixir
@spec dispatch_stream(module() | struct(), prompt(), keyword()) ::
  {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}
```

Dispatches a streaming prompt to the provider.

---

*Consult [api-reference.md](api-reference.md) for the complete listing*
