# `LlmCore.LLM.Appliance`
[🔗](https://github.com/fosferon/llm_core/blob/v0.3.0/lib/llm_core/llm/appliance.ex#L1)

Generic local inference appliance provider (DGX Spark, future devices).

Assumes an OpenAI-compatible chat completion API exposed over HTTP.

# `available?`

```elixir
@spec available?() :: boolean()
```

Checks if the appliance base URL is configured and the health endpoint responds.

# `capabilities`

```elixir
@spec capabilities() :: LlmCore.LLM.Provider.capabilities()
```

Returns the appliance capability map. The models list and `max_context` are
sourced from application config.
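
A minimal config sketch showing how that data might be supplied. The exact key
names under the `:llm_core` application (`base_url`, `models`, `max_context`)
are assumptions for illustration and are not confirmed by this reference:

```elixir
# config/runtime.exs — illustrative only; the actual config keys
# read by LlmCore.LLM.Appliance are assumptions, not documented here.
import Config

config :llm_core, LlmCore.LLM.Appliance,
  base_url: System.get_env("APPLIANCE_BASE_URL", "http://dgx-spark.local:8000"),
  models: ["llama-3.1-70b-instruct"],
  max_context: 128_000
```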

# `discover`

```elixir
@spec discover() :: [{String.t() | binary(), URI.t()}]
```

Discovers configured appliances (future mDNS hook).

# `health`

```elixir
@spec health(String.t() | URI.t()) :: boolean()
```

Performs a lightweight health check against the appliance.

# `provider_type`

```elixir
@spec provider_type() :: :local
```

Returns `:local` — Appliance is a local inference provider.

# `send`

```elixir
@spec send(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}
```

Sends a prompt to the appliance chat completions endpoint.
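
A hedged usage sketch. The prompt shape follows the OpenAI-compatible chat
message convention the module assumes; the option names (`:model`,
`:temperature`) and the `Response` field accessed below are illustrative
guesses, not confirmed by this page:

```elixir
# Hypothetical call — option keys and the response field are assumptions.
prompt = [
  %{role: "system", content: "You are a concise assistant."},
  %{role: "user", content: "Summarize RAID levels in one sentence each."}
]

case LlmCore.LLM.Appliance.send(prompt, model: "llama-3.1-70b-instruct", temperature: 0.2) do
  {:ok, %LlmCore.LLM.Response{} = resp} ->
    # Field name assumed for illustration.
    IO.puts(resp.text)

  {:error, %LlmCore.LLM.Error{} = err} ->
    IO.inspect(err, label: "appliance error")
end
```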

# `stream`

```elixir
@spec stream(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}
```

Streams a response from the appliance chat completions endpoint via SSE.
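
A consumption sketch built only on the `{:ok, Enumerable.t()}` success shape
above. Each element is assumed here to be a text delta decoded from the SSE
stream; the actual element shape is not specified on this page:

```elixir
# Hypothetical streaming consumption — element shape is an assumption.
prompt = [%{role: "user", content: "Stream a haiku about GPUs."}]

with {:ok, stream} <- LlmCore.LLM.Appliance.stream(prompt, model: "llama-3.1-70b-instruct") do
  stream
  |> Stream.each(&IO.write/1)
  |> Stream.run()
end
```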

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
