# `LlmCore.LLM.OpenAI`
[🔗](https://github.com/fosferon/llm_core/blob/v0.3.0/lib/llm_core/llm/openai.ex#L1)

OpenAI-compatible API provider implementing the Provider behaviour.

Works with OpenAI, OpenRouter, Together, Groq, local vLLM — any endpoint
that speaks the OpenAI chat completions format.

## Configuration

Defaults to OpenAI. Override per call via `opts` or globally via application config:

    # Per-call
    OpenAI.send(prompt, base_url: "https://openrouter.ai/api/v1",
                        api_key: System.get_env("OPENROUTER_API_KEY"),
                        model: "anthropic/claude-sonnet-4-20250514")

    # Global (application config)
    config :llm_core, :openai_base_url, "https://openrouter.ai/api/v1"
    config :llm_core, :openai_api_key, System.get_env("OPENROUTER_API_KEY")

## Auth Resolution Order

1. `opts[:api_key]` (per-call)
2. `Application.get_env(:llm_core, :openai_api_key)`
3. `System.get_env("OPENAI_API_KEY")`
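
A minimal sketch of this first-match-wins chain; `resolve_api_key/1` is a
hypothetical helper name, not part of the public API:

```elixir
# Hypothetical private helper illustrating the documented fallback order;
# the module's actual implementation may differ.
defp resolve_api_key(opts) do
  opts[:api_key] ||
    Application.get_env(:llm_core, :openai_api_key) ||
    System.get_env("OPENAI_API_KEY")
end
```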

## URL Resolution Order

1. `opts[:base_url]` (per-call)
2. `Application.get_env(:llm_core, :openai_base_url)`
3. `"https://api.openai.com/v1"` (default)

# `available?`

```elixir
@spec available?() :: boolean()
```

Checks if an OpenAI-compatible API key is configured.
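
A typical pre-flight check before dispatching to this provider; a
plain-string prompt is assumed here for illustration:

```elixir
if LlmCore.LLM.OpenAI.available?() do
  LlmCore.LLM.OpenAI.send("Summarize this changelog.", model: "gpt-4o")
end
```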

# `capabilities`

```elixir
@spec capabilities() :: LlmCore.LLM.Provider.capabilities()
```

Returns the OpenAI capability map including streaming, structured output,
tool use, vision, and supported models.
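
The exact key names are defined by the `Provider.capabilities()` type; the
shape below is an assumption based on the description above, not verified
output:

```elixir
# Illustrative shape only; key names and values are assumptions.
%{
  streaming: true,
  structured_output: true,
  tool_use: true,
  vision: true,
  models: ["gpt-4o", "gpt-4o-mini"]
}
```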

# `provider_type`

```elixir
@spec provider_type() :: :api
```

Returns `:api` — OpenAI is a cloud API provider.

# `send`

```elixir
@spec send(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}
```

Sends a prompt to the OpenAI-compatible chat completions endpoint.

When `opts[:tools]` contains a list of `LlmToolkit.Tool` structs, tool
definitions are encoded into the request body. If the model responds
with `finish_reason: "tool_calls"`, the returned `Response.tool_calls`
will contain decoded `LlmToolkit.Tool.Call` structs.
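
A sketch of tool-augmented dispatch; `tools` is assumed to be a list of
`%LlmToolkit.Tool{}` structs built elsewhere, and the plain-string prompt is
illustrative:

```elixir
case LlmCore.LLM.OpenAI.send("What is the weather in Oslo?",
       tools: tools, model: "gpt-4o") do
  {:ok, %LlmCore.LLM.Response{tool_calls: [_ | _] = calls}} ->
    # The model asked for tool execution; each entry is a
    # decoded %LlmToolkit.Tool.Call{} struct.
    Enum.each(calls, &IO.inspect/1)

  {:ok, %LlmCore.LLM.Response{} = response} ->
    # Ordinary completion with no tool calls.
    response

  {:error, %LlmCore.LLM.Error{} = error} ->
    error
end
```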

# `stream`

```elixir
@spec stream(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}
```

Streams a response from the OpenAI-compatible chat completions endpoint.
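
On success the enumerable can be consumed lazily; the sketch below assumes
each element is a text chunk, which is not guaranteed by the spec:

```elixir
case LlmCore.LLM.OpenAI.stream("Tell me a short story.", model: "gpt-4o") do
  {:ok, stream} ->
    # Print chunks as they arrive (assumes string elements).
    Enum.each(stream, &IO.write/1)

  {:error, %LlmCore.LLM.Error{} = error} ->
    error
end
```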

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
