# `LlmCore.Pipelines.InferencePipeline`
[🔗](https://github.com/fosferon/llm_core/blob/v0.3.0/lib/llm_core/pipelines/inference_pipeline.ex#L18)

ALF pipeline that normalizes a request, resolves routing, and dispatches it
to the selected provider using either blocking or streaming mode.

# `alf_components`

# `call`

```elixir
@spec call(any(), Keyword.t()) :: any() | [any()] | nil
@spec call(any(), Keyword.t()) :: reference()
```
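As an ALF pipeline, the module exposes `call/2` to push a single event through the component graph synchronously. A minimal sketch, assuming the pipeline has already been started and accepts a request map (the event shape shown here is an assumption for illustration, not a documented contract):

```elixir
# Hypothetical event shape; the pipeline's real input is whatever
# the normalization component expects.
event = %{mode: :send, messages: [%{role: "user", content: "Hello"}]}

result = LlmCore.Pipelines.InferencePipeline.call(event, debug: false)
```

The second spec (returning a `reference()`) corresponds to ALF's asynchronous call variant, where the caller receives a reference and collects the result later.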

# `cast`

# `components`

```elixir
@spec components() :: [map()]
```

# `execute`

```elixir
@spec execute(
  :send | :stream,
  String.t() | [map()] | map(),
  String.t() | atom() | nil,
  keyword()
) :: {:ok, term()} | {:error, term()}
```

Executes the inference pipeline for the given mode (`:send` or `:stream`).

Normalizes the request, resolves routing, dispatches to the provider, and
optionally applies structured output extraction.
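A hedged sketch of both modes, following the spec above. The message shape, model string, and option names (`max_tokens`) are illustrative assumptions, not documented parameters:

```elixir
# Blocking mode: returns {:ok, response} or {:error, reason}.
{:ok, response} =
  LlmCore.Pipelines.InferencePipeline.execute(
    :send,
    [%{role: "user", content: "Summarize this text"}],
    "gpt-4o",      # model/route hint; per the spec, may also be an atom or nil
    max_tokens: 256
  )

# Streaming mode: same arguments with :stream; the prompt may also be
# a plain string per the spec.
{:ok, stream} =
  LlmCore.Pipelines.InferencePipeline.execute(:stream, "Tell me a story", nil, [])
```

Per the spec, the second argument accepts a prompt string, a list of message maps, or a single map, so callers can pass whichever form their request is already in.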

# `flow`

```elixir
@spec flow(map(), list(), Keyword.t()) :: Enumerable.t()
```

# `start`

```elixir
@spec start() :: :ok
@spec start(list()) :: :ok
```

# `started?`

```elixir
@spec started?() :: boolean()
```

# `stop`

```elixir
@spec stop() :: :ok | {:exit, {atom(), any()}}
```
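The lifecycle functions above compose in the usual start/check/stop pattern. A minimal sketch, assuming no extra start options are needed:

```elixir
alias LlmCore.Pipelines.InferencePipeline

:ok = InferencePipeline.start()
true = InferencePipeline.started?()

# ... dispatch requests via call/2, execute/4, or stream/2 ...

:ok = InferencePipeline.stop()
```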

# `stream`

```elixir
@spec stream(Enumerable.t(), Keyword.t()) :: Enumerable.t()
```
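Following ALF's convention, `stream/2` wraps an enumerable of events so each element flows through the pipeline lazily. A sketch, with the per-event shape again an illustrative assumption:

```elixir
results =
  ["What is OTP?", "What is the BEAM?"]
  |> Stream.map(&%{mode: :send, prompt: &1})  # hypothetical event shape
  |> LlmCore.Pipelines.InferencePipeline.stream()
  |> Enum.to_list()
```

Because the return value is itself an `Enumerable.t()`, nothing runs until the stream is consumed (here via `Enum.to_list/1`).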

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
