LlmCore.Pipelines.InferencePipeline (llm_core v0.3.0)


ALF pipeline that normalizes a request, resolves routing, and dispatches it to the selected provider in either blocking (:send) or streaming (:stream) mode.

Summary

Functions

alf_components()

call(event, opts \\ [debug: false])

@spec call(any(), Keyword.t()) :: any() | [any()] | nil
@spec call(any(), Keyword.t()) :: reference()
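A minimal usage sketch for call/2, which runs a single event through the pipeline synchronously. The event shape below is hypothetical; the pipeline's actual input schema is not documented in this summary.

```elixir
alias LlmCore.Pipelines.InferencePipeline

# Hypothetical event shape for illustration only.
event = %{prompt: "Hello", task_type: :chat}

# Blocks until the event has passed through every pipeline stage.
result = InferencePipeline.call(event)

# Debug mode can be enabled via the opts shown in the signature above.
result = InferencePipeline.call(event, debug: true)
```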

cast(event, opts \\ [debug: false, send_result: false])

components()

@spec components() :: [map()]

execute(mode, prompt, task_type, opts \\ [])

@spec execute(
  :send | :stream,
  String.t() | [map()] | map(),
  String.t() | atom() | nil,
  keyword()
) :: {:ok, term()} | {:error, term()}

Executes the inference pipeline for the given mode (:send or :stream).

Normalizes the request, resolves routing, dispatches to the provider, and optionally applies structured output extraction.
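A sketch of execute/4 under the spec above. The prompt may be a string, a list of message maps, or a single map; the task type atoms (:summarization, :chat) and the shape of the returned terms are assumptions, not documented values.

```elixir
alias LlmCore.Pipelines.InferencePipeline

# Blocking request: plain-string prompt, atom task type (hypothetical value).
{:ok, response} = InferencePipeline.execute(:send, "Summarize this text", :summarization)

# Streaming request: same arguments, :stream mode.
{:ok, stream} = InferencePipeline.execute(:stream, "Tell me a story", :chat)

# Per the spec, the prompt may also be a list of message maps,
# and the task type may be nil.
messages = [%{role: "user", content: "Hi"}]
{:ok, response} = InferencePipeline.execute(:send, messages, nil, [])
```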

flow(flow, names, opts \\ [debug: false])

@spec flow(map(), list(), Keyword.t()) :: Enumerable.t()

start()

@spec start() :: :ok

start(opts)

@spec start(list()) :: :ok

started?()

@spec started?() :: true | false

stop()

@spec stop() :: :ok | {:exit, {atom(), any()}}
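The start/started?/stop functions form the pipeline's lifecycle, which can be sketched as:

```elixir
alias LlmCore.Pipelines.InferencePipeline

:ok = InferencePipeline.start()      # boot the pipeline's stage processes
true = InferencePipeline.started?()  # confirm the pipeline is running
:ok = InferencePipeline.stop()       # shut the stages down
```

Per the spec, stop/0 may also return `{:exit, {atom(), any()}}` if shutdown does not complete cleanly.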

stream(stream, opts \\ [debug: false])

@spec stream(Enumerable.t(), Keyword.t()) :: Enumerable.t()
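stream/2 wraps an enumerable of events so that each one flows through the pipeline as the result is consumed. A minimal sketch, assuming the pipeline has been started and using hypothetical event maps:

```elixir
alias LlmCore.Pipelines.InferencePipeline

InferencePipeline.start()

# Hypothetical events; the input schema is not shown in this summary.
events = [%{prompt: "first"}, %{prompt: "second"}]

# The returned Enumerable is lazy; Enum.to_list/1 forces evaluation.
results =
  events
  |> InferencePipeline.stream()
  |> Enum.to_list()
```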