PtcRunner.LLM.ReqLLMAdapter (PtcRunner v0.9.0)


Built-in LLM adapter using req_llm.

Routes requests based on model prefix:

  • ollama:model-name → Local Ollama server
  • openai-compat:base_url|model → Any OpenAI-compatible API
  • * → ReqLLM (OpenRouter, Anthropic, Google, Bedrock, etc.)

Requires {:req_llm, "~> 1.2"} as a dependency.
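
The routing is driven entirely by the model string. A sketch of the three prefix forms; the model names and message map shape (`%{role: ..., content: ...}`) are illustrative assumptions:

```elixir
alias PtcRunner.LLM.ReqLLMAdapter

messages = [%{role: "user", content: "Hello"}]

# ollama: prefix → local Ollama server (model name illustrative)
ReqLLMAdapter.generate_text("ollama:llama3.1", messages)

# openai-compat: prefix → base URL and model name joined with |
ReqLLMAdapter.generate_text(
  "openai-compat:http://localhost:8080/v1|my-model",
  messages
)

# anything else → ReqLLM (provider:model spec, e.g. an Anthropic model id)
ReqLLMAdapter.generate_text("anthropic:claude-sonnet-4", messages)
```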

Prompt Caching

When cache: true is set in the request, prompt caching is enabled for supported providers (Anthropic direct, OpenRouter Anthropic, Bedrock Claude).
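
A minimal sketch of enabling caching on a request; the model id and message content are placeholders:

```elixir
{:ok, response} =
  PtcRunner.LLM.ReqLLMAdapter.generate_text(
    "anthropic:claude-sonnet-4",
    [
      %{role: "system", content: long_reused_system_prompt},
      %{role: "user", content: "Question about the document"}
    ],
    cache: true
  )
```

For unsupported providers the option has no effect on caching; the large, stable part of the prompt (typically the system message) is what benefits from being cached across calls.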

Bedrock Region

For Bedrock models, the region is determined in this order:

  1. AWS_REGION environment variable
  2. config :ptc_runner, :bedrock_region, "region-name"
  3. Default: "eu-north-1"
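
For step 2, a config fragment such as the following would apply (the region value is an example; it is only consulted when AWS_REGION is unset):

```elixir
# config/runtime.exs
import Config

config :ptc_runner, :bedrock_region, "us-east-1"
```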

Summary

Functions

available?(model) - Check if a provider is available.

embed(model, input, opts \\ []) - Generate embeddings for text input.

embed!(model, input, opts \\ []) - Generate embeddings, raising on error.

generate_object(model, messages, schema, opts \\ []) - Generate a structured JSON object from an LLM.

generate_object!(model, messages, schema, opts \\ []) - Generate a structured JSON object, raising on error.

generate_text(model, messages, opts \\ []) - Generate text from an LLM.

generate_text!(model, messages, opts \\ []) - Generate text, raising on error.

generate_with_tools(model, messages, tools, opts \\ []) - Generate text with tool definitions.

requires_api_key?(model) - Check if the model requires an API key.

Functions

available?(model)

@spec available?(String.t()) :: boolean()

Check if a provider is available.

For Ollama, checks if the server is reachable. For ReqLLM providers, checks if the required API key is set.
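
A sketch of gating a call on availability; model names are illustrative:

```elixir
alias PtcRunner.LLM.ReqLLMAdapter

# Ollama: true only if the local server is reachable
if ReqLLMAdapter.available?("ollama:llama3.1") do
  ReqLLMAdapter.generate_text("ollama:llama3.1", [%{role: "user", content: "Hi"}])
end

# ReqLLM provider: true only if the required API key is configured
ReqLLMAdapter.available?("anthropic:claude-sonnet-4")
```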

embed(model, input, opts \\ [])

@spec embed(String.t(), String.t() | [String.t()], keyword()) ::
  {:ok, [float()] | [[float()]]} | {:error, term()}

Generate embeddings for text input.

Returns

  • {:ok, [float()]} for single input
  • {:ok, [[float()]]} for batch input
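
Per the spec, the return shape follows the input shape; the embedding model name below is an assumption:

```elixir
alias PtcRunner.LLM.ReqLLMAdapter

# Single string → one vector of floats
{:ok, vector} = ReqLLMAdapter.embed("openai:text-embedding-3-small", "hello world")

# List of strings → list of vectors, one per input, in order
{:ok, [v1, v2]} = ReqLLMAdapter.embed("openai:text-embedding-3-small", ["alpha", "beta"])
```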

embed!(model, input, opts \\ [])

@spec embed!(String.t(), String.t() | [String.t()], keyword()) ::
  [float()] | [[float()]]

Generate embeddings, raising on error.

generate_object(model, messages, schema, opts \\ [])

@spec generate_object(String.t(), [map()], map(), keyword()) ::
  {:ok, map()} | {:error, term()}

Generate a structured JSON object from an LLM.

Only supported for ReqLLM providers. Local providers return {:error, :structured_output_not_supported}.
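
A sketch under the assumption that `schema` is a JSON-Schema-style map; the model id and field names are placeholders:

```elixir
schema = %{
  type: "object",
  properties: %{
    name: %{type: "string"},
    age: %{type: "integer"}
  },
  required: ["name", "age"]
}

{:ok, person} =
  PtcRunner.LLM.ReqLLMAdapter.generate_object(
    "anthropic:claude-sonnet-4",
    [%{role: "user", content: "Extract the person: Ada Lovelace, 36"}],
    schema
  )
```

Calling this with an `ollama:` or `openai-compat:` model returns `{:error, :structured_output_not_supported}`.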

generate_object!(model, messages, schema, opts \\ [])

@spec generate_object!(String.t(), [map()], map(), keyword()) :: map()

Generate a structured JSON object, raising on error.

generate_text(model, messages, opts \\ [])

@spec generate_text(String.t(), [map()], keyword()) ::
  {:ok, PtcRunner.LLM.response()} | {:error, term()}

Generate text from an LLM.

Options

  • :receive_timeout - Request timeout in ms (default: 120000)
  • :ollama_base_url - Override Ollama server URL
  • :cache - Enable prompt caching for supported providers (default: false)
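
A sketch combining the options above; the model name and Ollama URL are illustrative:

```elixir
{:ok, response} =
  PtcRunner.LLM.ReqLLMAdapter.generate_text(
    "ollama:llama3.1",
    [%{role: "user", content: "Summarise this in one line."}],
    receive_timeout: 300_000,
    ollama_base_url: "http://gpu-box.local:11434"
  )
```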

generate_text!(model, messages, opts \\ [])

@spec generate_text!(String.t(), [map()], keyword()) :: PtcRunner.LLM.response()

Generate text, raising on error.

generate_with_tools(model, messages, tools, opts \\ [])

@spec generate_with_tools(String.t(), [map()], [map()], keyword()) ::
  {:ok, map()} | {:error, term()}

Generate text with tool definitions.

Passes tools to the LLM provider. If the LLM returns tool calls, they are included in the response as tool_calls.
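
A sketch of passing a tool definition and reading back any calls; the tool map shape (JSON-Schema-style `parameters`) and model id are assumptions:

```elixir
alias PtcRunner.LLM.ReqLLMAdapter

tools = [
  %{
    name: "get_weather",
    description: "Look up the current weather for a city",
    parameters: %{
      type: "object",
      properties: %{city: %{type: "string"}},
      required: ["city"]
    }
  }
]

{:ok, response} =
  ReqLLMAdapter.generate_with_tools(
    "anthropic:claude-sonnet-4",
    [%{role: "user", content: "What is the weather in Oslo?"}],
    tools
  )

# If the model chose to call a tool, the calls appear under tool_calls
response[:tool_calls]
```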

requires_api_key?(model)

@spec requires_api_key?(String.t()) :: boolean()

Check if the model requires an API key.
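
A sketch of the distinction this draws; model names are illustrative, and the expectation (hosted providers need a key, local Ollama does not) is inferred from the routing description above:

```elixir
alias PtcRunner.LLM.ReqLLMAdapter

ReqLLMAdapter.requires_api_key?("anthropic:claude-sonnet-4")
ReqLLMAdapter.requires_api_key?("ollama:llama3.1")
```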