Default LLM HTTP client using ReqLLM.
Uses the ReqLLM library for making LLM API calls, normalizing responses to a generic format that hides ReqLLM-specific types.
Telemetry
Emits the following telemetry events:
- `[:aludel, :llm, :http, :start]` - when an HTTP request begins
  - Measurements: `%{system_time: integer()}`
  - Metadata: `%{model_spec: String.t()}`
- `[:aludel, :llm, :http, :stop]` - when an HTTP request completes
  - Measurements: `%{duration: integer(), input_tokens: integer(), output_tokens: integer()}`
  - Metadata: `%{model_spec: String.t()}`
- `[:aludel, :llm, :http, :exception]` - when an HTTP request fails
  - Measurements: `%{duration: integer()}`
  - Metadata: `%{model_spec: String.t(), error: term()}`
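The events above can be observed with the standard `:telemetry` library. A minimal sketch of a handler that logs all three events; the handler id and the logging body are illustrative:

```elixir
# Attach one handler to all three Aludel LLM HTTP events.
# The handler id "aludel-llm-logger" is an arbitrary example.
:telemetry.attach_many(
  "aludel-llm-logger",
  [
    [:aludel, :llm, :http, :start],
    [:aludel, :llm, :http, :stop],
    [:aludel, :llm, :http, :exception]
  ],
  fn event, measurements, metadata, _config ->
    # For :stop events, measurements include duration and token counts;
    # metadata always carries the model_spec string.
    IO.inspect({event, measurements, metadata}, label: "llm telemetry")
  end,
  nil
)
```

Handlers run synchronously in the process that emits the event, so they should stay lightweight.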
Functions
LLM-specific HTTP call using ReqLLM.
Parameters
- `model_spec`: Provider and model (e.g., `"openai:gpt-4o"`)
- `messages`: Text prompt or message list
- `opts`: LLM options (`api_key`, `temperature`, `max_tokens`, `documents`)
Returns
- `{:ok, %{content: String.t(), input_tokens: integer(), output_tokens: integer()}}` on success
- `{:error, reason}` on failure
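A hypothetical call site, sketching how the parameters and return shapes above fit together. The module name `MyApp.LLMClient` and the function name `call/3` are assumptions for illustration, as are the prompt and option values:

```elixir
# Sketch only: module and function names are placeholders.
case MyApp.LLMClient.call(
       "openai:gpt-4o",
       "Summarize this text in one sentence.",
       api_key: System.fetch_env!("OPENAI_API_KEY"),
       temperature: 0.2,
       max_tokens: 256
     ) do
  {:ok, %{content: content, input_tokens: input, output_tokens: output}} ->
    IO.puts("#{content} (#{input} in / #{output} out tokens)")

  {:error, reason} ->
    IO.inspect(reason, label: "LLM call failed")
end
```

Because the return value is a plain map rather than a ReqLLM struct, callers never pattern-match on ReqLLM-specific types.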