Minimal HTTP backend using Erlang's `:httpc`. Supports OpenAI-compatible chat completion endpoints.
## Configuration

Required options:
* `:api_key` - API key for authentication
* `:url` - API endpoint URL (default: OpenAI)
* `:model` - Model identifier (e.g., `"gpt-4"`, `"gpt-3.5-turbo"`)
Optional:
* `:temperature` - Sampling temperature (default: 0.0 for determinism)
* `:max_tokens` - Maximum tokens in response (default: 1000)
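As a sketch of how the defaults above could be applied to caller-supplied options (the module and function here are hypothetical, not part of the library's actual implementation):

```elixir
# Hypothetical helper illustrating the defaults listed above.
# Keyword.merge/2 keeps values from the second list on key collisions,
# so options passed by the caller override the defaults.
defmodule OptsSketch do
  @defaults [temperature: 0.0, max_tokens: 1000]

  def merge_opts(backend_opts) do
    Keyword.merge(@defaults, backend_opts)
  end
end

OptsSketch.merge_opts(api_key: "sk-...", model: "gpt-4", temperature: 0.7)
# temperature: 0.7 overrides the default; max_tokens stays at 1000
```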
## Example
```elixir
ExOutlines.generate(schema,
  backend: ExOutlines.Backend.HTTP,
  backend_opts: [
    api_key: System.get_env("OPENAI_API_KEY"),
    url: "https://api.openai.com/v1/chat/completions",
    model: "gpt-4",
    temperature: 0.0
  ]
)
```

## Custom Endpoints
Works with any OpenAI-compatible endpoint:
```elixir
backend_opts: [
  api_key: "sk-...",
  url: "https://your-proxy.com/v1/chat/completions",
  model: "custom-model"
]
```