mix req_llm.gen (ReqLLM v1.0.0)
Generate text or structured objects from any supported AI model through a unified interface.
This consolidated task combines text generation, object generation, streaming, and non-streaming capabilities into a single command. Use flags to control output format and streaming behavior.
Usage
mix req_llm.gen "Your prompt here" [options]Arguments
prompt The text prompt to send to the AI model (required)Options
--model, -m MODEL Model specification in format provider:model-name
Default: openai:gpt-4o-mini
--system, -s SYSTEM System prompt/message to set context for the AI
--max-tokens TOKENS Maximum number of tokens to generate
(integer, provider-specific limits apply)
--temperature, -t TEMP Sampling temperature for randomness (0.0-2.0)
Lower values = more focused, higher = more creative
--reasoning-effort EFFORT Reasoning effort for reasoning-capable models
(e.g. high, as used in the GPT-5 example below; support varies by provider)
--stream Stream output in real-time (default: true)
--no-stream Disable streaming (non-streaming mode)
--json Generate structured JSON object (default: text)
--log-level, -l LEVEL Output verbosity level:
quiet - Only show generated content
normal - Show model info and content (default)
verbose - Show timing and usage statistics
debug - Show all internal details
Examples
# Basic text generation (streams by default)
mix req_llm.gen "Explain how neural networks work"
# Text generation with specific provider and system prompt
mix req_llm.gen "Write a story about space" \
--model openai:gpt-4o \
--system "You are a creative science fiction writer"
# Generate with GPT-5 and high reasoning effort
mix req_llm.gen "Solve this complex math problem step by step" \
--model openai:gpt-5-mini \
--reasoning-effort high
# Generate structured JSON object (streams by default)
mix req_llm.gen "Create a user profile for John Smith, age 30, engineer in Seattle" \
--model openai:gpt-4o-mini \
--json
# JSON generation with metrics (streams by default)
mix req_llm.gen "Extract person info from this text" \
--model anthropic:claude-3-sonnet \
--json \
--temperature 0.1 \
--log-level debug
# Non-streaming mode (waits for complete response)
mix req_llm.gen "What is 2+2?" --no-stream
# Quick generation without extra output (streams by default)
mix req_llm.gen "What is 2+2?" --log-level warningJSON Schema
When using --json flag, objects are generated using a built-in person schema:
{
"name": "string (required) - Full name of the person",
"age": "integer - Age in years",
"occupation": "string - Job or profession",
"location": "string - City or region where they live"
}
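For the profile prompt in the examples above, a successful run produces an object along these lines (exact field values depend on the model):
{
  "name": "John Smith",
  "age": 30,
  "occupation": "engineer",
  "location": "Seattle"
}
Supported Providers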
openai - OpenAI models (GPT-4, GPT-3.5, etc.)
anthropic - Anthropic Claude models
groq - Groq models (fast inference)
google - Google Gemini models
openrouter - OpenRouter (access to multiple providers)
xai - xAI Grok models
Configuration
The default model can be configured in your application config:
# config/config.exs
config :req_llm, default_model: "openai:gpt-4o-mini"
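With a default model configured, the --model flag can be omitted:
# Uses the configured default model
mix req_llm.gen "Summarize the plot of Hamlet"
Environment Variables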
Most providers require API keys set as environment variables:
OPENAI_API_KEY - For OpenAI models
ANTHROPIC_API_KEY - For Anthropic models
GOOGLE_API_KEY - For Google models
OPENROUTER_API_KEY - For OpenRouter
XAI_API_KEY - For xAI models
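For example, an Anthropic run only needs the key exported in the shell that invokes the task (model name reused from the examples above):
# Set your real Anthropic API key, then run against Claude
export ANTHROPIC_API_KEY=...
mix req_llm.gen "Explain how neural networks work" --model anthropic:claude-3-sonnet
Output Modes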
Text Generation
- Non-streaming: Complete response after generation finishes
- Streaming: Real-time token display as they're generated
JSON Generation
- Non-streaming: Complete structured object after validation (see the example below)
- Streaming: Incremental object updates (where supported)
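Combining the flags above, the non-streaming JSON path waits for the full, validated object before printing anything:
# Generate a structured object and print it only after validation
mix req_llm.gen "Create a user profile for Jane Doe, age 25, designer in Austin" --json --no-stream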
Capability Requirements
Different modes require different model capabilities:
- Text: No special requirements (all models)
- JSON: Structured output support (varies by provider)
- Streaming: Stream support (most models, varies by provider)
Provider Compatibility
Not all providers support all features equally:
openai - Excellent support for all modes
anthropic - Good support, tool-based JSON generation
groq - Fast streaming, limited JSON support
google - Experimental JSON/streaming support
openrouter - Depends on the underlying model (see the example below)
xai - Basic support across modes
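For OpenRouter, the model portion of the spec selects the underlying provider and model. The identifier below is an assumption based on OpenRouter's provider/model naming, not a value taken from this page:
# Route to an Anthropic model via OpenRouter (model ID assumed)
mix req_llm.gen "Explain recursion briefly" --model openrouter:anthropic/claude-3-sonnet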