Nous.Model (nous v0.13.3)
Model configuration for LLM providers.
This module defines the model configuration structure used by all model adapters to connect to various LLM providers.
Example
model = Model.new(:openai, "gpt-4",
  api_key: "sk-...",
  default_settings: %{temperature: 0.7}
)
Summary
Types
@type provider() ::
:openai
| :anthropic
| :gemini
| :vertex_ai
| :groq
| :ollama
| :lmstudio
| :llamacpp
| :openrouter
| :together
| :vllm
| :sglang
| :mistral
| :custom
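
To give a feel for how the provider atom drives configuration, here is a minimal, hypothetical helper (not part of the library) mapping a few providers to plausible default base URLs. The URLs for local providers reflect common defaults; verify them for your setup.

```elixir
# Hypothetical helper, for illustration only: map a provider atom
# to a plausible default base URL. Unknown providers return nil.
defmodule ProviderDefaults do
  @spec base_url(atom()) :: String.t() | nil
  def base_url(:openai), do: "https://api.openai.com/v1"
  def base_url(:ollama), do: "http://localhost:11434"
  def base_url(:lmstudio), do: "http://localhost:1234/v1"
  def base_url(_other), do: nil
end
```
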
Functions
Create a new model configuration.
Parameters
provider - Provider atom (:openai, :groq, :ollama, etc.)
model - Model name string
opts - Optional configuration
Options
:base_url - Custom API base URL
:api_key - API key (defaults to environment config)
:organization - Organization ID (for OpenAI)
:receive_timeout - HTTP receive timeout in milliseconds (default: 60000). Increase this for local models that may take longer to respond.
:default_settings - Default model settings (temperature, max_tokens, etc.)
:stream_normalizer - Custom stream normalizer module implementing the Nous.StreamNormalizer behaviour
Example
model = Model.new(:openai, "gpt-4",
  api_key: "sk-...",
  default_settings: %{temperature: 0.7, max_tokens: 1000}
)

# For slow local models, increase the timeout
model = Model.new(:lmstudio, "qwen/qwen3-4b",
  receive_timeout: 120_000  # 2 minutes
)
Parse a model string into a Model struct.
Supports the format "provider:model-name" for convenient model specification.
Supported Formats
"openai:gpt-4" - OpenAI models
"anthropic:claude-3-5-sonnet-20241022" - Anthropic Claude
"gemini:gemini-1.5-pro" - Google Gemini
"vertex_ai:gemini-2.0-flash" - Google Vertex AI
"groq:llama-3.1-70b-versatile" - Groq models
"mistral:mistral-large-latest" - Mistral models
"ollama:llama2" - Local Ollama
"lmstudio:qwen3-vl-4b-thinking-mlx" - Local LM Studio
"llamacpp:local" - Local LlamaCpp NIF (requires :llamacpp_model option)
"vllm:qwen3-vl-4b-thinking-mlx" - vLLM server
"sglang:meta-llama/Llama-3-8B" - SGLang server
"openrouter:anthropic/claude-3.5-sonnet" - OpenRouter
"together:meta-llama/Llama-3-70b-chat-hf" - Together AI
"custom:my-model" - Custom OpenAI-compatible endpoint (requires :base_url option)
Note: The "custom:" prefix is the recommended approach for any OpenAI-compatible endpoint. The legacy "openai_compatible:" prefix still works for backward compatibility.
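
Conceptually, the "provider:model" format splits on the first colon only, which is why model names containing slashes or further colons pass through intact. A stdlib-only sketch of that idea (illustrative, not the library's actual implementation, which also validates providers and accepts options):

```elixir
# Illustrative only: split a "provider:model" spec on the FIRST colon.
# The real Model.parse/2 validates the provider and handles options.
parse_spec = fn spec ->
  [provider, model] = String.split(spec, ":", parts: 2)
  {String.to_atom(provider), model}
end

parse_spec.("openai:gpt-4")
# => {:openai, "gpt-4"}
parse_spec.("openrouter:anthropic/claude-3.5-sonnet")
# => {:openrouter, "anthropic/claude-3.5-sonnet"}
```

Because of `parts: 2`, everything after the first colon is kept as the model name, matching the formats listed above.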
Examples
iex> %Model{provider: provider, model: model} = Model.parse("openai:gpt-4")
iex> {provider, model}
{:openai, "gpt-4"}
iex> %Model{provider: provider, model: model} = Model.parse("ollama:llama2")
iex> {provider, model}
{:ollama, "llama2"}
iex> model = Model.parse("custom:my-model", base_url: "http://localhost:8080/v1")
iex> {model.provider, model.model, model.base_url}
{:custom, "my-model", "http://localhost:8080/v1"}

Custom Providers with base_url
The custom: prefix works with any OpenAI-compatible endpoint:
# Groq
Model.parse("custom:llama-3.1-70b",
  base_url: "https://api.groq.com/openai/v1",
  api_key: System.get_env("GROQ_API_KEY")
)

# Together AI
Model.parse("custom:meta-llama/Llama-3-70b",
  base_url: "https://api.together.xyz/v1",
  api_key: System.get_env("TOGETHER_API_KEY")
)
# Local server (LM Studio, Ollama, etc.)
Model.parse("custom:qwen3", base_url: "http://localhost:1234/v1")Also supports CUSTOM_API_KEY and CUSTOM_BASE_URL environment variables
as defaults (can be overridden per-call).
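
The documented precedence (explicit option wins, environment variable is the fallback) can be sketched with a small anonymous function. This is an illustration of the behavior, not the library's code:

```elixir
# Sketch of the documented fallback order for custom providers:
# an explicit option wins; otherwise the CUSTOM_* env var is used.
resolve_custom = fn opts ->
  %{
    api_key: Keyword.get(opts, :api_key) || System.get_env("CUSTOM_API_KEY"),
    base_url: Keyword.get(opts, :base_url) || System.get_env("CUSTOM_BASE_URL")
  }
end

# A per-call option overrides the environment default:
resolve_custom.(base_url: "http://localhost:8080/v1")
```
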