# Nous.Model (nous v0.9.0)
Model configuration for LLM providers.
This module defines the model configuration structure used by all model adapters to connect to various LLM providers.
## Example

```elixir
model = Model.new(:openai, "gpt-4",
  api_key: "sk-...",
  default_settings: %{temperature: 0.7}
)
```
## Summary

### Types

```elixir
@type provider() ::
        :openai
        | :anthropic
        | :gemini
        | :groq
        | :ollama
        | :lmstudio
        | :openrouter
        | :together
        | :vllm
        | :sglang
        | :mistral
        | :custom
```
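Because `provider()` is a plain atom union, callers can pattern-match or membership-test on it directly. A minimal sketch (the `LocalCheck` module and its grouping of providers are illustrative, not part of Nous) that flags providers which typically point at a locally hosted server:

```elixir
# Hypothetical helper, not part of Nous: classify provider() atoms that
# usually refer to a locally hosted server, e.g. when deciding whether
# to raise :receive_timeout.
defmodule LocalCheck do
  @local_providers [:ollama, :lmstudio, :vllm, :sglang]

  @spec local?(atom()) :: boolean()
  def local?(provider), do: provider in @local_providers
end

LocalCheck.local?(:ollama)  #=> true
LocalCheck.local?(:openai)  #=> false
```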
### Functions
Create a new model configuration.
#### Parameters

- `provider` - Provider atom (`:openai`, `:groq`, `:ollama`, etc.)
- `model` - Model name string
- `opts` - Optional configuration
#### Options

- `:base_url` - Custom API base URL
- `:api_key` - API key (defaults to environment config)
- `:organization` - Organization ID (for OpenAI)
- `:receive_timeout` - HTTP receive timeout in milliseconds (default: 60000). Increase this for local models that may take longer to respond.
- `:default_settings` - Default model settings (temperature, max_tokens, etc.)
- `:stream_normalizer` - Custom stream normalizer module implementing the `Nous.StreamNormalizer` behaviour
#### Example

```elixir
model = Model.new(:openai, "gpt-4",
  api_key: "sk-...",
  default_settings: %{temperature: 0.7, max_tokens: 1000}
)

# For slow local models, increase the timeout
model = Model.new(:lmstudio, "qwen/qwen3-4b",
  receive_timeout: 120_000  # 2 minutes
)
```
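Since `:api_key` defaults to environment config, one common pattern is to keep the key out of source entirely and load it at boot. A sketch using standard Elixir runtime configuration; the `:my_app`/`:openai_api_key` names and the `OPENAI_API_KEY` variable are examples, not keys Nous defines:

```elixir
# config/runtime.exs (sketch): read the key from the environment at boot.
# System.fetch_env!/1 raises early if the variable is unset.
import Config

config :my_app, :openai_api_key, System.fetch_env!("OPENAI_API_KEY")
```

At call sites the stored key can then be passed explicitly, e.g. `api_key: Application.fetch_env!(:my_app, :openai_api_key)`.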
Parse a model string into a Model struct.
Supports the format "provider:model-name" for convenient model specification.
#### Supported Formats

- `"openai:gpt-4"` - OpenAI models
- `"anthropic:claude-3-5-sonnet-20241022"` - Anthropic Claude
- `"gemini:gemini-1.5-pro"` - Google Gemini
- `"groq:llama-3.1-70b-versatile"` - Groq models
- `"mistral:mistral-large-latest"` - Mistral models
- `"ollama:llama2"` - Local Ollama
- `"lmstudio:qwen3-vl-4b-thinking-mlx"` - Local LM Studio
- `"vllm:qwen3-vl-4b-thinking-mlx"` - vLLM server
- `"sglang:meta-llama/Llama-3-8B"` - SGLang server
- `"openrouter:anthropic/claude-3.5-sonnet"` - OpenRouter
- `"together:meta-llama/Llama-3-70b-chat-hf"` - Together AI
- `"custom:my-model"` - Custom endpoint (requires the `:base_url` option)
#### Examples

```elixir
iex> %Model{provider: provider, model: model} = Model.parse("openai:gpt-4")
iex> {provider, model}
{:openai, "gpt-4"}

iex> %Model{provider: provider, model: model} = Model.parse("ollama:llama2")
iex> {provider, model}
{:ollama, "llama2"}

iex> model = Model.parse("custom:my-model", base_url: "http://localhost:8080/v1")
iex> {model.provider, model.model, model.base_url}
{:custom, "my-model", "http://localhost:8080/v1"}
```
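The `"provider:model-name"` convention splits on the first colon only, which is why model names containing `/` or extra segments (as in the OpenRouter and Together examples) survive intact. A hedged sketch of that splitting behavior — illustrative only, not Nous's actual `Model.parse/2` implementation:

```elixir
# Illustrative sketch, not the library's implementation. String.split/3
# with parts: 2 splits on the first ":" only, so everything after it is
# kept as the model name.
defmodule SpecSketch do
  @spec split(String.t()) :: {atom(), String.t()}
  def split(spec) do
    [provider, model] = String.split(spec, ":", parts: 2)
    # String.to_existing_atom/1 would be safer for untrusted input.
    {String.to_atom(provider), model}
  end
end

SpecSketch.split("openrouter:anthropic/claude-3.5-sonnet")
#=> {:openrouter, "anthropic/claude-3.5-sonnet"}
```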