Getting Started with Nous AI
Complete setup guide for the Nous AI framework.
Installation
Add to your mix.exs:
def deps do
  [
    {:nous, "~> 0.9.0"}
  ]
end

Then run:
mix deps.get
Quick Setup
Option 1: Local AI (Free)
Perfect for development and testing.
- Download LM Studio from lmstudio.ai
- Download a model (recommended: qwen3-vl-4b-thinking-mlx)
- Start server in LM Studio (runs on http://localhost:1234)
- Test it works:
mix run -e " agent = Nous.new(\"lmstudio:qwen3-vl-4b-thinking-mlx") {:ok, result} = Nous.run(agent, \"Hello!\") IO.puts(result.output) "
Option 2: Cloud AI
Perfect for production and advanced models.
Anthropic (Recommended):
export ANTHROPIC_API_KEY="sk-ant-your-key"
OpenAI:
export OPENAI_API_KEY="sk-your-key"
Google Vertex AI:
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_REGION="us-central1" # optional, defaults to us-central1
# Option A: Use gcloud access token
export VERTEX_AI_ACCESS_TOKEN="$(gcloud auth print-access-token)"
# Option B: Use Goth with service account (recommended for production)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
Test cloud setup:
mix run -e "
agent = Nous.new(\"anthropic:claude-sonnet-4-5-20250929\")
{:ok, result} = Nous.run(agent, \"Hello!\")
IO.puts(result.output)
"
Basic Usage
Simple Chat Agent
# Create an agent
agent = Nous.new("anthropic:claude-sonnet-4-5-20250929",
instructions: "You are a helpful assistant"
)
# Ask questions
{:ok, result} = Nous.run(agent, "What is Elixir?")
IO.puts(result.output)
# Check usage
IO.puts("Tokens used: #{result.usage.total_tokens}")Agent with Tools
defmodule MyTools do
  @doc "Get the current weather for a location"
  def get_weather(_ctx, %{"location" => location}) do
    "The weather in #{location} is sunny and 72°F"
  end
end

agent = Nous.new("anthropic:claude-sonnet-4-5-20250929",
  instructions: "Use the get_weather tool when asked about weather",
  tools: [&MyTools.get_weather/2]
)
{:ok, result} = Nous.run(agent, "What's the weather in Paris?")
IO.puts(result.output)  # The AI automatically calls the weather tool

Structured Output
defmodule UserInfo do
  use Ecto.Schema

  @primary_key false
  embedded_schema do
    field :name, :string
    field :age, :integer
  end
end
agent = Nous.new("openai:gpt-4o-mini", output_type: UserInfo)
{:ok, result} = Nous.run(agent, "Generate a user named Alice, age 30")
# result.output == %UserInfo{name: "Alice", age: 30}

For more details, see the Structured Output Guide.
Streaming Responses
agent = Nous.new("anthropic:claude-sonnet-4-5-20250929")
Nous.run_stream(agent, "Tell me a story")
|> Enum.each(fn
  {:text_delta, text} -> IO.write(text)  # Print text as it arrives
  {:finish, _result} -> IO.puts("\n✅ Complete")
end)

Next Steps
Immediate Next Steps (15 minutes)
- Try examples → quickstart examples
- Follow tutorials → structured learning
- Browse by feature → reference guides
Learning Path
- Beginner (15 min) → 01-basics
- Intermediate (1 hour) → 02-patterns
- Advanced (deep dive) → 03-production
- Complete projects → 04-projects
Production Setup
- Best Practices Guide - Production deployment
- Tool Development Guide - Custom tools
- Troubleshooting Guide - Common issues
Provider Configuration
All Supported Providers
# Local (Free)
agent = Nous.new("lmstudio:qwen3-vl-4b-thinking-mlx")
agent = Nous.new("ollama:llama3")
# Cloud
agent = Nous.new("anthropic:claude-sonnet-4-5-20250929")
agent = Nous.new("openai:gpt-4o")
agent = Nous.new("gemini:gemini-1.5-pro")
agent = Nous.new("mistral:mistral-large-latest")
agent = Nous.new("groq:llama-3.1-70b-versatile")Model Settings
agent = Nous.new("anthropic:claude-sonnet-4-5-20250929",
model_settings: %{
temperature: 0.7, # Creativity (0.0 - 1.0)
max_tokens: 2000, # Response length
top_p: 0.9 # Nucleus sampling
}
)Architecture Overview
Core Concepts
- Agent: Stateless configuration object (model + instructions + tools)
- Tools: Elixir functions the AI can call
- Messages: Structured conversation history
- Streaming: Real-time response generation
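These concepts map onto plain Elixir data. As a rough sketch, messages are maps with a role and content (the same shape the ChatBot example in this guide uses; the exact structs inside Nous may differ):

```elixir
# Messages are plain maps with a role and content:
messages = [
  %{role: "user", content: "What's the weather in Paris?"},
  %{role: "assistant", content: "It's sunny and 72°F."}
]

# Appending a new user turn is just list concatenation:
messages = messages ++ [%{role: "user", content: "And in London?"}]
IO.puts(length(messages))
```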
Key Features
- Multi-provider: Works with 10+ AI providers
- Tool calling: AI can execute Elixir functions
- Streaming: Real-time response generation
- Type safety: Comprehensive type specifications
- Production ready: GenServer, LiveView, distributed systems
Common Patterns
Error Handling
case Nous.run(agent, prompt) do
  {:ok, result} ->
    IO.puts("Success: #{result.output}")

  {:error, reason} ->
    IO.puts("Error: #{inspect(reason)}")
end

Conversation State
defmodule ChatBot do
  use GenServer

  def start_link(model) do
    GenServer.start_link(__MODULE__, model)
  end

  def ask(pid, question) do
    GenServer.call(pid, {:ask, question})
  end

  def init(model) do
    agent = Nous.new(model)
    {:ok, %{agent: agent, messages: []}}
  end

  def handle_call({:ask, question}, _from, state) do
    # Add the question to the conversation history
    messages = state.messages ++ [%{role: "user", content: question}]

    # Get a response from the agent
    {:ok, result} = Nous.run(state.agent, messages)

    # Update the conversation history
    new_messages = messages ++ [%{role: "assistant", content: result.output}]
    {:reply, result.output, %{state | messages: new_messages}}
  end
end

Troubleshooting
Connection Issues
- "Connection refused": LM Studio not running or wrong port
- "401 Unauthorized": Check API key is set correctly
- "Model not found": Verify model name spelling
Performance
- Slow responses: Try smaller models or local inference
- High costs: Use local models for development
- Rate limits: Implement exponential backoff
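A minimal backoff wrapper can be sketched in plain Elixir. The `Retry` module, its defaults, and the call site below are illustrative, not part of Nous:

```elixir
defmodule Retry do
  # Retries `fun` up to `max_attempts` times, doubling the delay
  # after each failure (500ms, 1s, 2s, 4s, ...).
  # `Retry` is a hypothetical helper, not part of the Nous API.
  def with_backoff(fun, max_attempts \\ 5, base_delay_ms \\ 500) do
    do_retry(fun, 1, max_attempts, base_delay_ms)
  end

  defp do_retry(fun, attempt, max_attempts, base_delay_ms) do
    case fun.() do
      {:ok, result} ->
        {:ok, result}

      {:error, _reason} when attempt < max_attempts ->
        # Sleep, then try again with a doubled delay
        Process.sleep(base_delay_ms * Integer.pow(2, attempt - 1))
        do_retry(fun, attempt + 1, max_attempts, base_delay_ms)

      {:error, reason} ->
        {:error, reason}
    end
  end
end

# Usage (assuming `agent` and `prompt` are defined as above):
# Retry.with_backoff(fn -> Nous.run(agent, prompt) end)
```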
Debug Mode
# Enable debug logging
require Logger
Logger.configure(level: :debug)

# See all agent iterations and tool calls
{:ok, result} = Nous.run(agent, prompt)

For more help, see the Troubleshooting Guide.
What's Next?
- Hands-on learning → Examples
- Specific features → Reference guides
- Production deployment → Best practices
- Custom tools → Tool development