Nous.Agent (nous v0.13.3)
Primary interface for AI agents.
An Agent is a stateless configuration object that defines how to interact with an AI model. It specifies the model, instructions, tools, and output format.
Example
# Simple agent
agent = Agent.new("openai:gpt-4",
  instructions: "Be helpful and concise"
)

{:ok, result} = Agent.run(agent, "What is 2+2?")
IO.puts(result.output) # "4"

# Agent with tools
agent = Agent.new("groq:llama-3.1-70b-versatile",
  instructions: "Help users find information",
  tools: [&MyTools.search/2]
)

{:ok, result} = Agent.run(agent, "Search for Elixir tutorials",
  deps: %{database: MyApp.DB}
)
Summary
Functions
Create a new agent.
Run agent synchronously.
Run agent with streaming.
Register a tool with the agent.
Types
@type t() :: %Nous.Agent{
  behaviour_module: module() | nil,
  deps_type: module() | nil,
  enable_todos: boolean(),
  end_strategy: :early | :exhaustive,
  fallback: [Nous.Model.t()],
  hooks: [Nous.Hook.t()],
  instructions: String.t() | function() | nil,
  model: Nous.Model.t(),
  model_settings: map(),
  name: String.t(),
  output_type: Nous.Types.output_type(),
  plugins: [module()],
  retries: non_neg_integer(),
  skills: [module() | String.t() | Nous.Skill.t() | {:group, atom()}],
  structured_output: keyword(),
  system_prompt: String.t() | function() | nil,
  tools: [Nous.Tool.t()]
}
Functions
Create a new agent.
Parameters
model_string - Model in format "provider:model-name"
opts - Configuration options
Options
:output_type - Expected output type (:string, Ecto schema, schemaless map, JSON schema, or guided mode tuple)
:structured_output - Structured output options (mode:, max_retries:)
:instructions - Static instructions or function returning instructions
:system_prompt - Static system prompt or function
:deps_type - Module defining dependency structure
:name - Agent name for logging
:model_settings - Model settings (temperature, max_tokens, etc.)
:retries - Default retry count for tools
:enable_todos - Enable automatic todo tracking (default: false)
:tools - List of tool functions or Tool structs
:plugins - List of plugin modules implementing the Nous.Plugin behaviour
:hooks - List of Nous.Hook structs for lifecycle interception
:skills - List of skill modules, directory paths, Nous.Skill structs, or {:group, atom()}
:skill_dirs - List of directory paths to scan for .md skill files (convenience for :skills)
:end_strategy - How to handle tool calls (:early or :exhaustive)
:behaviour_module - Custom agent behaviour module (default: BasicAgent)
:fallback - Ordered list of fallback model strings or Model structs to try when the primary model fails with a provider/model error
Examples
# OpenAI GPT-4
agent = Agent.new("openai:gpt-4")

# Groq Llama with settings
agent = Agent.new("groq:llama-3.1-70b-versatile",
  instructions: "Be concise",
  model_settings: %{temperature: 0.7, max_tokens: 1000}
)

# Local LM Studio
agent = Agent.new("lmstudio:qwen3-vl-4b-thinking-mlx",
  instructions: "Always answer in rhymes"
)

# With tools
agent = Agent.new("openai:gpt-4",
  tools: [&MyTools.search/2, &MyTools.calculate/2]
)
Run agent synchronously.
Input Formats
The second argument accepts multiple formats:
- String prompt: a simple string message from the user
- Keyword list: use :messages for a custom message list, or :context to continue from a previous run
Options
:deps - Dependencies to pass to tools and prompts
:message_history - Previous messages to continue the conversation
:usage_limits - Usage limits for this run
:model_settings - Override model settings for this run
:callbacks - Map of callback functions for events
:notify_pid - PID to receive event messages
:context - Existing context to continue from
:output_type - Override the agent's output_type for this run
:structured_output - Override the agent's structured_output options for this run
Examples
# String prompt
{:ok, result} = Agent.run(agent, "What is the capital of France?")
IO.puts(result.output) # "Paris"

# With dependencies
{:ok, result} = Agent.run(agent, "Search for users",
  deps: %{database: MyApp.DB}
)

# Message list directly
{:ok, result} = Agent.run(agent,
  messages: [
    Message.system("Be concise"),
    Message.user("What is 2+2?")
  ]
)

# Continue from previous context
{:ok, result1} = Agent.run(agent, "First question")
{:ok, result2} = Agent.run(agent, "Follow up",
  context: result1.context
)

# Continue conversation with message history
{:ok, result2} = Agent.run(agent, "Tell me more",
  message_history: result1.new_messages
)

# With callbacks
{:ok, result} = Agent.run(agent, "Hello",
  callbacks: %{
    on_llm_new_delta: fn _, text -> IO.write(text) end
  }
)
Returns
{:ok, %{
  output: "Result text or structured output",
  usage: %Usage{...},
  all_messages: [...],
  new_messages: [...],
  context: %Context{...} # Can be used for continuation
}}
{:error, reason}
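Since run/3 returns either tuple, callers typically match both branches; a minimal sketch (the error-handling body is illustrative):

```elixir
case Agent.run(agent, "What is 2+2?") do
  {:ok, result} ->
    IO.puts(result.output)

  {:error, reason} ->
    IO.puts("Agent run failed: #{inspect(reason)}")
end
```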
@spec run_stream(t(), String.t(), keyword()) :: {:ok, Enumerable.t()} | {:error, term()}
Run agent with streaming.
Returns a stream that yields events as they occur.
Events
{:text_delta, text} - Incremental text update
{:tool_call, call} - Tool is being called
{:tool_result, result} - Tool execution completed
{:complete, result} - Final result
Example
{:ok, stream} = Agent.run_stream(agent, "Tell me a story")

stream
|> Stream.each(fn
  {:text_delta, text} -> IO.write(text)
  {:complete, _result} -> IO.puts("\nDone!")
  _event -> :ok # ignore tool events for a plain text prompt
end)
|> Stream.run()
Register a tool with the agent.
Returns a new agent with the tool added.
Options
:name - Custom tool name (default: function name)
:description - Custom description
:retries - Retry count for this tool
:requires_approval - Whether the tool needs human approval
Example
agent = Agent.new("openai:gpt-4")
agent = Agent.tool(agent, &MyTools.search/2,
  description: "Search the database"
)
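The remaining options can be combined in a single registration; a sketch assuming MyTools.delete_user/2 is a hypothetical tool function:

```elixir
agent =
  Agent.new("openai:gpt-4")
  |> Agent.tool(&MyTools.delete_user/2,
    name: "delete_user",
    description: "Delete a user account",
    retries: 2,
    requires_approval: true # pause for human sign-off before executing
  )
```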