Nous.LLM (nous v0.13.3)
Simple LLM API for direct model calls without agents.
This module provides a lightweight interface for making LLM calls without the full agent machinery. Use this when you need simple text generation, optionally with tools.
Examples
# Simple generation
{:ok, text} = Nous.generate_text("openai:gpt-4", "What is 2+2?")
IO.puts(text) # "4"
# With options
text = Nous.generate_text!("anthropic:claude-haiku-4-5", "Hello",
system: "You are a pirate",
temperature: 0.7,
max_tokens: 500
)
# With tools
{:ok, text} = Nous.generate_text("openai:gpt-4", "What's the weather in Paris?",
tools: [&MyTools.get_weather/2]
)
# Streaming
{:ok, stream} = Nous.stream_text("openai:gpt-4", "Write a story")
stream |> Stream.each(&IO.write/1) |> Stream.run()
Summary
Functions
Generate text from a model.
Generate text from a model, raising on error.
Stream text from a model.
Types
@type option() ::
  {:system, String.t()}
  | {:temperature, float()}
  | {:max_tokens, pos_integer()}
  | {:top_p, float()}
  | {:base_url, String.t()}
  | {:api_key, String.t()}
  | {:receive_timeout, non_neg_integer()}
  | {:tools, [function() | Nous.Tool.t()]}
  | {:deps, map()}
  | {:fallback, [String.t() | Nous.Model.t()]}
Functions
@spec generate_text(String.t() | Nous.Model.t(), String.t(), [option()]) :: {:ok, String.t()} | {:error, term()}
Generate text from a model.
Returns {:ok, text} on success, {:error, reason} on failure.
If tools are provided and the model calls them, they will be executed automatically and the conversation will continue until the model returns a text response.
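A tool passed via :tools can be a plain Elixir function. A minimal sketch of one, assuming (as the examples in this page suggest) a two-argument shape that receives the model's parsed arguments and the :deps map — the exact contract is defined by Nous.Tool, and the module and weather logic here are illustrative only:

```elixir
defmodule MyTools do
  @doc "Hypothetical tool: returns the weather for a city."
  # Assumed shape: first argument is the arguments map supplied by the
  # model, second is the :deps map passed to generate_text/3.
  def get_weather(%{"city" => city}, _deps) do
    # A real implementation would call a weather API here.
    "Sunny, 22°C in #{city}"
  end
end
```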
Parameters
- model - Model string ("provider:model-name") or %Model{} struct
- prompt - The user prompt
- opts - Options (see below)
Options
- :system - System prompt
- :temperature - Sampling temperature (0.0 to 2.0)
- :max_tokens - Maximum tokens to generate
- :top_p - Nucleus sampling parameter
- :base_url - Override API base URL
- :api_key - Override API key
- :receive_timeout - HTTP receive timeout in milliseconds (default varies by provider)
- :tools - List of tool functions or Tool structs
- :deps - Dependencies to pass to tool functions
- :fallback - Ordered list of fallback model strings or Model structs to try when the primary model fails with a provider/model error
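The :fallback option can be sketched as follows; the fallback model names here are illustrative, and each is tried in order only when the preceding model fails with a provider/model error:

```elixir
{:ok, text} =
  Nous.LLM.generate_text("openai:gpt-4", "Summarize this paragraph",
    fallback: ["anthropic:claude-haiku-4-5", "openai:gpt-4o-mini"]
  )
```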
Examples
{:ok, text} = Nous.LLM.generate_text("openai:gpt-4", "What is 2+2?")
{:ok, text} = Nous.LLM.generate_text("anthropic:claude-haiku-4-5", "Hello",
system: "You are helpful",
temperature: 0.7
)
# With tools
{:ok, text} = Nous.LLM.generate_text("openai:gpt-4", "What's the weather?",
tools: [&MyTools.get_weather/2],
deps: %{api_key: "..."}
)
@spec generate_text!(String.t() | Nous.Model.t(), String.t(), [option()]) :: String.t()
Generate text from a model, raising on error.
Same as generate_text/3 but raises Nous.Errors.ModelError on failure.
Examples
text = Nous.LLM.generate_text!("openai:gpt-4", "What is 2+2?")
IO.puts(text) # "4"
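Because generate_text!/3 raises Nous.Errors.ModelError on failure, callers that still want an inline fallback value can rescue it — a sketch, with the placeholder string as an assumption:

```elixir
text =
  try do
    Nous.LLM.generate_text!("openai:gpt-4", "What is 2+2?")
  rescue
    _e in Nous.Errors.ModelError -> "(model unavailable)"
  end
```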
@spec stream_text(String.t() | Nous.Model.t(), String.t(), [option()]) :: {:ok, Enumerable.t()} | {:error, term()}
Stream text from a model.
Returns {:ok, stream} where stream yields text chunks as strings.
Parameters
- model - Model string ("provider:model-name") or %Model{} struct
- prompt - The user prompt
- opts - Options (same as generate_text/3)
Examples
{:ok, stream} = Nous.LLM.stream_text("openai:gpt-4", "Write a haiku")
stream |> Stream.each(&IO.write/1) |> Stream.run()
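Since the stream yields text chunks as plain strings, it can also be collected into a single binary once the stream completes (a sketch; chunk boundaries are provider-dependent):

```elixir
{:ok, stream} = Nous.LLM.stream_text("openai:gpt-4", "Write a haiku")
text = Enum.join(stream)
```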