AgentObs (agent_obs v0.1.4)
An Elixir library for LLM agent observability.
AgentObs provides a simple, powerful, and idiomatic interface for instrumenting LLM agentic applications with telemetry events. It supports multiple observability backends through a pluggable handler architecture.
Architecture
AgentObs uses a two-layer architecture:
Layer 1: Core Telemetry API (Backend-Agnostic)
- Leverages Elixir's native :telemetry ecosystem
- Provides high-level helpers for instrumenting agent operations
- Defines standardized event schemas
Layer 2: Pluggable Backend Handlers
- Phoenix handler with OpenInference semantic conventions
- Generic OpenTelemetry handler
- Extensible to other platforms (Langfuse, Datadog, etc.)
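Because Layer 2 handlers sit on top of ordinary :telemetry events, a custom backend can be a plain module that attaches to the span events. Below is a minimal sketch assuming the default [:agent_obs] prefix and standard start/stop/exception span events; the module name, handler id, and attach/0 convention are hypothetical, not part of the AgentObs API:

```elixir
defmodule MyApp.LoggerHandler do
  # Hypothetical custom backend: logs every AgentObs :stop event.
  # Assumes events are emitted under the default [:agent_obs] prefix
  # as :telemetry spans (start/stop/exception).
  require Logger

  def attach do
    :telemetry.attach_many(
      "my-app-agent-obs-logger",
      [
        [:agent_obs, :agent, :stop],
        [:agent_obs, :llm, :stop],
        [:agent_obs, :tool, :stop]
      ],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event([_prefix, kind, :stop], measurements, metadata, _config) do
    ms = System.convert_time_unit(measurements.duration, :native, :millisecond)
    Logger.info("#{kind} span finished in #{ms}ms: #{inspect(metadata[:name])}")
  end
end
```

A handler like this would be attached once at application start, alongside (or instead of) the built-in handlers.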
Quick Start
# Configure in config/config.exs
config :agent_obs,
  enabled: true,
  handlers: [AgentObs.Handlers.Phoenix]
# Instrument your agent
defmodule MyAgent do
  def run(query) do
    AgentObs.trace_agent("my_agent", %{input: query}, fn ->
      # Agent logic here
      result = do_agent_work(query)
      {:ok, result, %{iterations: 1}}
    end)
  end

  defp do_agent_work(query) do
    AgentObs.trace_llm("gpt-4o", %{
      input_messages: [%{role: "user", content: query}]
    }, fn ->
      response = call_llm(query)
      {:ok, response, %{tokens: %{prompt: 10, completion: 25}}}
    end)
  end
end

High-Level Instrumentation Helpers
- trace_agent/3 - Instruments agent loops or invocations
- trace_tool/3 - Instruments tool calls
- trace_llm/3 - Instruments LLM API calls
- trace_prompt/3 - Instruments prompt template rendering
Integrations
- AgentObs.ReqLLM - Automatic instrumentation for ReqLLM streaming calls
- AgentObs.JidoTracer - Zero-code tracing for Jido composer workflows
Low-Level API
- emit/2 - Emits custom telemetry events
- configure/1 - Runtime configuration updates
Summary
Functions
configure/1 - Runtime configuration of handlers and options.
emit/2 - Emits a custom telemetry event with AgentObs standardized metadata.
trace_agent/3 - Instruments an agent loop or agent invocation.
trace_llm/3 - Instruments an LLM API call (chat completion, embedding, etc.).
trace_prompt/3 - Instruments prompt construction or template rendering.
trace_tool/3 - Instruments a tool call or function execution within an agent.
Types
Functions
@spec configure(keyword()) :: :ok
Runtime configuration of handlers and options.
Parameters
- opts - Keyword list of configuration options:
  - :handlers - List of handler modules to enable
  - :event_prefix - Custom event prefix (default: [:agent_obs])
  - :enabled - Enable/disable instrumentation (default: true)
Examples
AgentObs.configure(
  handlers: [AgentObs.Handlers.Phoenix],
  event_prefix: [:my_app, :ai]
)
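Since :enabled is a documented option, configure/1 also works as a runtime kill switch, e.g. for silencing instrumentation in tests. A usage sketch based only on the options listed above:

```elixir
# Turn instrumentation off globally, e.g. in test_helper.exs:
AgentObs.configure(enabled: false)

# ...and back on later; handlers configured in config.exs stay registered:
AgentObs.configure(enabled: true)
```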
Emits a custom telemetry event with AgentObs standardized metadata.
For advanced use cases not covered by the high-level helpers.
Parameters
- event_type - One of :agent, :tool, :llm, :prompt, or a custom atom
- metadata - Event-specific metadata
Examples
AgentObs.emit(:custom_event, %{
  name: "vector_search",
  input: query,
  output: results,
  metadata: %{index: "docs", k: 10}
})
@spec trace_agent(String.t(), metadata(), trace_fun()) :: trace_result()
Instruments an agent loop or agent invocation.
Wraps the agent logic in a telemetry span, automatically emitting start, stop, and exception events with standardized metadata.
Parameters
- name - Human-readable name for the agent operation
- metadata - Context about the agent invocation:
  - :input (required) - The input/query/task given to the agent
  - :model (optional) - The routing or orchestration model used
  - :metadata (optional) - Additional custom metadata
- fun - The agent logic to execute
Return Value
The function should return one of:
- {:ok, output} - Success with output only
- {:ok, output, metadata} - Success with output and additional metadata
- {:error, reason} - Failure
Examples
AgentObs.trace_agent("weather_assistant", %{input: "What's the weather?"}, fn ->
  result = perform_weather_lookup()
  {:ok, result, %{tools_used: ["weather_api"], iterations: 2}}
end)
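Returning {:error, reason} marks the span as failed while still handing the tuple back to the caller, so error handling composes naturally. A sketch under that assumption; perform_weather_lookup/1, render_report/1, and render_error/1 are hypothetical helpers:

```elixir
case AgentObs.trace_agent("weather_assistant", %{input: city}, fn ->
       case perform_weather_lookup(city) do
         {:ok, result} -> {:ok, result, %{iterations: 1}}
         {:error, reason} -> {:error, reason}
       end
     end) do
  {:ok, result, _meta} -> render_report(result)
  {:error, reason} -> render_error(reason)
end
```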
@spec trace_llm(String.t(), metadata(), trace_fun()) :: trace_result()
Instruments an LLM API call (chat completion, embedding, etc.).
Parameters
- model - The LLM model identifier (e.g., "gpt-4o", "claude-3-opus")
- metadata - LLM call context:
  - :input_messages (required for chat) - List of message maps with :role and :content
  - :type (optional) - "chat", "completion", or "embedding" (default: "chat")
  - :temperature, :max_tokens, etc. - Model parameters
- fun - The LLM API call logic
Return Value
The function should return {:ok, response, metadata} where metadata includes:
- :output_messages - Response messages
- :tokens - Token usage map with :prompt, :completion, :total
- :cost - Cost in USD (optional)
Examples
AgentObs.trace_llm("gpt-4o", %{
  input_messages: [%{role: "user", content: "Hello"}]
}, fn ->
  response = call_openai_api()

  {:ok, response.content, %{
    output_messages: [%{role: "assistant", content: response.content}],
    tokens: %{prompt: 10, completion: 25},
    cost: 0.00015
  }}
end)
@spec trace_prompt(String.t(), metadata(), trace_fun()) :: trace_result()
Instruments prompt construction or template rendering.
Parameters
- template_name - Name of the prompt template
- metadata - Template rendering context:
  - :variables (required) - Variables used in template rendering
  - :template (optional) - The template string itself
- fun - The prompt rendering logic
Return Value
The function should return {:ok, rendered_prompt} or {:ok, rendered_prompt, metadata}.
Examples
variables = %{user_name: "Alice", task: "weather"}

AgentObs.trace_prompt("system_prompt", %{variables: variables}, fn ->
  {:ok, render_template(@system_template, variables)}
end)
@spec trace_tool(String.t(), metadata(), trace_fun()) :: trace_result()
Instruments a tool call or function execution within an agent.
Parameters
- tool_name - Name of the tool being invoked
- metadata - Tool invocation context:
  - :arguments (required) - The arguments passed to the tool (map or JSON string)
  - :description (optional) - Tool description
- fun - The tool execution logic
Return Value
The function should return one of:
- {:ok, result} - Success with result
- {:ok, result, metadata} - Success with result and additional metadata
- {:error, reason} - Failure
Examples
AgentObs.trace_tool("get_weather", %{arguments: %{city: "SF"}}, fn ->
  {:ok, %{temp: 72, condition: "sunny"}}
end)
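In practice the tool body usually wraps a fallible client call; the sketch below normalizes the client's result into the return shapes listed above. WeatherAPI.current/1 is a hypothetical client, not part of AgentObs:

```elixir
AgentObs.trace_tool("get_weather", %{arguments: %{city: city}}, fn ->
  case WeatherAPI.current(city) do
    {:ok, data} -> {:ok, data, %{source: "weather_api"}}
    {:error, reason} -> {:error, reason}
  end
end)
```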