Nous.Eval.Metrics (nous v0.13.3)
Metrics collected during evaluation runs.
Tracks token usage, latency, tool calls, and costs.
Example
metrics = Metrics.new()                           # start with empty metrics
metrics = Metrics.from_usage(agent_result.usage)  # or build them from usage data
IO.puts("Total tokens: #{metrics.total_tokens}")
Summary
Functions
Create metrics from an agent result.
Create metrics from a Nous.Usage struct.
Merge two metrics structs.
Create empty metrics.
Add cost estimation to metrics.
Types
@type t() :: %Nous.Eval.Metrics{
        estimated_cost: float() | nil,
        first_token_ms: non_neg_integer() | nil,
        input_tokens: non_neg_integer(),
        iterations: non_neg_integer(),
        model_latency_ms: non_neg_integer(),
        output_tokens: non_neg_integer(),
        requests: non_neg_integer(),
        retries: non_neg_integer(),
        tool_calls: non_neg_integer(),
        tool_errors: non_neg_integer(),
        tool_latency_ms: non_neg_integer(),
        tools_used: %{required(String.t()) => non_neg_integer()},
        total_duration_ms: non_neg_integer(),
        total_tokens: non_neg_integer()
      }
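Most fields are zero-defaulting counters per the non_neg_integer() types, so a quick way to get oriented is to inspect a fresh struct. A minimal sketch, assuming new/0 returns zeroed counters (the type above only guarantees the field types, not the defaults):

```elixir
alias Nous.Eval.Metrics

# Assumption: counters start at 0 in a freshly created struct.
metrics = Metrics.new()
IO.inspect(metrics.total_tokens)
IO.inspect(metrics.tools_used)  # map of tool name => call count
```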
Functions
@spec from_agent_result(map(), non_neg_integer()) :: t()
Create metrics from an agent result.
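Given the spec, a hedged usage sketch: the second argument is assumed to be the run's wall-clock duration in milliseconds (matching the total_duration_ms field), and run_agent/0 is a hypothetical stand-in for your agent call:

```elixir
alias Nous.Eval.Metrics

# :timer.tc/1 returns {elapsed_microseconds, result}
{elapsed_us, agent_result} = :timer.tc(fn -> run_agent() end)

# Assumption: the second argument is total duration in milliseconds.
metrics = Metrics.from_agent_result(agent_result, div(elapsed_us, 1000))
```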
@spec from_usage(Nous.Usage.t()) :: t()
Create metrics from a Nous.Usage struct.
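A sketch of the pattern from the module example above, assuming agent_result.usage holds the %Nous.Usage{} struct this function expects:

```elixir
alias Nous.Eval.Metrics

metrics = Metrics.from_usage(agent_result.usage)
IO.puts("#{metrics.input_tokens} in / #{metrics.output_tokens} out")
```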
Merge two metrics structs.
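The rendered page does not show this function's name or spec; merge/2 below is an assumed name based on the description, and summing of counters is presumed rather than documented:

```elixir
alias Nous.Eval.Metrics

# merge/2 is an assumed name; the docs only say "Merge two metrics structs".
combined = Metrics.merge(run_one_metrics, run_two_metrics)
IO.puts("Combined requests: #{combined.requests}")
```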
@spec new() :: t()
Create empty metrics.
Add cost estimation to metrics.
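No name or spec is shown for this function either; with_cost/2 and its pricing argument below are hypothetical renderings, assuming the function fills in the estimated_cost field:

```elixir
alias Nous.Eval.Metrics

# with_cost/2 and cost_per_token are assumptions, not confirmed API.
metrics = Metrics.from_usage(agent_result.usage)
metrics = Metrics.with_cost(metrics, cost_per_token)
IO.inspect(metrics.estimated_cost)
```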