Nous.Eval.Metrics.Summary (nous v0.9.0)


Aggregated metrics summary across multiple evaluation runs.

Summary

Functions

Compares two summaries, returning a map of differences.

Creates a summary by aggregating a list of metrics with their corresponding scores.

Types

t()

@type t() :: %Nous.Eval.Metrics.Summary{
  count: non_neg_integer(),
  fail_count: non_neg_integer(),
  max_score: float(),
  mean_cost_per_run: float() | nil,
  mean_latency_ms: float(),
  mean_score: float(),
  mean_tokens: float(),
  min_score: float(),
  p50_latency_ms: non_neg_integer(),
  p50_tokens: non_neg_integer(),
  p95_latency_ms: non_neg_integer(),
  p95_tokens: non_neg_integer(),
  p99_latency_ms: non_neg_integer(),
  p99_tokens: non_neg_integer(),
  pass_count: non_neg_integer(),
  pass_rate: float(),
  tool_call_distribution: %{required(String.t()) => non_neg_integer()},
  tool_error_rate: float(),
  total_estimated_cost: float() | nil,
  total_tokens: non_neg_integer(),
  total_tool_calls: non_neg_integer()
}
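
A summary struct can be consumed directly by pattern matching on its fields. A minimal, illustrative helper (the `EvalReport` module and its `headline/1` function are hypothetical, and `pass_rate` is assumed to be a fraction in `0.0..1.0`):

```elixir
defmodule EvalReport do
  alias Nous.Eval.Metrics.Summary

  @doc "Formats the headline numbers from a summary as a one-line report."
  def headline(%Summary{} = summary) do
    "pass rate #{Float.round(summary.pass_rate * 100, 1)}% " <>
      "(#{summary.pass_count}/#{summary.count}), " <>
      "p95 latency #{summary.p95_latency_ms}ms, " <>
      "mean score #{Float.round(summary.mean_score, 3)}"
  end
end
```

Note that `mean_cost_per_run` and `total_estimated_cost` are the only nullable fields, so code reading them should handle `nil` (e.g. when no cost data was recorded).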

Functions

compare(a, b)

@spec compare(t(), t()) :: map()

Compares two summaries, returning a map of differences.
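
The `@spec` only guarantees that a plain `map()` comes back; the exact key/value shape is not documented here. An illustrative call, assuming `baseline` and `candidate` are summaries built with `from_metrics/2`:

```elixir
# Hypothetical usage: inspect the returned map to see which
# per-field deltas this version of the library reports.
diff = Nous.Eval.Metrics.Summary.compare(baseline, candidate)
IO.inspect(diff, label: "summary diff")
```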

from_metrics(metrics_list, scores)

@spec from_metrics([Nous.Eval.Metrics.t()], [float()]) :: t()

Creates a summary by aggregating a list of metrics with their corresponding scores.
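
Per the `@spec`, the metrics list and the scores list are passed separately, which suggests they are paired positionally (one score per metrics entry); that pairing is an assumption, as is the `runs` collection below:

```elixir
# Illustrative sketch: `runs` is a hypothetical list of evaluation results,
# each carrying a Nous.Eval.Metrics.t() and a float score.
metrics = Enum.map(runs, & &1.metrics)
scores = Enum.map(runs, & &1.score)

summary = Nous.Eval.Metrics.Summary.from_metrics(metrics, scores)

summary.mean_score  # aggregate score across all runs
summary.pass_rate   # fraction of runs counted as passing (assumed 0.0..1.0)
```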