# viva_telemetry

Professional observability suite for Gleam

Structured logging • Metrics collection • Statistical benchmarking
## Install

```sh
gleam add viva_telemetry@1
```
## Architecture

```mermaid
graph TB
    subgraph viva_telemetry
        direction TB
        LOG[📝 Log]
        METRICS[📊 Metrics]
        BENCH[⚡ Bench]
    end

    subgraph Handlers
        CONSOLE[🖥️ Console]
        JSON[📄 JSON File]
        FILE[📃 Plain File]
        CUSTOM[🔧 Custom]
    end

    subgraph Storage
        PROCDICT[(Process Dict)]
        ETS[(ETS Tables)]
    end

    subgraph Export
        PROM[Prometheus]
        MD[Markdown]
        CSV[JSON/CSV]
    end

    LOG --> CONSOLE
    LOG --> JSON
    LOG --> FILE
    LOG --> CUSTOM
    LOG --> PROCDICT
    METRICS --> ETS
    METRICS --> PROM
    BENCH --> MD
    BENCH --> CSV

    style LOG fill:#2E8B57,stroke:#1a5235,color:#fff
    style METRICS fill:#4169E1,stroke:#2d4a9e,color:#fff
    style BENCH fill:#CD5C5C,stroke:#8b3d3d,color:#fff
```
## Quick Start

```gleam
import viva_telemetry/log
import viva_telemetry/metrics
import viva_telemetry/bench

pub fn main() {
  // 📝 Logging - one import setup!
  log.configure_console(log.debug_level)
  log.info("Server started", [#("port", "8080")])

  // 📊 Metrics
  let requests = metrics.counter("http_requests")
  metrics.inc(requests)

  // ⚡ Benchmarking
  bench.run("my_function", fn() { heavy_work() })
  |> bench.print()
}
```
## 📝 Logging

```mermaid
flowchart LR
    A[Log Call] --> B{Level Check}
    B -->|Enabled| C[Build Entry]
    B -->|Disabled| X[Skip]
    C --> D[Add Context]
    D --> E[Dispatch]
    E --> F[Console]
    E --> G[JSON File]
    E --> H[Custom Handler]

    style A fill:#2E8B57,stroke:#1a5235,color:#fff
    style X fill:#CD5C5C,stroke:#8b3d3d,color:#fff
```
### Features
| Feature | Description |
|---|---|
| RFC 5424 Levels | Emergency → Trace (9 levels) |
| Structured Fields | Key-value pairs with every log |
| Context Propagation | Inherit fields in nested calls |
| Lazy Evaluation | Avoid string construction when disabled |
| Sampling | Log only N% of high-volume messages |
| Multiple Handlers | Console, JSON, File, Custom |
### Usage

```gleam
// Quick setup (one import!)
log.configure_console(log.debug_level)

// Structured logging
log.info("User logged in", [#("user_id", "42"), #("ip", "192.168.1.1")])

// Context propagation
log.with_context([#("request_id", "abc123")], fn() {
  log.debug("Processing...") // inherits request_id
})

// Lazy evaluation - avoid string construction when disabled
log.debug_lazy(fn() { "Heavy: " <> expensive_to_string(data) }, [])

// Sampling for high-volume logs (1% of messages)
log.sampled(log.trace_level, 0.01, "Hot path", [])
```
### Handlers

```gleam
log.configure_console(log.info_level) // Console only
log.configure_json("app.jsonl", log.debug_level) // JSON file
log.configure_full(log.debug_level, "app.jsonl", log.info_level) // Both
```
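The Custom handler slot lets you route entries to your own sink. The registration function and entry shape below are assumptions for illustration, not the package's confirmed API; check the generated docs for the actual signature:

```gleam
import viva_telemetry/log

// Hypothetical sketch: `log.add_handler` and `forward_to_collector`
// are assumed names. The idea is to register a callback that receives
// each log entry and forwards it wherever you like.
pub fn setup_custom_handler() {
  log.add_handler(fn(entry) {
    // e.g. ship to a TCP collector, an actor, or a ring buffer
    forward_to_collector(entry)
  })
}
```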
## 📊 Metrics

```mermaid
flowchart TB
    subgraph Types
        C[Counter]
        G[Gauge]
        H[Histogram]
    end

    subgraph Operations
        C --> INC[inc / inc_by]
        G --> SET[set / add]
        H --> OBS[observe / time]
    end

    subgraph Storage
        INC --> ETS[(ETS)]
        SET --> ETS
        OBS --> ETS
    end

    subgraph Export
        ETS --> PROM[to_prometheus]
        ETS --> BEAM[beam_memory]
    end

    style C fill:#4169E1,stroke:#2d4a9e,color:#fff
    style G fill:#4169E1,stroke:#2d4a9e,color:#fff
    style H fill:#4169E1,stroke:#2d4a9e,color:#fff
```
### Metric Types
| Type | Use Case | Operations |
|---|---|---|
| Counter | Requests, errors, events | inc(), inc_by(n) |
| Gauge | Connections, queue size | set(v), add(v), inc(), dec() |
| Histogram | Latency, response sizes | observe(v), time(fn) |
### Usage

```gleam
// Counter (monotonically increasing)
let requests = metrics.counter("http_requests_total")
metrics.inc(requests)
metrics.inc_by(requests, 5)

// Gauge (can go up or down)
let connections = metrics.gauge("active_connections")
metrics.set(connections, 42.0)
metrics.gauge_inc(connections)

// Histogram (distribution)
let latency = metrics.histogram("latency_ms", [10.0, 50.0, 100.0, 500.0])
metrics.observe(latency, 75.5)

// Time a function automatically
let result = metrics.time_ms(latency, fn() { do_work() })

// BEAM memory tracking
let mem = metrics.beam_memory()
// → BeamMemory(total, processes, system, atom, binary, ets)

// Export Prometheus format
io.println(metrics.to_prometheus())
```
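For orientation, the Prometheus text exposition format looks like the lines below. This is the standard format shape, not verbatim output captured from to_prometheus, so label and bucket details may differ:

```text
# TYPE http_requests_total counter
http_requests_total 6

# TYPE active_connections gauge
active_connections 43

# TYPE latency_ms histogram
latency_ms_bucket{le="10.0"} 0
latency_ms_bucket{le="50.0"} 0
latency_ms_bucket{le="100.0"} 1
latency_ms_bucket{le="500.0"} 1
latency_ms_bucket{le="+Inf"} 1
latency_ms_sum 75.5
latency_ms_count 1
```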
## ⚡ Benchmarking

```mermaid
flowchart LR
    A[Function] --> B[Warmup]
    B --> C[Collect Samples]
    C --> D[Calculate Stats]
    D --> E[Results]
    E --> F[Print]
    E --> G[to_json]
    E --> H[to_markdown]
    E --> I[Compare]

    style A fill:#CD5C5C,stroke:#8b3d3d,color:#fff
    style E fill:#2E8B57,stroke:#1a5235,color:#fff
```
### Statistics
Each benchmark calculates:
| Stat | Description |
|---|---|
| mean | Average duration |
| stddev | Standard deviation |
| min/max | Range |
| p50 | Median (50th percentile) |
| p95 | 95th percentile |
| p99 | 99th percentile |
| ips | Iterations per second |
| ci_95 | 95% confidence interval |
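Assuming the conventional normal approximation (the package's exact estimator isn't documented here), the 95% confidence interval follows from the mean, standard deviation, and sample count:

$$\mathrm{ci}_{95} = \bar{x} \pm 1.96 \cdot \frac{s}{\sqrt{n}}$$

where $\bar{x}$ is the mean duration, $s$ the standard deviation, and $n$ the number of collected samples.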
### Usage

```gleam
// Simple benchmark
bench.run("fib_recursive", fn() { fib(30) })
|> bench.print()

// Compare implementations
let slow = bench.run("v1", fn() { algo_v1() })
let fast = bench.run("v2", fn() { algo_v2() })
bench.compare(slow, fast)
|> bench.print_comparison()
// → v1 vs v2: 2.3x faster 🚀

// Export results
bench.to_json(result) // JSON object
bench.to_json_string(result) // JSON string
bench.to_markdown(result) // | Name | Mean | p50 | p99 | IPS |
```
## Build

```sh
make test     # Run 32 tests
make bench    # Run benchmarks
make log      # Run log example
make metrics  # Run metrics example
make docs     # Generate documentation
```
## Part of VIVA Ecosystem

```mermaid
graph LR
    VIVA[🧠 VIVA] --> MATH[viva_math]
    VIVA --> EMOTION[viva_emotion]
    VIVA --> TENSOR[viva_tensor]
    VIVA --> AION[viva_aion]
    VIVA --> GLYPH[viva_glyph]
    VIVA --> TELEMETRY[viva_telemetry]

    style TELEMETRY fill:#FFAFF3,stroke:#333,stroke-width:2px
```
| Package | Purpose |
|---|---|
| viva_math | Mathematical foundations |
| viva_emotion | PAD emotional dynamics |
| viva_tensor | Tensor compression (INT8/NF4/AWQ) |
| viva_aion | Time perception |
| viva_glyph | Symbolic language |
| viva_telemetry | Observability ← this package |
## Inspired By
- Logging: structlog (Python), zap (Go), tracing (Rust)
- Metrics: Prometheus, BEAM telemetry
- Benchmarking: criterion (Rust), benchee (Elixir)
Built with pure Gleam for the BEAM ⚙️