TimelessMetrics API Reference


TimelessMetrics is an embedded time series database for Elixir. It can run as a library inside your application or as a standalone containerized service. This document covers the Elixir API, the HTTP interface, and the SVG charting endpoint.



Elixir API

Starting a Store

Add TimelessMetrics to your supervision tree:

children = [
  {TimelessMetrics, name: :metrics, data_dir: "/var/lib/metrics"}
]

All subsequent API calls reference the store by its name (:metrics above).

Store Options

Option | Type | Default | Description
:name | atom | required | Store name used in all API calls
:data_dir | string | "data" | Directory for Rust engine chunk data and SQLite admin data
:mode | atom | :disk | :disk persists data; :memory keeps the store ephemeral
:schema | module/struct | TimelessMetrics.Schema.default() | Rollup tier configuration
:raw_retention_seconds | integer | 604_800 | Raw point retention in seconds
:daily_retention_seconds | integer | 31_536_000 | Daily rollup retention in seconds
:ingest_workers | integer | max(div(cpus, 4), 2) | HTTP ingest queue workers
:alert_interval | integer | 60_000 | Alert evaluation interval in ms
:self_monitor | boolean | true | Enable internal self-monitoring metrics
:self_monitor_labels | map | %{} | Labels applied to self-monitoring metrics
:scraping | boolean | true | Enable Prometheus scraping support
:engine | atom | :rust | Engine selection; :rust is the default and maintained path

Legacy-only knobs such as :buffer_shards, :flush_threshold, :flush_interval, :segment_duration, and :compression_level only apply when you intentionally run engine: :legacy.
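
If you do opt into the legacy engine, these knobs are passed alongside the regular store options. A minimal sketch (the specific values here are illustrative placeholders, not tuned recommendations):

```elixir
# Illustrative only: these knobs take effect solely with engine: :legacy
children = [
  {TimelessMetrics,
    name: :metrics,
    data_dir: "/var/lib/metrics",
    engine: :legacy,
    flush_interval: 5_000,       # example value, not a recommendation
    compression_level: 3}        # example value, not a recommendation
]
```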

Writing Data

Single point

TimelessMetrics.write(:metrics, "cpu_usage", %{"host" => "web-1"}, 73.2)

# With explicit timestamp (unix seconds)
TimelessMetrics.write(:metrics, "cpu_usage", %{"host" => "web-1"}, 73.2,
  timestamp: 1_700_000_000)

Batch write

Each entry is {metric, labels, value} or {metric, labels, value, timestamp}:

TimelessMetrics.write_batch(:metrics, [
  {"cpu_usage", %{"host" => "web-1"}, 73.2},
  {"cpu_usage", %{"host" => "web-2"}, 81.0},
  {"mem_usage", %{"host" => "web-1"}, 4_200_000, 1_700_000_000}
])

Writing Text Data

Text series store string values alongside numeric time series. They are compressed with RLE + zstd and are ideal for values that rarely change (firmware versions, interface descriptions, SNMP sysDescr, etc.).

Single text point

TimelessMetrics.write_text(:metrics, "sysDescr", %{"host" => "router1"},
  "Cisco IOS 15.2")

# With explicit timestamp
TimelessMetrics.write_text(:metrics, "sysDescr", %{"host" => "router1"},
  "Cisco IOS 15.2", timestamp: 1_700_000_000)

Text batch write

Each entry is {metric, labels, value} or {metric, labels, value, timestamp}:

TimelessMetrics.write_text_batch(:metrics, [
  {"ifDescr", %{"host" => "sw1", "ifIndex" => "1"}, "GigabitEthernet0/0"},
  {"ifDescr", %{"host" => "sw1", "ifIndex" => "2"}, "FastEthernet0/1"},
  {"sysDescr", %{"host" => "sw1"}, "Cisco IOS 15.2", 1_700_000_000}
])

Querying Text Data

Text points (single series, exact label match)

{:ok, points} = TimelessMetrics.query_text(:metrics, "sysDescr", %{"host" => "router1"},
  from: System.os_time(:second) - 86_400,
  to: System.os_time(:second))

# points = [{1700000000, "Cisco IOS 15.2"}, {1700000060, "Cisco IOS 15.2"}, ...]

Text points (multi-series, label filter)

{:ok, results} = TimelessMetrics.query_text_multi(:metrics, "ifDescr",
  %{"host" => "sw1"},
  from: System.os_time(:second) - 3600)

# results = [
#   %{labels: %{"host" => "sw1", "ifIndex" => "1"}, points: [{ts, "GigabitEthernet0/0"}, ...]},
#   %{labels: %{"host" => "sw1", "ifIndex" => "2"}, points: [{ts, "FastEthernet0/1"}, ...]},
# ]

Latest text value

{:ok, {timestamp, value}} = TimelessMetrics.latest_text(:metrics, "sysDescr", %{"host" => "router1"})
{:ok, nil} = TimelessMetrics.latest_text(:metrics, "nonexistent", %{})

Pre-Resolved Writes (Legacy Engine)

Pre-resolved writes are a legacy-engine optimization. They remain available for compatibility, but they are not the primary hot path on the default Rust engine.

If you are running the default Rust engine, prefer write_batch/2 for sustained throughput. If you are intentionally running engine: :legacy, you can still use pre-resolved writes:

sid = TimelessMetrics.resolve_series(:metrics, "cpu_usage", %{"host" => "web-1"})
TimelessMetrics.write_resolved(:metrics, sid, 73.2)
TimelessMetrics.write_resolved(:metrics, sid, 74.1, timestamp: 1_700_000_060)

Each batch entry is {series_id, value} or {series_id, value, timestamp}:

TimelessMetrics.write_batch_resolved(:metrics, [
  {sid_1, 73.2},
  {sid_2, 81.0, 1_700_000_000}
])

Querying Data

Raw points (single series, exact label match)

{:ok, points} = TimelessMetrics.query(:metrics, "cpu_usage", %{"host" => "web-1"},
  from: System.os_time(:second) - 3600,
  to: System.os_time(:second))

# points = [{1700000000, 73.2}, {1700000060, 74.1}, ...]

Raw points (multi-series, label filter)

An empty label filter matches all series for the metric:

{:ok, results} = TimelessMetrics.query_multi(:metrics, "cpu_usage", %{})

# results = [
#   %{labels: %{"host" => "web-1"}, points: [{ts, val}, ...]},
#   %{labels: %{"host" => "web-2"}, points: [{ts, val}, ...]},
# ]

Filter by partial labels:

now = System.os_time(:second)

{:ok, results} = TimelessMetrics.query_multi(:metrics, "cpu_usage",
  %{"region" => "us-east"},
  from: now - 7200, to: now)

Latest value

{:ok, {timestamp, value}} = TimelessMetrics.latest(:metrics, "cpu_usage", %{"host" => "web-1"})
{:ok, nil} = TimelessMetrics.latest(:metrics, "nonexistent", %{})

Aggregation Queries

Bucket data into time intervals with an aggregate function:

{:ok, buckets} = TimelessMetrics.query_aggregate(:metrics, "cpu_usage",
  %{"host" => "web-1"},
  from: now - 86400,
  to: now,
  bucket: {300, :seconds},
  aggregate: :avg)

# buckets = [{1700000000, 73.5}, {1700000300, 74.2}, ...]

Multi-series aggregation

{:ok, results} = TimelessMetrics.query_aggregate_multi(:metrics, "cpu_usage", %{},
  from: now - 86400,
  to: now,
  bucket: :hour,
  aggregate: :max)

# results = [
#   %{labels: %{"host" => "web-1"}, data: [{bucket_ts, max_val}, ...]},
#   %{labels: %{"host" => "web-2"}, data: [{bucket_ts, max_val}, ...]},
# ]

Bucket sizes

  • :minute, :hour, :day — named intervals
  • {n, :seconds} — arbitrary interval (e.g., {300, :seconds} for 5 min)

Aggregate functions

Function | Description
:avg | Mean value in bucket
:min | Minimum value
:max | Maximum value
:sum | Sum of all values
:count | Number of points
:first | First value in bucket
:last | Last value in bucket
:rate | Per-second rate of change
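
The :rate function is intended for monotonically increasing counters: it converts cumulative totals into a per-second rate per bucket. A sketch using the query API shown above ("net_bytes_total" is a hypothetical metric name, not part of any built-in set):

```elixir
now = System.os_time(:second)

# Per-second throughput over 60-second buckets, derived from a
# cumulative byte counter ("net_bytes_total" is a made-up example)
{:ok, buckets} =
  TimelessMetrics.query_aggregate(:metrics, "net_bytes_total",
    %{"host" => "web-1"},
    from: now - 3600,
    to: now,
    bucket: {60, :seconds},
    aggregate: :rate)
```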

Tier Queries

Read pre-computed rollup data directly from a tier:

{:ok, rows} = TimelessMetrics.query_tier(:metrics, :hourly, "cpu_usage",
  %{"host" => "web-1"},
  from: now - 86400, to: now)

# rows = [%{bucket: ts, avg: v, min: v, max: v, count: n, sum: v, last: v}, ...]

Series Discovery

# List all metric names
{:ok, names} = TimelessMetrics.list_metrics(:metrics)
# ["cpu_usage", "mem_usage", "disk_io"]

# List all series for a metric
{:ok, series} = TimelessMetrics.list_series(:metrics, "cpu_usage")
# [%{labels: %{"host" => "web-1"}}, %{labels: %{"host" => "web-2"}}]

# List distinct values for a label key
{:ok, hosts} = TimelessMetrics.label_values(:metrics, "cpu_usage", "host")
# ["web-1", "web-2", "web-3"]

Metric Metadata

# Register metadata
TimelessMetrics.register_metric(:metrics, "cpu_usage", :gauge,
  unit: "%",
  description: "CPU utilization percentage")

# Get metadata
{:ok, meta} = TimelessMetrics.get_metadata(:metrics, "cpu_usage")
# %{type: :gauge, unit: "%", description: "CPU utilization percentage"}

Metric types: :gauge, :counter, :histogram

Annotations

Annotations are event markers that overlay on charts (deploys, incidents, etc.):

# Create
{:ok, id} = TimelessMetrics.annotate(:metrics, System.os_time(:second), "Deploy v2.1",
  description: "Rolled out new caching layer",
  tags: ["deploy", "prod"])

# Query time range
{:ok, annotations} = TimelessMetrics.annotations(:metrics, from, to, tags: ["deploy"])
# [%{id: 1, timestamp: ts, title: "Deploy v2.1", description: "...", tags: ["deploy", "prod"]}]

# Delete
TimelessMetrics.delete_annotation(:metrics, id)

Alerts

# Create alert rule
{:ok, rule_id} = TimelessMetrics.create_alert(:metrics,
  name: "High CPU",
  metric: "cpu_usage",
  condition: :above,
  threshold: 90.0,
  labels: %{"host" => "web-1"},
  duration: 300,
  aggregate: :avg,
  webhook_url: "http://hooks.example.com/alert")

# List all rules with current state
{:ok, rules} = TimelessMetrics.list_alerts(:metrics)

# Evaluate all rules (also runs automatically on a timer)
TimelessMetrics.evaluate_alerts(:metrics)

# Delete a rule
TimelessMetrics.delete_alert(:metrics, rule_id)

Operational

# Flush all buffered data to disk
TimelessMetrics.flush(:metrics)

# Get store statistics
info = TimelessMetrics.info(:metrics)
# %{series_count: 1000, total_points: 5_000_000, bytes_per_point: 0.78, ...}

# Force rollup
TimelessMetrics.rollup(:metrics)         # all tiers
TimelessMetrics.rollup(:metrics, :hourly) # specific tier

# Force late-arrival catch-up scan
TimelessMetrics.catch_up(:metrics)

# Force retention enforcement
TimelessMetrics.enforce_retention(:metrics)

Info fields

Field | Description
series_count | Number of unique time series
segment_count | Number of compressed raw segments
total_points | Total data points stored
raw_compressed_bytes | Raw segment storage in bytes
bytes_per_point | Compression efficiency
storage_bytes | Total on-disk storage (segment files + metadata DB)
oldest_timestamp | Earliest data point
newest_timestamp | Latest data point
buffer_points | Points still in raw buffers (not yet compressed)
tiers | Map of tier names to stats
raw_retention | Raw data retention in seconds
db_path | Main database file path

HTTP API

Start the HTTP server alongside TimelessMetrics:

children = [
  {TimelessMetrics, name: :metrics, data_dir: "/var/lib/metrics"},
  {TimelessMetrics.HTTP, store: :metrics, port: 8428}
]

Or run the container (starts both automatically).

All ingest and query endpoints are compatible with VictoriaMetrics tooling (Vector, Grafana, etc.).

Authentication

Set the TIMELESS_BEARER_TOKEN environment variable to enable token authentication. When set, all endpoints except /health require a valid token.

Via header (API clients, curl, Grafana):

curl -H "Authorization: Bearer my-secret-token" \
  'http://localhost:8428/api/v1/query_range?metric=cpu_usage&from=-1h'

Via query parameter (browsers, embedded charts):

http://localhost:8428/chart?metric=cpu_usage&from=-6h&token=my-secret-token
http://localhost:8428/?token=my-secret-token

Elixir library usage (pass :bearer_token in HTTP opts):

{TimelessMetrics.HTTP, store: :metrics, port: 8428, bearer_token: "my-secret-token"}

Response | Meaning
401 {"error":"unauthorized"} | No token provided (missing header and no ?token= param)
403 {"error":"forbidden"} | Token provided but doesn't match

When TIMELESS_BEARER_TOKEN is not set, all endpoints are open (no auth enforced). /health is always open regardless of token configuration.

Ingest Endpoints

POST /api/v1/import

VictoriaMetrics JSON line format. Each line is a JSON object:

{"metric":{"__name__":"cpu_usage","host":"web-1"},"values":[73.2,74.1],"timestamps":[1700000000,1700000060]}
{"metric":{"__name__":"mem_usage","host":"web-1"},"values":[4200000],"timestamps":[1700000000]}

  • metric.__name__ is the metric name; all other keys become labels
  • values and timestamps are parallel arrays
  • Max body size: 10 MB

Response:

  • 204 No Content on success
  • 200 with {"samples": N, "errors": N} if some lines had errors
  • 413 if body exceeds 10 MB

Example:

curl -X POST http://localhost:8428/api/v1/import -d '
{"metric":{"__name__":"cpu_usage","host":"web-1"},"values":[73.2],"timestamps":[1700000000]}
{"metric":{"__name__":"cpu_usage","host":"web-2"},"values":[81.0],"timestamps":[1700000000]}
'

Vector sink configuration:

[sinks.timeless]
type = "http"
inputs = ["metrics_transform"]
uri = "http://localhost:8428/api/v1/import"
encoding.codec = "text"
framing.method = "newline_delimited"

POST /api/v1/import/prometheus

Prometheus text exposition format:

cpu_usage{host="web-1"} 73.2 1700000000000
cpu_usage{host="web-2"} 81.0
mem_usage 4200000

  • Lines starting with # are skipped (comments, HELP, TYPE)
  • Timestamp is in milliseconds (converted to seconds internally)
  • If timestamp is omitted, current time is used
  • Labels are optional

Example:

curl -X POST http://localhost:8428/api/v1/import/prometheus -d '
# HELP cpu_usage CPU utilization
cpu_usage{host="web-1",region="us-east"} 73.2 1700000000000
cpu_usage{host="web-2",region="us-east"} 81.0 1700000000000
'

Query Endpoints

All query endpoints accept these common parameters:

Parameter | Description
metric | Metric name (required)
from or start | Start timestamp: unix seconds or relative (-1h, -30m, -7d)
to or end | End timestamp: unix seconds, relative, or now
Any other param | Treated as a label filter (e.g., host=web-1)

Relative time syntax: -<number><unit> where unit is s, m, h, d, or w.
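
Relative offsets resolve against the current server time. A hypothetical helper (not part of the TimelessMetrics API) showing how the -<number><unit> grammar maps to absolute unix seconds:

```elixir
# Illustration of the relative-time grammar; the server performs
# this conversion internally.
defmodule RelTime do
  @units %{"s" => 1, "m" => 60, "h" => 3_600, "d" => 86_400, "w" => 604_800}

  def resolve("now", now), do: now

  def resolve("-" <> rest, now) do
    {n, unit} = Integer.parse(rest)
    now - n * Map.fetch!(@units, unit)
  end
end

RelTime.resolve("-30m", 1_700_001_800)
# => 1_700_000_000
```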

GET /api/v1/export

Export raw points in VictoriaMetrics JSON line format (one line per series).

curl 'http://localhost:8428/api/v1/export?metric=cpu_usage&host=web-1&from=-1h'

Response (newline-delimited JSON):

{"metric":{"__name__":"cpu_usage","host":"web-1"},"values":[73.2,74.1],"timestamps":[1700000000,1700000060]}

GET /api/v1/query

Latest value for matching series.

curl 'http://localhost:8428/api/v1/query?metric=cpu_usage&host=web-1'

Response (single series):

{"labels":{"host":"web-1"},"timestamp":1700000060,"value":74.1}

Response (multiple series):

{"data":[
  {"labels":{"host":"web-1"},"timestamp":1700000060,"value":74.1},
  {"labels":{"host":"web-2"},"timestamp":1700000060,"value":81.0}
]}

GET /api/v1/query_range

Range query with time-bucketed aggregation.

Parameter | Default | Description
step | 60 | Bucket size in seconds
aggregate | avg | One of: avg, min, max, sum, count, last, first, rate

curl 'http://localhost:8428/api/v1/query_range?metric=cpu_usage&from=-6h&step=300&aggregate=max'

Response:

{
  "metric": "cpu_usage",
  "series": [
    {
      "labels": {"host": "web-1"},
      "data": [[1700000000, 73.5], [1700000300, 91.2]]
    },
    {
      "labels": {"host": "web-2"},
      "data": [[1700000000, 81.0], [1700000300, 79.3]]
    }
  ]
}

Prometheus-Compatible Endpoints

GET /prometheus/api/v1/query_range

Grafana-compatible Prometheus query_range endpoint. Supports simple PromQL selectors (metric name with optional label matchers).

Parameter | Description
query | PromQL selector, e.g. cpu_usage{host="web-1"}
start | Start timestamp (unix seconds, float)
end | End timestamp (unix seconds, float)
step | Step in seconds or duration string (60s, 5m, 1h)

curl 'http://localhost:8428/prometheus/api/v1/query_range?query=cpu_usage{host="web-1"}&start=1700000000&end=1700003600&step=60'

Response (Prometheus API format):

{
  "status": "success",
  "data": {
    "resultType": "matrix",
    "result": [
      {
        "metric": {"__name__": "cpu_usage", "host": "web-1"},
        "values": [[1700000000, "73.2"], [1700000060, "74.1"]]
      }
    ]
  }
}

Grafana data source configuration:

Set Prometheus URL to http://timeless:8428/prometheus and queries will work with standard PromQL selectors.

Series Discovery Endpoints

GET /api/v1/label/__name__/values

List all metric names.

curl http://localhost:8428/api/v1/label/__name__/values
{"status":"success","data":["cpu_usage","mem_usage","disk_io"]}

GET /api/v1/label/:name/values?metric=<metric>

List distinct values for a label key.

curl 'http://localhost:8428/api/v1/label/host/values?metric=cpu_usage'
{"status":"success","data":["web-1","web-2","web-3"]}

GET /api/v1/series?metric=<metric>

List all series (label combinations) for a metric.

curl 'http://localhost:8428/api/v1/series?metric=cpu_usage'
{"status":"success","data":[{"labels":{"host":"web-1"}},{"labels":{"host":"web-2"}}]}

Metadata Endpoints

POST /api/v1/metadata

Register or update metric metadata.

curl -X POST http://localhost:8428/api/v1/metadata -d '{
  "metric": "cpu_usage",
  "type": "gauge",
  "unit": "%",
  "description": "CPU utilization percentage"
}'

Field | Required | Values
metric | yes | Metric name
type | yes | gauge, counter, or histogram
unit | no | Unit string (e.g., %, bytes, ms)
description | no | Human-readable description

GET /api/v1/metadata?metric=<metric>

Get metadata for a metric. Returns default values (type: gauge) if none registered.

curl 'http://localhost:8428/api/v1/metadata?metric=cpu_usage'
{"metric":"cpu_usage","type":"gauge","unit":"%","description":"CPU utilization percentage"}

Annotation Endpoints

Annotations are event markers (deploys, incidents, maintenance windows) that overlay on charts as dashed vertical lines with labels.

POST /api/v1/annotations

curl -X POST http://localhost:8428/api/v1/annotations -d '{
  "title": "Deploy v2.1",
  "description": "Rolled out new caching layer",
  "tags": ["deploy", "prod"],
  "timestamp": 1700000000
}'

Field | Required | Default | Description
title | yes | - | Short annotation title
description | no | null | Longer description
tags | no | [] | List of tag strings for filtering
timestamp | no | current time | Unix seconds

Response: 201 {"id": 1, "status": "created"}

GET /api/v1/annotations

Query annotations in a time range.

Parameter | Default | Description
from | 24h ago | Start timestamp
to | now | End timestamp
tags | (all) | Comma-separated tag filter (matches any)

curl 'http://localhost:8428/api/v1/annotations?from=-7d&tags=deploy,incident'
{
  "data": [
    {"id": 1, "timestamp": 1700000000, "title": "Deploy v2.1", "description": "...", "tags": ["deploy", "prod"]}
  ]
}

DELETE /api/v1/annotations/:id

curl -X DELETE http://localhost:8428/api/v1/annotations/1

Alert Endpoints

POST /api/v1/alerts

Create an alert rule.

curl -X POST http://localhost:8428/api/v1/alerts -d '{
  "name": "High CPU",
  "metric": "cpu_usage",
  "condition": "above",
  "threshold": 90.0,
  "labels": {"host": "web-1"},
  "duration": 300,
  "aggregate": "avg",
  "webhook_url": "http://hooks.example.com/alert"
}'

Field | Required | Default | Description
name | yes | - | Alert name
metric | yes | - | Metric to monitor
condition | yes | - | above or below
threshold | yes | - | Numeric threshold
labels | no | {} | Label filter (empty = all series)
duration | no | 0 | Seconds the value must breach before firing
aggregate | no | avg | Aggregate function for evaluation
webhook_url | no | null | URL to POST on state transitions

Response: 201 {"id": 1, "status": "created"}

GET /api/v1/alerts

List all alert rules with current state.

curl http://localhost:8428/api/v1/alerts

DELETE /api/v1/alerts/:id

curl -X DELETE http://localhost:8428/api/v1/alerts/1

Charts and Dashboard

GET /chart

Render an SVG line chart. See SVG Charts for full details.

GET /

Auto-generated HTML dashboard with all metrics, alert badges, and auto-refresh.

Parameter | Default | Description
from | -1h | Time range start
to | now | Time range end
Any other param | - | Label filter

http://localhost:8428/?from=-6h&host=web-1

Health Check

GET /health

curl http://localhost:8428/health
{
  "status": "ok",
  "series": 1000,
  "points": 5000000,
  "storage_bytes": 4194304,
  "buffer_points": 1234,
  "bytes_per_point": 0.78
}

SVG Charts

TimelessMetrics generates pure SVG line charts with no JavaScript or external dependencies. Charts can be embedded anywhere that renders images: HTML <img> tags, markdown, emails, Slack, notebooks, etc.

Embedding Charts

HTML <img> tag

<img src="http://localhost:8428/chart?metric=cpu_usage&from=-6h" />

Markdown

![CPU Usage](http://localhost:8428/chart?metric=cpu_usage&from=-6h&theme=light)

Multi-series comparison

<img src="http://localhost:8428/chart?metric=cpu_usage&from=-24h&step=300&aggregate=max&width=1000&height=400" />

All series matching the label filter appear as separate colored lines with an auto-generated legend.

Chart Parameters

Parameter | Default | Description
metric | required | Metric name
from / start | 1h ago | Start time (unix seconds or relative: -1h, -7d)
to / end | now | End time
step | auto | Bucket size in seconds; auto-computed from the range if omitted (~200 buckets)
aggregate | avg | Aggregation: avg, min, max, sum, count, last, first, rate
width | 800 | SVG width in pixels
height | 300 | SVG height in pixels
theme | auto | light, dark, or auto
Any other param | - | Label filter (e.g., host=web-1)

Themes

Theme | Behavior
auto | Uses the CSS prefers-color-scheme media query; renders correctly in both light and dark contexts without server-side knowledge of the viewer's preference
light | White background, dark text
dark | Dark gray background, light text

Chart Features

  • Multi-series: All matching series render as separate colored lines (up to 8 colors, cycling)
  • Auto-legend: When multiple series are present, a legend appears at the bottom using the most-varying label key
  • Annotation markers: Dashed amber vertical lines for any annotations in the time range, with title labels
  • Smart axes: Y-axis uses "nice" tick values (1, 2, 5 multiples); X-axis snaps to clean time intervals
  • Value formatting: Large values shown as 1.5K, 2.3M
  • Time formatting: Adapts to range — HH:MM for <1 day, Mon HH:MM for <1 week, M/D for longer ranges
  • Empty state: Renders a clean "No data" placeholder when no points match
  • Cache-friendly: Responses include Cache-Control: public, max-age=60

Programmatic Usage

The chart module can be used directly from Elixir without the HTTP server:

{:ok, results} = TimelessMetrics.query_aggregate_multi(:metrics, "cpu_usage", %{},
  from: now - 3600,
  to: now,
  bucket: {60, :seconds},
  aggregate: :avg)

{:ok, annotations} = TimelessMetrics.annotations(:metrics, now - 3600, now)

svg = TimelessMetrics.Chart.render("cpu_usage", results,
  width: 800,
  height: 300,
  theme: :dark,
  annotations: annotations)

File.write!("chart.svg", svg)

Configuration

Store Options

When starting TimelessMetrics as a library:

{TimelessMetrics,
  name: :metrics,
  data_dir: "/var/lib/metrics",
  schema: MyApp.MetricsSchema,
  raw_retention_seconds: 14 * 86_400,
  daily_retention_seconds: 365 * 86_400,
  ingest_workers: 8,
  alert_interval: :timer.seconds(30),
  self_monitor: true}

Schema and Rollup Tiers

Define custom rollup tiers:

defmodule MyApp.MetricsSchema do
  use TimelessMetrics.Schema

  raw_retention {7, :days}
  rollup_interval {5, :minutes}
  retention_interval {1, :hours}

  tier :hourly,
    resolution: :hour,
    aggregates: [:avg, :min, :max, :count, :sum, :last],
    retention: {30, :days}

  tier :daily,
    resolution: :day,
    aggregates: [:avg, :min, :max, :count, :sum, :last],
    retention: {365, :days}

  tier :monthly,
    resolution: {30, :days},
    aggregates: [:avg, :min, :max, :count, :sum, :last],
    retention: :forever
end

Default schema (used when no :schema option is provided):

  • Raw retention: 7 days
  • Hourly rollups retained 30 days
  • Daily rollups retained 365 days
  • Monthly rollups retained forever
  • Rollup runs every 5 minutes
  • Retention enforced every hour

Environment Variables (Container)

When running as a standalone container, these environment variables configure the store:

Variable | Default | Description
TIMELESS_DATA_DIR | /data | Storage directory (mount a volume here)
TIMELESS_PORT | 8428 | HTTP listen port
TIMELESS_BEARER_TOKEN | (none) | Bearer token for API auth (unset = no auth)

podman run -d \
  -p 8428:8428 \
  -v timeless_data:/data:Z \
  -e TIMELESS_BEARER_TOKEN=my-secret \
  localhost/timeless:latest