glimr_redis/cache/cache

Redis Cache Operations

The framework’s unified cache module defines the composite operations — pull, remember, and the JSON helpers — all built on top of eight simple primitives that each backend must provide. This module is the Redis implementation of those primitives. Adding composite logic here would mean reimplementing it for every backend (ETS, SQLite, etc.), so we deliberately keep this layer thin and let the framework do the heavy lifting.

Values

pub fn flush(pool: pool.Pool) -> Result(Nil, cache.CacheError)

The tempting approach is KEYS glimr:cache:* followed by DEL, but KEYS scans the entire keyspace in one shot and blocks Redis while it does it. On a production instance with millions of keys, that can freeze every other client for seconds. SCAN does the same work in small batches, giving Redis a chance to serve other requests between each batch.
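The batched-deletion pattern can be sketched with a small in-memory model (Python for illustration only — the function name, batch size, and dict-based keyspace are stand-ins, not this module's actual Gleam code):

```python
from fnmatch import fnmatch

def flush_in_batches(keyspace: dict, pattern: str, batch_size: int = 500) -> int:
    """Model of SCAN + DEL: walk the keyspace in small batches, so a real
    Redis server could answer other clients between batches instead of
    blocking on one giant KEYS call."""
    deleted = 0
    snapshot = list(keyspace)  # like a SCAN cursor walk over the keyspace
    for start in range(0, len(snapshot), batch_size):
        batch = [k for k in snapshot[start:start + batch_size]
                 if fnmatch(k, pattern)]
        for key in batch:
            keyspace.pop(key, None)  # DEL: safe even if the key vanished
            deleted += 1
    return deleted
```

Only keys matching the cache prefix are touched, so unrelated data sharing the same Redis instance survives a flush.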

pub fn forget(
  pool: pool.Pool,
  key: String,
) -> Result(Nil, cache.CacheError)

Returning Ok even when the key doesn’t exist is intentional — if you had to check has before calling forget, another request could delete the key between your check and your delete, causing a spurious error. Making it idempotent means callers can just fire and forget without worrying about race conditions.
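The idempotent contract amounts to very little code — here modeled in Python against a dict-based keyspace (an illustration, not the module's Gleam source):

```python
def forget(keyspace: dict, key: str) -> None:
    # Deleting a missing key succeeds silently (dict.pop with a default
    # mirrors DEL returning 0), so there is no check-then-delete window
    # for another request to race into.
    keyspace.pop(key, None)
```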

pub fn get(
  pool: pool.Pool,
  key: String,
) -> Result(String, cache.CacheError)

Valkyrie returns its own error type when a key is missing, but the framework’s CachePool expects NotFound specifically. If we passed Valkyrie errors through directly, the framework’s remember and pull functions wouldn’t know whether the key was missing or the connection was broken.
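The translation can be modeled like this (Python sketch with hypothetical error names — the real module maps Valkyrie's result type to the framework's cache.CacheError instead):

```python
class CacheError(Exception): pass
class NotFound(CacheError): pass

def get(keyspace: dict, key: str) -> str:
    # The client's "no such key" result is normalized to the framework's
    # NotFound; a connection failure would surface as a different error,
    # so remember/pull can tell a cache miss from a dead connection.
    value = keyspace.get(key)
    if value is None:
        raise NotFound(key)
    return value
```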

pub fn has(pool: pool.Pool, key: String) -> Bool

The obvious way to check if a key exists is to GET it and see if you got a value back — but that transfers the entire value over the network for nothing. For a 50KB cached JSON blob, that’s a lot of wasted bandwidth just to learn “yes, it’s there.” EXISTS returns a simple count without touching the value at all.
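In model form the distinction is tiny but the saving is real (Python illustration; the dict stands in for the Redis keyspace):

```python
def has(keyspace: dict, key: str) -> bool:
    # EXISTS-style check: pure membership, so the (possibly large) value
    # is never read or sent over the wire.
    return key in keyspace
```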

pub fn increment(
  pool: pool.Pool,
  key: String,
  by: Int,
) -> Result(Int, cache.CacheError)

Incrementing a counter by doing GET, parsing the string to an int, adding, and SET back is both slow and broken under concurrency — two requests could read the same value and both write back the same increment, losing one update. Redis INCRBY does the whole thing atomically on the server side, which is essential for rate limiters and hit counters.
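The lost update is easy to see in a model (Python for illustration — on a real server the INCRBY read-add-write happens atomically, which the single-threaded model can only note in a comment):

```python
def incrby(keyspace: dict, key: str, by: int) -> int:
    # Models INCRBY: on the server this read-add-write is one atomic
    # operation, with no gap for another client to interleave.
    new_value = int(keyspace.get(key, "0")) + by
    keyspace[key] = str(new_value)
    return new_value

# The naive GET / parse / SET cycle loses updates under interleaving:
counts = {"hits": "10"}
a = int(counts["hits"])      # request A reads 10
b = int(counts["hits"])      # request B reads 10 before A writes back
counts["hits"] = str(a + 1)  # A writes 11
counts["hits"] = str(b + 1)  # B also writes 11 -- one hit is lost
```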

pub fn put(
  pool: pool.Pool,
  key: String,
  value: String,
  ttl_seconds: Int,
) -> Result(Nil, cache.CacheError)

A naive approach would be to SET the key and then call EXPIRE separately — but that leaves a brief window where the key exists without a TTL. If the app crashes between the two calls, that key lives in Redis forever, slowly leaking memory. Passing the TTL as part of SetOptions makes it a single atomic operation.
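A model of the single-write shape (Python sketch; storing a (value, deadline) tuple stands in for SET with its expiry option):

```python
import time

def put(keyspace: dict, key: str, value: str, ttl_seconds: int) -> None:
    # Value and deadline recorded in one write, modeling SET with EX:
    # there is no moment where the key exists without an expiry attached.
    keyspace[key] = (value, time.monotonic() + ttl_seconds)
```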

pub fn put_forever(
  pool: pool.Pool,
  key: String,
  value: String,
) -> Result(Nil, cache.CacheError)

You might think this could just call put with a very large TTL, but that’s subtly wrong — a key with a TTL of 100 years will still show a TTL when you inspect it, and code that checks “does this key expire?” would get the wrong answer. Omitting the expiry option entirely creates a truly persistent key with no TTL metadata.
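The difference shows up when the key is inspected — modeled here with a TTL check that mirrors Redis's convention of returning -1 for "exists, never expires" (Python illustration, not the module's code):

```python
def put_forever(keyspace: dict, key: str, value: str) -> None:
    # No expiry metadata at all -- not merely a "very large TTL".
    keyspace[key] = (value, None)

def ttl(keyspace: dict, key: str, now: float) -> int:
    # Mirrors Redis TTL: -1 means the key exists and never expires.
    _value, deadline = keyspace[key]
    return -1 if deadline is None else max(0, int(deadline - now))
```

Code asking "does this key expire?" gets a clean -1 for a persistent key, where a 100-year TTL would instead report an enormous remaining lifetime.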