glimr/cache/cache

Cache Unified API

Every cache backend (Redis, file, SQLite) has its own way of talking to storage, but application code shouldn’t care which one is active. This module defines the CachePool type that all backends produce, plus composite operations like remember and JSON helpers that are written once here instead of being reimplemented in every backend.

Types

Having typed error variants instead of a generic string error lets callers respond differently to each situation. For example, remember_json treats SerializationError as “the cached format changed, recompute” but treats ConnectionError as “something is actually broken, bail out.” A flat string error would force callers to parse messages to figure out what happened.

pub type CacheError {
  NotFound
  SerializationError(message: String)
  ConnectionError(message: String)
  Expired
}

Constructors

  • NotFound

    The key doesn’t exist or has expired

  • SerializationError(message: String)

    The cached value couldn’t be encoded or decoded — often means the data shape changed between deployments

  • ConnectionError(message: String)

    The backend is unreachable or returned an unexpected error — usually a network or permissions issue

  • Expired

    The entry existed but has expired — used internally by backends that do lazy expiration
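
Branching on these variants might look like the following sketch. The key name, the Profile lookup, and the "unknown" fallback are illustrative, not part of this module:

```gleam
import glimr/cache/cache

// Hypothetical lookup that degrades gracefully per error variant.
fn cached_user_name(pool: cache.CachePool) -> String {
  case cache.get(pool, "user:42:name") {
    Ok(name) -> name
    // Missing or lazily expired: fall back to a recompute path.
    Error(cache.NotFound) | Error(cache.Expired) -> "unknown"
    // Backend unreachable: could be logged or surfaced instead.
    Error(cache.ConnectionError(_)) -> "unknown"
    // Cached bytes no longer decode: treat as a miss.
    Error(cache.SerializationError(_)) -> "unknown"
  }
}
```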

The trick that makes multiple backends work is closures: each backend captures its own connection pool inside these functions at startup. Application code never imports Redis or SQLite modules directly; it just calls cache.get() and the right thing happens. Because the type is opaque, you can't construct a CachePool without going through new_pool, which ensures all 8 operations are wired up.

pub opaque type CachePool

Values

pub fn decrement(
  pool: CachePool,
  key: String,
  by: Int,
) -> Result(Int, CacheError)

Equivalent to increment with a negated amount, but a named function makes code like "decrement remaining attempts" read naturally, whereas increment(key, -1) looks like a bug at first glance.
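
A sketch of the "remaining attempts" pattern. The key scheme and the fail-open policy are assumptions for illustration:

```gleam
import glimr/cache/cache

// Hypothetical login throttle: spend one attempt, check what's left.
fn spend_attempt(pool: cache.CachePool, user_id: String) -> Bool {
  case cache.decrement(pool, "attempts:" <> user_id, 1) {
    Ok(remaining) -> remaining >= 0
    // If the cache is down, fail open rather than locking users out.
    Error(_) -> True
  }
}
```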

pub fn flush(pool: CachePool) -> Result(Nil, CacheError)

Wipes everything in this cache pool. Backends scope this to the pool’s prefix, so flushing one pool won’t touch keys belonging to other pools or other applications sharing the same storage.

pub fn forget(
  pool: CachePool,
  key: String,
) -> Result(Nil, CacheError)

Idempotent by design — deleting a key that doesn’t exist still returns Ok. This prevents race conditions where two concurrent requests both try to invalidate the same key and one of them would get a spurious error.
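
Because forget is idempotent, invalidation code can discard the result without worrying about whether another request got there first. The key naming here is illustrative:

```gleam
import glimr/cache/cache

// Invalidate a cached profile after an update. Safe to call even if
// the key was never cached or a concurrent request already deleted it.
fn invalidate_profile(pool: cache.CachePool, user_id: String) -> Nil {
  let _ = cache.forget(pool, "profile:" <> user_id)
  Nil
}
```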

pub fn get(
  pool: CachePool,
  key: String,
) -> Result(String, CacheError)

Fetches the raw cached string for a key, returning NotFound on a miss. All the core operations are thin delegates to the closures inside CachePool. The indirection means app code imports glimr/cache/cache and never touches backend modules — swapping from Redis to file caching is a config change, not a code change.

pub fn get_json(
  pool: CachePool,
  key: String,
  decoder: decode.Decoder(a),
) -> Result(a, CacheError)

Fetches a cached string and runs it through a JSON decoder in one step. If the cached value doesn’t match the decoder, you get a SerializationError rather than a generic parse failure — which lets remember_json distinguish “stale format, recompute” from “the backend is broken.”
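
A sketch of reading a typed value back out. The Profile type and its decoder are hypothetical, and the decoder assumes the gleam/dynamic/decode API:

```gleam
import gleam/dynamic/decode
import glimr/cache/cache

pub type Profile {
  Profile(name: String, age: Int)
}

fn profile_decoder() -> decode.Decoder(Profile) {
  use name <- decode.field("name", decode.string)
  use age <- decode.field("age", decode.int)
  decode.success(Profile(name:, age:))
}

fn load_profile(pool: cache.CachePool, id: String) {
  // NotFound: never cached. SerializationError: cached under an old shape.
  cache.get_json(pool, "profile:" <> id, profile_decoder())
}
```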

pub fn has(pool: CachePool, key: String) -> Bool

Useful when you need to know if something is cached without actually fetching the value — like checking if a rate limit key exists. Some backends (Redis) can do this without transferring the value over the network, saving bandwidth on large cached blobs.

pub fn increment(
  pool: CachePool,
  key: String,
  by: Int,
) -> Result(Int, CacheError)

Atomic increment is essential for things like rate limiters and hit counters — doing get/parse/add/set manually would lose updates under concurrency. Starting from 0 when the key doesn’t exist means callers don’t need to initialize counters before using them.
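
The counter semantics support a simple fixed-window rate limiter. The limit, key scheme, and fail-open policy below are assumptions; how the counter's TTL window gets set depends on the backend:

```gleam
import glimr/cache/cache

// Sketch: allow at most 100 requests per key per window.
fn allow_request(pool: cache.CachePool, ip: String) -> Bool {
  case cache.increment(pool, "rate:" <> ip, 1) {
    // The counter starts from 0, so the first hit returns 1 with no
    // initialization step needed.
    Ok(count) -> count <= 100
    // Fail open if the backend is unreachable.
    Error(_) -> True
  }
}
```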

pub fn pull(
  pool: CachePool,
  key: String,
) -> Result(String, CacheError)

Get-then-delete in one call. Perfect for one-time tokens like email verification codes or CSRF tokens — you want to read the value and ensure it can never be used again. Doing this as two separate calls would let another request sneak in and read the token before it’s deleted.
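
The one-time-token use case might look like this sketch, with an illustrative key scheme:

```gleam
import glimr/cache/cache

// One-shot verification: the token can only ever be read once,
// because pull deletes it as part of the same call.
fn verify_email_token(pool: cache.CachePool, token: String) -> Bool {
  case cache.pull(pool, "verify:" <> token) {
    Ok(_user_id) -> True
    // Missing, already used, or backend error: reject.
    Error(_) -> False
  }
}
```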

pub fn put(
  pool: CachePool,
  key: String,
  value: String,
  ttl_seconds: Int,
) -> Result(Nil, CacheError)

Caching without expiration is a memory leak waiting to happen, so this requires an explicit TTL. If you genuinely want permanent storage, use put_forever — the separate function name makes that a conscious decision rather than an accidental omission.
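
A minimal sketch of the required-TTL shape; the key and the five-minute TTL are illustrative:

```gleam
import glimr/cache/cache

// Cache a rendered fragment for five minutes. The explicit TTL makes
// the expiry decision visible at the call site.
fn cache_fragment(pool: cache.CachePool, html: String) {
  cache.put(pool, "fragment:sidebar", html, 300)
}
```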

pub fn put_forever(
  pool: CachePool,
  key: String,
  value: String,
) -> Result(Nil, CacheError)

Some things genuinely need to live forever — like feature flags or configuration lookups that only change on deploy. Having a separate function instead of put() with ttl=0 makes it obvious in code reviews that the author intended permanent storage.

pub fn put_json(
  pool: CachePool,
  key: String,
  value: a,
  encoder: fn(a) -> json.Json,
  ttl_seconds: Int,
) -> Result(Nil, CacheError)

Encodes a value to JSON and stores the resulting string. Keeping serialization here means callers don’t need to manually call json.to_string before every put — and more importantly, get_json and put_json always agree on the format because they both go through the same path.
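
A sketch of storing a typed value. The Settings type and its encoder are hypothetical; any fn(a) -> json.Json works:

```gleam
import gleam/json
import glimr/cache/cache

pub type Settings {
  Settings(theme: String, per_page: Int)
}

fn encode_settings(s: Settings) -> json.Json {
  json.object([
    #("theme", json.string(s.theme)),
    #("per_page", json.int(s.per_page)),
  ])
}

fn save_settings(pool: cache.CachePool, s: Settings) {
  // Stored as a JSON string; get_json with a matching decoder reads
  // it back, so both sides agree on the format.
  cache.put_json(pool, "settings:current", s, encode_settings, 3600)
}
```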

pub fn put_json_forever(
  pool: CachePool,
  key: String,
  value: a,
  encoder: fn(a) -> json.Json,
) -> Result(Nil, CacheError)

Same as put_json but without expiration — for config lookups, feature flags, or any JSON structure you want cached until an explicit forget or flush clears it.

pub fn remember(
  pool: CachePool,
  key: String,
  ttl_seconds: Int,
  compute: fn() -> String,
) -> String

Deprecated: Use try_remember, which takes a Result-returning compute callback and does not cache errors.

The bread and butter of caching — try the cache first, and if it’s a miss, run an expensive computation (database query, API call, etc.) and store the result for next time. Returns the value directly, no Result to unwrap. If the cache backend is down, it just calls compute every time — your app stays working, it’s just slower.

pub fn remember_forever(
  pool: CachePool,
  key: String,
  compute: fn() -> String,
) -> String

Deprecated: Use try_remember_forever, which takes a Result-returning compute callback and does not cache errors.

Same as remember but the cached result never expires. Good for values that are expensive to compute but rarely change — like a site’s navigation menu built from the database. The only way to refresh these is an explicit forget() or flush().

pub fn remember_json(
  pool: CachePool,
  key: String,
  ttl_seconds: Int,
  decoder: decode.Decoder(a),
  encoder: fn(a) -> json.Json,
  compute: fn() -> a,
) -> a

Deprecated: Use try_remember_json, which takes a Result-returning compute callback and does not cache errors.

When you deploy a new version that changes a type’s fields, the old cached JSON no longer matches your decoder. Rather than returning a cryptic decode error, this treats SerializationError the same as NotFound — it recomputes the value, caches the new format, and life goes on. No need to manually flush the cache after every deploy.

pub fn remember_json_forever(
  pool: CachePool,
  key: String,
  decoder: decode.Decoder(a),
  encoder: fn(a) -> json.Json,
  compute: fn() -> a,
) -> a

Deprecated: Use try_remember_json_forever, which takes a Result-returning compute callback and does not cache errors.

Same as remember_json but the cached result never expires. Good for things like a site’s configuration or navigation tree that are expensive to build from the database but change so rarely that TTL-based expiry would just waste computation. The only way to refresh is an explicit forget() or flush().

pub fn stop(pool: CachePool) -> Nil

Console commands create temporary pools that should be cleaned up when they’re done. Without explicit shutdown, those connections would sit open until the process exits — which could exhaust connection limits if someone runs several CLI commands in quick succession.

pub fn try_remember(
  pool: CachePool,
  key: String,
  ttl_seconds: Int,
  compute: fn() -> Result(String, e),
) -> Result(String, e)

The bread and butter of caching — try the cache first, and if it’s a miss, run a fallible computation (database query, API call, etc.). On Ok(value) the result gets cached and returned; on Error(e) the error is propagated untouched and nothing is written to the cache. Errors are never cached, so transient failures won’t poison the TTL.
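
The cache-or-compute pattern then looks like this sketch, where the key, TTL, and computed value stand in for a real database query:

```gleam
import glimr/cache/cache

// Hypothetical expensive lookup wrapped in try_remember.
fn load_greeting(pool: cache.CachePool) -> Result(String, String) {
  cache.try_remember(pool, "greeting", 60, fn() {
    // Runs only on a miss. Ok is cached for 60 seconds; Error is
    // returned uncached, so the next request retries the computation.
    Ok("hello from the database")
  })
}
```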

pub fn try_remember_forever(
  pool: CachePool,
  key: String,
  compute: fn() -> Result(String, e),
) -> Result(String, e)

Same as try_remember but the cached result never expires. Good for values that are expensive to compute but rarely change — like a site’s navigation menu built from the database. The only way to refresh these is an explicit forget() or flush().

pub fn try_remember_json(
  pool: CachePool,
  key: String,
  ttl_seconds: Int,
  decoder: decode.Decoder(a),
  encoder: fn(a) -> json.Json,
  compute: fn() -> Result(a, e),
) -> Result(a, e)

The JSON remember pattern with explicit error propagation. On a cache hit, returns Ok(value). On a miss, runs the compute callback; Ok(value) is cached and returned, while Error(e) is passed through without touching the cache. The decoder and encoder operate on the success type a because the error branch is never serialized.
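
Putting the pieces together, a sketch with a hypothetical Nav type (the decoder assumes the gleam/dynamic/decode API, and the items stand in for a database query):

```gleam
import gleam/dynamic/decode
import gleam/json
import glimr/cache/cache

pub type Nav {
  Nav(items: List(String))
}

fn nav_decoder() -> decode.Decoder(Nav) {
  use items <- decode.field("items", decode.list(decode.string))
  decode.success(Nav(items:))
}

fn encode_nav(nav: Nav) -> json.Json {
  json.object([#("items", json.array(nav.items, json.string))])
}

fn load_nav(pool: cache.CachePool) -> Result(Nav, String) {
  cache.try_remember_json(pool, "nav", 600, nav_decoder(), encode_nav, fn() {
    // Built from the database on a miss; Error values are never cached.
    Ok(Nav(items: ["Home", "Blog", "About"]))
  })
}
```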

pub fn try_remember_json_forever(
  pool: CachePool,
  key: String,
  decoder: decode.Decoder(a),
  encoder: fn(a) -> json.Json,
  compute: fn() -> Result(a, e),
) -> Result(a, e)

Same as try_remember_json but the cached result never expires. Good for things like a site’s configuration or navigation tree that are expensive to build from the database but change so rarely that TTL-based expiry would just waste computation. The only way to refresh is an explicit forget() or flush().
