glimr/cache/database

Database Cache Backend

Sometimes you don’t want to run Redis just for caching — if you already have PostgreSQL or SQLite, why not use it? This backend stores cached values in a regular database table, with the same API as every other backend. Both Postgres and SQLite share the exact same SQL thanks to db’s placeholder conversion; the only difference is the integer type for the expiration column.

Functions

pub fn cleanup_expired(
  db_pool: db.DbPool,
  table: String,
) -> Result(Nil, cache.CacheError)

Unlike Redis, where keys expire automatically via their TTLs, expired database rows just sit there until something deletes them; without periodic cleanup, the cache table grows without bound. The session GC calls this for you, and it can also be hooked into a scheduled task to keep the table lean.
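A sketch of wiring this into a recurring job; the surrounding function, the table name, and the logging are illustrative, only cleanup_expired comes from this module:

```gleam
import gleam/io
import glimr/cache/database

pub fn run_cache_cleanup(db_pool: db.DbPool) -> Nil {
  // Purge rows whose expiration timestamp has passed.
  // "cache_entries" is an assumed table name for illustration.
  case database.cleanup_expired(db_pool, "cache_entries") {
    Ok(Nil) -> io.println("cache: expired rows purged")
    Error(_) -> io.println("cache: cleanup failed")
  }
}
```

Call this from whatever scheduler the app already runs, e.g. once an hour; deleting already-expired rows is idempotent, so running it more often than necessary is harmless.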

pub fn create_table(
  db_pool: db.DbPool,
  table: String,
) -> Result(Nil, cache.CacheError)

PostgreSQL uses BIGINT for Unix timestamps because a signed 32-bit INTEGER overflows in January 2038, while SQLite's INTEGER is already 64-bit, so it needs no special treatment. Checking the pool driver here means developers don't have to think about this: the right column type is chosen automatically based on which database they're actually using.
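Because the DDL is driver-aware, table setup is a single call at boot or in a migration; a minimal sketch, assuming a table named cache_entries (the name is not from this page):

```gleam
import glimr/cache/database

pub fn migrate_cache(db_pool: db.DbPool) -> Result(Nil, cache.CacheError) {
  // Creates the cache table if needed. The expiration column becomes
  // BIGINT on PostgreSQL and INTEGER on SQLite, picked from the pool's
  // driver, so the same call works unchanged against both databases.
  database.create_table(db_pool, "cache_entries")
}
```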

pub fn start(db_pool: db.DbPool, name: String) -> cache.CachePool

Looks up the named cache store in config, extracts the table name, and wires up a CachePool. This is the one-liner developers use in their main module — they already have a database pool from their app setup, so all they need is the store name to get caching going.
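Putting it together in a main module; a hedged sketch in which obtaining db_pool (via a hypothetical app.db_pool helper) and the "default" store name are assumptions from a typical app setup, not from this page:

```gleam
import glimr/cache/database

pub fn main() {
  // db_pool comes from the app's existing database setup (assumed).
  let db_pool = app.db_pool()

  // One-liner: look up the "default" cache store in config and get back
  // a CachePool with the same API as every other backend.
  let cache_pool = database.start(db_pool, "default")
  // ... hand cache_pool to whatever handlers need caching
}
```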
