glimr/db/migrate

Database Migration Utilities

Both PostgreSQL and SQLite adapters need the same migration workflow — load files, diff against the applied set, execute pending ones, and record the results. Centralising that logic here avoids duplicating file-parsing and tracking-table code in each driver adapter while still branching on SQL dialect differences like TIMESTAMP vs TEXT.

Types

A migration is identified by its version (timestamp prefix) and carries the raw SQL to execute. The version doubles as the sort key for execution order and the primary key in the tracking table.

pub type Migration {
  Migration(version: String, name: String, sql: String)
}

Constructors

  • Migration(version: String, name: String, sql: String)

Values

pub fn apply_pending(
  conn: db.Connection,
  pending: List(Migration),
) -> Result(List(String), db.DbError)

Stopping on the first error leaves the database in a known state — all migrations up to the failure are applied and recorded, so re-running picks up exactly where it left off.
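The stop-on-first-error behaviour can be sketched as a recursive loop that records each version as it goes. This is a minimal illustration, not the actual implementation: the helpers run_migration and record_migration are hypothetical names for "execute the SQL" and "insert into the tracking table".

```gleam
import gleam/list
import gleam/result

// Sketch only: run each pending migration in order, recording its
// version, and return early with the error on the first failure.
// run_migration/record_migration are assumed helpers, not real API.
fn apply_loop(
  conn: db.Connection,
  pending: List(Migration),
  done: List(String),
) -> Result(List(String), db.DbError) {
  case pending {
    [] -> Ok(list.reverse(done))
    [migration, ..rest] -> {
      use _ <- result.try(run_migration(conn, migration))
      use _ <- result.try(record_migration(conn, migration.version))
      apply_loop(conn, rest, [migration.version, ..done])
    }
  }
}
```

Because a version is recorded immediately after its SQL succeeds, a failed run leaves every earlier migration both applied and tracked, so the next run resumes at the failing file.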

pub fn ensure_table(
  conn: db.Connection,
) -> Result(Nil, db.DbError)

The tracking table must exist before any migration can be recorded. Using CREATE TABLE IF NOT EXISTS makes this safe to call on every run. The type of the applied_at column branches on the driver because PostgreSQL supports TIMESTAMP natively while SQLite stores dates as TEXT.
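The dialect branch can be pictured as selecting a column definition per driver. A minimal sketch, assuming a driver accessor and variants named db.Postgres and db.Sqlite, plus a tracking table called schema_migrations — all of these names are illustrative, not confirmed by this module's API.

```gleam
// Sketch of the dialect branch described above. The driver variants
// and the schema_migrations table name are assumptions.
fn tracking_table_sql(conn: db.Connection) -> String {
  let applied_at = case db.driver(conn) {
    db.Postgres -> "applied_at TIMESTAMP NOT NULL DEFAULT NOW()"
    db.Sqlite -> "applied_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP"
  }
  "CREATE TABLE IF NOT EXISTS schema_migrations ("
  <> "version TEXT PRIMARY KEY, "
  <> applied_at
  <> ")"
}
```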

pub fn extract_sql(sql: String) -> String

Migration files contain header comments added by the generator (driver tag, timestamp). Stripping them before execution avoids sending comment-only lines to drivers that might choke on leading -- lines in multi-statement strings.
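One plausible way to strip the generator's header is to drop leading comment lines until the first real statement. This is a sketch of the idea, not necessarily how extract_sql is implemented:

```gleam
import gleam/list
import gleam/string

// Drop leading `--` comment lines (and surrounding whitespace),
// keeping the SQL body intact. Illustrative only.
fn strip_header_comments(sql: String) -> String {
  sql
  |> string.split("\n")
  |> list.drop_while(fn(line) {
    string.starts_with(string.trim(line), "--")
  })
  |> string.join("\n")
  |> string.trim
}
```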

pub fn get_applied(
  conn: db.Connection,
) -> Result(List(String), db.DbError)

The pending-migration filter needs to know which versions are already in the database. Returning them sorted keeps the output deterministic for logging and debugging.
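The deterministic ordering is just a lexicographic sort of the version strings, which works because versions are fixed-width timestamp prefixes. A sketch:

```gleam
import gleam/list
import gleam/string

// Sort applied versions lexicographically; timestamp prefixes make
// this equivalent to chronological order.
fn sort_versions(versions: List(String)) -> List(String) {
  list.sort(versions, by: string.compare)
}
```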

pub fn get_pending_migrations(
  all: List(Migration),
  applied: List(String),
) -> List(Migration)

Re-running migrations must be idempotent, so filtering out already-applied versions before execution prevents duplicate DDL statements that would fail or corrupt the schema.
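The filter itself is a set difference by version. A minimal sketch using only the published Migration type:

```gleam
import gleam/list

// Keep only migrations whose version is not already recorded.
fn pending(
  all: List(Migration),
  applied: List(String),
) -> List(Migration) {
  list.filter(all, fn(migration) {
    !list.contains(applied, migration.version)
  })
}
```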

pub fn load_all_migrations(
  connection_name: String,
) -> Result(List(Migration), String)

Scanning the migrations directory at startup avoids a hard-coded migration registry that must be updated every time a new file is added. Sorting by the version prefix guarantees chronological execution order regardless of filesystem listing order.
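Parsing a discovered filename into a Migration might look like the following. The split rule (version and name separated by the first underscore, e.g. "20240101120000_create_users.sql") is an assumption based on the timestamp-prefix convention described above:

```gleam
import gleam/string

// Sketch: derive version and name from a filename, pairing them with
// the file's SQL contents. The naming convention is assumed.
fn from_filename(
  filename: String,
  sql: String,
) -> Result(Migration, String) {
  let stem = string.replace(filename, ".sql", "")
  case string.split_once(stem, "_") {
    Ok(#(version, name)) -> Ok(Migration(version, name, sql))
    Error(Nil) -> Error("Invalid migration filename: " <> filename)
  }
}
```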
