```gleam
import dream_test/unit.{describe, it}
import dream_test/assertions/should.{be_error, be_ok, equal, or_fail_with, should}

pub fn tests() {
  describe("Calculator", [
    it("adds two numbers", fn() {
      add(2, 3)
      |> should()
      |> equal(5)
      |> or_fail_with("2 + 3 should equal 5")
    }),
    it("handles division", fn() {
      divide(10, 2)
      |> should()
      |> be_ok()
      |> equal(5)
      |> or_fail_with("10 / 2 should equal 5")
    }),
    it("returns error for division by zero", fn() {
      divide(1, 0)
      |> should()
      |> be_error()
      |> or_fail_with("Division by zero should error")
    }),
  ])
}
```
```
Calculator
  ✓ adds two numbers
  ✓ handles division
  ✓ returns error for division by zero

3 tests, 0 failures
```
Installation
```toml
# gleam.toml
[dev-dependencies]
dream_test = "~> 1.0"
```
Why Dream Test?
| Feature | What you get |
|---|---|
| Parallel by default | Tests run concurrently across all cores—100 tests finish ~4x faster on 4 cores |
| Crash-proof | Each test runs in an isolated BEAM process; one crash doesn’t kill the suite |
| Timeout-protected | Hanging tests get killed automatically; no more stuck CI pipelines |
| Lifecycle hooks | before_all, before_each, after_each, after_all for setup/teardown |
| Gleam-native | Pipe-first assertions that feel natural; no macros, no reflection, no magic |
| Familiar syntax | If you’ve used Jest, RSpec, or Mocha, you already know the basics |
| Type-safe | Your tests are just Gleam code; the compiler catches mistakes early |
| Self-hosting | Dream Test tests itself; we eat our own cooking |
Quick Start
1. Write tests with describe and it
```gleam
// test/my_app_test.gleam
import dream_test/unit.{describe, it, to_test_cases}
import dream_test/runner.{run_all}
import dream_test/reporter/bdd.{report}
import dream_test/assertions/should.{should, equal, or_fail_with}
import gleam/io
import gleam/string

pub fn tests() {
  describe("String utilities", [
    it("trims whitespace", fn() {
      " hello "
      |> string.trim()
      |> should()
      |> equal("hello")
      |> or_fail_with("Should remove surrounding whitespace")
    }),
    it("finds substrings", fn() {
      "hello world"
      |> string.contains("world")
      |> should()
      |> equal(True)
      |> or_fail_with("Should find 'world' in string")
    }),
  ])
}

pub fn main() {
  to_test_cases("my_app_test", tests())
  |> run_all()
  |> report(io.print)
}
```
2. Run with gleam test
```sh
gleam test
```
3. See readable output
```
String utilities
  ✓ trims whitespace
  ✓ finds substrings

2 tests, 0 failures
```
The Assertion Pattern
Every assertion follows the same pattern:
```gleam
value |> should() |> matcher() |> or_fail_with("message")
```
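For instance, with the be_greater_than matcher from the comparison category (shown here with the bound as its single argument, mirroring how equal takes its expected value):

```gleam
// Assert a computed value exceeds a bound
string.length("hello")
|> should()
|> be_greater_than(3)
|> or_fail_with("'hello' should be longer than 3 characters")
```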
Chaining matchers
Matchers can be chained. Each one passes its unwrapped value to the next:
```gleam
// Unwrap Some, then check the value
Some(42)
|> should()
|> be_some()
|> equal(42)
|> or_fail_with("Should contain 42")

// Unwrap Ok, then check the value
Ok("success")
|> should()
|> be_ok()
|> equal("success")
|> or_fail_with("Should be Ok with 'success'")
```
Available matchers
| Category | Matchers |
|---|---|
| Equality | equal, not_equal |
| Boolean | be_true, be_false |
| Option | be_some, be_none |
| Result | be_ok, be_error |
| Collections | contain, not_contain, have_length, be_empty |
| Comparison | be_greater_than, be_less_than, be_at_least, be_at_most, be_between, be_in_range |
| String | start_with, end_with, contain_string |
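A few of these in action. This is a sketch: the argument shape for multi-argument matchers such as be_between is assumed here (bounds passed directly), following the single-value style of equal:

```gleam
// Collection matcher
[1, 2, 3]
|> should()
|> contain(2)
|> or_fail_with("List should contain 2")

// String matcher
"hello world"
|> should()
|> start_with("hello")
|> or_fail_with("Should start with 'hello'")

// Comparison matcher (be_between(min, max) is an assumed signature)
7
|> should()
|> be_between(1, 10)
|> or_fail_with("7 should be between 1 and 10")
```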
Explicit failures
When you need to fail unconditionally:
```gleam
import dream_test/assertions/should.{fail_with}

case result {
  Ok(_) -> fail_with("Should have returned an error")
  Error(_) -> handle_expected_error()
}
```
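In practice this sits inside a test body, with AssertionOk (from dream_test/types, shown in the Lifecycle Hooks section below) marking the passing branch. A sketch, where parse_config is a hypothetical function under test:

```gleam
it("rejects malformed input", fn() {
  // parse_config is a hypothetical function under test
  case parse_config("{not valid") {
    Ok(_) -> fail_with("Malformed input should not parse")
    Error(_) -> AssertionOk
  }
})
```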
Skipping tests
Use skip instead of it to temporarily disable a test:
```gleam
import dream_test/unit.{describe, it, skip}

describe("Feature", [
  it("works correctly", fn() { ... }),
  skip("not implemented yet", fn() { ... }),  // Skipped
  it("handles edge cases", fn() { ... }),
])
```
```
Feature
  ✓ works correctly
  - not implemented yet
  ✓ handles edge cases

Summary: 3 run, 0 failed, 2 passed, 1 skipped
```
The test body is preserved but not executed—just change skip back to it when ready.
Lifecycle Hooks
Dream Test supports four lifecycle hooks for setup and teardown, letting you run code before and after your tests.
```gleam
import dream_test/unit.{describe, it, before_each, after_each, before_all, after_all}
import dream_test/types.{AssertionOk}

describe("Database tests", [
  before_all(fn() {
    start_database()
    AssertionOk
  }),
  before_each(fn() {
    begin_transaction()
    AssertionOk
  }),
  it("creates a user", fn() { ... }),
  it("deletes a user", fn() { ... }),
  after_each(fn() {
    rollback_transaction()
    AssertionOk
  }),
  after_all(fn() {
    stop_database()
    AssertionOk
  }),
])
```
Hook Types
| Hook | Runs | Use case |
|---|---|---|
| before_all | Once before all tests in group | Start services, create temp files |
| before_each | Before each test | Reset state, begin transaction |
| after_each | After each test (even on failure) | Rollback, cleanup temp data |
| after_all | Once after all tests in group | Stop services, remove temp files |
Two Execution Modes
Choose the mode based on which hooks you need:
| Mode | Function | Hooks supported |
|---|---|---|
| Flat | to_test_cases → run_all | before_each, after_each |
| Suite | to_test_suite → run_suite | All four hooks |
Flat mode — simpler, faster; use when you only need per-test setup:
```gleam
import dream_test/unit.{describe, it, before_each, to_test_cases}
import dream_test/runner.{run_all}

to_test_cases("my_test", tests())
|> run_all()
|> report(io.print)
```
Suite mode — preserves group structure; use when you need once-per-group setup:
```gleam
import dream_test/unit.{describe, it, before_all, after_all, to_test_suite}
import dream_test/runner.{run_suite}

to_test_suite("my_test", tests())
|> run_suite()
|> report(io.print)
```
Hook Inheritance
Nested describe blocks inherit parent hooks. Hooks run outer-to-inner for
setup, inner-to-outer for teardown:
describe("Outer", [
before_each(fn() {
io.println("1. outer setup")
AssertionOk
}),
after_each(fn() {
io.println("4. outer teardown")
AssertionOk
}),
describe("Inner", [
before_each(fn() {
io.println("2. inner setup")
AssertionOk
}),
after_each(fn() {
io.println("3. inner teardown")
AssertionOk
}),
it("test", fn() {
io.println("(test)")
AssertionOk
}),
]),
])
// Output: 1. outer setup → 2. inner setup → (test) → 3. inner teardown → 4. outer teardown
Hook Failure Behavior
If a hook fails, Dream Test handles it gracefully:
| Failure in | Result |
|---|---|
| before_all | All tests in group marked SetupFailed, skipped |
| before_each | That test marked SetupFailed, skipped |
| after_each | Test result preserved; hook failure recorded |
| after_all | Hook failure recorded; all test results preserved |
describe("Handles failures", [
before_all(fn() {
case connect_to_database() {
Ok(_) -> AssertionOk
Error(e) -> fail_with("Database connection failed: " <> e)
}
}),
// If before_all fails, these tests are marked SetupFailed (not run)
it("test1", fn() { AssertionOk }),
it("test2", fn() { AssertionOk }),
])
BEAM-Powered Test Isolation
Every test runs in its own BEAM process:
| Feature | What it means |
|---|---|
| Crash isolation | A panic in one test doesn’t affect others |
| Timeout handling | Slow tests get killed; suite keeps running |
| Parallel execution | Tests run concurrently (configurable) |
| Automatic cleanup | Resources linked to the test process are freed automatically |
```gleam
// This test crashes, but others keep running
it("handles edge case", fn() {
  panic as "oops"  // Other tests still execute and report
})

// This test hangs, but gets killed after timeout
it("fetches data", fn() {
  infinite_loop()  // Killed after 5 seconds (default)
})
```
Configuring execution
```gleam
import dream_test/runner.{run_all_with_config, RunnerConfig}

let config = RunnerConfig(
  max_concurrency: 8,
  default_timeout_ms: 10_000,
)

let test_cases = to_test_cases("my_test", tests())

run_all_with_config(config, test_cases)
|> report(io.print)
```
How It Works
Dream Test uses an explicit pipeline: no hidden globals, no magic test discovery.
Flat Mode (most common)
```
describe/it  →  to_test_cases  →  run_all  →  report
   (DSL)         (flatten)       (execute)    (format)
```
- Define tests with describe/it, which builds a test tree
- Convert with to_test_cases, which flattens the tree into runnable cases
- Run with run_all, which executes tests in parallel with isolation
- Report with your choice of formatter, which outputs the results
Suite Mode (for before_all/after_all)
```
describe/it  →  to_test_suite  →  run_suite  →  report
   (DSL)         (preserve)        (execute)     (format)
```
Suite mode preserves the group hierarchy so hooks can run at group boundaries.
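Putting suite mode together end to end, here is a minimal sketch; start_server, ping, and stop_server stand in for your own code:

```gleam
import dream_test/unit.{after_all, before_all, describe, it, to_test_suite}
import dream_test/runner.{run_suite}
import dream_test/reporter/bdd.{report}
import dream_test/assertions/should.{be_ok, or_fail_with, should}
import dream_test/types.{AssertionOk}
import gleam/io

pub fn tests() {
  describe("Server tests", [
    before_all(fn() {
      // start_server is a hypothetical setup function
      start_server()
      AssertionOk
    }),
    it("responds to ping", fn() {
      // ping is a hypothetical function under test
      ping()
      |> should()
      |> be_ok()
      |> or_fail_with("Server should answer ping")
    }),
    after_all(fn() {
      // stop_server is a hypothetical teardown function
      stop_server()
      AssertionOk
    }),
  ])
}

pub fn main() {
  to_test_suite("my_app_test", tests())
  |> run_suite()
  |> report(io.print)
}
```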
Under the Hood
Each test runs in its own BEAM process:
```mermaid
flowchart TB
  runner[Test Runner]
  runner --> t1[Test 1]
  runner --> t2[Test 2]
  runner --> t3[Test 3]
  runner --> t4[Test 4]
  t1 --> collect[Collect Results]
  t2 --> collect
  t3 --> collect
  t4 --> collect
  collect --> report[Report]
```
Benefits:
- A crashing test doesn’t affect others
- Timeouts are enforced via process killing
- Resources linked to test processes are cleaned up automatically
Documentation
| Document | Audience |
|---|---|
| Hexdocs | API reference with examples |
| CONTRIBUTING.md | How to contribute |
| STANDARDS.md | Coding conventions |
Status
Stable — v1.0 release. API is stable and ready for production use.
| Feature | Status |
|---|---|
| Core DSL (describe/it/skip) | ✅ Stable |
| Lifecycle hooks | ✅ Stable |
| Assertions (should.*) | ✅ Stable |
| BDD Reporter | ✅ Stable |
| Parallel execution | ✅ Stable |
| Process isolation | ✅ Stable |
| Crash handling | ✅ Stable |
| Timeout handling | ✅ Stable |
| Polling helpers | ✅ Stable |
Contributing
```sh
git clone https://github.com/TrustBound/dream_test
cd dream_test
make all  # build, test, format
```
See CONTRIBUTING.md for development workflow and guidelines.
License
MIT — see LICENSE.md
Built in Gleam, on the BEAM, by the Dream Team ❤️