Supertester API Guide
Version: 0.5.1 Last Updated: January 6, 2026
Reference guide for the primary Supertester modules and workflows.
Table of Contents
- Core Modules
- Supertester.ConcurrentHarness
- Supertester.PropertyHelpers
- Supertester.MessageHarness
- Supertester.Telemetry
- Isolation Extensions
- OTP Testing
- Chaos Engineering
- Performance Testing
- Assertions
- Quick Reference
Core Modules
Supertester
Main module providing version information.
Supertester.version()
# => "0.5.1"

Supertester.ExUnitFoundation
Drop-in ExUnit adapter that configures isolation automatically.
Isolation Modes (:isolation option)
- :basic – Basic isolation with unique naming (async-friendly)
- :registry – Registry-based process isolation (async-friendly)
- :full_isolation – Complete process and ETS isolation (recommended, async-friendly)
- :contamination_detection – Isolation with leak detection (runs synchronously)
Usage
defmodule MyApp.MyTest do
use Supertester.ExUnitFoundation, isolation: :full_isolation
test "isolated test", context do
# context.isolation_context contains isolation info
{:ok, server} = setup_isolated_genserver(MyServer)
# Test runs in complete isolation
end
end

Additional Isolation Options
- telemetry_isolation: true enables Supertester.TelemetryHelpers for the test process.
- logger_isolation: true enables Supertester.LoggerIsolation for the test process.
- ets_isolation: [...] mirrors named ETS tables into isolated copies.
- @tag telemetry_events: [...] auto-attaches isolated telemetry handlers.
- @tag ets_tables: [...] mirrors tables for the current test.
- @tag logger_level: :debug overrides the logger level for the test process.
defmodule MyApp.MyTest do
use Supertester.ExUnitFoundation,
isolation: :full_isolation,
telemetry_isolation: true,
logger_isolation: true,
ets_isolation: [:my_table]
@tag telemetry_events: [[:supertester, :concurrent, :scenario, :stop]]
@tag logger_level: :debug
test "captures telemetry + logs", _context do
# ...
end
end

Supertester.UnifiedTestFoundation
Isolation runtime powering Supertester. Use it directly for custom harnesses or non-ExUnit integrations. The legacy use Supertester.UnifiedTestFoundation macro delegates to Supertester.ExUnitFoundation and emits a warning.
defmodule CustomHarnessTest do
use ExUnit.Case, async: true
setup context do
Supertester.UnifiedTestFoundation.setup_isolation(:full_isolation, context)
end
end

Supertester.Env
Environment abstraction used to register cleanup callbacks. The default implementation uses ExUnit.Callbacks.on_exit/1, but you can configure a custom module that implements the Supertester.Env behaviour:
defmodule MyHarness.Env do
@behaviour Supertester.Env
@impl true
def on_exit(fun), do: MyHarness.register_cleanup(fun)
end
# config/test.exs
import Config
config :supertester, :env_module, MyHarness.Env

Supertester.TestableGenServer
Automatically injects sync handlers into GenServers for deterministic testing.
Usage in GenServer
defmodule MyServer do
use GenServer
use Supertester.TestableGenServer # Adds __supertester_sync__ handler
# Your GenServer implementation
end

Usage in Tests
test "async operations" do
{:ok, server} = MyServer.start_link()
# Send async cast
GenServer.cast(server, :some_operation)
# Synchronize - ensures cast is processed
GenServer.call(server, :__supertester_sync__)
# Now safe to verify
assert :sys.get_state(server).operation_complete == true
end

With State Return
# Get state without :sys.get_state
{:ok, state} = GenServer.call(server, {:__supertester_sync__, return_state: true})

Supertester.ConcurrentHarness
High-level scenario harness for orchestrating concurrent threads against a target process.
run/1
@spec run(Supertester.ConcurrentHarness.scenario()) ::
{:ok, %{events: [map()], metrics: map(), mailbox: map() | nil}} | {:error, term()}

Runs a scenario built via simple_genserver_scenario/4, from_property_config/3, or a manual map.

- :threads – List of thread scripts ([operation()])
- :timeout_ms – Overall timeout (default: 5_000)
- :mailbox – Keyword list forwarded to PerformanceHelpers.measure_mailbox_growth/3
- :invariant – fn pid, ctx -> ... end run after threads complete
- :chaos – Optional (pid, ctx) -> any hook executed concurrently (see helpers below)
- :performance_expectations – Keyword list of bounds enforced automatically
Every run emits telemetry events under [:supertester, :concurrent, :scenario, :start|:stop]
along with optional mailbox/performance/chaos events. Reports include :chaos, :performance,
and the auto-generated :scenario_id metadata for downstream correlation.
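As a sketch, a hand-built scenario map might look like this. Counter and its operations are hypothetical, and the :setup/:cleanup fields follow the shape shown later in the run_chaos_suite example; treat the exact field set as an assumption to verify against the typespec.

```elixir
# Hypothetical Counter GenServer; thread scripts are lists of operations.
scenario = %{
  setup: fn ->
    {:ok, pid} = Counter.start_link([])
    {:ok, pid, %{}}
  end,
  threads: [
    [{:cast, :increment}, {:call, :get}],
    [{:cast, :increment}, {:cast, :increment}]
  ],
  timeout_ms: 5_000,
  # Runs once after all threads complete: three increments should have landed.
  invariant: fn pid, _ctx -> GenServer.call(pid, :get) == 3 end,
  cleanup: fn pid, _ctx -> GenServer.stop(pid) end
}

{:ok, %{events: _events, metrics: _metrics}} =
  Supertester.ConcurrentHarness.run(scenario)
```

For most GenServer targets, simple_genserver_scenario/4 (below) builds this map for you.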
simple_genserver_scenario/4
@spec simple_genserver_scenario(module(), [term()], pos_integer(), keyword()) ::
Supertester.ConcurrentHarness.Scenario.t()

Bootstraps a scenario for a GenServer module. Accepts options such as:

- :server_opts – Passed to start_link/1
- :default_operation – Tag bare terms as :call or :cast
- :invariant – Function to run after threads finish
- :mailbox – Monitoring configuration
- :chaos – Chaos hook (e.g., chaos_inject_crash/2)
- :performance_expectations – Keyword list for automatic performance enforcement
from_property_config/3
@spec from_property_config(module(), map(), keyword()) ::
Supertester.ConcurrentHarness.Scenario.t()

Converts a map (often emitted by PropertyHelpers.concurrent_scenario/1) into a runnable scenario.
run_with_performance/2
@spec run_with_performance(scenario(), keyword()) ::
{:ok, map()} | {:error, term()}

Convenience helper that measures run/1, enforces expectations, and returns the scenario result.
Avoids wrapping every test manually with assert_performance/2.
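For example (a sketch; MyServer and the bounds are illustrative, and the assumption here is that the second argument carries the performance expectations):

```elixir
scenario =
  Supertester.ConcurrentHarness.simple_genserver_scenario(
    MyServer,
    [{:call, :get_state}, {:cast, :do_work}],
    4
  )

# Fails the test if the run exceeds either bound.
{:ok, _report} =
  Supertester.ConcurrentHarness.run_with_performance(scenario,
    max_time_ms: 200,
    max_memory_bytes: 5_000_000
  )
```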
chaos_kill_children/1 and chaos_inject_crash/2
@spec chaos_kill_children(keyword()) :: chaos_fun()
@spec chaos_inject_crash(ChaosHelpers.crash_spec(), keyword()) :: chaos_fun()

Generate ready-to-use chaos hooks sourced from Supertester.ChaosHelpers. Use them in the :chaos
option when building scenarios:
scenario = Supertester.ConcurrentHarness.simple_genserver_scenario(
MySupervisor,
[:status],
3,
chaos: Supertester.ConcurrentHarness.chaos_kill_children(kill_rate: 0.2)
)

Supertester.PropertyHelpers
StreamData helpers for generating concurrency scenarios.
genserver_operation_sequence/2
@spec genserver_operation_sequence([term()], keyword()) ::
StreamData.t([Supertester.ConcurrentHarness.operation()])

Generates lists of normalized operations ({:call, term}, {:cast, term}, or {:custom, fun}).
Options include :default_operation, :min_length, and :max_length.
concurrent_scenario/1
@spec concurrent_scenario(keyword()) :: StreamData.t(map())

Produces property-test-friendly configs with :thread_scripts, :timeout_ms, and metadata.
Feed the output to ConcurrentHarness.from_property_config/3.
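A property test wiring the two together might look like this (a sketch; MyServer is hypothetical and the generator is called with no options since the exact option set is not listed above):

```elixir
use ExUnitProperties

property "server survives generated concurrent scenarios" do
  check all config <- Supertester.PropertyHelpers.concurrent_scenario([]) do
    # Turn the generated config into a runnable scenario for MyServer.
    scenario =
      Supertester.ConcurrentHarness.from_property_config(MyServer, config, [])

    assert {:ok, _report} = Supertester.ConcurrentHarness.run(scenario)
  end
end
```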
Supertester.MessageHarness
Mailbox visibility utilities.
trace_messages/3
@spec trace_messages(pid(), (() -> any()), keyword()) :: %{
messages: [term()],
initial_mailbox: [term()],
final_mailbox: [term()],
result: term()
}

Enables :erlang.trace/3 for :receive events while running the provided function, capturing the
messages delivered to the target process and snapshotting its mailbox before/after execution.
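A usage sketch (MyServer is hypothetical; {:"$gen_cast", ...} is the internal message shape OTP uses to deliver casts, and the options list is left empty):

```elixir
{:ok, server} = MyServer.start_link([])

report =
  Supertester.MessageHarness.trace_messages(server, fn ->
    GenServer.cast(server, :work)
    # Flush the cast before the trace window closes (TestableGenServer sync).
    GenServer.call(server, :__supertester_sync__)
  end, [])

# The cast shows up as a traced :receive event on the target process.
assert Enum.any?(report.messages, &match?({:"$gen_cast", :work}, &1))
assert report.final_mailbox == []
```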
Supertester.Telemetry
Single entry point for emitting telemetry events with the [:supertester | event] prefix.
scenario_start/1 and scenario_stop/2
Telemetry.scenario_start(%{scenario_id: 123})
Telemetry.scenario_stop(%{duration_ms: 42}, %{scenario_id: 123, status: :ok})

Used internally by the concurrent harness but available if you extend the library.
mailbox_sample/2, chaos_event/3, performance_event/2
Emit mailbox metrics, chaos lifecycle updates, and performance measurements respectively.
All helpers ultimately call emit/3, so you can attach via :telemetry.attach/4 or
attach_many/4:
:telemetry.attach(
"supertester-log",
[:supertester, :performance, :scenario, :measured],
fn _event, measurements, metadata, _ ->
Logger.debug("Scenario #{metadata.scenario_id} took #{measurements.time_us / 1000}ms")
end,
nil
)

Isolation Extensions
Supertester.TelemetryHelpers
Per-test telemetry isolation that only delivers events tagged with the current test id.
{:ok, _test_id} = Supertester.TelemetryHelpers.setup_telemetry_isolation()
{:ok, _handler} = Supertester.TelemetryHelpers.attach_isolated([:my, :event])
Supertester.TelemetryHelpers.emit_with_context([:my, :event], %{value: 1}, %{})
assert Supertester.TelemetryHelpers.assert_telemetry([:my, :event])

Key helpers:

- setup_telemetry_isolation/0 and setup_telemetry_isolation/1
- attach_isolated/2 with passthrough, buffer, filter_key, and transform
- assert_telemetry/1-3, refute_telemetry/2, assert_telemetry_count/2, flush_telemetry/1
- with_telemetry/2 and emit_with_context/3
Supertester.LoggerIsolation
Process-scoped Logger isolation with convenience capture helpers.
:ok = Supertester.LoggerIsolation.setup_logger_isolation()
Supertester.LoggerIsolation.isolate_level(:debug)
{log, result} =
Supertester.LoggerIsolation.capture_isolated(:debug, fn ->
Logger.debug("hello")
:ok
end)

Key helpers:

- setup_logger_isolation/0 and setup_logger_isolation/1
- isolate_level/1, restore_level/0, get_isolated_level/0
- capture_isolated/2, capture_isolated!/2, with_level/2, with_level_and_capture/2
Supertester.ETSIsolation
Per-test ETS table isolation and safe injection helpers.
:ok = Supertester.ETSIsolation.setup_ets_isolation()
{:ok, table} = Supertester.ETSIsolation.create_isolated(:set, name: :temp_table)
{:ok, restore} =
Supertester.ETSIsolation.inject_table(MyModule, :table_name, :temp_table)
restore.()

Key helpers:

- setup_ets_isolation/0-2
- create_isolated/1-2, mirror_table/1-2
- inject_table/3-4, with_table/2-3
OTP Testing
Supertester.OTPHelpers
Core OTP testing utilities.
setup_isolated_genserver/3
Sets up an isolated GenServer with automatic cleanup.
@spec setup_isolated_genserver(module(), String.t(), keyword()) ::
{:ok, pid()} | {:error, term()}

Parameters:

- module – The GenServer module
- test_name – Test context for unique naming (optional, default: "")
- opts – Options passed to GenServer.start_link (optional, default: [])
Example:
{:ok, server} = setup_isolated_genserver(MyServer, "test_1")
{:ok, server} = setup_isolated_genserver(MyServer, "test_2", init_args: [config: :test])

setup_isolated_supervisor/3
Sets up an isolated Supervisor with automatic cleanup.
@spec setup_isolated_supervisor(module(), String.t(), keyword()) ::
{:ok, pid()} | {:error, term()}

Example:

{:ok, supervisor} = setup_isolated_supervisor(MySupervisor)

wait_for_genserver_sync/2
Waits for GenServer to be responsive.
@spec wait_for_genserver_sync(GenServer.server(), timeout()) ::
:ok | {:error, term()}

Example:
{:ok, server} = MyServer.start_link()
:ok = wait_for_genserver_sync(server, 5000)

wait_for_process_restart/3
Waits for a process to restart after termination.
@spec wait_for_process_restart(atom(), pid(), timeout()) ::
{:ok, pid()} | {:error, term()}

Example:
original_pid = Process.whereis(MyServer)
GenServer.stop(MyServer)
{:ok, new_pid} = wait_for_process_restart(MyServer, original_pid, 1000)
assert new_pid != original_pid

Supertester.GenServerHelpers
GenServer-specific testing patterns.
get_server_state_safely/1
Safely retrieves GenServer state without crashing.
@spec get_server_state_safely(GenServer.server()) ::
{:ok, term()} | {:error, term()}

Example:
{:ok, state} = get_server_state_safely(server)
assert state.counter == 5

cast_and_sync/4
Sends a cast and synchronizes to ensure processing.
@spec cast_and_sync(GenServer.server(), term(), term(), keyword()) ::
:ok | {:ok, term()} | {:error, term()}

Example:
# No more Process.sleep!
:ok = cast_and_sync(server, {:increment, 5})
{:ok, state} = get_server_state_safely(server)
assert state.counter == 5

concurrent_calls/3
Stress-tests GenServer with concurrent calls.
@spec concurrent_calls(GenServer.server(), [term()], pos_integer(), keyword()) ::
{:ok, [map()]}

Example:
{:ok, results} = concurrent_calls(server, [:increment, :decrement], 10, timeout: 20)
for %{call: call, successes: successes, errors: errors} <- results do
IO.inspect({call, successes, errors})
end

stress_test_server/4
Runs a short stress scenario with mixed calls and casts.
@spec stress_test_server(GenServer.server(), [term()], pos_integer(), keyword()) ::
{:ok, %{calls: non_neg_integer, casts: non_neg_integer, errors: non_neg_integer, duration_ms: non_neg_integer}}

Example:
operations = [
{:call, :get_state},
{:cast, {:queue_job, payload()}}
]
{:ok, report} = stress_test_server(server, operations, 1_000, workers: 4)
assert report.errors == 0

test_server_crash_recovery/2
Tests GenServer crash and recovery behavior.
@spec test_server_crash_recovery(GenServer.server(), term()) ::
{:ok, map()} | {:error, term()}

Example:
{:ok, info} = test_server_crash_recovery(server, :test_crash)
assert info.recovered == true
assert info.new_pid != info.original_pid

Supertester.SupervisorHelpers
Supervision tree testing utilities.
test_restart_strategy/3
Tests supervisor restart strategies.
@spec test_restart_strategy(Supervisor.supervisor(), atom(), restart_scenario()) ::
test_result()

Strategies: :one_for_one, :one_for_all, :rest_for_one

Scenarios:

- {:kill_child, child_id}
- {:kill_children, [child_id]}
Example:
result = test_restart_strategy(supervisor, :one_for_one, {:kill_child, :worker_1})
assert result.restarted == [:worker_1]
assert result.not_restarted == [:worker_2, :worker_3]
assert result.supervisor_alive == true

assert_supervision_tree_structure/2
Verifies supervision tree matches expected structure.
@spec assert_supervision_tree_structure(Supervisor.supervisor(), tree_structure()) :: :ok

Example:
assert_supervision_tree_structure(root_supervisor, %{
supervisor: RootSupervisor,
strategy: :one_for_one,
children: [
{:cache, CacheServer},
{:worker_pool, %{
supervisor: WorkerPoolSupervisor,
strategy: :one_for_all,
children: [
{:worker_1, Worker},
{:worker_2, Worker}
]
}}
]
})

trace_supervision_events/2
Monitors supervisor for restart events.
@spec trace_supervision_events(Supervisor.supervisor(), keyword()) ::
{:ok, (() -> [supervision_event()])}

Events:

- {:child_started, child_id, pid}
- {:child_terminated, child_id, pid, reason}
- {:child_restarted, child_id, old_pid, new_pid}
Example:
{:ok, stop_trace} = trace_supervision_events(supervisor)
# Cause failures
Process.exit(child_pid, :kill)
events = stop_trace.()
assert Enum.any?(events, &match?({:child_restarted, _, _, _}, &1))

wait_for_supervisor_stabilization/2
Waits until all children are running.
@spec wait_for_supervisor_stabilization(Supervisor.supervisor(), timeout()) ::
:ok | {:error, :timeout}

Example:
# Cause chaos
Enum.each(children, fn {_id, pid, _type, _mods} ->
Process.exit(pid, :kill)
end)
# Wait for recovery
:ok = wait_for_supervisor_stabilization(supervisor)
assert_all_children_alive(supervisor)

Chaos Engineering
Supertester.ChaosHelpers
Chaos engineering toolkit for resilience testing.
inject_crash/3
Injects controlled crashes into processes.
@spec inject_crash(pid(), crash_spec(), keyword()) :: :ok

Crash Specifications:

- :immediate – Crash immediately
- {:after_ms, duration} – Crash after delay
- {:random, probability} – Crash with probability (0.0 to 1.0)
Example:
# Immediate crash
inject_crash(worker_pid, :immediate)
# Delayed crash (100ms)
inject_crash(worker_pid, {:after_ms, 100})
# Random crash (30% probability)
inject_crash(worker_pid, {:random, 0.3}, reason: :chaos_test)

chaos_kill_children/3
Randomly kills children in supervision tree.
@spec chaos_kill_children(Supervisor.supervisor(), keyword()) :: chaos_report()

Options:

- :kill_rate – Percentage of children to kill (default: 0.3)
- :duration_ms – How long to run chaos (default: 5000)
- :kill_interval_ms – Time between kills (default: 100)
- :kill_reason – Reason for kills (default: :kill)
Example:
report = chaos_kill_children(supervisor,
kill_rate: 0.5, # Kill 50% of children
duration_ms: 3000,
kill_interval_ms: 200
)
assert report.killed > 0
assert report.supervisor_crashed == false

simulate_resource_exhaustion/2
Simulates resource limit scenarios.
@spec simulate_resource_exhaustion(atom(), keyword()) ::
{:ok, cleanup_fn()} | {:error, term()}

Resources:

- :process_limit – Spawn many processes
- :ets_tables – Create many ETS tables
- :memory – Allocate memory
Example:
test "system handles process pressure" do
{:ok, cleanup} = simulate_resource_exhaustion(:process_limit,
spawn_count: 1000
)
# Test under pressure
result = perform_operation()
# Cleanup
cleanup.()
# Verify graceful degradation
assert match?({:ok, _} | {:error, :resource_limit}, result)
end

assert_chaos_resilient/3
Asserts system recovers from chaos.
@spec assert_chaos_resilient(pid(), (() -> any()), (() -> boolean()), keyword()) :: :ok

Example:
assert_chaos_resilient(supervisor,
fn -> chaos_kill_children(supervisor, kill_rate: 0.5) end,
fn -> all_workers_alive?(supervisor) end,
timeout: 10_000
)

run_chaos_suite/3
Runs comprehensive chaos scenario testing.
@spec run_chaos_suite(pid(), [map()], keyword()) :: chaos_suite_report()

Supports both legacy scenario maps and concurrent harness scenarios. Example:
scenarios = [
%{type: :kill_children, kill_rate: 0.3, duration_ms: 1000},
%{
type: :concurrent,
build: fn supervisor ->
Supertester.ConcurrentHarness.simple_genserver_scenario(
MyWorker,
[{:cast, :do_work}, {:call, :get_state}],
3,
setup: fn -> {:ok, supervisor, %{}} end,
cleanup: fn _, _ -> :ok end
)
end
}
]
report = run_chaos_suite(supervisor, scenarios, timeout: 30_000)

For concurrent scenarios you may also pass scenario: <ConcurrentHarness scenario/map> directly
if no special build logic is required. Each harness run shares the same telemetry/reporting
infrastructure as Supertester.ConcurrentHarness.
Performance Testing
Supertester.PerformanceHelpers
Performance testing and regression detection.
assert_performance/2
Asserts operation meets performance bounds.
@spec assert_performance((() -> any()), keyword()) :: :ok

Expectations:

- :max_time_ms – Maximum execution time
- :max_memory_bytes – Maximum memory consumption
- :max_reductions – Maximum CPU work
Example:
test "API meets performance SLA" do
assert_performance(
fn -> API.get_user(1) end,
max_time_ms: 50,
max_memory_bytes: 1_000_000,
max_reductions: 100_000
)
end
assert_expectations/2

Validates a measurement map (typically returned by measure_operation/1) against the same
expectations supported by assert_performance/2.

@spec assert_expectations(map(), keyword()) :: :ok

Useful when you need the measured result but still want to enforce limits:

measurement = measure_operation(fn -> run_workload() end)
assert_expectations(measurement, max_time_ms: 50)
assert measurement.result == :ok
assert_no_memory_leak/2
Detects memory leaks over many iterations.
@spec assert_no_memory_leak(pos_integer(), (() -> any()), keyword()) :: :ok

Options:

- :threshold – Acceptable growth rate (default: 0.1 = 10%)
Example:
test "no memory leak in message handling" do
{:ok, worker} = setup_isolated_genserver(Worker)
assert_no_memory_leak(10_000, fn ->
Worker.handle_message(worker, random_message())
end, threshold: 0.05)
end

measure_operation/1
Measures operation performance metrics.
@spec measure_operation((() -> any())) :: map()

Returns:

- :time_us – Execution time in microseconds
- :memory_bytes – Memory used
- :reductions – CPU work
- :result – Operation result
Example:
metrics = measure_operation(fn ->
expensive_calculation()
end)
IO.puts "Time: #{metrics.time_us}μs"
IO.puts "Memory: #{metrics.memory_bytes} bytes"
IO.puts "Reductions: #{metrics.reductions}"

measure_mailbox_growth/3
Monitors mailbox size during operation.
@spec measure_mailbox_growth(pid(), (() -> any()), keyword()) :: map()

Options:

- :sampling_interval – Interval in ms between samples (default: 10)
Returns:
- :initial_size – Mailbox size before
- :final_size – Mailbox size after
- :max_size – Maximum observed
- :avg_size – Average size
- :result – Return value from the wrapped operation
Example:
report = measure_mailbox_growth(server, fn ->
send_many_messages(server, 1000)
end, sampling_interval: 5)
assert report.max_size < 100

assert_mailbox_stable/2
Asserts mailbox doesn't grow unbounded.
@spec assert_mailbox_stable(pid(), keyword()) :: :ok

Options:

- :during – Function to execute (required)
- :max_size – Maximum mailbox size (default: 100)
- Additional options are forwarded to measure_mailbox_growth/3
Example:
assert_mailbox_stable(server,
during: fn ->
for _ <- 1..1000 do
GenServer.cast(server, :work)
end
end,
max_size: 50
)

compare_performance/2
Compares performance of multiple functions.
@spec compare_performance(map()) :: map()

Example:
results = compare_performance(%{
"approach_a" => fn -> approach_a() end,
"approach_b" => fn -> approach_b() end,
"approach_c" => fn -> approach_c() end
})
# Find fastest
fastest = Enum.min_by(results, fn {_name, m} -> m.time_us end)
{name, metrics} = fastest
IO.puts "Fastest: #{name} at #{metrics.time_us}μs"

Assertions
Supertester.Assertions
Custom OTP-aware assertions.
assert_process_alive/1
@spec assert_process_alive(pid()) :: :ok

Example:

assert_process_alive(server_pid)

assert_process_dead/1
@spec assert_process_dead(pid()) :: :ok

assert_process_restarted/2

@spec assert_process_restarted(atom(), pid()) :: :ok

Example:
original = Process.whereis(MyServer)
GenServer.stop(MyServer)
assert_process_restarted(MyServer, original)

assert_genserver_state/2
Asserts GenServer has expected state.
@spec assert_genserver_state(GenServer.server(), term() | (term() -> boolean())) :: :ok

Examples:
# Exact match
assert_genserver_state(server, %{counter: 5})
# Function validation
assert_genserver_state(server, fn state ->
state.counter > 0 and state.status == :active
end)

assert_genserver_responsive/1

@spec assert_genserver_responsive(GenServer.server()) :: :ok

assert_child_count/2

@spec assert_child_count(Supervisor.supervisor(), non_neg_integer()) :: :ok

Example:

assert_child_count(supervisor, 5)

assert_all_children_alive/1

@spec assert_all_children_alive(Supervisor.supervisor()) :: :ok

assert_no_process_leaks/1

@spec assert_no_process_leaks((() -> any())) :: :ok

Example:
assert_no_process_leaks(fn ->
{:ok, temp_server} = GenServer.start_link(TempServer, [])
# Do work
GenServer.stop(temp_server)
end)

assert_memory_usage_stable/2

@spec assert_memory_usage_stable((() -> any()), float()) :: :ok

Example:
assert_memory_usage_stable(fn ->
for _ <- 1..1000 do
GenServer.call(server, :operation)
end
end, 0.05) # 5% tolerance

Quick Reference
Common Patterns
Pattern 1: Basic GenServer Test
test "counter increments" do
{:ok, counter} = setup_isolated_genserver(Counter)
:ok = cast_and_sync(counter, :increment)
:ok = cast_and_sync(counter, :increment)
assert_genserver_state(counter, fn s -> s.count == 2 end)
end

Pattern 2: Supervision Tree Test
test "supervisor restarts failed children" do
{:ok, supervisor} = setup_isolated_supervisor(MySupervisor)
result = test_restart_strategy(supervisor, :one_for_one,
{:kill_child, :worker_1}
)
assert :worker_1 in result.restarted
wait_for_supervisor_stabilization(supervisor)
assert_all_children_alive(supervisor)
end

Pattern 3: Chaos Testing
test "system is resilient" do
{:ok, system} = setup_isolated_supervisor(MySystem)
report = chaos_kill_children(system,
kill_rate: 0.5,
duration_ms: 5000
)
assert Process.alive?(system)
assert report.supervisor_crashed == false
end

Pattern 4: Performance SLA
test "meets performance requirements" do
{:ok, api} = setup_isolated_genserver(APIServer)
assert_performance(
fn -> APIServer.critical_operation(api) end,
max_time_ms: 100,
max_memory_bytes: 1_000_000
)
end

Pattern 5: Memory Leak Detection
test "no memory leak" do
{:ok, worker} = setup_isolated_genserver(Worker)
assert_no_memory_leak(50_000, fn ->
Worker.process(worker, data())
end)
end

Pattern 6: Telemetry Isolation
use Supertester.ExUnitFoundation, telemetry_isolation: true
test "telemetry is scoped to this test" do
{:ok, _} = Supertester.TelemetryHelpers.attach_isolated([:my, :event])
Supertester.TelemetryHelpers.emit_with_context([:my, :event], %{}, %{})
assert Supertester.TelemetryHelpers.assert_telemetry([:my, :event])
end

Import Patterns
# Core OTP testing
import Supertester.{OTPHelpers, GenServerHelpers, Assertions}
# Supervision testing
import Supertester.{OTPHelpers, SupervisorHelpers, Assertions}
# Chaos testing
import Supertester.{ChaosHelpers, SupervisorHelpers}
# Performance testing
import Supertester.PerformanceHelpers
# Everything
import Supertester.{
OTPHelpers,
GenServerHelpers,
SupervisorHelpers,
ChaosHelpers,
PerformanceHelpers,
Assertions
}

TestableGenServer Pattern
# In your GenServer
defmodule MyServer do
use GenServer
use Supertester.TestableGenServer # Add this line
# Rest of implementation
end
# In your tests
test "with sync" do
GenServer.cast(server, :async_op)
GenServer.call(server, :__supertester_sync__) # Wait for processing
# Now safe to assert
end

Best Practices
1. Always Use Isolation
use Supertester.ExUnitFoundation, isolation: :full_isolation

2. Use setup_isolated_* Functions
setup do
{:ok, server} = setup_isolated_genserver(MyServer)
{:ok, server: server}
end

3. Never Use Process.sleep
# ❌ Bad
GenServer.cast(server, :op)
Process.sleep(50)
# ✅ Good
cast_and_sync(server, :op)

4. Use Expressive Assertions
# ❌ Verbose
state = :sys.get_state(server)
assert state.counter == 5
# ✅ Better
assert_genserver_state(server, %{counter: 5})
# ✅ Best (with validation)
assert_genserver_state(server, fn s -> s.counter == 5 and s.status == :active end)

5. Test Resilience with Chaos
test "system handles chaos" do
{:ok, system} = setup_isolated_supervisor(MySystem)
assert_chaos_resilient(system,
fn -> chaos_kill_children(system, kill_rate: 0.3) end,
fn -> system_healthy?(system) end
)
end

6. Assert Performance SLAs
test "meets SLA" do
assert_performance(
fn -> critical_path() end,
max_time_ms: 100
)
end

Testing Supertester Tests
When writing tests for code that uses Supertester:
defmodule MyApp.MyModuleTest do
use Supertester.ExUnitFoundation, isolation: :full_isolation
import Supertester.{OTPHelpers, Assertions}
describe "my functionality" do
test "works correctly" do
{:ok, server} = setup_isolated_genserver(MyModule)
# Your test logic
assert_genserver_responsive(server)
end
end
end

Migration from Process.sleep
Before
test "async operation" do
GenServer.cast(server, :operation)
Process.sleep(50) # Hope this is enough!
assert :sys.get_state(server).done == true
end

After
test "async operation" do
:ok = cast_and_sync(server, :operation)
assert_genserver_state(server, fn s -> s.done == true end)
end

Troubleshooting
Q: Tests are still flaky
A: Ensure you're using cast_and_sync instead of GenServer.cast + sleep
Q: Name conflicts in tests
A: Use setup_isolated_genserver which generates unique names
Q: Supervisor tests fail
A: Use wait_for_supervisor_stabilization after causing failures
Q: Performance tests are inconsistent
A: Run with :erlang.garbage_collect() before measurements, use sufficient iterations
Q: Chaos tests too aggressive
A: Reduce kill_rate or duration_ms parameters
Version: 0.5.1 License: MIT Maintainer: nshkrdotcom