# `Lockstep.Controller`
[🔗](https://github.com/b-erdem/lockstep/blob/v0.1.0/lib/lockstep/controller.ex#L1)

The scheduler. A `GenServer` that intercepts every Lockstep sync point
and picks the next process to run per the configured strategy.

Invariant: at most one *managed* process is running locally between
sync points. All others are blocked in `GenServer.call`. A freshly
spawned child is unmanaged until it makes its first `hello` call.

See `Lockstep.Strategy` for the pluggable scheduling strategies.
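For orientation, a minimal sketch of the lifecycle described above. Only the protocol (a first `hello` call, then a blocking call at every sync point) is taken from these docs; the message shapes and variable bindings are illustrative assumptions, not the module's real API.

```elixir
child =
  spawn(fn ->
    # Unmanaged until this first call registers us with the scheduler.
    GenServer.call(controller, {:hello, self()})

    # Every later sync point blocks in GenServer.call until the
    # configured strategy picks this process to run next.
    GenServer.call(controller, {:send_msg, self(), target, :ping}, :infinity)
  end)
```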

# `alive?`

# `await_done`

# `cancel_timer`

# `child_spec`

Returns a specification to start this module under a supervisor.

See `Supervisor`.

# `cluster_ets_lookup`

Look up an ETS table by `atom_name` on `caller_pid`'s node. Returns
the underlying `tid` (an `:ets` table reference) or `:undefined`.

# `cluster_ets_register`

Register an ETS `tid` under `atom_name` on `caller_pid`'s node.
Idempotent: re-registering the same name overwrites the existing
binding.

# `cluster_ets_unregister`

Remove the ETS-name binding `atom_name` on `caller_pid`'s node.

# `cluster_heal`

Heal all active partitions; drains deferred messages.

# `cluster_monitor_node`

Monitor a node from `mon_pid`. When `node` goes down, `mon_pid`
receives `{:nodedown, node}`. Mirrors `Node.monitor/2`.

# `cluster_node_of`

Look up the node a managed pid belongs to.

# `cluster_nodes`

List all registered cluster nodes (always includes `:nonode@nohost`).

# `cluster_nodes_up`

List currently-up nodes (excludes `:nonode@nohost`).

# `cluster_partition`

Add a partition between two groups of nodes. While active,
cross-group sends are dropped (`:drop`) or queued for later
delivery (`:defer`).
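A hedged usage sketch combining `cluster_partition` with `cluster_heal`. The controller-pid-first argument order and the exact arities are assumptions; check the real typespecs before relying on them.

```elixir
Lockstep.Controller.cluster_register(ctrl, :a@test)
Lockstep.Controller.cluster_register(ctrl, :b@test)

# While active, sends between the two groups are queued (:defer)
# instead of delivered; :drop would discard them outright.
Lockstep.Controller.cluster_partition(ctrl, [:a@test], [:b@test], :defer)

# Healing removes all partitions and drains the deferred messages.
Lockstep.Controller.cluster_heal(ctrl)
```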

# `cluster_register`

Register a notional cluster node by atom name. Subsequent spawns
with `node: name` are tagged accordingly. Idempotent.

# `cluster_register_name`

Register `target_pid` under `name` on `target_pid`'s notional
node. Returns `:yes` on success, or `{:already, existing}` if the
name is taken on that node or `target_pid` is already registered
there under another name.

# `cluster_registered_names`

List all names registered on `caller_pid`'s node.

# `cluster_set_ticktime`

Set the virtual `net_ticktime` (default 15000 ms). Determines how
long after a partition the cross-partition monitors fire `:DOWN`
with reason `:noconnection`.

# `cluster_start_node`

Bring a previously-stopped node back up. Fresh state.

# `cluster_stop_node`

Stop a node: kills all its processes, fires `:nodedown` to
monitors, marks the node as `:down`.

# `cluster_unregister_name`

Unregister `name` on `caller_pid`'s node. Returns `:ok` whether
or not the name was registered.

# `cluster_whereis_name`

Look up `name` on `caller_pid`'s notional node. Returns the pid
or `:undefined`. Mirrors `Process.whereis/1` but per-node.

# `cluster_whereis_on`

Look up `name` on a specific `node`. Returns the pid or
`:undefined`. Used by `Lockstep.send({name, node}, msg)` and
`Lockstep.GenServer.cast({name, node}, msg)` to route cross-node
by-name sends correctly.
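Putting the per-node registry pieces together. The `Lockstep.send({name, node}, msg)` form is taken from the description above; the controller call shapes and argument orders are assumptions.

```elixir
# Register server_pid under :server on its own notional node
# (assumed argument order).
:yes = Lockstep.Controller.cluster_register_name(ctrl, :server, server_pid)

# Per-node lookup, scoped to the caller's notional node.
pid = Lockstep.Controller.cluster_whereis_name(ctrl, self(), :server)

# Cross-node by-name send; routed via cluster_whereis_on under the hood.
Lockstep.send({:server, :b@test}, :hello)
```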

# `demonitor`

# `exit_msg`

# `flag`

# `global_register_name`

Cluster-wide name registry (mirror of Erlang's `:global`). Atomic
register-or-fail: first caller wins, the rest see `:no`.
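First-caller-wins in practice, mirroring `:global.register_name/2`'s `:yes | :no` returns. The arity and argument order shown here are assumptions.

```elixir
# Two processes race to claim the same global name; exactly one wins.
:yes = Lockstep.Controller.global_register_name(ctrl, :leader, pid_a)
:no = Lockstep.Controller.global_register_name(ctrl, :leader, pid_b)

# The winner is visible cluster-wide.
^pid_a = Lockstep.Controller.global_whereis_name(ctrl, :leader)
```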

# `global_registered_names`

# `global_unregister_name`

# `global_whereis_name`

# `hello`

# `link`

# `monitor`

# `nif_sync`

Generic NIF sync point. Used by `Lockstep.ETS`, `Lockstep.Atomics`,
and `Lockstep.PersistentTerm` to make NIF-backed shared-state
operations interleavable. The wrapper does the actual NIF call
*after* this returns; the sync point itself just yields to the
strategy and records a trace event.

`kind` is a small descriptor (typically `{:ets_insert, table_name}`
or similar) that gets recorded as `{:nif, pid, kind}` in the trace.
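The wrapper pattern this section describes, sketched for an ETS insert. The module name is a hypothetical stand-in (not `Lockstep.ETS`'s actual source) and `nif_sync`'s argument shape is an assumption.

```elixir
defmodule MyEtsWrapper do
  # Yield at the sync point first, then run the real NIF-backed call,
  # as the docs above describe.
  def insert(ctrl, table, tuple) do
    # Records {:nif, self(), {:ets_insert, table}} in the trace and
    # blocks until the strategy schedules this process.
    Lockstep.Controller.nif_sync(ctrl, self(), {:ets_insert, table})

    # The actual NIF call happens only after the sync point returns.
    :ets.insert(table, tuple)
  end
end
```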

# `now`

# `recv_match`

# `recv_msg`

# `request_spawn`

# `request_spawn_link`

# `send_after`

# `send_msg`

# `spawn_root`

# `start_link`

# `status`

# `trace`

# `unlink`

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
