SuperCache.Cluster.Manager (SuperCache v1.3.0)


Maintains the cluster membership list and the partition → primary/replica mapping.

Responsibilities

  1. Membership tracking — reacts to :nodeup / :nodedown events forwarded by SuperCache.Cluster.NodeMonitor.
  2. Partition map — builds and republishes a %{partition_idx => {primary, [replicas]}} map whenever membership changes. The map is stored in :persistent_term so hot-path reads are allocation-free (no GenServer hop).
  3. Full sync — when a new node joins, triggers Replicator.push_partition/2 for every partition that this node owns (as primary or replica) so the joining node receives a consistent snapshot.

Cold-start behaviour

During application boot, SuperCache.Cluster.Bootstrap.start!/1 has not been called yet, so node_running?/1 returns false even for node() itself. The manager therefore always seeds the membership list with node() regardless of the health-check result, so the partition map is never built from an empty list. Remote peers are added only when the health-check confirms they are running.
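The seeding rule can be sketched as a small function. This is a hypothetical illustration, not the module's actual code: the local node is always included, while remote peers make the list only when their health check passes.

```elixir
# Hypothetical sketch of the cold-start seeding rule described above.
seed_members = fn peers, node_running? ->
  # node() is always a member, so the partition map is never built from
  # an empty list — even while the local health check still fails.
  [node() | Enum.filter(peers, node_running?)]
end

# During cold start every health check returns false, yet node() survives:
seed_members.([:peer1@host, :peer2@host], fn _ -> false end)
# => [:nonode@nohost] (on a non-distributed node)
```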

Adding nodes at runtime

Nodes are added automatically via :nodeup events delivered by SuperCache.Cluster.NodeMonitor. You can also add a node manually:

SuperCache.Cluster.Manager.node_up(:peer@host)

Partition assignment

Partitions are assigned by rotating the sorted node list. With N nodes and replication factor R, partition idx gets:

  • primary — sorted_nodes[idx mod N]
  • replicas — the next min(R-1, N-1) nodes in the rotated list

This gives a balanced, deterministic assignment with no external coordination.
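As a concrete illustration, the rotation scheme above can be sketched like this (the function name and argument shapes are assumptions for this example, not the module's internals):

```elixir
# Sketch of the rotation-based assignment described above (hypothetical code).
assign_partition = fn idx, nodes, r ->
  sorted = Enum.sort(nodes)
  n = length(sorted)
  shift = rem(idx, n)
  # Rotate the sorted list so that sorted_nodes[idx mod N] comes first.
  [primary | rest] = Enum.drop(sorted, shift) ++ Enum.take(sorted, shift)
  {primary, Enum.take(rest, min(r - 1, n - 1))}
end

# Three nodes, replication factor 2:
assign_partition.(0, [:c@host, :a@host, :b@host], 2)
# => {:a@host, [:b@host]}
assign_partition.(1, [:c@host, :a@host, :b@host], 2)
# => {:b@host, [:c@host]}
```

Because the input list is sorted before rotating, every node computes the same map independently, which is what makes coordination-free assignment possible.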

Summary

Functions

Returns a specification to start this module under a supervisor.

Trigger a full partition sync from this node to all peers.

Return {primary_node, [replica_nodes]} for partition_idx.

Return the current list of live nodes (includes this node).

Notify the manager that node has disconnected.

Notify the manager that node has connected.

Return the cluster-wide replication mode configured via SuperCache.Cluster.Bootstrap.start!/1.

Functions

child_spec(init_arg)

Returns a specification to start this module under a supervisor.

See Supervisor.

full_sync()

@spec full_sync() :: :ok

Trigger a full partition sync from this node to all peers.

get_replicas(partition_idx)

@spec get_replicas(non_neg_integer()) :: {node(), [node()]}

Return {primary_node, [replica_nodes]} for partition_idx.

Zero-cost read from :persistent_term — no GenServer hop. Falls back to {node(), []} when the map has not been built yet.
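The read path can be sketched as follows; the :persistent_term key used here is an assumption for illustration, as the real key is internal to the module:

```elixir
# Hypothetical sketch of a :persistent_term read with a fallback, as
# described above. The key below is assumed, not the module's actual key.
key = {SuperCache.Cluster.Manager, :partition_map}

lookup = fn idx ->
  case :persistent_term.get(key, :undefined) do
    # Map not built yet: treat the local node as primary with no replicas.
    :undefined -> {node(), []}
    map -> Map.get(map, idx, {node(), []})
  end
end

lookup.(0)
# => {node(), []} until the manager publishes a map

# Once a map is published, reads hit it directly — no GenServer hop:
:persistent_term.put(key, %{0 => {:a@host, [:b@host]}})
lookup.(0)
# => {:a@host, [:b@host]}
```

Note that :persistent_term trades cheap reads for expensive writes (an update triggers a global scan), which is why the map is republished only on membership changes.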

live_nodes()

@spec live_nodes() :: [node()]

Return the current list of live nodes (includes this node).

node_down(node)

@spec node_down(node()) :: :ok

Notify the manager that node has disconnected.

node_up(node)

@spec node_up(node()) :: :ok

Notify the manager that node has connected.

replication_mode()

@spec replication_mode() :: :async | :sync | :strong

Return the cluster-wide replication mode configured via SuperCache.Cluster.Bootstrap.start!/1.

Value     Guarantee
:async    Eventual (default)
:sync     At-least-once delivery
:strong   Three-phase commit

Zero-cost read from SuperCache.Config — no GenServer hop.

Example

SuperCache.Cluster.Manager.replication_mode()
# => :async

start_link(opts)