# `SuperCache.Cluster.Manager`
[🔗](https://github.com/ohhi-vn/super_cache/blob/main/lib/cluster/manager.ex#L1)

Maintains the cluster membership list and the partition → primary/replica
mapping.

## Responsibilities

1. **Membership tracking** — reacts to `:nodeup` / `:nodedown` events
   forwarded by `SuperCache.Cluster.NodeMonitor`.
2. **Partition map** — builds and republishes a
   `%{partition_idx => {primary, [replicas]}}` map whenever membership
   changes.  The map is stored in `:persistent_term` so hot-path reads
   are allocation-free (no GenServer hop).
3. **Full sync** — when a new node joins, triggers
   `Replicator.push_partition/2` for every partition that this node owns
   (as primary or replica) so the joining node receives a consistent
   snapshot.
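
The hot-path read in point 2 can be sketched as follows; the `:persistent_term` key shown is an assumption for illustration, not the module's actual internal key:

```elixir
# Assumed key — the real key is an internal detail of the manager.
key = {SuperCache.Cluster.Manager, :partition_map}

# Copy-free read; falls back to an empty map before the first publish.
map = :persistent_term.get(key, %{})

# Owner lookup for partition 7, defaulting to the local node.
{_primary, _replicas} = Map.get(map, 7, {node(), []})
```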

## Cold-start behaviour

During application boot, `SuperCache.Cluster.Bootstrap.start!/1` has not
been called yet, so `node_running?/1` returns `false` even for `node()`
itself.  The manager therefore always seeds the membership list with
`node()` regardless of the health-check result, so the partition map is
never built from an empty list.  Remote peers are added only when the
health-check confirms they are running.
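
The seeding rule can be sketched as below; `node_running?` is stubbed here to mimic cold start, where the health check fails for every node:

```elixir
# Stub: during boot, Bootstrap.start!/1 has not run, so the health
# check reports every node (including node()) as not running.
node_running? = fn _node -> false end

candidate_peers = [:peer1@host, :peer2@host]

# node() is seeded unconditionally, so membership is never empty;
# remote peers are admitted only when the health check passes.
members = [node() | Enum.filter(candidate_peers, node_running?)]
# cold start: members == [node()]
```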

## Adding nodes at runtime

Nodes are added automatically via `:nodeup` events delivered by
`SuperCache.Cluster.NodeMonitor`.  You can also add a node manually:

    SuperCache.Cluster.Manager.node_up(:peer@host)

## Partition assignment

Partitions are assigned by rotating through the sorted node list.  With
`N` nodes and replication factor `R`, partition `idx` gets:

- **primary** — `sorted_nodes[idx mod N]`
- **replicas** — the next `min(R-1, N-1)` nodes after the primary in the
  sorted list, wrapping around to the start

This gives a balanced, deterministic assignment with no external
coordination.
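
A minimal, self-contained sketch of this assignment rule (node names are invented):

```elixir
# primary  = sorted_nodes[rem(idx, n)]
# replicas = next min(r - 1, n - 1) nodes, wrapping around the list
assign = fn idx, nodes, r ->
  sorted = Enum.sort(nodes)
  n = length(sorted)
  pos = rem(idx, n)

  replicas =
    for offset <- 1..min(r - 1, n - 1)//1 do
      Enum.at(sorted, rem(pos + offset, n))
    end

  {Enum.at(sorted, pos), replicas}
end

assign.(4, [:c@host, :a@host, :b@host], 2)
# => {:b@host, [:c@host]}
```

With `R = 1` the range `1..0//1` is empty, so partitions get no replicas.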

# `child_spec`

Returns a specification to start this module under a supervisor.

See `Supervisor`.

# `full_sync`

```elixir
@spec full_sync() :: :ok
```

Trigger a full partition sync from this node to all peers.

# `get_replicas`

```elixir
@spec get_replicas(non_neg_integer()) :: {node(), [node()]}
```

Return `{primary_node, [replica_nodes]}` for `partition_idx`.

Zero-cost read from `:persistent_term` — no GenServer hop.
Falls back to `{node(), []}` when the map has not been built yet.
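
## Example

Illustrative only — node names are hypothetical and the result depends on
the current partition map:

    SuperCache.Cluster.Manager.get_replicas(3)
    # => {:node1@host, [:node2@host]}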

# `live_nodes`

```elixir
@spec live_nodes() :: [node()]
```

Return the current list of live nodes (includes this node).
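
## Example

Node names below are illustrative:

    SuperCache.Cluster.Manager.live_nodes()
    # => [:node1@host, :node2@host]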

# `node_down`

```elixir
@spec node_down(node()) :: :ok
```

Notify the manager that `node` has disconnected.

# `node_up`

```elixir
@spec node_up(node()) :: :ok
```

Notify the manager that `node` has connected.

# `replication_mode`

```elixir
@spec replication_mode() :: :async | :sync | :strong
```

Return the cluster-wide replication mode configured via
`SuperCache.Cluster.Bootstrap.start!/1`.

| Value     | Guarantee              |
|-----------|------------------------|
| `:async`  | Eventual (default)     |
| `:sync`   | At-least-once delivery |
| `:strong` | Three-phase commit     |

Zero-cost read from `SuperCache.Config` — no GenServer hop.

## Example

    SuperCache.Cluster.Manager.replication_mode()
    # => :async

# `start_link`

Starts the manager and links it to the calling process.  Normally started
by the application's supervision tree rather than called directly.

---

*Consult [api-reference.md](api-reference.md) for complete listing*
