Cache.HashRing (elixir_cache v0.4.6)


Consistent hash ring strategy adapter using libring.

This strategy distributes cache keys across Erlang cluster nodes using a consistent hash ring. When a key hashes to the local node, the operation is executed directly. When it hashes to a remote node, the operation is forwarded via a configurable RPC module (defaults to :erpc).

The ring tracks Erlang node membership using HashRing.Managed with monitor_nodes: true, so nodes joining or leaving the cluster are reflected in the ring automatically.
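To illustrate, this is roughly how libring's managed ring behaves on its own; the ring name here is hypothetical, and how the adapter wires this up internally is an assumption:

```elixir
# Start a managed ring that watches :nodeup/:nodedown events.
# (:my_cache_ring is an illustrative name, not one the adapter uses.)
{:ok, _pid} = HashRing.Managed.new(:my_cache_ring, monitor_nodes: true)

# Any term hashes deterministically to one of the nodes currently in
# the ring; the same key always maps to the same node for a given ring.
owner = HashRing.Managed.key_to_node(:my_cache_ring, {:user, 42})
```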

Usage

defmodule MyApp.DistributedCache do
  use Cache,
    adapter: {Cache.HashRing, Cache.ETS},
    name: :distributed_cache,
    opts: [read_concurrency: true]
end

Options

  • :ring_opts (keyword/0) - Options passed to HashRing.Worker, such as node_blacklist and node_whitelist. The default value is [].

  • :node_weight (pos_integer/0) - Number of virtual nodes (shards) per node on the ring. Higher values give more even distribution. The default value is 128.

  • :rpc_module (atom/0) - Module used for remote calls. Must implement call/4 with the same signature as :erpc.call/4. The default value is :erpc.
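A sketch of how these options might be combined (whether they are passed inside opts alongside the adapter's own options is an assumption based on the Usage example above):

```elixir
defmodule MyApp.DistributedCache do
  use Cache,
    adapter: {Cache.HashRing, Cache.ETS},
    name: :distributed_cache,
    opts: [
      # Adapter option forwarded to Cache.ETS.
      read_concurrency: true,
      # Exclude remote-shell nodes from the ring.
      ring_opts: [node_blacklist: [~r/^remsh.*$/]],
      # More virtual nodes per physical node -> smoother distribution.
      node_weight: 256,
      # Any module implementing call/4 with :erpc.call/4's signature.
      rpc_module: :erpc
    ]
end
```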

How It Works

Each node in the cluster starts the same underlying adapter locally. When a cache operation is performed:

  1. The key is hashed to determine which node owns it via the consistent ring.
  2. If the owning node is Node.self(), the operation is executed locally.
  3. If the owning node is a remote node, the operation is forwarded via the configured rpc_module (default :erpc).

This ensures that each key is always stored on the same node (with the same ring configuration), enabling efficient distributed caching without a centralised store.
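The dispatch steps above can be sketched as follows; the function, ring name, and use of Cache.ETS directly are all illustrative, not the library's actual internals:

```elixir
# Route a cache operation to the node that owns the key.
defp dispatch(ring_name, key, fun, args) do
  case HashRing.Managed.key_to_node(ring_name, key) do
    owner when owner == node() ->
      # Key belongs to this node: run the adapter call locally.
      apply(Cache.ETS, fun, args)

    remote_node ->
      # Key belongs elsewhere: forward via the configured RPC module.
      :erpc.call(remote_node, Cache.ETS, fun, args)
  end
end
```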

Read-Repair

When the ring topology changes (node up/down), some keys will hash to a different node. Cache.HashRing.RingMonitor snapshots the ring before each change, keeping up to ring_history_size previous rings.

On a get miss, the previous rings are consulted in order (newest first). For each previous ring, if the key hashed to a different (live) node, a get is attempted there. On a hit:

  1. The value is returned immediately.
  2. It is written to the current owning node (migration).
  3. It is deleted from the old node asynchronously.

This lazily migrates hot keys after rebalancing without scanning the ring.
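The miss path above can be sketched roughly as below; every name here (the helper, the direct Cache.ETS calls, the use of Task.start for the async delete) is an assumption about shape, not the actual implementation:

```elixir
# On a local get miss, consult previous rings newest-first and
# migrate the value if an old owner still holds it.
defp read_repair(key, current_owner, previous_rings) do
  Enum.find_value(previous_rings, fn old_ring ->
    old_owner = HashRing.key_to_node(old_ring, key)

    with true <- old_owner != current_owner,
         true <- old_owner in Node.list(),
         {:ok, value} <- :erpc.call(old_owner, Cache.ETS, :get, [key]) do
      # Hit on the old owner: write to the current owner, then
      # clean up the stale copy asynchronously.
      :erpc.call(current_owner, Cache.ETS, :put, [key, value])
      Task.start(fn -> :erpc.call(old_owner, Cache.ETS, :delete, [key]) end)
      value
    else
      _ -> nil
    end
  end)
end
```

Enum.find_value/2 stops at the first ring that yields a hit, which matches the newest-first, first-hit-wins behaviour described above.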

Note: When sandbox?: true, the ring is bypassed and all operations are executed locally against the sandbox adapter.