Cache.HashRing (elixir_cache v0.4.6)
Consistent hash ring strategy adapter using libring.
This strategy distributes cache keys across Erlang cluster nodes using a
consistent hash ring. When a key is hashed to the local node, the operation
is executed directly. When it hashes to a remote node, the operation is
forwarded via a configurable RPC module (defaults to :erpc).
The ring automatically tracks Erlang node membership using
HashRing.Managed with monitor_nodes: true, so nodes joining or leaving
the cluster are reflected in the ring automatically.
Usage
defmodule MyApp.DistributedCache do
use Cache,
adapter: {Cache.HashRing, Cache.ETS},
name: :distributed_cache,
opts: [read_concurrency: true]
end

Options
- :ring_opts (keyword/0) - Options passed to HashRing.Worker, such as node_blacklist and node_whitelist. The default value is [].
- :node_weight (pos_integer/0) - Number of virtual nodes (shards) per node on the ring. Higher values give more even distribution. The default value is 128.
- :rpc_module (atom/0) - Module used for remote calls. Must implement call/4 with the same signature as :erpc.call/4. The default value is :erpc.
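A configuration sketch combining these options with the Usage example above. Note this is an illustration, not a verified configuration: the exact placement of the HashRing options (alongside the underlying adapter's opts versus elsewhere) may differ by version, so check your installed docs.

```elixir
defmodule MyApp.DistributedCache do
  use Cache,
    adapter: {Cache.HashRing, Cache.ETS},
    name: :distributed_cache,
    opts: [
      # option for the underlying Cache.ETS adapter
      read_concurrency: true,
      # keep a known-bad node off the ring (node name is hypothetical)
      ring_opts: [node_blacklist: [:"bad@host"]],
      # more virtual nodes -> smoother key distribution, larger ring
      node_weight: 256,
      # any module implementing call/4 with :erpc.call/4's signature
      rpc_module: :erpc
    ]
end
```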
How It Works
Each node in the cluster starts the same underlying adapter locally. When a cache operation is performed:
- The key is hashed via the consistent ring to determine which node owns it.
- If the owning node is Node.self(), the operation is executed locally.
- If the owning node is a remote node, the operation is forwarded via the configured rpc_module (default :erpc).
This ensures that each key is always stored on the same node (with the same ring configuration), enabling efficient distributed caching without a centralised store.
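The ownership step can be illustrated with a small self-contained consistent-hash sketch. It uses :erlang.phash2/2 in place of libring's internals, and RingSketch and its function names are illustrative, not the library's API:

```elixir
defmodule RingSketch do
  # Minimal consistent-hash ring: each node gets `weight` virtual
  # points on a 2^32 circle; a key belongs to the first point at or
  # after its own hash (wrapping around at the end).
  @hash_space 4_294_967_296

  def build(nodes, weight \\ 128) do
    for node <- nodes, vnode <- 1..weight do
      {:erlang.phash2({node, vnode}, @hash_space), node}
    end
    |> Enum.sort()
  end

  def key_to_node(ring, key) do
    h = :erlang.phash2(key, @hash_space)

    case Enum.find(ring, fn {point, _node} -> point >= h end) do
      {_point, node} -> node
      # no point at or after the hash: wrap to the circle's start
      nil -> ring |> hd() |> elem(1)
    end
  end
end
```

Because the mapping depends only on the ring contents and the key, every node that builds the same ring routes a given key to the same owner, with no coordination.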
Read-Repair
When the ring topology changes (node up/down), some keys will hash to a
different node. Cache.HashRing.RingMonitor snapshots the ring before each
change, keeping up to ring_history_size previous rings.
On a get miss, the previous rings are consulted in order (newest first).
For each previous ring, if the key hashed to a different (live) node, a
get is attempted there. On a hit:
- The value is returned immediately.
- It is written to the current owning node (migration).
- It is deleted from the old node asynchronously.
This lazily migrates hot keys after rebalancing without scanning the ring.
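The miss path above can be sketched with plain data structures: rings modeled as key-to-node functions and a map standing in for the per-node stores. Every name here is illustrative, not the library's internals, and the asynchronous delete is shown as a plain map delete:

```elixir
defmodule ReadRepairSketch do
  # `current_ring` and each previous ring are funs from key -> node;
  # `store` maps {node, key} => value, standing in for per-node caches.
  def get(key, current_ring, previous_rings, store) do
    owner = current_ring.(key)

    case Map.fetch(store, {owner, key}) do
      {:ok, value} -> {:hit, value, store}
      :error -> repair(key, owner, previous_rings, store)
    end
  end

  # Consult previous rings newest-first; on a hit elsewhere, migrate
  # the value to the current owner and drop the stale copy.
  defp repair(key, owner, previous_rings, store) do
    previous_rings
    |> Enum.map(& &1.(key))
    |> Enum.reject(&(&1 == owner))
    |> Enum.find_value({:miss, store}, fn old_owner ->
      case Map.fetch(store, {old_owner, key}) do
        {:ok, value} ->
          new_store =
            store
            |> Map.put({owner, key}, value)
            |> Map.delete({old_owner, key})

          {:hit, value, new_store}

        :error ->
          nil
      end
    end)
  end
end
```

Under this model, a key written before a topology change is found on its old owner, returned, and rehomed to the new owner in one lookup.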
Note: When sandbox?: true, the ring is bypassed and all operations are executed locally against the sandbox adapter.