Default transaction implementation for distributed cache adapters.

This module provides a transaction implementation based on Erlang's `:global`
module for distributed locking across multiple nodes. It is designed for
distributed cache topologies such as partitioned, multilevel, and replicated
caches, where transactions need to coordinate across a cluster of nodes.

Distributed adapters in the `nebulex_distributed` package use this module
via `use Nebulex.Distributed.Transaction` to inherit the `:global`-based
transaction implementation.
## How It Works

The transaction mechanism uses `:global.set_lock/3` to acquire distributed
locks across the specified nodes:

- Lock acquisition: Attempts to acquire locks for the specified keys (or a
  global lock if no keys are specified) across all nodes in the cluster.
- All-or-nothing: If any lock cannot be acquired, all partial locks are
  released and the transaction is aborted.
- Execution: Once all locks are acquired, the transaction function executes.
- Lock release: Locks are released in an `after` block to ensure cleanup even
  if the transaction fails.
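The acquire/execute/release cycle above can be sketched with plain `:global`
calls. This is a simplified illustration, not the adapter's actual code: the
module name `LockSketch` is hypothetical, and the real implementation derives
lock IDs per key and handles retries and multi-key acquisition.

```elixir
defmodule LockSketch do
  # Illustrative only: run `fun` while holding a distributed lock on `key`
  # across `nodes`, releasing the lock even if `fun` raises.
  def locked(key, nodes, fun) do
    # The lock ID pairs the resource with the requesting process.
    lock_id = {key, self()}

    # :global.set_lock/3 retries until the lock is held on all given nodes
    # (here with infinite retries, mirroring the default :retries value).
    true = :global.set_lock(lock_id, nodes, :infinity)

    try do
      fun.()
    after
      # Always release the lock, mirroring the `after` block described above.
      :global.del_lock(lock_id, nodes)
    end
  end
end

LockSketch.locked(:counter, [node()], fn -> :critical_section end)
```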
## Lock Scope

### Global Lock (Not Recommended)

When no keys are specified, a global lock is used, serializing all
transactions across the cluster:
```elixir
MyCache.transaction(fn ->
  # Critical section - entire cache is locked
end)
```

Warning: This approach severely impacts performance, as all transactions are
serialized regardless of which keys they access.
### Fine-Grained Locking (Recommended)

Specify the keys involved to enable concurrent transactions on different keys:
```elixir
MyCache.transaction(fn ->
  # Only :counter is locked
  counter = MyCache.get(:counter)
  MyCache.put(:counter, counter + 1)
end, keys: [:counter])
```

Multiple processes can run transactions concurrently as long as they don't
access the same keys.
## Nested Transactions

Nested transactions are supported. If a transaction is already in progress
(detected via the process dictionary), the nested transaction executes within
the outer transaction's locks without attempting to acquire them again:
```elixir
MyCache.transaction(fn ->
  # Outer transaction acquires locks
  MyCache.transaction(fn ->
    # Nested transaction - reuses outer locks
  end)
end)
```

## Node Coordination
By default, locks are acquired only on the local node (`[node()]`). For true
distributed transactions, specify all nodes in the cluster:
```elixir
MyCache.transaction(
  fn ->
    # Critical section
  end,
  keys: [:key1],
  nodes: [node() | Node.list()]
)
```

This ensures the transaction is coordinated across all nodes in the cluster.
> 💡 **Important Note**
>
> When using any distributed adapter (`Nebulex.Adapters.Partitioned`,
> `Nebulex.Adapters.Multilevel`, etc.), you do not need to specify the
> `:nodes` option. The adapters automatically determine and set the nodes
> based on the cluster topology.
## Performance Considerations

- Fine-grained locking: Always specify keys to maximize concurrency.
- Lock contention: Multiple transactions on the same keys will serialize.
- Network overhead: Distributed lock coordination adds latency.
- Retry mechanism: Failed lock acquisitions retry indefinitely by default
  (configurable via the `:retries` option).
## Use Cases

This implementation is suitable for:

- Distributed caches running across multiple nodes.
- Strong consistency requirements across the cluster.
- Atomic operations on cache entries that need cluster-wide coordination.
- Partitioned caches where transactions may span multiple partitions.

For single-node scenarios, consider using a local locking mechanism like
`Nebulex.Locks` (used by `nebulex_local`) for better performance.
## Options

- `:keys` (list of `term/0`) - The list of keys the transaction will lock.
  Since the lock ID is generated based on the key, the transaction uses a
  fixed lock ID if this option is not provided or is an empty list. All
  subsequent transactions without this option (or with it set to an empty
  list) are then serialized, and performance is significantly affected. For
  that reason, it is recommended to pass the list of keys involved in the
  transaction. The default value is `[]`.

- `:nodes` (list of `atom/0`) - The list of nodes on which to set the lock.
  The default value is `[node()]`.

  Note: When using `Nebulex.Adapters.Partitioned` or
  `Nebulex.Adapters.Multilevel`, this option is automatically set by the
  adapter based on the cluster topology. You do not need to specify it
  manually, and if you do, it will be overridden by the adapter.

- `:retries` (`:infinity` | `non_neg_integer/0`) - If the key has already
  been locked by another process and retries are not equal to 0, the process
  sleeps for a while and tries to execute the action later. When `:retries`
  attempts have been made, an exception is raised. If `:retries` is
  `:infinity` (the default), the function will eventually be executed (unless
  the lock is never released). The default value is `:infinity`.
## Examples

### Basic Transaction with Fine-Grained Locking

```elixir
# Increment a counter atomically
MyCache.transaction(fn ->
  counter = MyCache.get!(:counter, default: 0)
  MyCache.put!(:counter, counter + 1)
end, keys: [:counter])
```

### Multi-Key Transaction
```elixir
# Transfer balance between two accounts
MyCache.transaction(fn ->
  alice = MyCache.get!(:alice)
  bob = MyCache.get!(:bob)
  MyCache.put!(:alice, %{alice | balance: alice.balance - 100})
  MyCache.put!(:bob, %{bob | balance: bob.balance + 100})
end, keys: [:alice, :bob])
```

### Distributed Transaction Across Cluster
```elixir
# With Partitioned or Multilevel adapters (nodes automatically determined)
MyCache.transaction(fn ->
  # Critical section coordinated across cluster
  # Nodes are automatically discovered via :pg
  value = MyCache.get!(:shared_resource)
  MyCache.put!(:shared_resource, update(value))
end, keys: [:shared_resource])

# With custom adapters or direct module usage (manual node specification)
nodes = [node() | Node.list()]
MyCache.transaction(fn ->
  # Critical section coordinated across specified nodes
  value = MyCache.get!(:shared_resource)
  MyCache.put!(:shared_resource, update(value))
end, keys: [:shared_resource], nodes: nodes)
```

### Transaction with Custom Retry Policy
```elixir
# Limit retry attempts to avoid indefinite blocking
MyCache.transaction(
  fn ->
    # Critical section
  end,
  keys: [:key1],
  retries: 5
)
|> case do
  {:ok, result} ->
    # Transaction succeeded
    ...

  {:error, %Nebulex.Error{reason: :transaction_aborted}} ->
    # Failed to acquire locks after retries
    ...
end
```