Nebulex.Adapters.Partitioned (Nebulex v2.5.2)

Built-in adapter for partitioned cache topology.

Overall features

  • Partitioned cache topology (Sharding Distribution Model).
  • Configurable primary storage adapter.
  • Configurable Keyslot to distribute the keys across the cluster members.
  • Support for transactions via Erlang global name registration facility.
  • Stats support relies on the primary storage adapter.

Partitioned Cache Topology

There are several key points to consider about a partitioned cache:

  • Partitioned: The data in a distributed cache is spread out over all the servers in such a way that no two servers are responsible for the same piece of cached data. This means that the size of the cache and the processing power associated with the management of the cache can grow linearly with the size of the cluster. Also, it means that operations against data in the cache can be accomplished with a "single hop," in other words, involving at most one other server.

  • Load-Balanced: Since the data is spread out evenly over the servers, the responsibility for managing the data is automatically load-balanced across the cluster.

  • Ownership: Exactly one node in the cluster is responsible for each piece of data in the cache.

  • Point-To-Point: The communication for the partitioned cache is all point-to-point, enabling linear scalability.

  • Location Transparency: Although the data is spread out across cluster nodes, the exact same API is used to access the data, and the same behavior is provided by each of the API methods. This is called location transparency, which means that the developer does not have to code based on the topology of the cache, since the API and its behavior will be the same with a local cache, a replicated cache, or a distributed cache.

  • Failover: Failover of a distributed cache involves promoting backup data to primary storage. When a cluster node fails, the remaining cluster nodes determine what data each holds in backup that the failed node was primarily responsible for, and that data becomes the responsibility of whichever node was its backup. However, this adapter does not provide a fault-tolerance implementation: each piece of data is kept on a single node/machine (via sharding), so if a node fails, the data held by that node is no longer available to the rest of the cluster members.

Based on "Distributed Caching Essential Lessons" by Cameron Purdy and Coherence Partitioned Cache Service.

Additional implementation notes

:pg2 or :pg (>= OTP 23) is used under the hood by the adapter to manage the cluster nodes. When the partitioned cache is started on a node, it creates a group and joins it (the cache supervisor PID is joined to the group). Then, when a function is invoked, the adapter picks a node from the group members and executes the function on that specific node. Likewise, when a partitioned cache supervisor dies (the cache is stopped or killed for some reason), its PID is automatically removed from the pg group. Because nodes can join and leave the group at any time, it's recommended to use consistent hashing for distributing the keys across the cluster nodes, since it minimizes the number of keys that must be remapped when the membership changes.

NOTE: pg2 will be replaced by pg in the future, since the pg2 module is deprecated as of OTP 23 and scheduled for removal in OTP 24.
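
Keep in mind that the pg/pg2 group only spans nodes that are already connected via distributed Erlang; the adapter does not connect nodes for you. A minimal sketch (node names are illustrative; in practice you may automate this with a library such as libcluster):

# Connect the nodes manually (or let a clustering library do it)
Node.connect(:"node2@127.0.0.1")

# Caches started under the same cache module on the connected nodes join the
# same pg group, so they show up as cluster members:
MyApp.PartitionedCache.nodes()
#=> [:"node1@127.0.0.1", :"node2@127.0.0.1"]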

This adapter depends on a local cache adapter (the primary storage) and adds a thin layer on top of it in order to distribute requests across a group of nodes, where the local cache is assumed to be running already. However, you don't need to define any additional cache module for the primary storage; instead, the adapter initializes it automatically (it adds the primary storage as part of the supervision tree) based on the options given within the primary_storage_adapter: argument.

Usage

When used, the Cache expects the :otp_app and :adapter as options. The :otp_app should point to an OTP application that has the cache configuration. For example:

defmodule MyApp.PartitionedCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Partitioned
end

Optionally, you can configure the desired primary storage adapter with the option :primary_storage_adapter; defaults to Nebulex.Adapters.Local.

defmodule MyApp.PartitionedCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Partitioned,
    primary_storage_adapter: Nebulex.Adapters.Local
end

Also, you can provide a custom keyslot function:

defmodule MyApp.PartitionedCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Partitioned,
    primary_storage_adapter: Nebulex.Adapters.Local

  @behaviour Nebulex.Adapter.Keyslot

  @impl true
  def hash_slot(key, range) do
    key
    |> :erlang.phash2()
    |> :jchash.compute(range)
  end
end
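
Note that :jchash in the example above is an external package, not part of Nebulex itself; if you follow this approach, remember to add it to your deps in mix.exs (the version constraint below is illustrative):

defp deps do
  [
    {:nebulex, "~> 2.5"},
    # Jump consistent hash used by the example hash_slot/2 above
    {:jchash, "~> 0.1"}
  ]
end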

The configuration for the cache must be in your application environment, usually defined in your config/config.exs:

config :my_app, MyApp.PartitionedCache,
  keyslot: MyApp.PartitionedCache,
  primary: [
    gc_interval: 3_600_000,
    backend: :shards
  ]

If your application was generated with a supervisor (by passing --sup to mix new) you will have a lib/my_app/application.ex file containing the application start callback that defines and starts your supervisor. You just need to edit the start/2 function to start the cache under your application's supervisor:

def start(_type, _args) do
  children = [
    {MyApp.PartitionedCache, []},
    ...
  ]

  opts = [strategy: :one_for_one, name: MyApp.Supervisor]
  Supervisor.start_link(children, opts)
end

See Nebulex.Cache for more information.
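
Once the cache is started, it is used through the regular Nebulex.Cache API. A quick, illustrative sketch (keys and values are arbitrary):

# The key is hashed with the configured keyslot and stored on the owning node
MyApp.PartitionedCache.put("foo", "bar", ttl: :timer.hours(1))

# Reads are routed to the same node, regardless of where the caller runs
MyApp.PartitionedCache.get("foo")
#=> "bar"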

Options

This adapter supports the following options, all of which can be given via the cache configuration (a combined example is shown after the list):

  • :primary - The options that will be passed to the adapter associated with the local primary storage. These options will depend on the local adapter to use.

  • :keyslot - Defines the module implementing Nebulex.Adapter.Keyslot behaviour.

  • :task_supervisor_opts - Start-time options passed to Task.Supervisor.start_link/1 when the adapter is initialized.

  • :join_timeout - Interval time in milliseconds for joining the running partitioned cache to the cluster. This is to ensure it is always joined. Defaults to :timer.seconds(180).
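
For reference, here is the combined configuration sketch mentioned above (the values are illustrative, and the :primary options shown assume the default Nebulex.Adapters.Local storage):

config :my_app, MyApp.PartitionedCache,
  # Module implementing the Nebulex.Adapter.Keyslot behaviour
  keyslot: MyApp.PartitionedCache,
  # Passed to Task.Supervisor.start_link/1
  task_supervisor_opts: [max_restarts: 5, max_seconds: 10],
  # Re-join interval for the cluster
  join_timeout: :timer.seconds(180),
  # Options for the primary storage adapter
  primary: [
    gc_interval: :timer.hours(1),
    backend: :shards
  ]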

Shared options

Almost all of the cache functions outlined in Nebulex.Cache module accept the following options:

  • :timeout - The time-out value in milliseconds for the command that will be executed. For executing a command on a remote node, this adapter uses Task.await/2 internally for receiving the result, so this option tells how much time the adapter should wait for it; if the timeout is exceeded, the current process will exit. For commands that run on multiple nodes and are reduced into a single result, if the timeout is exceeded the task is shut down but the current process doesn't exit; the result associated with that task is simply skipped in the reduce phase. See the example below.
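
For example (the cache name, keys, and timeout values are illustrative):

# Wait up to 10 seconds for the node that owns "mykey" to reply
MyApp.PartitionedCache.get("mykey", timeout: 10_000)

# Commands that span several nodes accept the same option; results from
# nodes that exceed the timeout are skipped in the reduce phase
MyApp.PartitionedCache.put_all(%{"k1" => 1, "k2" => 2}, timeout: 5_000)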

Telemetry events

This adapter emits all recommended Telemetry events, which are documented in the Nebulex.Cache module (see the "Adapter-specific events" section).

Since the partitioned adapter depends on the configured primary storage adapter (a local cache adapter), that adapter may also emit Telemetry events. Therefore, there will be events emitted by the partitioned adapter as well as by the primary storage adapter. For example, for the cache MyApp.PartitionedCache defined above, these would be the emitted events:

  • [:my_app, :partitioned_cache, :command, :start]
  • [:my_app, :partitioned_cache, :primary, :command, :start]
  • [:my_app, :partitioned_cache, :command, :stop]
  • [:my_app, :partitioned_cache, :primary, :command, :stop]
  • [:my_app, :partitioned_cache, :command, :exception]
  • [:my_app, :partitioned_cache, :primary, :command, :exception]

As you may notice, the default telemetry prefix for the partitioned cache is [:my_app, :partitioned_cache], and the prefix for its primary storage is [:my_app, :partitioned_cache, :primary].

See also the Telemetry guide for more information and examples.

Adapter-specific telemetry events

This adapter exposes the following Telemetry events (an example handler is shown after the list):

  • telemetry_prefix ++ [:bootstrap, :started] - Dispatched by the adapter when the bootstrap process is started.

    • Measurements: %{system_time: non_neg_integer}

    • Metadata:

      %{
        adapter_meta: %{optional(atom) => term},
        cluster_nodes: [node]
      }
  • telemetry_prefix ++ [:bootstrap, :stopped] - Dispatched by the adapter when the bootstrap process is stopped.

    • Measurements: %{system_time: non_neg_integer}

    • Metadata:

      %{
        adapter_meta: %{optional(atom) => term},
        cluster_nodes: [node],
        reason: term
      }
  • telemetry_prefix ++ [:bootstrap, :exit] - Dispatched by the adapter when the bootstrap has received an exit signal.

    • Measurements: %{system_time: non_neg_integer}

    • Metadata:

      %{
        adapter_meta: %{optional(atom) => term},
        cluster_nodes: [node],
        reason: term
      }
  • telemetry_prefix ++ [:bootstrap, :joined] - Dispatched by the adapter when the bootstrap has joined the cache to the cluster.

    • Measurements: %{system_time: non_neg_integer}

    • Metadata:

      %{
        adapter_meta: %{optional(atom) => term},
        cluster_nodes: [node]
      }
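
As mentioned above, a handler can be attached to these events. A minimal sketch, assuming the cache MyApp.PartitionedCache from the previous examples (and therefore the default prefix [:my_app, :partitioned_cache]):

:telemetry.attach_many(
  "partitioned-cache-bootstrap-handler",
  [
    [:my_app, :partitioned_cache, :bootstrap, :started],
    [:my_app, :partitioned_cache, :bootstrap, :stopped],
    [:my_app, :partitioned_cache, :bootstrap, :exit],
    [:my_app, :partitioned_cache, :bootstrap, :joined]
  ],
  fn event, _measurements, metadata, _config ->
    # Log which bootstrap event fired and the cluster nodes seen at that point
    IO.inspect({event, metadata.cluster_nodes}, label: "partitioned bootstrap")
  end,
  nil
)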

Stats

This adapter depends on the primary storage adapter for the stats support. Therefore, it is important to ensure the underlying primary storage adapter does support stats; otherwise, you may get unexpected errors.
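
A minimal sketch of enabling and reading stats, assuming the default Nebulex.Adapters.Local primary storage (which supports them):

config :my_app, MyApp.PartitionedCache,
  stats: true,
  primary: [
    gc_interval: :timer.hours(1),
    backend: :shards
  ]

# At runtime, stats are collected from the primary storage adapter
MyApp.PartitionedCache.stats()
#=> %Nebulex.Stats{measurements: %{...}, metadata: %{...}}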

Extended API

This adapter provides some additional convenience functions to the Nebulex.Cache API.

Retrieving the primary storage or local cache module:

MyCache.__primary__()

Retrieving the cluster nodes associated with the given cache name:

MyCache.nodes()

Get a cluster node based on the given key:

MyCache.get_node("mykey")

Joining the cache to the cluster:

MyCache.join_cluster()

Leaving the cluster (removes the cache from the cluster):

MyCache.leave_cluster()

Caveats of partitioned adapter

Nebulex.Cache.get_and_update/3 and Nebulex.Cache.update/4 both take an anonymous function as a parameter. An anonymous function is compiled into the module where it is created, which means it doesn't necessarily exist on remote nodes. To ensure these functions work as expected, you must provide functions from modules that exist on all nodes of the group.
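
A minimal sketch of the safe approach (module and key names are illustrative): use captures of functions defined in modules that are loaded on every node, rather than inline anonymous functions.

defmodule MyApp.Counters do
  # Compiled into a named module, so it exists on every node that loads MyApp
  def increment(nil), do: 1
  def increment(counter), do: counter + 1
end

# The capture can be executed on whichever node owns "visits"
MyApp.PartitionedCache.update("visits", 1, &MyApp.Counters.increment/1)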

Summary

Functions

Helper to perform stream/3 locally.

Helper function to use dynamic cache for internal primary cache storage when needed.

Functions

do_put_all(action, adapter_meta, entries, opts)

eval_stream(meta, query, opts)

Helper to perform stream/3 locally.

with_dynamic_cache(map, action, args)

Helper function to use dynamic cache for internal primary cache storage when needed.