API Reference libcluster v3.2.2

Modules

Cluster.Strategy

This module defines the behaviour for implementing clustering strategies.

Cluster.Strategy.DNSPoll

Assumes you have nodes that respond to the specified DNS query (A record) and that follow the node name pattern <name>@<ip-address>. If your setup matches these assumptions, this strategy will periodically poll DNS and connect all nodes it finds.
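A configuration sketch for this strategy. The topology name and option values below are illustrative, not canonical:

```elixir
# In config/config.exs — topology name and values are illustrative.
config :libcluster,
  topologies: [
    dns_poll_example: [
      strategy: Cluster.Strategy.DNSPoll,
      config: [
        polling_interval: 5_000,
        # A record to poll; each returned IP address is assumed to host a
        # node named <node_basename>@<ip-address>.
        query: "myapp.internal.example.com",
        node_basename: "myapp"
      ]
    ]
  ]
```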

Cluster.Strategy.Epmd

This clustering strategy relies on Erlang's built-in distribution protocol, connecting to a statically configured list of hosts.
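A configuration sketch, assuming a static list of node names (the host names here are illustrative):

```elixir
# In config/config.exs — topology name and hosts are illustrative.
config :libcluster,
  topologies: [
    epmd_example: [
      strategy: Cluster.Strategy.Epmd,
      config: [
        # Static list of nodes to connect to on startup.
        hosts: [:"a@host1.example.com", :"b@host2.example.com"]
      ]
    ]
  ]
```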

Cluster.Strategy.ErlangHosts

This clustering strategy relies on Erlang's built-in distribution protocol by using a .hosts.erlang file (as used by the :net_adm module).
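A minimal configuration sketch; the .hosts.erlang format shown in the comment follows the :net_adm convention of one quoted hostname per line, each terminated by a period:

```elixir
# A .hosts.erlang file (in the working directory or home directory) lists
# one quoted hostname per line, each terminated by a period:
#
#   'host1.example.com'.
#   'host2.example.com'.
#
# In config/config.exs — topology name is illustrative:
config :libcluster,
  topologies: [
    erlang_hosts_example: [
      strategy: Cluster.Strategy.ErlangHosts
    ]
  ]
```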

Cluster.Strategy.Gossip

This clustering strategy uses multicast UDP to gossip node names to the other nodes on the network. Each node also listens for these packets, and a connection is established between two nodes when they can reach each other on the network and share the same magic cookie. In this way, a cluster of nodes may be formed dynamically.
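A configuration sketch; the values shown are commonly used defaults for this strategy, and the secret is illustrative:

```elixir
# In config/config.exs — topology name and secret are illustrative.
config :libcluster,
  topologies: [
    gossip_example: [
      strategy: Cluster.Strategy.Gossip,
      config: [
        port: 45_892,
        if_addr: "0.0.0.0",
        multicast_addr: "230.1.1.251",
        multicast_ttl: 1,
        # Optional shared secret used to encrypt gossip payloads.
        secret: "somepassword"
      ]
    ]
  ]
```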

Cluster.Strategy.Kubernetes

This clustering strategy works by loading all endpoints in the current Kubernetes namespace with the configured label. It will fetch the addresses of all endpoints with that label and attempt to connect. It will continually monitor and update its connections every 5s. Alternatively, the IPs can be looked up from the pods directly by setting kubernetes_ip_lookup_mode to :pods.
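A configuration sketch; the selector and node basename are illustrative, and the commented-out option shows the pod-based IP lookup described above:

```elixir
# In config/config.exs — topology name, selector, and basename are illustrative.
config :libcluster,
  topologies: [
    k8s_example: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        mode: :ip,
        kubernetes_node_basename: "myapp",
        kubernetes_selector: "app=myapp",
        # Look IPs up from the pods directly instead of from endpoints:
        # kubernetes_ip_lookup_mode: :pods,
        polling_interval: 5_000
      ]
    ]
  ]
```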

Cluster.Strategy.Kubernetes.DNS

This clustering strategy works by loading all of your Erlang nodes (within Pods) in the current Kubernetes namespace. It will fetch the addresses of all pods under a shared headless service and attempt to connect. It will continually monitor and update its connections every 5s.
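A configuration sketch; the service and application names are illustrative:

```elixir
# In config/config.exs — topology name, service, and application name are illustrative.
config :libcluster,
  topologies: [
    k8s_dns_example: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        # Name of the headless service the pods share.
        service: "myapp-headless",
        # Node names are assumed to look like <application_name>@<pod-ip>.
        application_name: "myapp",
        polling_interval: 5_000
      ]
    ]
  ]
```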

Cluster.Strategy.Kubernetes.DNSSRV

This clustering strategy works by issuing an SRV query for the Kubernetes headless service under which the stateful set containing your nodes is running.
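A configuration sketch; the service, application, and namespace names are illustrative:

```elixir
# In config/config.exs — topology name and values are illustrative.
config :libcluster,
  topologies: [
    k8s_dns_srv_example: [
      strategy: Cluster.Strategy.Kubernetes.DNSSRV,
      config: [
        # Headless service backing the stateful set.
        service: "myapp-headless",
        application_name: "myapp",
        namespace: "default",
        polling_interval: 5_000
      ]
    ]
  ]
```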

Cluster.Strategy.Rancher

This clustering strategy is specific to the Rancher container platform. It works by querying the platform's metadata API for containers that belong to the same service as the node and attempting to connect to them. (See: http://rancher.com/docs/rancher/latest/en/rancher-services/metadata-service/)
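A configuration sketch; the node basename is illustrative:

```elixir
# In config/config.exs — topology name and basename are illustrative.
config :libcluster,
  topologies: [
    rancher_example: [
      strategy: Cluster.Strategy.Rancher,
      config: [
        # Nodes are assumed to be named <node_basename>@<container-ip>.
        node_basename: "myapp",
        polling_interval: 5_000
      ]
    ]
  ]
```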

Cluster.Supervisor

This module supervises the configured topologies and is designed to be started within your own supervision tree, as shown below.
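A supervision-tree sketch; the application and supervisor module names are illustrative, while the {Cluster.Supervisor, [topologies, opts]} child spec is the documented way to start this module:

```elixir
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    # Read the topologies configured under :libcluster.
    topologies = Application.get_env(:libcluster, :topologies) || []

    children = [
      # Start Cluster.Supervisor with the topologies and a registered name.
      {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]}
      # ...other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```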