Cluster.Strategy.Kubernetes.DNSSRV (libcluster v3.4.1)

This clustering strategy works by issuing an SRV query for the headless service behind the StatefulSet that runs your nodes.

This strategy requires deploying pods as a StatefulSet that is exposed by a headless service. If you want to avoid that, use Cluster.Strategy.Kubernetes.DNS instead.

It assumes that all Erlang nodes use longnames of the form <basename>@<domain>:

  • all nodes use the same <basename>
  • each node has a unique <domain>

In <basename>@<domain>:

  • <basename> is the value configured by the :application_name option.
  • <domain> is controlled by the following options:
    • :service
    • :namespace
    • :resolver
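
For example, with application_name: "myapp", service: "myapp-headless", and namespace: "default", the resulting node names look like this (pod names are illustrative):

myapp@myapp-0.myapp-headless.default.svc.cluster.local
myapp@myapp-1.myapp-headless.default.svc.cluster.local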

Getting <basename>

As noted above, the basename is configured by the :application_name option.

One thing to keep in mind: when building an OTP release, make sure that the name of the OTP release matches the name configured by :application_name.
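
For instance, with mix releases the release name is declared in mix.exs. A minimal sketch, assuming an application called myapp:

# mix.exs (a sketch; names are illustrative)
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :myapp,
      version: "1.0.0",
      releases: [
        # The release name (:myapp) must match :application_name ("myapp").
        myapp: []
      ]
    ]
  end
end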

Getting <domain>

The <domain> is the pod's DNS name within the headless service: each pod of the StatefulSet gets a DNS record of the form <pod-name>.<service>.<namespace>.svc.<cluster-domain>. For more information, see the Kubernetes stateful application documentation.
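
Accordingly, the strategy queries the SRV records of the service's DNS name, which is composed from the config options roughly like this (a sketch, not the library's exact code):

# Sketch: how the SRV query name relates to the config options
# (illustrative; the real module composes this internally).
service = "myapp-headless"
namespace = "default"
query = "#{service}.#{namespace}.svc.cluster.local"
# => "myapp-headless.default.svc.cluster.local"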

Setup

Getting this strategy to work requires:

  1. deploying pods as a StatefulSet (otherwise, hostnames won't be set for the pods)
  2. exposing the above StatefulSet via a headless service (otherwise, the SRV query won't work as expected)
  3. setting the name of each Erlang node according to its pod's hostname

First, deploy the pods as a StatefulSet exposed by a headless service. Here is an example of a corresponding Kubernetes definition:

apiVersion: v1
kind: Service
metadata:
  name: "myapp-headless"
  labels:
    app: myapp
spec:
  ports:
  - port: 4000
    name: web
  clusterIP: None
  selector:
    app: myapp
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: "myapp-headless"
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 4000
          name: http
          protocol: TCP
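
Once applied, you can check that each pod gets a stable, fully qualified hostname (hypothetical commands, assuming kubectl access to the cluster):

$ kubectl apply -f myapp.yaml
$ kubectl exec myapp-0 -- hostname -f
myapp-0.myapp-headless.default.svc.cluster.local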

Then, set the name of the Erlang node from the pod's hostname. If you use mix releases, you can configure the required options in rel/env.sh.eex:

# rel/env.sh.eex
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE=<%= @release.name %>@$(hostname -f)
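
Inside pod myapp-0, and assuming the release is named myapp, this evaluates to:

RELEASE_DISTRIBUTION=name
RELEASE_NODE=myapp@myapp-0.myapp-headless.default.svc.cluster.local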

Polling Interval

The default interval for syncing topologies is 5_000 milliseconds (5 seconds). You can configure it with the :polling_interval option.

An example configuration

config :libcluster,
  topologies: [
    erlang_nodes_in_k8s: [
      strategy: Elixir.Cluster.Strategy.Kubernetes.DNSSRV,
      config: [
        service: "myapp-headless",
        application_name: "myapp",
        namespace: "default",
        polling_interval: 10_000
      ]
    ]
  ]
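
To activate the topology, start Cluster.Supervisor with it under your application's supervision tree, following the usual libcluster pattern (module names are illustrative):

defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    topologies = Application.get_env(:libcluster, :topologies, [])

    children = [
      # Starts libcluster with the topologies configured above.
      {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]}
      # ... other children ...
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end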

An example of how this strategy extracts topology information from DNS

$ hostname -f
myapp-1.myapp-headless.default.svc.cluster.local

# An SRV query for a headless service returns multiple entries
$ dig SRV myapp-headless.default.svc.cluster.local

; <<>> DiG 9.14.3 <<>> SRV myapp-headless.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7169
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2

;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN SRV

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 30 IN SRV 10 50 0 myapp-0.myapp-headless.default.svc.cluster.local.
myapp-headless.default.svc.cluster.local. 30 IN SRV 10 50 0 myapp-1.myapp-headless.default.svc.cluster.local.

;; ADDITIONAL SECTION:
myapp-0.myapp-headless.default.svc.cluster.local. 30 IN A 10.1.0.95
myapp-1.myapp-headless.default.svc.cluster.local. 30 IN A 10.1.0.96

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Jul 03 11:55:27 UTC 2019
;; MSG SIZE  rcvd: 167
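
From those SRV answers, the strategy derives the node list by prefixing each target hostname with the configured :application_name. An illustrative sketch of that mapping:

# Illustrative only: how SRV targets map to Erlang node names.
targets = [
  "myapp-0.myapp-headless.default.svc.cluster.local",
  "myapp-1.myapp-headless.default.svc.cluster.local"
]

app_name = "myapp"

nodes = Enum.map(targets, fn target -> :"#{app_name}@#{target}" end)
# => [:"myapp@myapp-0.myapp-headless.default.svc.cluster.local",
#     :"myapp@myapp-1.myapp-headless.default.svc.cluster.local"]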

Functions

child_spec(arg)

Returns a specification to start this module under a supervisor.

See Supervisor.