Demo time: Multi-cluster Kubernetes with Linkerd 2.8

Charles Pretzer

Jul 20, 2020

At the Camp Cloud Native virtual event on June 24, I had the opportunity to talk about how Linkerd 2.8 implements multi-cluster support simply and securely for anyone who needs services to operate reliably across clusters, availability zones, or regions.

Running multiple production Kubernetes clusters is an increasingly common practice, driven by requirements around high availability, latency reduction, business and regulatory compliance, and multi-tenancy. One of the biggest challenges of multi-cluster is the communication between clusters: especially for hybrid or multi-cloud deployments, where the network is necessarily heterogeneous, how does traffic flow between clusters in a way that is safe and simple?

Enter Linkerd. The new multi-cluster feature in Linkerd 2.8 is designed to be simple and elegant, and allow Linkerd to connect Kubernetes services across cluster boundaries in a way that is secure, fully transparent to the application, and independent of network topology. This multi-cluster capability is designed to provide:

  1. A unified trust domain. The identities of source and destination workloads are validated at every step, both within and across cluster boundaries.
  2. Separate failure domains. If one cluster fails, the remaining clusters continue to function.
  3. Support for heterogeneous networks. Since clusters can span clouds, VPCs, on-premises data centers, and combinations thereof, Linkerd does not introduce any L3/L4 requirements other than gateway connectivity.
  4. A unified model alongside in-cluster communication. The same observability, reliability, and security features that Linkerd provides for in-cluster communication extend to cross-cluster communication.
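
That unified trust domain works because every linked cluster shares the same trust anchor for Linkerd's mTLS identities. Here's a rough sketch of what that setup can look like; the step CLI, the file names, and the west context are illustrative choices, not part of the talk:

```bash
# Create a shared trust anchor (assumes the smallstep "step" CLI is installed).
step certificate create root.linkerd.cluster.local root.crt root.key \
  --profile root-ca --no-password --insecure

# Create an issuer certificate signed by that root.
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key

# Install Linkerd into each cluster with the same trust anchor
# (repeat with the other cluster's context).
linkerd install \
  --identity-trust-anchors-file root.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key \
  | kubectl --context=west apply -f -
```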

There are two components that we add on top of the Linkerd control plane to enable cross-cluster communication and service routing. The first is the Linkerd gateway, a load balancer resource that exposes an endpoint for inter-cluster communication. When we deploy the gateway, we leverage the mTLS provided by Linkerd to assign certificates to it, so that only mutually authenticated traffic between gateways is allowed over the public internet. The second component is the service mirror, which watches the configured gateways and Kubernetes events to create mirrored services.
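
Here's a rough sketch of installing those components, assuming two kubectl contexts named west and east for the two clusters:

```bash
# Add the gateway and service mirror components to each cluster.
linkerd --context=west multicluster install | kubectl --context=west apply -f -
linkerd --context=east multicluster install | kubectl --context=east apply -f -

# The gateway is exposed as a LoadBalancer service in the
# linkerd-multicluster namespace.
kubectl --context=west -n linkerd-multicluster get svc linkerd-gateway
```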

Those two resources, the gateway and the service mirror, live in a namespace called linkerd-multicluster. When we use the link command of the Linkerd CLI, a service account is exported from one cluster (say, "K8s West") to the other (say, "K8s East") as a kubeconfig file stored in a secret. The service mirror uses this secret to communicate with the Kubernetes API server on the other side.
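
A sketch of that linking step, using the same illustrative west and east contexts:

```bash
# Export a service account from K8s West as a kubeconfig, stored as a
# secret in K8s East, where the service mirror will use it.
linkerd --context=west multicluster link --cluster-name west \
  | kubectl --context=east apply -f -

# From the east side, confirm the linked gateway is reachable and healthy.
linkerd --context=east multicluster gateways
```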

In the video of the session, you'll see how all of this gets set up in a demo scenario.
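
To give a flavor of what that looks like, here's a sketch of exporting a hypothetical podinfo service so the service mirror picks it up; the service name, namespace, and export label are illustrative, so check the multicluster docs for the label your Linkerd version expects:

```bash
# Hypothetical demo service "podinfo" in namespace "test" on K8s West.
# Labeling it for export tells the service mirror to mirror it.
kubectl --context=west -n test label svc podinfo mirror.linkerd.io/exported=true

# On K8s East, the mirrored service shows up suffixed with the source
# cluster's name (e.g. podinfo-west) and can be called like any local service.
kubectl --context=east -n test get svc
```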

If you need a simple, secure, and speedy service mesh that supports multi-cluster deployments, Linkerd 2.8 is the way to go. Grab the code and join the community at linkerd.io/community.