Buoyant’s Linkerd Production Runbook
A guide to running the world's most advanced service mesh in production

Last update: August 31, 2023 / Linkerd 2.14.0

Going to production

With our preparations out of the way, let’s get into the details. In this section, we cover the basic recommendations for preparing Linkerd for production use.

Your preflight checklist

So you’re ready to take Linkerd into production. Great! Here are the basic steps we suggest for your production deploy:

  1. Prepare your Kubernetes environment.
  2. Configure Linkerd for production use.
  3. Validate your installation.

We’ll go over each of these in turn.

Preparing your environment

In this section, we cover how to configure your Kubernetes environment for Linkerd production use. The good news is that much of this preparation can be verified automatically.

Run the automated environment verification

Linkerd automates as much as possible. This includes verifying the pre-installation and post-installation environments. The linkerd check command will automatically validate most aspects of the cluster against Linkerd’s requirements, including operating system, Kubernetes deployment, and network requirements.

To run the automated verification, follow these steps:

  1. Validate that you have the linkerd CLI binary installed locally and that its version matches the version of Linkerd you want to install, by running linkerd version and checking the “Client version” section.
  2. Validate that the kubectl command in your terminal is configured to use the Kubernetes cluster you wish to validate.
  3. Run linkerd check --pre.
  4. Correct any issues in your environment highlighted in the output. Each failing check should contain a URL pointing to a more detailed explanation with possible solutions.
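
Putting these steps together, a typical preflight session looks like this (version numbers and context names will of course vary):

    # Confirm the CLI version matches the release you plan to install
    linkerd version

    # Confirm kubectl is pointed at the cluster you intend to validate
    kubectl config current-context

    # Run the pre-installation checks
    linkerd check --pre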

Once linkerd check --pre is passing, you can start to plan the control plane installation using the information in the following sections.

Validate certain exceptional conditions

There are some conditions that may prevent Linkerd from operating properly that linkerd check --pre cannot currently validate. The most common such conditions are those that prevent the Kubernetes API server from contacting Linkerd’s tap and proxy-injector control plane components.

Environments that can exhibit these conditions include:

  1. Private GKE or AKS clusters
  2. EKS clusters with custom CNI plugins

If you are in one of these environments, you may wish to do a “dry run” installation first to flush out any issues. Remediations include updating internal firewall rules to allow certain ports (see the example in the Cluster Configuration documentation), or, for EKS clusters, switching to the AWS CNI plugin.
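
For example, on a GKE private cluster the usual remediation is a firewall rule that lets the Kubernetes API server reach the control plane’s webhook ports. A rough sketch follows; the rule name, network, and CIDR are placeholders, and the ports shown (proxy-injector and tap in a default install) should be confirmed against the Cluster Configuration documentation for your Linkerd version:

    # Placeholder names and CIDR; verify the port list against the docs
    gcloud compute firewall-rules create gke-to-linkerd-control-plane \
      --network my-cluster-network \
      --source-ranges 172.16.0.0/28 \
      --allow tcp:8443,tcp:8089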

Mirror the Linkerd images

Production deployments should not pull from Linkerd's open source image repositories. These repositories, while hosted by GitHub, are occasionally unreachable, and your production deployments should not have a runtime dependency on GitHub.

Many cloud providers offer private image registries. Determine where these images will be hosted and mirror them locally.
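
One straightforward workflow is to enumerate the images referenced by the manifests you plan to install, then pull, retag, and push each one to your registry. A sketch, with registry.example.com standing in for your private registry (the exact image list depends on your Linkerd version and the extensions you use):

    # List the images referenced by an HA install of this version
    # (--ignore-cluster renders the manifests without contacting a cluster;
    # repeat for extensions, e.g. linkerd viz install)
    linkerd install --ha --ignore-cluster | grep 'image:' | sort -u

    # Mirror each image, for example:
    docker pull cr.l5d.io/linkerd/proxy:stable-2.14.0
    docker tag cr.l5d.io/linkerd/proxy:stable-2.14.0 registry.example.com/linkerd/proxy:stable-2.14.0
    docker push registry.example.com/linkerd/proxy:stable-2.14.0

At install time, point Linkerd at your mirror; for example, recent versions of the CLI accept a --registry flag, and the Helm charts expose equivalent image settings.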

Ensure sufficient system resources for Linkerd

Production users should use the high availability, or “HA”, installation path. We’ll get into all the details about HA mode later, in Configuring Linkerd for Production Use. In this section, we’ll describe our recommendations for resource allocation to Linkerd installed in HA mode. These values represent best practices, but you may need to tune them based on the specifics of your traffic and workload.

Control plane resources

The control plane of an HA Linkerd installation requires three separate nodes for replication. As a rule of thumb, we suggest planning for 512MB of memory and 0.5 cores of consumption on each such node.

Of all control plane components, the linkerd-destination component, which handles service discovery information, is the one whose memory consumption scales with the size of the mesh. Monitor this component’s consumption and adjust its resource requests and limits as appropriate.

In Linkerd 2.9 and earlier, the core control plane contained a Prometheus instance. In Linkerd 2.10 and beyond, this instance has been moved to the linkerd-viz extension. This Prometheus instance is worth some extra attention: it stores metrics data from the proxies, and thus can have highly variable resource requirements. As a rough starting point, expect 5MB of memory per meshed pod, but this can vary widely based on traffic patterns. We suggest planning for at least 512MB for this instance, taking an empirical approach, and monitoring carefully.

(There are configurations of Linkerd that avoid running this control plane Prometheus instance. We’ll discuss these below as well.)
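
To take that empirical approach, a simple starting point is to watch actual consumption and adjust requests and limits from there. For example, with metrics-server available on the cluster:

    # Observe control plane and linkerd-viz resource consumption over time
    kubectl top pods -n linkerd
    kubectl top pods -n linkerd-viz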

Data plane resources

Generally speaking, for Linkerd’s data plane proxies, resource requirements are a function of network throughput. Our conservative rule of thumb is that, for each 1,000 RPS of traffic expected to an individual proxy (i.e. to a single pod), you should ensure that you have 0.25 CPU cores and 20MB of memory. Very high throughput pods (>5k RPS per individual pod), such as ingress controller pods, may require setting custom proxy limits and requests (e.g. via the --proxy-cpu-* configuration flags).
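
For such workloads, proxy resources can also be tuned per workload via Linkerd’s configuration annotations on the pod template. A minimal sketch (the values shown are purely illustrative, not recommendations):

    # Deployment pod template excerpt for a high-throughput workload
    spec:
      template:
        metadata:
          annotations:
            config.linkerd.io/proxy-cpu-request: "1"
            config.linkerd.io/proxy-cpu-limit: "2"
            config.linkerd.io/proxy-memory-request: "128Mi"
            config.linkerd.io/proxy-memory-limit: "256Mi"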

In practice, proxy resource consumption is affected by the nature of the traffic, including payload sizes, level of concurrency, protocol, etc. Our guidance here, again, is to take an empirical approach and monitor carefully.

Restrict NET_ADMIN permissions if necessary

By default, the act of creating pods with the Linkerd data plane proxies injected requires NET_ADMIN capabilities. This is because, at pod injection time, Linkerd uses a Kubernetes InitContainer to automatically reroute all traffic to the pod through the pod’s Linkerd data plane proxy. This rerouting is done via iptables, which requires the NET_ADMIN capability.

In some environments this is undesirable, as NET_ADMIN privileges grant access to all network traffic on the host. As an alternative, Linkerd provides a CNI plugin which allows Linkerd to run iptables commands within the CNI chain (which already has elevated privileges) rather than in InitContainers.

Using the CNI plugin adds some complication to installation, but may be required by the security context of the Kubernetes clusters. See the CNI documentation for more.
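
If you take this route, the CNI plugin is installed before the control plane, and the control plane is then configured to expect it. A minimal sketch with the CLI (a Helm chart for the CNI plugin also exists; exact flags vary by version, so check the CNI docs):

    # Install the Linkerd CNI plugin (runs as a DaemonSet)
    linkerd install-cni | kubectl apply -f -

    # Install the control plane with CNI mode enabled
    # (on 2.12+, run `linkerd install --crds | kubectl apply -f -` first)
    linkerd install --ha --linkerd-cni-enabled | kubectl apply -f -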

Ensure that time is synchronized across nodes in the cluster

Clock drift is surprisingly common in cloud environments. When the clocks on the nodes in a cluster disagree, timestamps won’t line up across log lines, metrics, and other output. Clock skew can also break Linkerd’s ability to validate mutual TLS certificates, which can cause a critical outage of Linkerd’s ability to service requests.

Ensuring clock synchronization in networked computers is outside the scope of this document, but we will note that cloud providers often provide their own dedicated services on top of NTP to help with this problem, e.g. AWS’s Amazon Time Sync Service. A production environment should use those services. Similarly, if you are running on a private cloud, we suggest ensuring that all the servers use the same NTP source.

Ensure the kube-system namespace can function without proxy-injector

One sometimes surprising detail of Linkerd’s HA mode is that it configures the proxy-injector control plane component, which adds the proxy to scheduled pods, to be required before any pod on the cluster can be scheduled. HA mode adds this restriction in order to guarantee that all application pods have access to mTLS—creation of any application pods that are unable to have an injected proxy could lead to an insecure system.

However, this means that, in HA mode, if all proxy-injector instances are down, no pod anywhere on the cluster can be scheduled. To avoid a catastrophic scenario in the presence of complete proxy-injector failure, you must apply the config.linkerd.io/admission-webhooks: disabled Kubernetes label to the kube-system namespace. This will allow system pods to be scheduled even in the absence of a functioning proxy-injector.
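
Applying the label is a single kubectl command:

    kubectl label namespace kube-system config.linkerd.io/admission-webhooks=disabled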

See the Linkerd HA documentation for more.

Ensure the linkerd namespace is not eligible for auto-injection

Linkerd's control plane, which runs in the linkerd namespace, provides its own data plane proxies and should not be eligible for proxy auto-injection. In Helm and CLI installations, we automatically annotate this namespace with the linkerd.io/inject: disabled annotation. However, if you create this namespace yourself, you may need to set that annotation explicitly.
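
If you do manage the namespace yourself, the annotation can be applied directly:

    kubectl annotate namespace linkerd linkerd.io/inject=disabled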

Configuring Linkerd for Production Use

Having configured our environment for Linkerd, we now turn to Linkerd itself. In this section, we outline our recommendations for configuring Linkerd for production environments.

Choose your deployment tool

Linkerd supports two basic installation flows: a Helm chart or the CLI. For a production deployment, we recommend using Helm charts, which better allow for a repeatable, automated approach. See the Helm docs and the CLI installation docs for more details.
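
As a sketch of the Helm flow on recent Linkerd versions (chart and value names follow the Helm docs; the control plane chart also needs the HA values file and the certificates discussed later in this section):

    helm repo add linkerd https://helm.linkerd.io/stable
    helm repo update

    # CRDs are packaged as their own chart in recent releases
    helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace

    # The control plane chart comes next; see "Enable HA mode" and the mTLS
    # sections below for the values a production install needs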

Decide on your metrics pipeline

Every data plane proxy provides a wealth of metrics data for the traffic it sees, exposed in a Prometheus-compatible format on a metrics port. How you consume that data is an important choice.

Generally speaking, you have several options for aggregating and reporting these metrics. (Note that these options can be combined.)

  1. Ignore it. Linkerd’s deep telemetry and monitoring may simply be unimportant for your use case.
  2. Aggregate it on-cluster with the linkerd-viz extension. This extension contains an on-cluster Prometheus instance configured to scrape all proxy metrics, as well as a set of CLI commands and an on-cluster dashboard that expose those metrics for short-term use. By default, this Prometheus instance holds only 6 hours of data, and will lose metrics data if restarted. (If you are relying on this data for operational reasons, this may be insufficient.)
  3. Aggregate it off-cluster with Prometheus. This can be done either by Prometheus federation from the linkerd-viz on-cluster Prometheus (see the sketch after this list), or simply by using an off-cluster Prometheus to scrape the proxies directly, aka “Bring your own Prometheus”.
  4. Use a third-party metrics provider such as Buoyant Cloud. In this option, the metrics pipeline is offloaded entirely. Buoyant Cloud will work out of the box for Linkerd data; other metrics providers should also be able to easily consume data from Prometheus or from the proxies.
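
If you federate from the linkerd-viz Prometheus (option 3), the scrape job on your off-cluster Prometheus looks roughly like the following, adapted from Linkerd’s metrics-export documentation; adjust namespaces and match expressions to your setup:

    # Under scrape_configs: in the off-cluster prometheus.yml
    - job_name: 'linkerd-federate'
      kubernetes_sd_configs:
        - role: pod
          namespaces:
            names: ['linkerd-viz']
      relabel_configs:
        - source_labels: [__meta_kubernetes_pod_container_name]
          action: keep
          regex: ^prometheus$
      metrics_path: '/federate'
      params:
        'match[]':
          - '{job="linkerd-proxy"}'
          - '{job="linkerd-controller"}'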

[Screenshot: Buoyant Cloud's hosted Linkerd metrics]


Enable HA mode

Production deployments of Linkerd should use the high availability, or HA, configuration. This mode enables several production-grade behaviors of Linkerd, including:

  • Running three replicas of critical control plane components.
  • Setting production-ready CPU and memory resource requests on control plane components.
  • Setting production-ready CPU and memory resource requests on data plane proxies.
  • Requiring that the proxy auto-injector be functional for any pods to be scheduled.
  • Setting anti-affinity policies on critical control plane components so that they are scheduled on separate nodes and in separate zones.

This mode can be enabled by using the --ha flag to linkerd install or by using the values-ha.yaml Helm file. See the HA docs for more.
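
Concretely, with either tool (a sketch; the certificate files are the ones discussed in the next section, and values-ha.yaml ships inside the control plane chart):

    # CLI (on 2.12+, run `linkerd install --crds | kubectl apply -f -` first)
    linkerd install --ha | kubectl apply -f -

    # Helm: fetch the chart locally to obtain values-ha.yaml, then layer it on
    helm fetch --untar linkerd/linkerd-control-plane
    helm install linkerd-control-plane linkerd/linkerd-control-plane -n linkerd \
      -f linkerd-control-plane/values-ha.yaml \
      --set-file identityTrustAnchorsPEM=ca.crt \
      --set-file identity.issuer.tls.crtPEM=issuer.crt \
      --set-file identity.issuer.tls.keyPEM=issuer.key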

Set up your mTLS certificate rotation and alerting

For mutual TLS, Linkerd uses three basic sets of certificates: the trust anchor, which is shared across clusters; the issuer certificate, which is set per cluster; and the proxy certificates, which are issued per pod. Each certificate also has a corresponding private key. (See our Kubernetes engineer’s guide to mTLS for more on this topic.)

Proxy certificates are automatically rotated every 24 hours without any intervention on your part. However, Linkerd does not rotate the trust anchor or the issuer certificate. Additionally, multi-cluster communication requires that the trust anchor be the same across clusters.

By default the Linkerd CLI will generate one-year self-signed certificates for both the trust anchor and issuer certificates, and will discard the trust anchor key. This allows for an easy installation, but is likely not the configuration you want in production.

If either trust anchor or issuer certificate expires without rotation, no Linkerd proxy will be able to communicate with another Linkerd proxy. This can be catastrophic. Thus, for production environments, we recommend the following approach:

  1. Create a trust anchor with a 10-year expiration period, and store the key in a safe location. (Trust anchor rotation is possible, but is very complex, and in our opinion best avoided unless you have very specific requirements.)
  2. Set up automatic rotation for issuer certificates with cert-manager.
  3. Set up certificate monitoring with Buoyant Cloud. Buoyant Cloud will automatically alert you if your certificates are close to expiring.

Certificate management can be subtle. Be sure to read through Linkerd’s full mTLS documentation, including the sections on rotating certificates.
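
For step 1, and for the issuer certificate it signs, the step CLI used in Linkerd’s certificate documentation works well. A sketch (lifetimes are expressed in hours: 87600h is the 10-year trust anchor, 8760h a one-year issuer; keep ca.key offline and safe):

    # Trust anchor (10-year expiry)
    step certificate create root.linkerd.cluster.local ca.crt ca.key \
      --profile root-ca --no-password --insecure --not-after=87600h

    # Issuer certificate and key, signed by the trust anchor
    # (this is the pair cert-manager will later rotate for you)
    step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
      --profile intermediate-ca --ca ca.crt --ca-key ca.key \
      --no-password --insecure --not-after=8760h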

Set up automatic rotation of webhook credentials

Linkerd relies on a set of TLS credentials for webhooks. (These credentials are independent of the ones used for mTLS.) These credentials are used when Kubernetes calls the webhook endpoints of Linkerd’s control plane, which are secured with TLS.

As above, by default these credentials expire after one year, at which point Linkerd becomes inoperable. To avoid last-minute scrambles, we recommend using cert-manager to automatically rotate the webhook TLS credentials for each cluster. See the Webhook TLS documentation.

Configure certain protocols

Linkerd proxies perform protocol detection to automatically identify the protocol used by applications. However, there are some protocols which Linkerd cannot automatically detect, for example, because the client does not send the initial bytes of the connection. If any of these protocols are in use, you need to configure Linkerd to handle them.

As of Linkerd 2.10, a connection that cannot be properly handled by protocol detection will incur a 10-second timeout, then be proxied as raw TCP. In earlier versions, the connection would fail after the timeout rather than continuing.

The exact configuration necessary depends not just on the protocol but on whether the default ports are used, and whether the destination service is in the cluster or outside the cluster. For example, any of the following protocols, if the destination is outside the cluster, or if the destination is inside the cluster but on a non-standard port, will require configuration:

  • SMTP
  • MySQL
  • PostgreSQL
  • Redis
  • Elasticsearch
  • Memcache

See the Protocol Detection documentation for guidance on how to configure this, if necessary.
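
For the in-cluster, non-standard-port case on Linkerd 2.10 and later, the usual fix is to mark the port as opaque on the destination’s namespace, Service, or workload; for off-cluster destinations, you can instead skip the port on the client side. A sketch, where the service name, namespace, and port 4406 are all made-up examples:

    # Mark a non-standard MySQL port as opaque on the destination Service
    kubectl annotate -n db service mysql config.linkerd.io/opaque-ports=4406

    # Or, for an off-cluster destination, bypass the proxy for that port by
    # annotating the client workload's pod template:
    #   config.linkerd.io/skip-outbound-ports: "4406"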

Secure your tap command if necessary

Linkerd’s tap command allows users to view live request metadata, including request and response gRPC and HTTP headers (but not bodies). In some organizations, this data may contain sensitive information that should not be available to operators.

If this applies to you, follow the instructions in the Securing linkerd tap documentation to restrict or remove access to tap.

Validate your installation

After installing the control plane, we recommend running linkerd check to ensure everything is set up correctly. Correct any issues in your environment highlighted in the output. Each failing check should contain a URL pointing to a more detailed explanation with possible solutions.
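
If you have installed extensions or already meshed workloads, their dedicated checks are worth running as well:

    # Core control plane checks
    linkerd check

    # Data plane checks for already-injected workloads
    linkerd check --proxy

    # Extension checks, e.g. for the viz extension
    linkerd viz check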

Once linkerd check passes, congratulations! You have successfully installed Linkerd.