
In this post, we describe how linkerd, our service mesh for cloud-native applications, can be used to transparently “wrap” HTTP and RPC calls in TLS, adding a layer of security to applications without requiring modification of application code.

NOTE: we have an updated version of this post.

linkerd includes client-side load balancing as one of its core features. In its basic form, outgoing HTTP and RPC calls from a service are proxied through linkerd, which adds service discovery, load balancing, instrumentation, etc., to these calls.

However, as a service mesh, linkerd can additionally be used to handle inbound HTTP and RPC calls. In other words, linkerd can act as both a proxy and a reverse proxy. This is the full service mesh deployment model, and it has some nice properties—in particular, when linkerd is deployed on a host, or as a sidecar in systems like Kubernetes, it allows linkerd to modify or upgrade the protocol over the wire. One particularly exciting use case for a service mesh is to automatically add TLS across host boundaries.

Adding TLS directly to an application can be difficult, depending on the level of support for it in an application’s language and libraries. This problem is compounded for polyglot multi-service applications. By handling TLS in linkerd, rather than the application, you can encrypt communication across hosts without needing to modify application code. Additionally, for multi-service applications, you get a uniform application-wide layer for adding TLS—helpful for configuration changes, monitoring, and security auditing.

In the example below, we’ll “wrap” a simple Kubernetes application in TLS via linkerd. We’ll take advantage of the fact that Kubernetes’s pod model colocates containers in a pod on the same host, ensuring that the unencrypted traffic between your service and its sidecar linkerd process stays on the same host, while all traffic across pods (and thus across machines) is encrypted.

Of course, encryption is only one part of TLS; authentication is also important. linkerd supports several TLS configurations:

  • no validation (insecure)
  • a site-wide certificate for all services
  • per-service or per-environment certificates

In this example, we will focus on the per-service certificate setup, since it is the most appropriate for production use cases. We will generate a root CA certificate, use it to generate and sign a certificate for each service in our application, distribute the certificates to the appropriate pods in Kubernetes, and configure linkerd to use the certificates to encrypt and authenticate inter-pod communication.

We’ll assume that you already have linkerd deployed to Kubernetes. If not, check out our Kubernetes guide first.


To begin, we’ll need a root CA certificate and key that we can use to generate and sign certificates for each of our services. These can be generated using openssl (the commands below assume that you have an openssl.cnf config file in the directory where you’re running them — see this gist for a sample version of that file). Create the root CA certificate.
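A minimal sketch of the CA generation (file names follow the rest of this post; the subject common name and key size are illustrative, and your openssl.cnf may set different defaults):

```shell
# Generate the root CA private key.
openssl genrsa -out cakey.pem 2048

# Create a self-signed root CA certificate from that key.
# The CN "my-root-ca" is an illustrative placeholder.
openssl req -new -x509 -key cakey.pem -out cacertificate.pem \
  -days 365 -subj "/CN=my-root-ca"
```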

This will generate your CA key (cakey.pem) and your CA certificate (cacertificate.pem). It is important that you store the CA key in a secure location (do not deploy it to Kubernetes)! Anyone who gets access to this key will be able to generate and sign certificates and will be able to impersonate your services.

Once you have your root CA certificate and key, you can generate a certificate and key for each service in your application.
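As a sketch, signing a certificate for one service might look like the following. The service name "serviceB" is illustrative, and the snippet assumes the cakey.pem and cacertificate.pem files from the CA step are in the current directory (they are regenerated here if absent, so the snippet runs standalone):

```shell
# Regenerate a throwaway CA only if the real one is missing (standalone demo).
[ -f cakey.pem ] || openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout cakey.pem -out cacertificate.pem -days 365 -subj "/CN=my-root-ca"

# "serviceB" stands in for your Kubernetes service name; it becomes
# the certificate's common name so clients can validate the identity.
SERVICE=serviceB

# Generate the service's private key and a certificate signing request.
openssl genrsa -out ${SERVICE}-key.pem 2048
openssl req -new -key ${SERVICE}-key.pem -out ${SERVICE}.csr \
  -subj "/CN=${SERVICE}"

# Sign the CSR with the root CA to produce the service certificate.
openssl x509 -req -in ${SERVICE}.csr -CA cacertificate.pem -CAkey cakey.pem \
  -CAcreateserial -out ${SERVICE}-cert.pem -days 365
```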

Here we use the Kubernetes service name as the TLS common name.


Now that we have certificates and keys, we need to distribute them to the appropriate pods. Each pod needs the certificate and key for the service that is running there (for serving TLS) as well as the root CA certificate (for validating the identity of other services). Certificates and keys can be distributed using Kubernetes secrets, just like linkerd configs.

Example secret:
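A sketch of such a secret for one service (the name, namespace, and key names are illustrative; the data values would be the base64-encoded contents of the corresponding PEM files, produced with `base64`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: serviceb-tls
  namespace: prod
type: Opaque
data:
  certificate.pem: <base64-encoded service certificate>
  key.pem: <base64-encoded service private key>
  cacertificate.pem: <base64-encoded root CA certificate>
```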


Finally, we need to configure linkerd to use the certificates. To set this up, start with a service mesh deployment. Add a server tls config to the incoming router and a boundPath client tls module to the outgoing router:
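A sketch of the relevant router config is below. Mount paths, port numbers, and the dtab prefix are illustrative, and the exact module name for boundPath client TLS (here `io.l5d.boundPath`) varies across linkerd versions, so check the configuration reference for the release you run:

```yaml
routers:
- protocol: http
  label: incoming
  servers:
  - port: 4140
    ip: 0.0.0.0
    # Serve TLS using this service's own certificate and key.
    tls:
      certPath: /certificates/certificate.pem
      keyPath: /certificates/key.pem
- protocol: http
  label: outgoing
  servers:
  - port: 4141
    ip: 127.0.0.1
  client:
    # Validate peers: derive the expected common name from the bound path
    # and verify the remote certificate against the root CA.
    tls:
      kind: io.l5d.boundPath
      caCertPath: /certificates/cacertificate.pem
      names:
      - prefix: "/ns/*/*/{service}"
        commonNamePattern: "{service}"
```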

The server TLS section configures the incoming router to serve TLS using the service’s certificate and key. The boundPath client TLS section configures the outgoing router to validate the identity of services that it talks to. It pulls the service name from the destination bound path, uses that as the TLS common name, and uses the CA certificate to verify the legitimacy of the remote service. To see how that works, let’s walk through an example:

Suppose that ServiceA wants to send a request to ServiceB. To do this, ServiceA sends the request to the outgoing router of its sidecar linkerd, which is listening on localhost:4141. ServiceA also sends a Host: ServiceB header to indicate where the request should be routed. When linkerd receives this request, it generates /svc/ServiceB as the destination. Applying the dtab, this gets rewritten to /ns/prod/router/serviceB. This is called the bound path. Since this matches the prefix we specified in the boundPath TLS module, linkerd will send this request using TLS. The k8s namer will then resolve /ns/prod/router/serviceB to a list of concrete endpoints where the incoming routers of ServiceB’s sidecar linkerds are listening (and are configured to receive TLS traffic).
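One way the routing pieces in this walkthrough could fit together is sketched below. The namer kind and the `prod`/`router` namespace and port names are illustrative assumptions, not a verbatim config:

```yaml
# The k8s namer resolves names of the form /ns/<namespace>/<port>/<service>.
namers:
- kind: io.l5d.k8s
  prefix: /ns

routers:
- protocol: http
  # Rewrite /svc/ServiceB to the bound path /ns/prod/router/ServiceB,
  # which the k8s namer then resolves to concrete pod endpoints.
  dtab: |
    /svc => /ns/prod/router;
```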

That’s it! Inter-service communication run through linkerd will now be secured with TLS, and no changes to your application are necessary. And, of course, just as in non-TLS configurations, linkerd adds connection pooling, load balancing, uniform instrumentation, and powerful routing capabilities to your services, helping them scale in high-traffic, low-latency environments.


Thanks to Sarah Brown and Greg Campbell for feedback on earlier drafts of this post.