
Linkerd is designed to make service-to-service communication internal to an application safe, fast, and reliable. However, those same goals also apply at the edge. In this post, we’ll demonstrate a new Linkerd feature that allows it to act as a Kubernetes ingress controller, and show how it can handle ingress traffic both with and without TLS.

This is one article in a series about Linkerd, Kubernetes, and service meshes. Other installments in this series include:

  1. Top-line service metrics
  2. Pods are great, until they’re not
  3. Encrypting all the things
  4. Continuous deployment via traffic shifting
  5. Dogfood environments, ingress, and edge routing
  6. Staging microservices without the tears
  7. Distributed tracing made easy
  8. Linkerd as an ingress controller (this article)
  9. gRPC for fun and profit
  10. The Service Mesh API
  11. Egress
  12. Retry budgets, deadline propagation, and failing gracefully
  13. Autoscaling by top-line metrics

In a previous installment of this series, we explored how to receive external requests by deploying Linkerd as a Kubernetes DaemonSet and routing traffic through the corresponding Service VIP. In this post, we’ll simplify this setup by using Linkerd as a Kubernetes ingress controller, taking advantage of features introduced in Linkerd 0.9.1.

This approach has the benefits of simplicity and a tight integration with the Kubernetes API. However, for more complex requirements like on-demand TLS cert generation, SNI, or routing based on cookie values (e.g. the employee dogfooding approach discussed in Part V of this series), combining Linkerd with a dedicated edge layer such as NGINX is still necessary.

What is a Kubernetes ingress controller? An ingress controller is an edge router that accepts traffic from the outside world and forwards it to services in your Kubernetes cluster. The ingress controller uses HTTP host and path routing rules defined in Kubernetes’ ingress resources.

INGRESS HELLO WORLD

Using a Kubernetes config from the linkerd-examples repo, we can launch Linkerd as a dedicated ingress controller. The config follows the same pattern as our previous posts on k8s daemonsets: it deploys an l5d-config ConfigMap, an l5d DaemonSet, and an l5d Service.

STEP 1: DEPLOY LINKERD

First let’s deploy Linkerd. You can of course deploy into the default namespace, but here we’ve put Linkerd in its own namespace for better separation of concerns:
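A minimal sketch of the deploy commands, assuming a dedicated l5d-system namespace (the namespace name and manifest filename here are illustrative; consult the linkerd-examples repo for the authoritative config):

```bash
kubectl create ns l5d-system
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-ingress-controller.yml -n l5d-system
```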

You can verify that the Linkerd pods are up by running:
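For example, using the l5d-system namespace from the previous step:

```bash
kubectl get po -n l5d-system
```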

And take a look at the admin dashboard (this command assumes your cluster supports LoadBalancer services; keep in mind that it may take a few minutes for the ingress LB to become available):
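A sketch of how to look up the external IP and open the dashboard, assuming Linkerd’s default admin port of 9990 (on some providers the jsonpath below returns a hostname rather than an IP):

```bash
L5D_SVC_IP=$(kubectl get svc l5d -n l5d-system -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
open http://$L5D_SVC_IP:9990   # on macOS; otherwise paste the URL into a browser
```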

Or if external load balancer support is unavailable for the cluster, use hostIP:
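For example (this assumes the l5d Service also exposes the admin port as a NodePort; the port index in the jsonpath is illustrative and depends on how the Service is defined):

```bash
HOST_IP=$(kubectl get po -l app=l5d -n l5d-system -o jsonpath="{.items[0].status.hostIP}")
ADMIN_PORT=$(kubectl get svc l5d -n l5d-system -o "jsonpath={.spec.ports[1].nodePort}")
open http://$HOST_IP:$ADMIN_PORT
```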

Let’s take a closer look at the ConfigMap we just deployed. It stores the config.yaml file that Linkerd mounts on startup.
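A sketch of what that ConfigMap contains (the dtab and namer shown here are a best-guess reconstruction matching the description below; the authoritative version lives in the linkerd-examples repo):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: l5d-system
data:
  config.yaml: |-
    # Namer that resolves service names against the Kubernetes API
    namers:
    - kind: io.l5d.k8s

    routers:
    - protocol: http
      # Identify incoming requests using Kubernetes ingress resources
      identifier:
        kind: io.l5d.ingress
      servers:
      - port: 80
        ip: 0.0.0.0
        # Strip Linkerd context headers arriving from untrusted sources
        clearContext: true
      # Pass the identified namespace/port/service to the k8s namer
      dtab: /svc => /#/io.l5d.k8s
```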

You can see that this config defines an HTTP router on port 80 that identifies incoming requests using ingress resources (via the io.l5d.ingress identifier). The resulting namespace, port, and service name are then passed to the Kubernetes namer for resolution. We’ve also set clearContext to true in order to remove any incoming Linkerd context headers from untrusted sources.

STEP 2: DEPLOY THE HELLO WORLD APPLICATION

Now it’s time to deploy our application, so that our ingress controller has something to route traffic to. We’ll deploy a simple app consisting of a hello and a world service.
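A sketch of the deploy commands, again assuming manifests from the linkerd-examples repo (the filenames are illustrative):

```bash
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world.yml
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/world-v2.yml
```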

You can again verify that the pods are up and running:
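Assuming the app was deployed into the default namespace:

```bash
kubectl get po
```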

At this point, if you try to send an ingress request, you’ll see something like:
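For example (the exact error text is illustrative; the point is that no ingress rule matches yet):

```bash
L5D_SVC_IP=$(kubectl get svc l5d -n l5d-system -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
curl $L5D_SVC_IP
# => an "Unknown destination" / "no ingress rule matches" style error
```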

STEP 3: CREATE THE INGRESS RESOURCE

In order for our Linkerd ingress controller to function properly, we need to create an ingress resource that uses it.
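A sketch, once more assuming a manifest from the linkerd-examples repo (filename illustrative):

```bash
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world-ingress.yml
```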

Verify the resource:
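```bash
kubectl get ingress
```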

This “hello-world” ingress resource references our backends (we’re only using world-v1 and world-v2 for this demo):
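A reconstruction of what that resource looks like (extensions/v1beta1 was the current Ingress API version in the Linkerd 0.9.1 era; the servicePort values are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    # Only required if multiple ingress controllers run in the cluster
    kubernetes.io/ingress.class: "linkerd"
spec:
  # Default backend: requests matching no rule go to world-v1
  backend:
    serviceName: world-v1
    servicePort: http
  rules:
  # Requests with the host header world.v2 are routed to world-v2
  - host: world.v2
    http:
      paths:
      - backend:
          serviceName: world-v2
          servicePort: http
```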

The resource:

  • Specifies world-v1 as the default backend, to which requests are routed if they match none of the defined rules.
  • Specifies a rule whereby all requests with the host header world.v2 are routed to the world-v2 service.
  • Sets the kubernetes.io/ingress.class annotation to “linkerd”. Note that this annotation is only required if there are multiple ingress controllers running in the cluster. GCE runs one by default; you may choose to disable it by following these instructions.

That’s it! You can exercise these rules by curling the IP assigned to the l5d service load balancer.
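For example, reusing the L5D_SVC_IP variable from above:

```bash
curl $L5D_SVC_IP                       # no rule matches: served by the default backend, world-v1
curl -H "Host: world.v2" $L5D_SVC_IP   # host rule matches: served by world-v2
```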

While this example starts with totally new instances, it’s just as easy to add an ingress identifier router to a pre-existing Linkerd setup. Also, although we employ a DaemonSet here (to be consistent with the rest of the Service Mesh for Kubernetes series), a Kubernetes Deployment works just as well for a Linkerd ingress controller. Using Deployments is left as an exercise for the reader. 🙂

INGRESS WITH TLS

Linkerd already supports TLS for clients and servers within the cluster. Setting up TLS is described in much more detail in Part III of this series. In this ingress controller configuration, Linkerd expects certs to be defined in a Kubernetes secret named ingress-certs and to follow the format described as part of the ingress user guide. Note that there’s no need to specify a TLS section as part of the ingress resource: Linkerd doesn’t implement that section of the resource. All TLS configuration happens as part of the l5d-config ConfigMap.

The Linkerd config remains largely unchanged, save for updating the server port to 443 and adding TLS file paths:
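A sketch of the updated server block (the mount path is an assumption; the tls.crt/tls.key filenames follow the standard Kubernetes TLS secret format):

```yaml
servers:
- port: 443
  ip: 0.0.0.0
  clearContext: true
  tls:
    # Paths must match where the ingress-certs secret is mounted
    certPath: /io.buoyant/linkerd/certs/tls.crt
    keyPath: /io.buoyant/linkerd/certs/tls.key
```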

The l5d DaemonSet now mounts a secret volume with the expected name, ingress-certs:
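Roughly like this (an excerpt; the volume name and mount path are illustrative and must agree with the certPath/keyPath above):

```yaml
spec:
  template:
    spec:
      volumes:
      - name: certificates
        secret:
          secretName: ingress-certs
      containers:
      - name: l5d
        # ... image, ports, etc. ...
        volumeMounts:
        - name: certificates
          mountPath: /io.buoyant/linkerd/certs
          readOnly: true
```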

And the updated Service config exposes port 443:
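An excerpt (the port names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: l5d-system
spec:
  type: LoadBalancer
  selector:
    app: l5d
  ports:
  - name: ingress-tls
    port: 443
  - name: admin
    port: 9990
```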

A reminder that the certificates we’re using here are for testing purposes only! Create the Secret, delete the DaemonSet and ConfigMap, and re-apply the ingress controller config:
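A sketch of those commands, again with illustrative manifest filenames from the linkerd-examples repo:

```bash
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/certificates.yml -n l5d-system
kubectl delete ds/l5d configmap/l5d-config -n l5d-system
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-tls-ingress-controller.yml -n l5d-system
```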

You should now be able to make an encrypted request:
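For example (cacertificate.pem is a hypothetical local copy of the CA that signed the test cert; alternatively, pass -k to skip verification since these are test certs):

```bash
curl --cacert cacertificate.pem -H "Host: world.v2" https://$L5D_SVC_IP
```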

CONCLUSION

Linkerd provides a ton of benefits as an edge router. In addition to the dynamic routing and TLS termination described in this post, it also pools connections, load balances dynamically, enables circuit breaking, and supports distributed tracing. Using the Linkerd ingress controller and the Kubernetes configuration referenced in this post, you gain access to all of these features in an easy-to-use, Kubernetes-native approach. Best of all, this method works seamlessly with the rest of the service mesh, allowing for operation, visibility, and high availability in virtually any cloud architecture.
Note: there are myriad ways to deploy Kubernetes, and different environments support different features. Learn more about deployment differences here.

The ingress identifier is new, so we’d love to hear your thoughts on what features you want from an ingress controller. You can find us in the Linkerd community Slack or on the Linkerd Discourse.

ACKNOWLEDGEMENTS

Big thanks to Alex Leong and Andrew Seigner for feedback on this post.