Installing the Emissary ingress with the Linkerd service mesh

Getting the best of both worlds

Today we’re going to learn how to combine the Emissary ingress (formerly known as the Ambassador API Gateway) with Linkerd to build a scalable system that combines the context-aware routing of Emissary with the observability and security of Linkerd.

Emissary is a newly adopted CNCF project from the folks over at Ambassador Labs. It’s an ingress that combines standard ingress functionality with some features traditionally associated with an API gateway. Emissary is powerful, fast, and offers a ton of configuration options. It also shares an important property with Linkerd: it’s easy to learn and use.

By the end of this article you should:

  • Understand the value of Linkerd and the Emissary ingress and how they complement each other
  • Be able to deploy Emissary with Linkerd
  • Understand the important configuration options for the integration

Better together: service mesh + ingress + API gateway

For anyone who isn’t familiar with Linkerd, it’s a lightweight, simple, and Kubernetes-native service mesh. Linkerd provides users with security, observability, and reliability benefits by injecting a proxy that handles app-to-app communication. It is highly performant and allows platform owners to rapidly diagnose application issues and restore service when issues occur.

Emissary, a newly incubating CNCF project, aims to simplify the process of using and configuring Envoy. Emissary works as an ingress and API gateway with features targeted at microservice developers. Specifically, Emissary aims to deliver observability and reliability features that will allow developers to move faster and hit higher availability targets.

Both tools aim to solve similar problems for different audiences, and their features are extremely complementary. If you like Emissary but want easy-to-use end-to-end encryption, Linkerd can handle that. If you want to route users to different services based on user names, use Emissary to handle smarter routing at the ingress.

Getting started

Now that you see the value in using Linkerd with Emissary, let’s dive into how this works.

The setup

For the purpose of this article, we’re going to deploy everything on a kind cluster. It’s important to understand that when dealing with an ingress, we generally expect a Kubernetes service to have access to some kind of external load balancer. When working in a production or production-like environment, you’ll want to run Emissary with service.type: LoadBalancer and integrate it with your DNS service. In our examples, we use localhost due to the constraints imposed on us by kind.
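For reference, a production-oriented install might express those overrides in a Helm values file rather than the kind-friendly flags we use below. A minimal sketch (the replica count and the annotation are illustrative assumptions; the annotation key shown is AWS-specific, so adjust it for your cloud provider):

```yaml
# values.yaml - hypothetical production overrides for the ambassador chart
replicaCount: 3
service:
  type: LoadBalancer
  annotations:
    # Provider-specific load balancer tuning; this key applies to AWS only
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
```

You would then pass it to Helm with the -f values.yaml flag instead of the --set overrides shown below.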

Our tool set

  • Kind version 0.10.0
  • Linkerd version 2.10.1
  • Ambassador version 1.13.2
    • Soon to be moved to Emissary but still in the process of being renamed
  • Kubernetes version 1.20.2

Installing the Emissary ingress

Before doing anything we need to deploy our kind cluster:

kind create cluster --name amb

After that, we install the Emissary ingress.

Note: You’ll see a lot of references to Ambassador in this demo as what used to be called the Ambassador API gateway is being renamed to Emissary.

# We'll be using Helm to install Emissary, so we need to pre-create the namespace
kubectl create namespace ambassador
# Add the Datawire chart repository if you haven't already
helm repo add datawire https://www.getambassador.io
helm repo update
# Below we override default values in the ambassador Helm chart to suit a kind deployment
helm install ambassador --namespace ambassador datawire/ambassador --set replicaCount=1 --set service.type=ClusterIP
# This command waits on Ambassador to be ready
kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes


Installing the Linkerd service mesh

If you don’t already have Linkerd installed you can pull down the latest stable:

curl -sL https://run.linkerd.io/install | sh

export PATH=$PATH:$HOME/.linkerd2/bin

Then run the pre-check and install commands:

linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check

This will ensure your kind cluster can run Linkerd, install the service mesh, and validate that the Linkerd install is healthy.

If you want, you can add the Linkerd dashboard:

linkerd viz install | kubectl apply -f -
linkerd viz check

Add Emissary to your service mesh

Once your mesh is up and running, you can integrate it with Emissary.

To do this, you must add the ambassador ingress itself. You can also optionally add the agent and Redis instances. I’ll add instructions for that at the end of this section.

Adding the ingress

We’re going to grab the ambassador deployment and add it to the mesh. Note that we use two flags to modify the standard injection behavior:

kubectl get deploy -n ambassador ambassador -o yaml | linkerd inject --skip-inbound-ports "80,443" --ingress - | kubectl apply -f -

We deliberately skip the inbound ports 80 and 443 on the ingress. We do this for two reasons:

  • Linkerd’s proxy doesn’t have any information about the traffic coming into an ingress so it doesn’t add value there
  • Emissary is better positioned to manage inbound traffic if it isn’t modified by Linkerd
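If you manage the Ambassador manifests declaratively, the same injection settings can be expressed as annotations on the deployment’s pod template instead of CLI flags. A sketch of the relevant fragment (only the annotations are shown; the rest of the deployment is omitted):

```yaml
# Equivalent of `linkerd inject --ingress --skip-inbound-ports "80,443"`
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: ingress
        config.linkerd.io/skip-inbound-ports: "80,443"
```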

Optional: add the agent and Redis

The agent doesn’t require any special configuration. It serves traffic on port 8877, but it’s standard HTTP traffic, which Linkerd handles without issue:

kubectl get deploy -n ambassador ambassador-agent -o yaml | linkerd inject - | kubectl apply -f -

Redis isn’t currently included in Linkerd’s default opaque ports so you’ll need to tell Linkerd to treat the Redis traffic as a TCP connection:

kubectl get deploy -n ambassador ambassador-redis -o yaml | linkerd inject --opaque-ports 6379 - | kubectl apply -f -
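If you’d rather not rely on the inject flag, the opaque-ports setting is also available as an annotation that you can set on the Redis pod template (or on its Service):

```yaml
# Annotation equivalent of `linkerd inject --opaque-ports 6379`
metadata:
  annotations:
    config.linkerd.io/opaque-ports: "6379"
```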

By adding the entirety of Ambassador’s control plane to the mesh, you’ll get more detailed information about how it works and make it simpler to debug issues as they come up.

Mappings and modules and you

Modules and mappings help you configure your Emissary instance and route traffic to applications.

Modules

Read the official docs for a more in-depth explanation, but at a high level, modules provide cluster-wide configuration. In our example, we use the following module to tell Emissary to forward Linkerd headers on all routes:

---
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    add_linkerd_headers: true

Mappings

Mappings are route-specific rules, similar to an Ingress object. They allow you to route traffic to a given application and provide Emissary-specific logic to control things like circuit breaking or header-based routing.

We will use the following mapping resource to route traffic to the Emojivoto application in the next section:

---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: emoji
spec:
  prefix: /
  service: web-svc.emojivoto.svc.cluster.local
  rewrite: ""
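To illustrate the Emissary-specific logic mentioned above, a Mapping can also match on request headers. The following sketch is hypothetical: it assumes a web-svc-canary service exists and routes to it only when an x-canary header is present:

```yaml
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: emoji-canary
spec:
  prefix: /
  # Only requests carrying this header match the mapping
  headers:
    x-canary: "true"
  service: web-svc-canary.emojivoto.svc.cluster.local
  rewrite: ""
```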

Routing to Emojivoto

Now that we have Emissary and Linkerd installed, we can deploy Emojivoto and route some traffic to our ingress.

Start by deploying and injecting our app:

curl -sL https://run.linkerd.io/emojivoto.yml | linkerd inject - | kubectl apply -f -

We can view the dashboard to check on the progress or use linkerd check to test the proxy:

linkerd viz dashboard

# or

linkerd check --proxy -n emojivoto

With that done, we can configure Emissary and create our mapping. Save the following two files:

linkerd_module.yaml

---
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    add_linkerd_headers: true

linkerd_mapping.yaml

---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: emoji
spec:
  prefix: /
  service: web-svc.emojivoto.svc.cluster.local
  rewrite: ""

Then apply them to your Kubernetes cluster:

kubectl apply -f linkerd_module.yaml
kubectl apply -f linkerd_mapping.yaml

With that done, traffic should be able to flow from the ingress to Emojivoto. Let’s try it out! Forward traffic from port 8443 on your local machine to the Emissary ingress:

kubectl port-forward svc/ambassador -n ambassador 8443:443

Then use a private window to browse to https://localhost:8443. You’ll see a warning like this:

Invalid certificate warning

At this point you should immediately shut down your computer and unplug your telephone lines to avoid any security problems. Just kidding! You can safely ignore the invalid certificate in this case as we deliberately avoided creating a real certificate for our ingress and aren’t using a DNS service to create an externally valid host entry.

After allowing your browser to continue you’ll see Emojivoto pop up. You can use the app normally and observe the traffic through the ingress via the Linkerd viz dashboard or the Linkerd CLI.

Emojivoto via Emissary

Important Details

Congrats! You’re about done setting up Linkerd with Emissary. By skipping inbound traffic on ports 80 and 443, you’ve ensured that Emissary will work as designed, and you can enable more advanced routing features like WebSockets. On that note, if you’re looking to route traffic to the Linkerd dashboard, you’ll want to set a couple of things:

  • Host rewrites: Linkerd’s dashboard prevents DNS-rebinding attacks by limiting the valid URLs it can be called from. You can use Emissary’s host_rewrite field to work around that
    • Alternatively, you can update the valid dashboard URLs via the Linkerd install
  • WebSockets: The Linkerd dashboard uses WebSockets to show real-time traffic data

Here’s an example dashboard configuration:

---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: linkerd-viz
spec:
  prefix: /
  host: linkerd.example.com
  host_rewrite: web.linkerd-viz.svc.cluster.local:8084
  service: web.linkerd-viz.svc.cluster.local:8084
  rewrite: ""
  allow_upgrade:
  - websocket

Wrap up

Emissary has a ton of functionality that is extremely valuable to developers and platform owners, as does Linkerd. While Emissary focuses on the ingress to your cluster, Linkerd handles the inter-app communication within it. Paired together, they become more than the sum of their parts and are a key building block for a developer-focused Kubernetes platform.

I hope this has been useful, informative, and worth your time!
