Dec 23, 2020
(Note: if you’d like to read more about eBPF and sidecars, you might like our blog post on eBPF, sidecars, and the future of the service mesh by William Morgan.)
In this tutorial, you’ll learn how to run Linkerd and Cilium together and how to use Cilium to apply L3 and L4 network policies to a cluster running Linkerd.
Linkerd is an ultralight, open source service mesh. Cilium is an open source CNI layer for Kubernetes. While there are several ways to combine these two projects, in this guide we’ll do something basic: we’ll use Cilium to enforce L3/L4 network policies on a Linkerd-enabled cluster.
Kubernetes network policies are controls over which types of network traffic are allowed to happen within a Kubernetes cluster. You might put these in place for reasons of security, or simply as a safeguard against accidents.
The terms “L3” and “L4” refer to layers 3 and 4 of the OSI network model: policies at these layers are expressed in terms of IP addresses (layer 3) and ports (layer 4). For example, “requests between 192.0.2.42:9376 and 192.0.2.43:80 are forbidden” is a layer 4 policy. In orchestrated environments like Kubernetes, policies about individual IP addresses are quite brittle, so these policies are typically expressed in terms of label selectors instead, e.g. “any pod with the label app=egressok can send packets from port 80”. Under the hood, Cilium tracks the pod-to-IP assignments that Kubernetes makes and translates the label selectors into IP addresses.
L3 and L4 policies stand in contrast to L7 policies, which are expressed in terms of protocol-specific information. For example, “Pods with label env=prod are allowed to make HTTP GET requests to the /foo endpoint of pods with the label env=admin” is a layer 7 policy, because it requires parsing the protocol sent over the wire.
Linkerd, for its part, is focused on rich L7 policies that take into account things like service identity. So we’ll use Cilium to enforce L3/L4 policies and leave the L7 side to Linkerd. Let’s see how this works in practice.
What you’ll need to follow along:

- kind, to create a local Kubernetes cluster
- kubectl
- helm, to install Cilium
- the linkerd CLI
As a first step, we’ll configure our kind cluster via a configuration file. Make sure you disable the default CNI and replace it with Cilium:
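A minimal kind configuration along these lines will do; the node counts are illustrative, but disableDefaultCNI is the important part:

```yaml
# kind-config.yaml -- disable the default CNI so Cilium can take its place
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Then create the cluster with kind create cluster --config=kind-config.yaml.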
Now that we’ve got the cluster up and running, let’s install Cilium. Here are the steps:
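One way to install Cilium is via its Helm chart; the version below is illustrative, and kind environments may need additional chart values (see the Cilium documentation for details):

```shell
# Add the Cilium Helm repository and install Cilium into kube-system
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.9.1 \
  --namespace kube-system
```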
To monitor the progress of the installation, use kubectl -n kube-system get pods --watch.
To showcase what Cilium can do, we’ll use Podinfo and slow-cooker to simulate a client issuing requests to a backend API service:
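A sketch of the backend manifest follows; the names and ports are illustrative (Podinfo serves HTTP on 9898 by default), except for the app: podinfo label, which the policies below rely on:

```yaml
# podinfo.yaml -- backend deployment and service
apiVersion: v1
kind: Namespace
metadata:
  name: cilium-linkerd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: cilium-linkerd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo
          ports:
            - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: cilium-linkerd
spec:
  selector:
    app: podinfo
  ports:
    - port: 9898
      targetPort: 9898
```

Apply it with kubectl apply -f podinfo.yaml.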
Now that we have a server, it’s time to install the client:
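A minimal sketch of the client, using slow-cooker to generate steady load against Podinfo; the deployment name, image tag, and arguments are assumptions, while the app: client label is what the egress policy will match on:

```yaml
# client.yaml -- slow-cooker issuing 10 requests per second to podinfo
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: cilium-linkerd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: slow-cooker
          image: buoyantio/slow_cooker:1.2.0
          args:
            - -qps
            - "10"
            - http://podinfo:9898
```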
Now that the workloads are running, let’s apply a label-based Layer 4 ingress policy that restricts which packets can reach our Podinfo workload.
This policy has two ingress rules that apply to services labeled app: podinfo:
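A sketch of the policy, with the two rules annotated; the selectors and ports match the labels used above, and the Cilium-injected namespace label key (k8s:io.kubernetes.pod.namespace) is used to match the Linkerd control plane namespace:

```yaml
# podinfo-ingress.yaml -- L4 ingress policy for the podinfo workload
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: podinfo-ingress
  namespace: cilium-linkerd
spec:
  endpointSelector:
    matchLabels:
      app: podinfo
  ingress:
    # Rule 1: accept traffic from the client workload on podinfo's port
    - fromEndpoints:
        - matchLabels:
            app: client
      toPorts:
        - ports:
            - port: "9898"
              protocol: TCP
    # Rule 2: accept traffic from the Linkerd control plane, so that
    # features like tap and top can reach the proxy sidecar
    - fromEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": linkerd
```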
The second rule is essential for the correct operation of Linkerd. Features like tap and top rely on the control plane components connecting to the proxy sidecar that runs in each meshed workload. If this connectivity is blocked by Cilium rules, some Linkerd features will not work as expected.
Our Podinfo server now conforms to the network policies. To allow our client pod to communicate only with the Podinfo backend, we can use a Cilium egress policy:
This policy has three egress rules that apply to workloads labeled with app: client:
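A sketch of the policy, with the three rules annotated; the label keys and the DNS rule are assumptions based on Cilium’s namespace labels and kube-dns defaults:

```yaml
# client-egress.yaml -- egress policy for the client workload
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: client-egress
  namespace: cilium-linkerd
spec:
  endpointSelector:
    matchLabels:
      app: client
  egress:
    # Rule 1: allow requests to the podinfo backend
    - toEndpoints:
        - matchLabels:
            app: podinfo
    # Rule 2: allow traffic to the Linkerd control plane (identity,
    # destination) so the proxy can fetch certificates and discovery info
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": linkerd
    # Rule 3: allow DNS lookups via kube-dns
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
```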
Here again, allowing outgoing traffic to the Linkerd components is essential. The Linkerd proxy uses the identity and destination services to obtain TLS certificates and perform service discovery. If this connectivity is blocked, the proxy cannot function correctly, rendering the meshed workload unusable.
Now that our traffic is obeying ingress and egress policies, we can go ahead and install Linkerd following the installation guide. Ready? Then, let’s mesh our workloads:
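The installation boils down to a few CLI steps:

```shell
# Verify the cluster, install the Linkerd control plane, then verify it
linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check
```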
```shell
kubectl get deploy -n cilium-linkerd podinfo -oyaml | linkerd inject - | kubectl apply -f -
```
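The client deployment can be meshed the same way (assuming it is named client and lives in the same namespace):

```shell
kubectl get deploy -n cilium-linkerd client -oyaml | linkerd inject - | kubectl apply -f -
```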
Once workloads are meshed, we can see the requests being issued to Podinfo:
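The linkerd tap command streams these requests live:

```shell
# Stream live requests arriving at the podinfo deployment
linkerd tap -n cilium-linkerd deploy/podinfo
```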
Similarly, we can observe the live stream of all requests going out of the client workload:
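The same command pointed at the client (assuming the deployment is named client) shows its outgoing traffic:

```shell
# Stream live requests leaving the client deployment
linkerd tap -n cilium-linkerd deploy/client
```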
Note that the tls=true indicator shows that mTLS is applied to all traffic between the workloads. To verify that the policies work:
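One way to check the allowed path is to curl Podinfo from inside the client pod (this assumes curl is available in the client image; the container name is illustrative):

```shell
# From the client pod, an allowed request to podinfo succeeds
kubectl exec -n cilium-linkerd deploy/client -c slow-cooker -- \
  curl -s http://podinfo:9898
```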
Reaching a destination other than the allowed ones is not possible:
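For example, a request to a service outside the egress rules (kubernetes.default here is just an illustration, and again curl in the client image is assumed) hangs until it times out:

```shell
# This destination is not covered by the egress policy, so it is dropped
kubectl exec -n cilium-linkerd deploy/client -c slow-cooker -- \
  curl -s --max-time 5 http://kubernetes.default.svc.cluster.local
```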
Congrats! At this point you’ve successfully enforced L3/L4 policies using Cilium on a Linkerd-enabled cluster.
In this post, we’ve demonstrated how to use Cilium and Linkerd together, and how to apply L3/L4 policies in a Linkerd-enabled cluster. Everything in this blog post can be used in production today. As Linkerd adds L7 policy support in upcoming releases, we’ll be able to extend these same ideas to protocol-specific policies as well. Until then, go forth and make L3/L4 policies with Linkerd and Cilium!
Buoyant is the creator of Linkerd and of Buoyant Cloud, the best way to run Linkerd in mission-critical environments. Today, Buoyant helps companies around the world adopt Linkerd, and provides commercial support for Linkerd as well as training and services. If you’re interested in adopting Linkerd, don’t hesitate to reach out!