Service Mesh Academy

Secure Multi-cluster Kubernetes with Linkerd

Live on
May 5, 2022

Whether for disaster recovery, multi-tenancy, or user-facing latency, deploying applications across multiple clusters is an increasingly common approach to Kubernetes. Unfortunately, while Kubernetes gives you many reasons to run multiple clusters, it provides you with very little help in doing so. In this workshop, we’ll dive into Linkerd’s powerful multi-cluster capabilities and see how you can establish secure communication between clusters, even across the open Internet, in a way that’s fully transparent to the application.

Transcript

(Note: this transcript has been automatically generated with light editing. It may contain errors! When in doubt, please watch the original talk!)

Welcome and logistics

Jason: Alejandro, thank you for your patience during the introductions. Do you want to go ahead and start the deck? We’re here talking about Secure Multi-cluster Kubernetes with Linkerd.

Alejandro: Thank you so much, Jason. A little bit about myself: I’m a software engineer at Buoyant and one of the maintainers of the Linkerd project. These are some of my social media accounts. It’s usually alpeb on Twitter and GitHub, and on Slack it’s Alejandro at Buoyant. Feel free to contact me even after the workshop to ask me any questions; I will be more than glad to answer. For today’s agenda, the beef of it will be a hands-on demo. The idea is for you folks to follow along, so let me know if my cadence is too fast. I pasted the URL for the GitHub repo in the Slack channel; it contains the list of commands that I will type for this. You need two clusters ready. If you don’t have them ready, you can take advantage of the overview to set them up.

First, I will go over an overview of Linkerd’s implementation of multi-cluster. That will take about 15 minutes, so if you don’t have your clusters ready, you can do it now. In the demo, I’ll be using two GKE clusters, west and east. The east one needs to be able to create load balancer services accessible from the internet, or at least from the west cluster. The easiest way to do that is through a public cloud. I’m using three nodes per cluster because we’re going to be installing Linkerd viz as well, but that’s not really required. It’s just for one command. If three nodes are too much for you, it’s okay, two should be good as well. Usually, in local testing, I use k3d…

Hey, that’s Jason’s face! Nice to see you.

I use k3d, but that requires some extra flags. It’s easier just to go with a public cloud for these demos.

Multi-cluster overview

Let’s get started with the brief multi-cluster overview. First of all, let’s talk a little bit about why we want multi-cluster. We want to frame this around Linkerd’s three fundamental pillars, which are security, reliability, and observability. What are the use cases for multi-cluster? First, we might want to deploy the same services across different clusters for failover purposes. If services or entire clusters start failing in some region, we can fail over to other regions. That deals with the reliability aspect of things. The other use case is to actually spread different services among different clusters. We might want, for example, to segment services because we have different teams in our organization, and we want each team to be responsible for provisioning and maintaining their own separate clusters.

Multi-cluster will give you this functionality in a secure way. Also, you might need, for regulatory reasons, to have services in different jurisdictions, so you can isolate them in separate clusters, etc. There are multiple ways to approach multi-cluster. Linkerd’s implementation is very opinionated because we want to fulfill three fundamental goals. First, we want to continue being fully transparent to apps, as we’ve always been. The idea for multi-cluster is for apps to not have to change to benefit from all the good things that a service mesh provides. We want to keep that the same when using multi-cluster. The only change you would have to make is if you want to connect to a service in an external cluster. Obviously, since they are in different networks, you cannot use the same FQDN and would have to connect to the mirrored service in your local cluster.

We will see what that means in the demo. That would be the only thing you would have to change in your apps. Secondly, we want to be independent of network topology. One possible implementation of multi-cluster is to deploy your different clusters on the same flat network, so that each pod can communicate with any other pod in any cluster without having to rely on a service mesh. We didn’t go down that road because we want to keep clusters totally separate, among other reasons so that they remain separate failure domains and don’t need a common configuration between them. That’s not only for simplicity but also, yes, to keep different failure domains.

The only thing you need is for the clusters to be able to expose a gateway; we’ll see how that works in a moment. Finally, we want to avoid global state. This means that you don’t have to install some centralized control plane coordinating all the clusters. In our approach, each cluster has a separate Linkerd control plane. They don’t have a common configuration. The only common thing is a shared trust root. I’ll talk about that in a moment.

Jason: What’s the advantage of avoiding a global state or a global config for this thing?

Alejandro: So that you can bring in extra clusters with barely any configuration. That’s mainly the reason, I think.

Jason: Awesome. Thank you.

Certificates and trust anchors

Alejandro: As I said, the only requirement is for clusters to share the same trust anchor. You see that here at the bottom, in this example of west and east. Linkerd’s security model relies on a hierarchical chain of certificates. At the bottom, we have the common trust anchor, and then each cluster can provide its own separate issuer certificate. That certificate is used by the Linkerd control plane in each cluster to issue short-lived certificates to each one of the proxies. Those short-lived certificates live about 24 hours, I think. For example, you see on the left the usual case where service A wants to connect to service B; that is done through the proxies over mTLS. That means that each one of those proxies will validate the identity of the other proxy by checking against the common trust anchor. When Linkerd injects the proxy, it also injects the trust anchor as an environment variable into the proxy. They know what that certificate is, and that’s what they use to validate connections.

Enter the 2nd cluster

If we bring in an additional cluster here, east, we have to install an extra extension, Linkerd multi-cluster, in both clusters. That gives us an extra component, the multi-cluster gateway, which is responsible for routing all the connections that come into the cluster.

One thing that I didn’t picture here is the service mirror controller. For example, if I want to expose service C, which lives on east, to the west cluster, I would add a label to service C. That gets detected by the controller on west, which creates a mirror service. Of course, the target pod remains on the east cluster, and the controller associates an endpoint with that mirror service. That endpoint points to the IP of the gateway on east. If service A wants to communicate with service C, the proxy will use that endpoint and reach the gateway directly on east. That gateway will then route the connection to the appropriate target service.
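
(As a rough sketch of that export step: the service and namespace names below are hypothetical, while the mirror.linkerd.io/exported label is the one the service mirror controller watches for.)

    # Hypothetical example: mark service C on the east cluster for mirroring.
    # The service mirror controller on west will then create svc-c-east on west.
    kubectl --context=east -n my-namespace label svc/svc-c mirror.linkerd.io/exported=true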

If you want to know more details about how mTLS works, Jason just pasted the link to a previous workshop by Matei specifically on the topic of mTLS.

Multi-cluster failover

Now, let’s talk about failover. This is provided as an additional extension called Linkerd failover that we released a few months ago. It is supported only as of Linkerd 2.11.2, which we released one or two weeks ago, and it relies on the TrafficSplit CRD. That CRD is defined in the SMI project. SMI is the Service Mesh Interface, a project that attempts to put some common service mesh concepts under the same umbrella.

In 2.11.2, we have included the definition of that CRD in the Linkerd installation, but as of Linkerd 2.12, which is not yet out, we have removed it. Instead, we require that you install the Linkerd SMI extension, which contains all the SMI-related stuff. For the demo, we will be using 2.11.2, so we don’t need to install that extra extension. This is a first approach to failover, and it uses the simplest possible criterion: failing over from one primary service to a list of secondary services based on pod readiness. So it’s very simple.

Multi-cluster failover: what to expect next

As I said, this is a first stab at this problem. There are many ways we can improve. For example, Kubernetes now provides topology aware hints, where you can declare in endpoint slices whether you want to keep connections inside the same zone, and things like that. In Linkerd failover, we use TrafficSplit, and TrafficSplit only cares about services. We are not surfacing those topology awareness schemes, but we could do that through some means in the future. Another way we could improve things is to use latency instead of pod readiness. Instead of waiting for a pod to completely fail before doing the failover, we could watch its latency and fail over when it starts becoming slow.

Also, currently, if the primary service fails, we immediately fail over to the secondary services. Whenever the primary service gets back online, we switch 100% of the traffic back to it. That might be too abrupt, so one possible improvement would be to use a circuit breaker pattern in which we watch the health of the primary service and progressively switch traffic back to it instead of doing it wholesale. Anyway, if any of these cases interest you, or if you have additional use cases, please let us know by raising an issue in the Linkerd repo. That will help us properly prioritize what we’re going to do in the future for this extension.

Hands-on workshop: Let’s begin

We’re going to start with a demo. What we’re going to do, we’re going to start with one cluster. We’re going to install Emojivoto, which is the app we always use for these kinds of demos. Then we are going to add another cluster. We’re going to route some of the services from one cluster to the other, and then we’re going to have parallel installations of Emojivoto in both clusters and introduce failover so whenever one service fails in one cluster, we can use the service on the other cluster. All right. One second. Any questions so far? Okay. We good now?

Jason: We’re looking good.

Alejandro: Awesome. I have here on my screen, on the top left, the west cluster, and on the top right, the east cluster. I’m going to switch to the context of the west cluster. I have kubectl aliased to k for speed.

We’re on west now. I’m going to install Emojivoto. We’re going to use it and see if it’s working. For that, I’m going to port-forward the front end. Let’s check it out. There you go. You can vote for your favorite emojis, and you can check the leaderboard. There you go. This has some components: Votebot, which is a traffic generator, so we can see some stats out of the box. It communicates with web, which is the web front end, and web relies on voting, which returns the list of available emojis and… sorry, that’s emoji. Emoji gives you the list of emojis, and voting is where you persist the votes. This is the architecture of Emojivoto: as I just said, Votebot calls web, and web calls emoji and voting. What we’re going to do next is remove everything but Votebot on west, and have Votebot communicate with the web service on east. For that, we’re going to rely on a gateway that is installed on east, and we’re going to create a mirror of the web service on west. Votebot actually calls the mirror of web here, and that’s going to relay the connection to east.
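
(If you’re following along, the install and port-forward step looks roughly like this. The manifest URL and port are the ones from the standard Emojivoto demo; adjust the context name to match your own kubeconfig.)

    # Install Emojivoto on the west cluster and port-forward the web front end
    kubectl --context=west apply -f https://run.linkerd.io/emojivoto.yml
    kubectl --context=west -n emojivoto port-forward svc/web-svc 8080:80
    # then browse to http://localhost:8080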

We’re going to first delete everything but Votebot. Whoops. We’re going to install Linkerd, of course. This is the bare, default installation. It will create the trust root and the issuer certificates by default, based on randomly generated certificates. We’re going to install Linkerd viz as well. This is not strictly necessary; if you don’t have enough nodes in your cluster, you can skip this step. We just need it for one command that we’ll see in a moment. This is going to take a bit. Great timing for asking questions.
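
(A minimal sketch of what those installs look like, assuming the west context name used above.)

    # Install the Linkerd control plane with default (auto-generated) certificates,
    # then the viz extension (optional; only needed for one command later on)
    linkerd --context=west install | kubectl --context=west apply -f -
    linkerd --context=west viz install | kubectl --context=west apply -f -
    linkerd --context=west check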

Jason: Just for folks out there, if you’re following along, it would be good to get a thumbs up from people if you’ve been able to access the repo. I see we have one question out there. Do you mind putting your question in the chat, either in the Slack chat or the workshop channel, and I’m happy to ask Alejandro. Yeah, if you can do the little thumbs up reaction icon if that’s available. I’m not sure the emojis are available. Sorry.

Please just say something if you’re having trouble following along. Yeah, I can post the repo link. Hang on, just one sec. The repo link is there in the chat.

Alejandro: As a reminder, Linkerd viz is not required for the multi-cluster functionality, only for one command that we will use in a moment. We’re almost there.

Installing the multi-cluster extension

The next step is to install the multi-cluster extension. The command is linkerd multicluster install, piped into kubectl apply; I’m going to abbreviate multicluster to mc for speed. The extension is coming up. Now I want to add the east cluster. Remember, we need to use the same trust root, so I’m going to extract the one that linkerd install created by default. That’s stored in a config map called… linkerd-identity-trusts… what’s wrong here? linkerd-identity-trust-roots. I’m going to save that as west-root.crt, but let’s take a look at it first. Obviously this is a config map, and I’m only interested in this part here, so I’m going to remove everything else.
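
(Roughly, for those following along: the west-root.crt filename is just what the extracted file is being called here, and the config map name is the one the default install creates in the linkerd namespace.)

    # Install the multi-cluster extension on west
    linkerd --context=west multicluster install | kubectl --context=west apply -f -
    # Extract the default trust root that linkerd install generated
    kubectl --context=west -n linkerd get configmap linkerd-identity-trust-roots -o yaml > west-root.crt
    # then strip everything except the PEM certificate block, as in the demo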

Creating the certificate for east

I’m going to create the certificates for east now. We will not use the defaults; we’re going to create our own. For that, we use this command from step, which is a CLI tool. Sorry, I forgot to tell you to install that beforehand. It does the same job as OpenSSL but is easier to use. What this does is create a root certificate. It gives us back the public certificate and the private key. This is the trust root. Now I need the issuer certificate, which is an intermediate certificate rooted at the root certificate we just created. We get back the public certificate and the private key. Right?
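
(For reference, the step commands look roughly like this. The file names are illustrative; the profiles and the not-after duration follow the usual Linkerd certificate docs.)

    # Create a new trust root for east
    step certificate create root.linkerd.cluster.local east-root.crt east-root.key \
      --profile root-ca --no-password --insecure
    # Create an intermediate issuer certificate rooted at that trust root
    step certificate create identity.linkerd.cluster.local east-issuer.crt east-issuer.key \
      --profile intermediate-ca --not-after 8760h --no-password --insecure \
      --ca east-root.crt --ca-key east-root.key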

What I’m going to do now is this: when I created the certificate for the east cluster, I created a brand-new root, but the trust anchor needs to be shared with west. We’re just going to bundle them; that’s another valid way of doing things. This is the one we extracted from west, and I’m going to append the one from west to the one from east and create a bundle.

Let me just check what it looks like; there it is. I’m going to use that bundle. First, I’m going to update the certificate on west to use this bundle, and then I’m going to install east using that same bundle. For that, I just do linkerd upgrade with the identity trust anchors flag. Let me see if everything’s fine here. Looks good. We see some pods restarting there, and now I’m going to switch to the east cluster. I’m going to install Linkerd using that trust bundle. Then I’m going to install the multi-cluster extension. We don’t need viz in this cluster. This is going to take a moment. Time for questions, if you want.
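
(Sketched out, with the file names from the previous step; the flags are the ones the linkerd CLI uses for trust anchors and issuer credentials.)

    # Bundle both trust roots and roll the bundle out to west
    cat east-root.crt west-root.crt > bundle.crt
    linkerd --context=west upgrade --identity-trust-anchors-file=bundle.crt | kubectl --context=west apply -f -
    # Install Linkerd on east with the same bundle and east's own issuer, then the multi-cluster extension
    linkerd --context=east install \
      --identity-trust-anchors-file=bundle.crt \
      --identity-issuer-certificate-file=east-issuer.crt \
      --identity-issuer-key-file=east-issuer.key | kubectl --context=east apply -f -
    linkerd --context=east multicluster install | kubectl --context=east apply -f -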

Jason: I saw someone typing but I don’t see anything active right at the moment.

Alejandro: Christina is typing.

Jason: Christina is asking if we can show the version of Linkerd and point out the traffic split CRD on the west cluster.

Alejandro: Yes, that’s right. Yes, it’s version 2.11.2. If we go to the west cluster… actually, it’s installing east as well. The CRD, the TrafficSplit… we do not need… ah, sorry, that’s the custom resource definition for TrafficSplit. This is what you want to see, Christina, the CRD? This is the actual CRD definition. We’re going to add one instance of that, which we will see in detail in a moment. That was Linkerd itself. To answer the question from Christina about what installed the CRD: the TrafficSplit CRD got installed by the linkerd install command because it’s included in 2.11.2. It won’t be included in the upcoming 2.12 version. For that, you will have to install the Linkerd SMI extension. We’re ready here. Now we are going to install an injected version of Emojivoto on east.

Jason: Someone has a question about how we generate the cert. I believe they’re asking about step, a CLI tool that you can use to very easily generate certificates. I’ll post the link to that in the chat. I don’t know if you have anything to add there, Alejandro.

Alejandro: No, it’s fine. Yeah, that’s correct. Usually people do that through OpenSSL, but I think step is gaining more traction. It has a much friendlier interface. Let’s see the services that we have in the multi-cluster namespace in east.

Exposing ports

We have this gateway that got created there. That’s where all the incoming multi-cluster traffic will come through. It exposes this external IP, and we have two ports there. That pod is a two-container pod. One of the containers is our usual Linkerd proxy. The other one is just a pause container, so it doesn’t do anything. We are exposing the port 4143, which is the usual Linkerd proxy port, and 4191, which exposes the readiness of the cluster so any other clusters know whether to route traffic there or not.
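
(If you want to inspect that gateway yourself, something like the following should show the service, its external IP, and the two ports; the names are the ones the multi-cluster extension creates by default.)

    # The multi-cluster gateway lives in the linkerd-multicluster namespace
    kubectl --context=east -n linkerd-multicluster get svc linkerd-gateway -o wide
    kubectl --context=east -n linkerd-multicluster get pods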

Let’s also check the server authorizations. This external IP is facing the internet, but only traffic coming from inside the mesh is accepted, that is, from proxies whose identities chain up to the trust anchor that all these clusters share. If not, traffic will be denied. Right. In addition to that, there’s a server authorization resource. There is a single one in the linkerd-multicluster namespace, called linkerd-gateway. It is associated with the gateway, and the important part here is this: again, the proxy will reject anything that doesn’t have a proper identity, but on top of that we also have this, so only traffic that has a meshed identity will be allowed. By default, traffic from any IP will be allowed. The recommendation in production, of course, is that you put here the list of the other clusters you want to communicate with.

Now I’m going to delete the Votebot pod on east so that only the Votebot from west is issuing requests. I’m going to export the web service on east. For that, I just add a label.

Let me make sure I did this properly. Exported equals true; I think that’s fine. I’m going to create a link. This is what will actually allow us to link both clusters. The command is linkerd multicluster link, and I have to provide a name for this particular link. I’m calling it east and… there it is. Let’s take a closer look at what that is.
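
(Those two steps look roughly like this. The label key is the one the service mirror controller watches for, and note that the link output is generated against east but applied on west.)

    # Export the web service on east...
    kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true
    # ...and generate the link credentials on east, applying them on west
    linkerd --context=east multicluster link --cluster-name east | kubectl --context=west apply -f -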

First of all, this contains a secret with a kubeconfig. We’ll take a closer look at that in a moment. It’s just a regular kubeconfig file that allows you to connect to the cluster. This is what the west service mirror controller will use to connect to the east cluster. We also have the Link CRD instance. First, we have here the cluster credentials secret, the one I just showed you, which is what will allow us to connect to the east cluster, and the IP of the gateway that is exposed on east. Then we have a cluster role. Note that I generated this while on the east cluster context, but I’m going to apply it on the west cluster, right?

This cluster role gives me the ability to do anything with endpoints and services, because we need to create mirrors on the west cluster. It also allows me to list and watch namespaces, because we can only create mirror services if their namespace is already there. And we bind that cluster role to the service account linkerd-service-mirror-east, which is the service account of the service mirror controller on west, yeah.

We have a role applying to the linkerd-multicluster namespace that simply allows us to retrieve the secret that I showed you at the top of this file and to watch over any links that get created. We bind that, we have the service account we already mentioned, and then the deployment of the actual service mirror controller on west. Remember, we are going to apply this on west. This is what creates the controller that connects to the Kube API on east using the kubeconfig I showed you, so it can watch over any service that gets exported. Whenever that happens, it’s going to create a mirror on the west cluster. Finally, we have a service which is just for probing purposes. We’re going to hit that service and, as you can see, it doesn’t have a target pod, because the service mirror is going to manually create an endpoint whose IP points to the gateway on east. It’s going to hit the probe port that I showed you before, just to see if the cluster is alive.
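
(After applying the link on west, you can poke at what it created with something like this; the Link CRD group and the linkerd-service-mirror-east deployment name follow from the cluster name chosen above.)

    # Inspect the Link resource and the service mirror controller it deployed on west
    kubectl --context=west -n linkerd-multicluster get links.multicluster.linkerd.io
    kubectl --context=west -n linkerd-multicluster get deploy,svc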

Token for the service mirror controller

The kubeconfig that I showed you is going to use this token from the linkerd-service-mirror service account here, I think it’s this one. It’s going to connect using that token. We have created a service account on east that is tied to that token, and it is going to be used by the service mirror controller on west. If you have multiple clusters connecting to east, you can create different service accounts for better management; you can deny access, if you want, by deleting those service accounts. If you want to create all these resources for a new service account, you can use the linkerd mc allow command and give a name to the service account you want to create. For example… I know this is going to fail because this turns out to be broken on 2.11.2, I just realized that. Let me switch real quick to an edge release. I have some commands here to switch my Linkerd version to edge-22.4.1, and this should work. Let’s take a look at that. I’m not going to use it; this is just to show you what it does.
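
(As a sketch: the service account name below is hypothetical, and the flag name is the one I recall the CLI using, so double-check linkerd multicluster allow --help. The generated RBAC is applied on east.)

    # Generate RBAC for an additional service account that other clusters can link with
    linkerd --context=east multicluster allow --service-account-name my-mirror-sa | kubectl --context=east apply -f -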

We have the same cluster role and service account, all the same RBAC I showed you before, but it applies to this particular service account. If you want to create a new service account to link different clusters, you can use this command. Then I switch back to stable. Let’s go back to west and apply that link. Now let’s take a closer look at the kubeconfig that just got applied here.

This is in the linkerd-multicluster namespace, in the cluster credentials secret. I’m going to use some jq magic to extract it; it’s base64-encoded. What did I do here? There, this is a regular kubeconfig file. What I wanted to show you is this token. This should be the same token associated with the service account that got set up on east to grant us access to the API on east from west. Let’s check if the connection has been established. We use linkerd mc gateways. It’s working fine, and we have one service exported. This is the latency distribution for east as seen from west. This uses Prometheus under the hood. The Linkerd service mirror, by the way, got created here. It exposes some metrics regarding the availability of east, and Prometheus will scrape those and show us the latencies here. This is why we needed to install Linkerd viz on west, but it’s just for this.
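
(The health check itself is a single command; this is the one command that needs the viz extension’s Prometheus on west.)

    # Check connectivity, number of exported services, and latencies to the east gateway
    linkerd --context=west multicluster gateways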

Let’s check the services in Emojivoto. We have our usual leftover services from the other components, and the service mirror controller created web-svc-east, which is the mirror of web on east. If we look at the endpoints, we see the associated endpoint that points directly to the gateway on east, on the proxy port. Votebot isn’t injected yet and it’s not pointing to the mirror service yet, so it should be erroring out; let’s verify that. Now let’s try to communicate with the east cluster using curl. For that, I’m going to create a curl pod. Okay. This is going to die because it completes, so I’m going to edit it. I bet there’s a better way to do that, but I don’t know. Let’s just edit it. What should I do here? I’m going to add the sleep. Here. Okay, restart.
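
(One way to get a long-running curl workload without hand-editing the pod; the image and the deployment name here are my own choices, not what was typed in the demo.)

    # Check the mirrored service and its manually-created endpoint
    kubectl --context=west -n emojivoto get svc,endpoints web-svc-east
    # Create a curl deployment on west that just sleeps, so we can exec into it
    kubectl --context=west -n emojivoto create deployment curl --image=curlimages/curl -- sleep 100000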

It’s running now. Let’s exec into it. Okay, I will curl that mirrored service, web-svc-east, in Emojivoto. The connection failed. If we check the logs of the gateway in east, we should see the failure. I’m using the east context here. Let’s see. Let’s hit it again. We see the failure here immediately, the connection closed: the error says direct connections must be mutually authenticated. That’s what we expected. Now let’s try injecting the curl pod on west. I can close this. It should work this time. For that, one thing we usually do is just get the deployment, pipe it to linkerd inject, and pipe that to kubectl.
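
(That inject step, sketched out under the assumption that the curl workload is a deployment named curl, as created above.)

    # Mesh the curl deployment by re-applying it through linkerd inject
    kubectl --context=west -n emojivoto get deploy curl -o yaml | linkerd inject - | kubectl --context=west apply -f -
    # Retry the request, this time through the proxy
    kubectl --context=west -n emojivoto exec -it deploy/curl -c curl -- \
      curl -v http://web-svc-east.emojivoto.svc.cluster.local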

Now we have two containers here; one of them is the proxy. Let’s get into it again. This time, since there are two containers, I have to specify the one I want to get into. Let’s do the curl again. This time it works, and we get back our HTML. Now I want to do the same thing with Votebot. For that, let’s edit the deployment. First, I’m going to add the inject annotation. One common mistake is to add that annotation at the workload level, but the right place is in the actual pod template. It’s right here below.
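
(That annotation change can also be done with a one-line patch instead of kubectl edit; vote-bot is the deployment name Emojivoto uses, and the annotation goes on the pod template, not the deployment metadata.)

    # Add the proxy injection annotation to the pod template of vote-bot
    kubectl --context=west -n emojivoto patch deploy vote-bot \
      -p '{"spec":{"template":{"metadata":{"annotations":{"linkerd.io/inject":"enabled"}}}}}'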

Let’s failover

Currently, Votebot doesn’t know about the mirror service. We just change this environment variable here so that the target it hits is the mirror service, web-svc-east, I think. Now it’s restarting; let’s tail the log. Now it’s working fine. Awesome. Now let’s do failover. For that, we’re going to do something like this: we’re going to have two parallel installations of Emojivoto in both clusters. We will still have Votebot only on west. We’re going to set up a traffic split whose primary service will be the local web service on west, and whose secondary services will be that same web service on west and another secondary, which is the mirror of east. Whenever the web service on west fails, it will fail over to the web service on east, right through this mirror here.

We’re going to reinstall Emojivoto on west, yes, so that we get our pods back. This time we inject it from the get-go. In the instructions I have, if you are running 2.12 or an edge release, you’re going to have to install the Linkerd SMI extension to get access to the TrafficSplit CRD. That’s not the case here, so we’re skipping that.
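
(Reinstalling Emojivoto with the proxy injected from the start looks roughly like this.)

    # Reinstall Emojivoto on west, meshed from the get-go
    curl -sL https://run.linkerd.io/emojivoto.yml | linkerd inject - | kubectl --context=west apply -f -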

I do need to install the Linkerd failover extension. One second… not yet. Let’s first take a look at how TrafficSplit works. I have here in my directory a TrafficSplit resource. It’s very simple. What matters for us now is the service: we are interested in splitting the traffic of the web service, and that’s backed by two backends, web-svc and web-svc-east, which is the mirror that we set up in the previous steps. Currently, all the traffic is going to web-svc. Let’s apply that. If we tail the logs of the voting pod, we should see some traffic here on west. Yeah, there it is. I’m going to open another window and tail the logs of the voting service on east. We have no traffic there. Remember, Votebot is only running on west, so east is not doing anything right now.
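
(For reference, a minimal version of that resource might look like the YAML below; the resource name web-svc-failover is my own choice, and the apiVersion is the SMI TrafficSplit version bundled with 2.11.x. Save it to a file and apply it on west with kubectl --context=west apply -f.)

    # web-svc-split.yml: apex service web-svc, backed by the local service and the east mirror
    apiVersion: split.smi-spec.io/v1alpha2
    kind: TrafficSplit
    metadata:
      name: web-svc-failover
      namespace: emojivoto
    spec:
      service: web-svc
      backends:
        - service: web-svc        # local backend on west, currently taking all traffic
          weight: 1
        - service: web-svc-east   # mirror of the east web service, currently idle
          weight: 0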

Switching all traffic to east

Now, okay, I’m going to switch all the traffic to east. Let’s open yet another window. I’m still here on the west context. Let’s manually edit the TrafficSplit. I’m going to switch all the weight to east and save that. We see the traffic stopped on west and started on east. It’s working as intended. Let me switch it back. The idea of the failover operator is to do what I just did automatically, depending on the availability of those backend services. That’s it, so let me install the Linkerd failover extension. For that, I need to add the Helm repo. This is done through Helm; we don’t have a CLI command for installing this. If you don’t have the Linkerd Helm repo added yet, you should do this, then a helm repo update. Then I install the extension. This is still an edge release, so it needs the --devel flag.
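
(The Helm steps are roughly as follows; the repo URL and chart name are the ones from the linkerd-failover README as I recall them, so double-check the README if they have moved.)

    # Add the Linkerd edge Helm repo and install the failover operator on west
    helm repo add linkerd-edge https://helm.linkerd.io/edge
    helm repo update
    helm install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover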

We should see the operator here on west. There it is: linkerd-failover. Let me show you that TrafficSplit one more time. These annotations here, most importantly the failover.linkerd.io one, flag this specific TrafficSplit so that the failover operator watches over it. If we don’t set this label, it will just leave it alone. We have this annotation here, primary-service, that specifies which of the services listed here is the primary one; the other ones will act as secondaries. The primary is the web service on west, and the secondary is going to be the mirror one.
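
(If you’re adding those markers by hand, it would look something like this. The exact label and annotation keys are my recollection of the linkerd-failover README, so treat them as assumptions and verify against the extension’s documentation.)

    # Mark the TrafficSplit so the failover operator manages it, and declare the primary backend
    kubectl --context=west -n emojivoto label trafficsplit web-svc-failover app.kubernetes.io/managed-by=linkerd-failover
    kubectl --context=west -n emojivoto annotate trafficsplit web-svc-failover failover.linkerd.io/primary-service=web-svc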


Now let’s simulate a failure on west and see how traffic flows to east automatically. For that, I’m just going to scale down the web pod on west: replicas zero for web, so I’m going to scale down this guy here. You will see it getting terminated, and immediately you see the traffic going to the secondary; it was flowing here, and then it went to east. And if we check our TrafficSplit again, we see that the weight was automatically shifted to the mirror here, and the primary got its weight set to zero. Now, if the primary becomes available again, the operator will switch the traffic back to it. Let’s scale it back up, replicas one. This is going to take a moment while the pod becomes ready, this one here. You’ll see the traffic flowing back from east to west. There it is. Okay. Awesome. I think that’s all I wanted to show you. Let’s get back to the presentation. Do you have any questions before we switch back?
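
(For those following along, the failure simulation is just a scale-down and scale-up of the web deployment on west, watching the TrafficSplit weights in between; the TrafficSplit name matches the sketch above.)

    # Simulate a failure of the primary, check the shifted weights, then recover
    kubectl --context=west -n emojivoto scale deploy/web --replicas=0
    kubectl --context=west -n emojivoto get trafficsplit web-svc-failover -o yaml
    kubectl --context=west -n emojivoto scale deploy/web --replicas=1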

Jason: Looks like things are fairly quiet. For the folks that were following along, were you able to follow fairly well? Some thumbs up or comments in the Slack or chat would go a huge way.

Jason: With that, I hope you’ve enjoyed this workshop. We have a bunch more workshops over at the Service Mesh Academy. Let me just get you that link real quick. A bunch more webinars from Buoyant on all sorts of things. I heard a question about Linkerd inject; there’s kind of a Linkerd overview and how-it-works session in there. We’ve got a session on multi-cluster, one on mTLS, a good one on policy, and there’s a lot more going on. Thanks a ton [inaudible 00:54:44], I’m sorry if I didn’t say your name correctly; there will be a recording. We’re so grateful to all of you for attending. Let me just post the link here, both in the Zoom chat and in the workshop channel.

Jason: The next workshop coming up, June 16th, is Enterprise PKI in the cloud native world, with Linkerd and cert-manager. It’s a personal favorite, I love cert-manager. I think it’s a lot of fun. Well, fun is relative. I think it’s a cool technology that makes managing certificates way better than doing it by hand. I bet you’ll really enjoy it, and I hope you get a chance to register. There’ll be a recording, and slides will get sent out. Oh, and it’ll be co-presented with the folks who do cert-manager, the folks over at Jetstack, now Venafi, I think.

Jason: Yeah, really great stuff beyond that. You like the presentation? You like Linkerd? You like Buoyant? Well, we want you to come join us and help make Linkerd even more awesome. We’d love to hear from you: on Twitter, on Slack, anywhere that you have questions or problems. If you’re looking for a role, we’d love to talk to you, and please feel free to reach out to me directly on Slack if you have questions about any of these things, or on LinkedIn if you need any of that. Alejandro, any parting words?

Alejandro: Thanks so much for attending. The demo went well, so thanks to the demo gods. [crosstalk 00:56:31]

Jason: Yeah, it’s always good when all the things work the first time.

Alejandro: We didn’t touch on some topics. For example, we recently added headless services mirroring, so you can mirror database nodes and things like that, which can get pretty interesting. If you are interested in that, we have docs for it on the linkerd.io site.

Jason: Awesome, great bit. Is this documented on Linkerd.io? The failover operator?

Alejandro: Good question. I think not. There’s a good README for the Linkerd failover extension on GitHub.

Jason: Yeah, it’s on the extension page, but I don’t know that we have it in the docs. Great question [inaudible 00:57:25], let me get that pulled up. Actually, let’s just get the Linkerd failover operator pulled up and get you all the link to that. Just give me a quick sec. For folks in the chat, here is the link to the failover operator itself. We’ll double check on the docs to make sure that it is linked on the website; obviously we’ll get that in. Great question, thank you. Fantastic. Alejandro, do you want to give any parting words? I know I asked you that before.

Alejandro: No, thanks again. Thank you, Jason.

Jason: Yeah, no problem.

Alejandro: Hope to see you folks soon.

Jason: Yeah, and there are tons of courses for Linkerd newbs, including on YouTube. There is a free course from the CNCF that will actually walk you through a lot of it. There are videos. If you’re not already tired of my voice, you can hear me talk about it an awful lot. Of course, if anyone’s going to be at KubeCon, hit me up there. I’m happy to point you at materials there as well and help you find an opportunity to earn one of our super cool Linkerd hats. I’ll try and post something, Omar, in the workshops channel before we go. If you didn’t get a chance to join, one more time I’ll share the Slack link: it is slack.linkerd.io. Please join the Linkerd Slack for any more questions. You all have a great rest of your day, and thank you for taking the time to attend with us.

Alejandro: Right, thank you. Bye-bye.

Whether for disaster recovery, multi-tenancy, or user-facing latency, deploying applications across multiple clusters is an increasingly common approach to Kubernetes. Unfortunately, while Kubernetes gives you many reasons to run multiple clusters, it provides you with very little help in doing so. In this workshop, we’ll dive into Linkerd’s powerful multi-cluster capabilities and see how you can establish secure communication between clusters, even across the open Internet, in a way that’s fully transparent to the application.

Transcript

(Note: this transcript has been automatically generated with light editing. It may contain errors! When in doubt, please watch the original talk!)

Welcome and logistics

Jason: Alejandro, thank you for the patience during the introductions. Do you want to go ahead and start the deck? We’re here talking about Secure multi-cluster Kubernetes with Linkerd.

Alejandro: Thank you so much, Jason. A little bit about myself, I’m a software engineer at Buoyant and one of the maintainers of the Linkerd project. These are some of my social media accounts. It’s usually alpeb at Twitter, GitHub, and on Slack it’s Alejandro at Buoyant. Feel free to contact me even after the workshop to ask me any questions, I will be more than glad to answer. For today’s agenda, the beef of it will be a demo, a hands-on demo. The idea is for you folks to follow along. Let me know if my cadence is too fast. I pasted it in the Slack channel, this URL for the GitHub repo, which contains the list of commands that I will type for these. You need two clusters ready. If you don’t have them ready, you can take advantage.

First, I will go over an overview of Linkerd’s implementation of multi-cluster. That will take about 15 minutes, so if you don’t have your clusters ready, you can do it now. In the demo, I’ll be using two GKE clusters because we will have two clusters, west and east. The east one needs to be able to create load balancer services accessible from the internet or at least from the west cluster. The easiest is to do that through public clouds. I’m using three nodes per cluster because we’re going to be installing Linkerd viz as well, but that’s not required really. It’s just for one command. If three nodes are too much for you, it’s okay, two should be good as well. Usually, in local testing, I use K3d…

Hey, that’s Jason’s face! Nice to see you.

I use K3d, but that requires some extra flags. It’s easier just to go with a public cloud for these demos.

Multi-cluster overview

Let’s get started with the brief multi-cluster overview. First of all, let’s talk a little bit about why do we want multi-cluster? We want to frame these around Linkerd’s three fundamental pillars, which are security, reliability, and observability. What are the use cases for multi-cluster? First, we might want to deploy the same services across different multi-clusters, across different clusters for failover purposes. If services or the entire clusters start failing in some region, we kind of failover other regions. That deals with the reliability aspect of things. The other use case is to actually spread different services among different clusters. If we want, for example, to segmentate services because we have different teams in our organization, and we want to give the responsibility of each team to provision and maintain their own separate clusters.

Multi-cluster will give you this functionality in a secure way. Also, you might need for regulatory reasons to have services in different jurisdictions, so you can isolate them in separate clusters, etc. There are multiple ways to scan the multi-cluster path. Linkerd’s implementation is very opinionated because we want to fulfill these three fundamental goals. First, we want to continue being fully transparent to apps, as we’ve always been. The idea for multi-cluster is for apps to not have to change to benefit from all the good things that service mesh provides. We want to keep the same when using multi-cluster. The only change we would have to make is if you want to connect to a service in an external cluster. Obviously, since they are in different networks, you cannot use the same FQDN and would have to connect to the mirrored service in your local cluster.

We will see what that means in the demo. That would be the only thing you would have to change in your apps. Secondly, we want to be independent of network topology. One possible implementation of multi-cluster is just to deploy your different clusters using the same network IP address so that each pod can communicate with any other pod in the clusters without having to rely on a service mesh. We didn’t go down that road because we want to keep clusters to be totally separate, among other reasons, so that they have the different failure domains that are not at all… you don’t need a common configuration between them, not only for simplicity but also yes, to keep different failure domains.

The only thing you need is to have the clusters being able to expose a gateway that we would see how that works in a moment. Finally, we want to avoid a global state. This means that you don’t have to install some centralized control plane coordinating all the clusters. In our approach, each cluster has a separate Linkerd control plane. They don’t have a common configuration. The only common thing is a common trust route. I’ll talk that about that in a moment.

Jason: What’s the advantage of avoiding a global state or a global config for this thing?

Alejandro: So that you can bring extra clusters without barely any configuration. That’s mainly the reason, I think.

Jason: Awesome. Thank you.

Certificates and trust anchors

Alejandro: As I said, the only requirement is for clusters to share the same trust anchor. You see that here at the bottom. In this example of west and east, each cluster… Well, Linkerd’s security model relies on a hierarchical chain of certificates. At the bottom, we have the common trust anchor, and then each cluster can provide their own separate issuer certificate. That certificate is used by the Linkerd control plane to issue short-lived certificates to each one of the clusters and each one of the proxies. Those short-lived certificates live about 24 hours, I think. For example, you see on the left the usual case where service A wants to connect to service B, so that is done through the proxies through mTLS.That means that each one of those proxies will validate the identity of the other proxy by checking against the common trust anchor. When Linkerd injects the proxy, it also injects the trust anchor as an environment variable into the proxy. They know what that certificate is, and that’s what they use to validate connections.

Enter the 2nd cluster

If we bring an additional cluster here, east, we have to install an extra extension Linkerd multi-cluster in both clusters. That gives us an extra component, the multi-cluster gateway that is responsible for routing all the connections that come into the cluster.

One thing that I didn’t picture here is the service mirror controller. That, for example, if I want to expose service C in the west cluster, I would add a label into service C that gets detected by the controller on west. It will create a mirror service. Of course, the target pod remains on the east cluster, and the controller associates an endpoint to that service. That endpoint points to the IP of the gateway on east. If service A wants to communicate to service C, the proxy will use that endpoint, and it will reach the gateway directly on east. That gateway will route the connection to the appropriate target cluster.

If you want to know more details about how mTLS works, Jason, just pasted the link to a previous mTLS workshop we did by Matei specifically on the topic of mTLS.

Multi-cluster failover

Now, let’s talk about failover. This is provided as an additional extension called Linkerd failover that we released a few months ago. This is supported only as of Linkerd 2.11.2, which we released one or two weeks ago that relies on the traffic split CRD. That CRD is defined in the SMI project. If you know, SMI is a service mesh interface, a project that attempts to put under the same umbrella some common service mesh concepts.

In 2.11.2, we have included the definition of that CRD in the Linkerd installation, but as of Linkerd 2.12, which is not yet out, we have removed that. Instead, we require that you install the Linkerd SMI extension that contains all the SMI-related stuff. For the demo, we will be using 2.11.2 so we don’t need to install that extra extension. This is the first approach to failover. It uses the simplest possible way of doing this will be the criteria for failing over from one primary service to a list of secondary service is the pot readiness of those secondary services. So it’s very simple.

Multi-cluster failover: what to expect next

As I said, this is the first stop to this problem. There are many ways we can improve. For example, Kubernetes now provides topology awareness hints, where you can declare in endpoint slices whether you want to keep connections inside the same zone or things like that. In Linkerd failover, we use traffic split, and traffic split only cares about our services. We are not surfacing those topology awareness schemes, but we can do that through some means in the future. Another way we could improve things is instead of using pod readiness, we could use latency. Instead of waiting for a pod to completely fail before doing the failover, we could watch its latency and do the failover when it starts becoming slow.

Also, currently, if the primary service fails, we immediately failover over to the secondary services. Whenever the primary service gets back online, we switch over 100% of the traffic back to it. That might be too abrupt, so one possible improvement would be to use a circuit breaker pattern in which we watch the health of the primary service and progressively switch traffic back to it instead of doing wholesale. Anyways, if any of these cases interest you or if you have additional use cases, please let us know in Linkerd repo raise an issue. That’ll help us to properly prioritize what we’re going to do in the future for these extensions.

Hands-on workshop: Let’s begin

We’re going to start with a demo. What we’re going to do, we’re going to start with one cluster. We’re going to install Emojivoto, which is the app we always use for these kinds of demos. Then we are going to add another cluster. We’re going to route some of the services from one cluster to the other, and then we’re going to have parallel installations of Emojivoto in both clusters and introduce failover so whenever one service fails in one cluster, we can use the service on the other cluster. All right. One second. Any questions so far? Okay. We good now?

Jason: We’re looking good.

Alejandro: Awesome. I have here on my screen the top left the west cluster and top right the east cluster. I’m going to switch to the context of the west cluster. I have an alias for kube ctrl to K for speed.

We’re on west now. I’m going to install Emojivoto. We’re going to use it and see if it’s working. For that, I’m going to fork forward the front end. Let’s check it out. There you go. You can vote for your favorite emojis, and you can check the leaderboard. There you go. This has some components. Votebot, which is a traffic generator, so we can see some stats out of the box that communicates with web, which is the web front end, and web front relies on voting, which returns the list of available emojis and… sorry, that’s emoji. Emoji gives you the list of emojis, and voting is where you persist the votes. This is the architecture of Emojivoto, as I just said, Votebot calls web, and web calls emoji and voting. What we’re going to do next is remove everything but vote bot and west, and have vote bot communicate with web as service on east. For that, we’re going to rely on a gateway that is installed on east, and we’re going to create a mirror of web service on west. Votebot actually calls the mirror of web here, and that’s going to rely the connection to east.

We’re going to first delete everything but Votebot. Whoops. We’re going to install Linkerd, of course. This is the bare, default installation. It will create the trust route and the issued certificates by default, some based on random certificates. We’re going to install these as well. This is not strictly necessary. If you don’t have enough nodes in your cluster, you can avoid this step. We just need this for one command that we’ll see in a moment. This is going to take a bit. Great timing for asking questions.

Jason: Just for folks out there, if you’re following along, just be good to get a thumbs up from people if you’ve been able to access the repo. I see we have one question out there. Do you mind putting your question in the chat, either in the Slack chat or the workshop, and I’m happy to ask Alejandro. Yeah, if you can do the little thumbs up reaction icon if that’s available. I’m not sure the emojis are available. Sorry.

Please just say something if you’re having trouble following along. Yeah, I can post the refill link. There can just one sec. The repo link is there in the chat.

Alejandro: As a reminder, Linkerd is not required for the multi-cluster functionality, only for one command that we will use in a moment. We’re almost there.

Installing the multi-cluster extension

The next step is to install the multi-cluster extension. The command is Linkerd multi-cluster install. I’m going to abbreviate multi-cluster to MC for speed. The extension is coming up and now Linkerd install apply. Now I want to add the east cluster. Remember, we need to use the same trust route. I’m going to extract that, the one that Linkerd install created by default. I’m going to extract that. That’s stored in a config map called Linkerd identity trusts. What’s wrong here? Linkerd identity trust routes. I’m going to call that west route CRD, but let’s take a look at that. Obviously this is a config map. I’m only interested in this part here so I’m going to remove everything else.

Creating the certificate for east

I’m going to create now the certificates for east. We will not use defaults, but we’re going to create some. For that, we use this command from Step, which is a CLI tool. Sorry, I forgot to tell you to install that beforehand. This is the same as open SSL but easier to use. What this does is create a route certificate. It gives us back the actual public key and the private key. This is the trust route. Now I need the issuer certificate, which is an interpreted certificate that is routed at the route CRD that we just created. We get back the public key and the private key. Right?

What I’m going to do now is, when I created the certificate for the east cluster, I created a new certificate, but we need it to be the same as west. We’re just going to bundle them, that’s another way of doing things. I’m going to just use this. This is the one we extracted from west. I’m going to append it to… well, I’m going to append the one from west to the one in east and create a bundle.

I just check what it looks like, there it is. I’m going to use that bundle. First, I’m going to update the certificate on west to use this bundle and then I’m going to install east using that same bundle. For that, I just do the Linkerd upgrade identity trust. Let me see if everything’s fine here. Looks good. We see some pods repeating there, and now I’m going to switch to the east cluster. I’m going to install Linkerd using that trust bundle. Then I’m going to install a multi-cluster extension. We don’t need these in this cluster. This is going to take a moment. Time for questions, if you want.

Jason: I saw someone typing but I don’t see anything active right at the moment.

Alejandro: Christina is typing.

Jason: Christina is asking if we can show the version of Linkerd and point out the traffic split CRD on the west cluster.

Alejandro: Yes, that’s right. Yes, it’s version 2.11.12. If we go to the west cluster, actually, it’s installing east as well. The CRD, the traffic split… we do not need… Ah, sorry. That’s CRD customer resource definition traffic speed. This is what you want to see, Christina, the CRD? This is the actual CRD definition. We’re going to add one instance of that we will see in detail in a moment. That was Linkerd itself. To answer the question from Christina, what installed the CRD? The traffic split CRD got installed by the Linkerd install command because it’s included in 2.11.12. It won’t be included in the upcoming 2.12 version. For that, you will have to install the Linkerd SMI extension. We’re ready here. Now we are going to install an injected version of Emojivoto on east.

Jason: Someone has a question about how do we generate the cert. I believe they’re asking about Step. Step is a CLI tool that you can use to very easily generate certificates. I’ll post the link to that in the chat. I don’t know if you have anything to add there, Alejandro.

Alejandro: No, it’s fine. Yeah, that’s correct. Usually people do that through Up SL, but I think Step is gaining more traction. It has a much friendlier interface. Let’s see the services that we have in the multi-cluster namespace in east.

Exposing ports

We have this gateway that got created there. That’s where all the traffic incoming multi-cluster cloud traffic will come through. It exposes these external IP. We have two ports there, 41-43. That pod is a two container pod. One of the containers is our usual Linkerd proxy. The other one is just a pause container, so it doesn’t do anything. We are exposing here the ports 41-43, which is the usual Linkerd proxy port and 41-91, which exposes the readiness of the cluster so any other clusters know whether to route traffic there or not.

Let’s also check the service authorizations. This is facing the internet, this external IP, but only traffic coming from inside the mesh. That are proxies whose identities rely on the trust anchor that all these clusters should share. If not, traffic will be denied. Right. In addition to that, there’s a server authorization resource. That is the single Linkerd multi-cluster namespace authorization called Linkerd gateway. This is associated to the gateway, and the important part here is this. Again, the proxy will reject anything that doesn’t have a proper identity, but we also have on top of that this, so only traffic that has meshed identity will be allowed. By default traffic from any IP will be allowed. The recommendation in production, of course, is that you put here the list of the other clusters you want to communicate with.

Now I’m going to delete the Votebot pod on east so that we only have the Votebot from west to the requests. I’m going to export the web service on east. For that, I just add a label.

Let me make sure I did this is properly. Exported equals true. I think that’s fine. I’m going to create a link. This is what will allow us to actually link both clusters. The command is a link. I have to provide a name for this particular link. I’m calling it east and… there it is. Let’s take a closer look into what that is.

First of all, this contains a secret with config. We’ll take a closer look at that in a moment. That’s just a regular config file that allows you to connect to the cluster. This is what the west service mirror controller will use to connect to the east cluster. We also have the link CRD instance. First we have here a cluster credential secret. That’s the secret I just showed you. This is what will allow us to connect to the east cluster, the IP of the gateway that its exposed on east. Then we have a cluster role that, first of all, I generated this while being on the east cluster context, but I’m going to apply this on the west cluster, right?

This cluster role gives me the ability to do anything with endpoints and services because we need to create mirrors on the west cluster. It also allows me to list and watch namespaces because we can only create service mirrors as long as their namespace is already there, and we bind that. Service with the service account Linkerd service mirror east, so that’s the service account of the service mirror controller on west, yeah.

We have a role applying to the Linkerd multi-cluster namespace that simply allows us to retrieve the secret that I showed you first on this file and watch over any links that get created. We bind that. We have the service accounts we already mentioned and the deployment of the actual service mirror controller on west. Remember we are going to apply this on west. This is what is going to create the controller that is going to communicate, is going to connect to the Kube API on east using the kube config I showed you, so you can watch over any service that gets exported. Whenever that happens, it’s going to create a mirror on the west cluster. Finally, we have a service which is just for probing purposes. We’re going to hit that service and that, as you can see, it doesn’t have a target pod because the service mirror is going to manually create an endpoint whose IP is going to point to the gateway on east. It’s going to hit the probe port that I showed you before, just to see if the cluster is alive.

Token for the service mirror controller

The good config that I showed you is going to use these token from the Linkerd service mirror account here. I think it’s this one. It’s going to connect to this using that token. You have created a service account on east that is tied to that token that is going to be used by the service mirror controller on west. If you have multiple clusters that are connecting to east, you can create different services accounts. You can have better management. You can deny access if you want by deleting those service accounts. If you want to create all these resources for a new service account, you can use the Linkerd MC allow command and service account. You have to give a name to the service account you want to create. For example, I know this is going to fail because this turns out to be broken on 2.11.2. I just realized that. Let me switch real quick to Edge. I have some commands here to switch my Linkerd version edge 2241, and this should work. Let’s take a look at that. I’m not going to use that. This is just to show you what it does.\

We have the same cluster role service all the same RBAC I showed you before, but this applies to this particular service account. If you want to create a new service account to link different clusters, you can use this command. Then switch back to stable. Let’s go back to west and apply that link. Now let’s take a closer look into that config that just got applied here.

This is in the linkerd-multicluster namespace, the cluster-credentials secret. I’m going to use some jq magic to extract it. It’s base64-encoded. What did I do here? This is a regular kubeconfig file. What I wanted to show you is this token. This should be the same token associated with the service account that got set up on east to grant us access to the east API from west. Let’s check whether the connection has been established. We use linkerd multicluster gateways. It’s working fine. We have one service exported, and this is the latency distribution for east as seen from west. This uses Prometheus under the hood. The Linkerd service mirror controller, by the way, got created here. The gateway exposes some metrics regarding the availability of east, and Prometheus scrapes those and shows us the latencies here. This is why we needed to install Linkerd viz on west, but it’s just for this.
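
The jq incantation and the gateway check were roughly these; the secret name assumes a link called east, and the kubeconfig data key is my assumption about how the secret is laid out:

```bash
# Decode the kubeconfig stored in the cluster-credentials secret on west.
kubectl --context=west -n linkerd-multicluster get secret cluster-credentials-east -o json \
  | jq -r '.data.kubeconfig' \
  | base64 -d

# Check the health of the link and the gateway latency distribution.
linkerd --context=west multicluster gateways
```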

Let’s check the services in Emojivoto. We have our usual services from the other components, and the service mirror controller created web-svc-east, which is the mirror of web-svc on east. If we look at the endpoints, the associated endpoint points directly to the gateway on east, on the proxy port. Emojivoto isn’t injected yet and it’s not pointing to the mirror service yet; it should be erroring out, so let’s verify that. Now let’s try to communicate with the east cluster using curl. For that, I’m going to create a curl pod. Okay, this is going to die because it completes, so I’m going to edit it. I bet there’s a better way to do that, but I don’t know. Let’s just edit it and add a sleep. Here. Okay, restart.
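
One way to reproduce that step without hand-editing a pod is to create a small deployment that just sleeps; the image and names here are illustrative:

```bash
# Inspect the mirrored service and its manually managed endpoint.
kubectl --context=west -n emojivoto get svc web-svc-east
kubectl --context=west -n emojivoto get endpoints web-svc-east -o yaml

# A curl deployment that just sleeps, so we can exec into it repeatedly
# instead of editing a completed pod.
kubectl --context=west -n emojivoto create deployment curl \
  --image=curlimages/curl -- sleep 100000
```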

It’s running now. Let’s exec into it. Okay, I will curl that mirrored endpoint, web-svc-east in Emojivoto. The connection failed. If we check the logs of the gateway on east, we should see the failure. I’m using the east context here. Let’s see. Let’s hit it again. We immediately see the failure here, the connection closed: the error says direct connections must be mutually authenticated. That’s what we expected. Now let’s try injecting the curl pod on west. I can close this. It should work this time. For that, one thing we usually do is just get the deployment, pipe it to linkerd inject, and pipe it to kubectl apply.
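
The two halves of that step might look like this; the gateway deployment and proxy container names are what a default multicluster install uses, to the best of my knowledge:

```bash
# Watch the gateway's proxy logs on east while curling from the uninjected pod;
# the "must be mutually authenticated" rejection shows up here.
kubectl --context=east -n linkerd-multicluster logs -f deploy/linkerd-gateway -c linkerd-proxy

# Mesh the curl deployment on west so the connection is mTLS'd end to end.
kubectl --context=west -n emojivoto get deploy curl -o yaml \
  | linkerd inject - \
  | kubectl --context=west apply -f -
```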

Now we have two containers here; one of them is the proxy. Let’s get into it again. This time, since there are two containers, I have to specify the container I want to exec into, and let’s do the curl again. This time it works; we get back our HTML. Now I want to do the same thing with Votebot. For that, let’s edit the deployment. First, I’m going to add the inject annotation. One common mistake is to add that annotation at the workload level, but the right way is in the pod template. It’s right here below.
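
Concretely, that looks something like the following; the curl container name comes from the hypothetical deployment created earlier, and the annotation shown is the standard linkerd.io/inject one, placed on the pod template:

```bash
# Exec into the curl container (the pod now also has a linkerd-proxy container)
# and hit the mirrored service again; this time it should return the HTML.
kubectl --context=west -n emojivoto exec -it deploy/curl -c curl -- \
  curl -sv http://web-svc-east.emojivoto.svc.cluster.local

# Add the inject annotation to vote-bot's pod template, not the Deployment metadata.
kubectl --context=west -n emojivoto patch deploy vote-bot --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"linkerd.io/inject":"enabled"}}}}}'
```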

Let’s failover

Currently, Votebot doesn’t know about the mirror service. We just change this environment variable here so that the target it hits is the mirror service, web-svc-east. Now it’s restarting; let’s tail the log. Now it’s working fine. Awesome. Now let’s do failover. For that, we’re going to do something like this: we have parallel installations of Linkerd in both clusters, and we will still have Votebot only on west. We’re going to set up a traffic split whose apex is the local web service on west, with two backends: the same web service on west and the mirror of the one on east. Whenever the web service on west fails, traffic will fail over to the web service on east, right through this mirror here.
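
If you’d rather not open an editor, the same env change can be sketched as a one-liner. WEB_HOST is the variable emojivoto’s vote-bot uses in the manifests I’ve seen, so double-check it against your copy of the app:

```bash
# Point vote-bot at the mirrored service instead of the local web-svc.
kubectl --context=west -n emojivoto set env deploy/vote-bot \
  WEB_HOST=web-svc-east.emojivoto:80
```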

We’re going to reinstall Emojivoto on west, so that we get all our pods back, and this time we inject from the get-go. In the instructions I have, if you are running 2.12 or an edge release, you’re going to have to install the Linkerd SMI extension to get access to the TrafficSplit CRD. That’s not the case here, so we’re skipping that.
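
For completeness, the reinstall plus the optional SMI step look roughly like this. The SMI commands follow the linkerd-smi extension docs as I remember them, and are only needed on 2.12+ or recent edges, as mentioned above:

```bash
# Reinstall emojivoto on west, injected from the start.
curl -sSfL https://run.linkerd.io/emojivoto.yml \
  | linkerd --context=west inject - \
  | kubectl --context=west apply -f -

# Only on 2.12+ or recent edge releases: install the SMI extension,
# which provides the TrafficSplit CRD.
curl -sL https://linkerd.github.io/linkerd-smi/install | sh
linkerd smi install | kubectl --context=west apply -f -
```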

I do need to install the Linkerd failover extension. One second, not yet. Let’s first take a look at how TrafficSplit works. I have here in my directory a TrafficSplit resource. It’s very simple. What matters for us now is this service: we are interested in splitting the traffic of the web service, and that’s backed by two backends, web-svc and web-svc-east, which is the mirror we set up in the previous steps. Currently, all the traffic is going to web-svc. Let’s apply that. If we tail the logs of the voting pod, you should see some traffic here on west. Yeah, there it is. I’m going to open another window and tail the logs of the voting service on east. We have no traffic there. Remember, Votebot is only running on west, so east is not doing anything right now.
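
The resource in question is along these lines; the TrafficSplit name is arbitrary, the apex and backend names are the ones from this demo, and all the weight starts on the local service:

```bash
kubectl --context=west -n emojivoto apply -f - <<'EOF'
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: web-svc-ts        # the name is arbitrary
spec:
  service: web-svc        # the apex service whose traffic we're splitting
  backends:
  - service: web-svc      # local backend on west
    weight: 100
  - service: web-svc-east # mirror of the service on east
    weight: 0
EOF
```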

Switching all traffic to east

Now, okay, I’m going to switch all the traffic to east. Let’s open yet another window. I’m still here on the west context. Let’s manually edit the TrafficSplit; I’m going to switch the weight all the way to east and save that. We see the traffic stopped on west and started on east. It’s working as intended. Let me switch it back. The idea of the failover operator is to do what I just did automatically, depending on the availability of those backend services. That’s it, so let me install the Linkerd failover extension. For that, I need to add the Helm repo; this is done through Helm, we don’t have a CLI command for installing it. If you don’t have the Linkerd Helm repo added yet, you should do this, then helm repo update. Then I install the extension. This is still an edge release, so it needs the --devel flag.
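
A sketch of both steps follows. The Helm repo URL and chart name are taken from the linkerd-failover README as I remember it at the time, so treat them as assumptions and check the README for current instructions:

```bash
# Shift all the weight to the mirror by hand (the merge patch replaces the
# whole backends list).
kubectl --context=west -n emojivoto patch trafficsplit web-svc-ts --type merge -p \
  '{"spec":{"backends":[{"service":"web-svc","weight":0},{"service":"web-svc-east","weight":100}]}}'

# Install the failover operator with Helm; --devel is needed while the chart
# is still a pre-stable (edge) release.
helm repo add linkerd https://helm.linkerd.io/stable
helm repo update
helm install linkerd-failover linkerd/linkerd-failover \
  --namespace linkerd-failover --create-namespace --devel
```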

We should see the operator here on west, in the linkerd-failover namespace. Let me show you that TrafficSplit one more time. This label here, the linkerd-failover one, flags this specific TrafficSplit so that the failover operator watches over it; if we don’t set the label, it just leaves it alone. And we have this annotation here, primary-service, that specifies which of the services listed is the primary one, and the other ones act as secondaries. The primary is the web service on west and the secondary is going to be the mirror one.
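
Marked up for the operator, the TrafficSplit looks something like this; the label and annotation keys are taken from the linkerd-failover README, so verify them against the version you installed:

```bash
kubectl --context=west -n emojivoto apply -f - <<'EOF'
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: web-svc-ts
  labels:
    # Tells the failover operator to manage this TrafficSplit at all.
    app.kubernetes.io/managed-by: linkerd-failover
  annotations:
    # Which backend is the primary; the rest are treated as secondaries.
    failover.linkerd.io/primary-service: web-svc
spec:
  service: web-svc
  backends:
  - service: web-svc
    weight: 100
  - service: web-svc-east
    weight: 0
EOF
```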

Simulating a failure on west

Now let’s simulate a failure on west and see how traffic flows to east automatically. For that, I’m just going to scale down the web deployment on west to zero replicas, so I’m going to scale down this guy here. You see it getting terminated, and immediately the traffic moves to the secondary: it was flowing here, and then it went to east. And if we check our TrafficSplit again, we see that the weight was automatically shifted to the mirror here, and the primary got its weight set to zero. Now, if the primary becomes available again, the operator will switch traffic back to it, so let’s scale it back up to one replica. This is going to take a moment while the pod becomes ready, this one here. You’ll see the traffic flowing from east back to west. There it is. Okay, awesome. I think that’s all I wanted to show you. Let’s get back to the presentation. Do you have any questions before we switch back?
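
The failure drill itself boils down to a couple of scale commands, for example:

```bash
# Simulate the failure: take the primary web deployment on west down to zero.
kubectl --context=west -n emojivoto scale deploy/web --replicas=0

# Watch the operator shift the weight toward the mirror.
kubectl --context=west -n emojivoto get trafficsplit web-svc-ts -o yaml

# Recovery: bring the primary back and the operator shifts the weight home again.
kubectl --context=west -n emojivoto scale deploy/web --replicas=1
```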

Jason: Looks like things are fairly quiet. For the folks who were following along, were you able to keep up fairly well? Some thumbs up or comments in the Slack or chat would go a long way.

Jason: With that, I hope you’ve enjoyed this workshop. We have a bunch more workshops over at the Service Mesh Academy. Let me just get you that link real quick. A bunch more webinars from Buoyant on all sorts of things. I heard a question about Linkerd inject; there’s a Linkerd overview and how-it-works session in there. We’ve got a session on multi-cluster, one on mTLS, a good one on policy, and there’s a lot more going on. Thanks a ton [inaudible 00:54:44], I’m sorry if I didn’t say your name correctly; there will be a recording. We’re so grateful to all of you for attending. Let me just post the link here, both in the Zoom chat and in the workshop channel.

Jason: The next workshop coming up, June 16th, is Enterprise PKI in the cloud native world, with Linkerd and cert-manager. It’s a personal favorite, I love cert-manager. I think it’s a lot of fun. Well, fun is relative. I think it’s a cool technology that makes managing certificates way better than doing it by hand. I bet you’ll really enjoy it and I hope you get a chance to register. There’ll be a recording, and slides will get sent out. Oh, and it’ll be co-presented with the folks who do cert-manager, the folks over at Jetstack (now part of Venafi, I think).

Jason: Yeah, really great stuff beyond that. You like the presentation? You like Linkerd? You like Buoyant? Well, we want you to come join us and help make Linkerd even more awesome. We’d love to hear from you, on Twitter, on Slack, anywhere that you have questions or problems. If you’re looking for a role, we’d love to talk to you, and please feel free to reach out to me directly on Slack or on LinkedIn if you have questions about any of these things. Alejandro, any parting words?

Alejandro: Thanks so much for attending. The demo went well, so thanks to the demo gods. [crosstalk 00:56:31]

Jason: Yeah, it’s always good when all the things work the first time.

Alejandro: We didn’t touch on some topics; for example, we recently added headless service mirroring, so you can mirror database nodes, and things like that can get pretty interesting. If you are interested in that, we have docs for it on the Linkerd.io site.

Jason: Awesome, great. Is this documented on Linkerd.io? The failover operator?

Alejandro: Good question. I think not. There’s a good README on the linkerd-failover extension on GitHub.

Jason: Yeah, it’s on the extension page, but I don’t know that we have it in the docs. Great question [inaudible 00:57:25], let me get that pulled up. Actually, let’s just get the Linkerd failover operator pulled up and get you all the link to that. Just give me a quick sec. For folks in the chat, here is the link to the failover operator itself. We’ll double-check on the docs to make sure it’s linked on the website; obviously we’ll get that in. Great question, thank you. Fantastic. Alejandro, do you want to give any parting words? I know I asked you that before.

Alejandro: No, thanks again. Thank you, Jason.

Jason: Yeah, no problem.

Alejandro: Hope to see you folks soon.

Jason: Yeah, and there are tons of courses for Linkerd newbs, including on YouTube. There is a free course from the CNCF that will actually walk you through a lot of it. There are videos; if you’re not already tired of my voice, you can hear me talk about it an awful lot. Of course, if anyone’s going to be at KubeCon, hit me up there. I’m happy to point you at materials there as well and help you find an opportunity to earn one of our super cool Linkerd hats. I’ll try to post something, Omar, in the workshops channel before we go. If you didn’t get a chance to join, one more time I’ll share the Slack link: it is slack.linkerd.io. Please join the Linkerd Slack for any more questions. You all have a great rest of your day, and thank you for taking the time to attend with us.

Alejandro: Right, thank you. Bye-bye.