KubeCon North America in Detroit wrapped up a few weeks ago, and we’ve been full steam ahead ever since with everything we took away from it! Once again we were fortunate to have a lot of great Linkerd content presented. If you missed the event or couldn’t catch every Linkerd talk, everything you need to get caught up is below.
KubeCon is also a great opportunity to talk not just with Linkerd users, but also with our colleagues from across the industry, and to get a better sense of what folks have on their minds. Some common threads from my point of view:
You might’ve noticed something new on the KubeCon stage: instead of the co-chairs presenting project updates, graduated projects were invited to produce a short video and tell the audience directly about what's new with their projects. Linkerd’s update, hosted by Linkerd’s mascot Linky with Buoyant CEO William Morgan assisting, got rave reviews. If you missed it, be sure to check out the video!
And at the Linkerd booth, Buoyant debuted our first-ever limited edition KubeCon Linky stickers:
We’re looking forward to continuing this new tradition with something new at every KubeCon – be sure to swing by our booth in Amsterdam before the second-edition stickers run out!
KubeCon Detroit is finished, but of course, KubeCon Amsterdam feels like it’s just around the corner – we’ve already submitted talk abstracts and started planning ahead for next spring. Hope to see you there!
Kasper Nissen, Lead Platform Architect at Lunar, will share how Lunar built a scalable, multi-cloud bank with cloud native tech, allowing for rapid product iteration while simplifying compliance with strict regulatory requirements. The flexible technical setup also allows them to rapidly absorb newly acquired startups, ensuring they start generating value for the bank quickly. Lunar started by centralizing its log and release management tooling in a single cluster connected to multiple Kubernetes clusters across GCP, Azure, and AWS — all connected through a service mesh. This allowed them to remove state and complexity from edge clusters and manage infra services centrally while exposing these central services to edge clusters. This transformation is part of a strategy to treat the platform as a product and provide the same set of platform features across cloud providers. Attendees will learn how Lunar implemented multi-cluster communication across clouds and how it all fits together with GitOps as a multi-cloud management layer to comply with regulations on the audit trail of all changes, following the principles of least privilege, and the ability to perform cluster failovers.
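For a taste of what multi-cluster communication looks like in practice with Linkerd, here's a rough sketch using the multicluster extension's CLI. The cluster context names (`east`, `west`), the `demo` namespace, and the `web` service are invented for illustration, and exact flags may vary by Linkerd version:

```shell
# Link the "east" cluster into "west": the link command emits credentials
# and a Link resource that "west" applies to start mirroring services.
linkerd --context=east multicluster link --cluster-name east | \
  kubectl --context=west apply -f -

# Export a service from "east" by labeling it; a mirror named web-east
# then appears in "west", routable through the mesh's multicluster gateway.
kubectl --context=east -n demo label svc/web mirror.linkerd.io/exported=true

# Verify the link and the mirrored service.
linkerd --context=west multicluster check
kubectl --context=west -n demo get svc web-east
```

Because the mirrored service is an ordinary Kubernetes Service, GitOps tooling can manage the labels and Link resources declaratively, which is what makes this pattern auditable across clouds.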
In this talk, Kevin and Doug will trace a packet through its journey between a meshed client and server. They'll explore how the path of a packet changes after a service mesh is installed, the additional hops it introduces, and which networking changes ensure the application's behavior isn't affected. First, they'll examine the networking rule changes that allow a proxy to intercept traffic; understanding what changes about how a packet travels through the kernel makes it easier to observe in the steps that follow. Next, to observe the packet on its journey, they'll dive into the Kubernetes networking debugging space: how to properly use debug containers to observe traffic between other containers, and which tools to use to capture that traffic. Using these tools, attendees will understand what is happening behind the scenes of a service mesh and how a packet travels within it.
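As a preview of the debugging techniques involved, an ephemeral debug container can share a pod's network namespace and watch traffic flow through the sidecar. The pod name, app port, and debug image below are hypothetical; `kubectl debug` needs ephemeral-container support, and packet capture needs sufficient capabilities in the debug image:

```shell
# Attach a throwaway debug container to the meshed pod's network namespace.
kubectl debug -it pod/web --image=nicolaka/netshoot --target=linkerd-proxy

# Inside the debug container: dump the NAT rules that the mesh's init
# step installed to redirect inbound/outbound traffic through the proxy.
iptables-save -t nat

# Watch the redirected traffic itself, e.g. the app's port on loopback,
# where the proxy and the application talk to each other.
tcpdump -i lo -n port 8080
```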
In this hands-on workshop, participants will learn the basics of adopting a zero-trust approach to Kubernetes network security using a service mesh. Topics will include encryption, authentication, and authorization of traffic within the cluster; PKI considerations and setup for in-cluster and cross-cluster mutual TLS; applying deny-by-default / least-privilege approaches to authorization; the relationship between zero trust and perimeter security; and more. Participants will learn the elements of overall Kubernetes security that must be in place before a service mesh can be effective, including a basic threat model for Kubernetes clusters as a whole. This workshop will use Linkerd, cert-manager, and Kyverno, but the techniques will be applicable to many different projects.
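To make the deny-by-default idea concrete, here is a sketch of what Linkerd 2.12-style authorization resources look like once the default inbound policy is set to `deny`. All names, namespaces, ports, and identities are invented for illustration; consult the Linkerd policy documentation for the authoritative schema:

```yaml
# A Server selects a specific port on a workload...
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: web-http
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: web
  port: 8080
  proxyProtocol: HTTP/1
---
# ...an AuthorizationPolicy attaches to that Server and requires clients
# to present an authenticated mesh TLS identity...
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: web-allow-api
  namespace: demo
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: web-http
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: api-clients
---
# ...and the MeshTLSAuthentication names exactly which identities qualify.
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: api-clients
  namespace: demo
spec:
  identities:
    - "api.demo.serviceaccount.identity.linkerd.cluster.local"
```

Anything not matched by an authorization like this is refused, which is the least-privilege posture the workshop describes.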
In this talk, members of the Emissary-Ingress and Linkerd teams will show the painless way to get four CNCF projects (Emissary, Linkerd, Kubernetes, and Envoy) running smoothly together to provide resilience and reliability for both end-user requests and service-to-service application calls. They'll guide you through best practices for using Linkerd and Emissary to gain capabilities like rate limiting, retries, and timeouts. Join the talk for: 1) a tour of each project and a discussion of how they complement each other and make a great addition to your production infrastructure stack; 2) an overview of best practices and antipatterns related to resilience (for example, retry budgets are essential within a deep microservice call chain); and 3) a live demonstration of a reliability-focused reference architecture running Linkerd and Emissary together.
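As a flavor of the per-route reliability features mentioned above, a Linkerd ServiceProfile can declare retries, a retry budget, and timeouts for a service. The service name, namespace, and route below are hypothetical:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # ServiceProfiles are named after the FQDN of the service they describe.
  name: books.demo.svc.cluster.local
  namespace: demo
spec:
  routes:
    - name: GET /api/list
      condition:
        method: GET
        pathRegex: /api/list
      isRetryable: true    # safe to retry: an idempotent read
      timeout: 300ms
  retryBudget:
    retryRatio: 0.2             # allow at most 20% extra load from retries
    minRetriesPerSecond: 10
    ttl: 10s
```

The retry budget is the piece that keeps retries safe in a deep call chain: it caps retry amplification instead of letting each hop multiply the load.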
In this session, you’ll learn about Flagger, Linkerd, and the Gateway API specification. You’ll also learn how to use Flagger and Linkerd to enable automated progressive delivery. The Gateway API specification is gaining momentum in the Kubernetes space as it attempts to change how users manage traffic. Both Flagger and Linkerd were able to standardize on the Gateway API to enable their users to simplify how they define traffic management within, and between, their clusters. Join Jason and Sanskar to discuss how each project independently implemented the Gateway API, how those implementations benefitted their respective projects, and how this allowed them to work together without any explicit configuration.
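For a sense of how this looks to a user, the sketch below shows a Flagger Canary using its Gateway API provider; Flagger then generates and shifts weights on an HTTPRoute that Linkerd's proxies honor. Every name here is illustrative, and field names may differ across Flagger versions:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: gatewayapi:v1beta1   # drive the rollout via Gateway API HTTPRoutes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 30s    # how often to evaluate metrics and step the rollout
    threshold: 5     # failed checks tolerated before rolling back
    maxWeight: 50    # cap canary traffic at 50%
    stepWeight: 10   # shift 10% of traffic per successful step
```

Because both projects speak the Gateway API, no Flagger-specific or Linkerd-specific glue configuration is needed between them.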
Since the introduction of the new Gateway APIs created by the SIG Network community, Linkerd maintainers have been working on leveraging a pattern known as policy attachment in Linkerd’s authorization mechanism. In this talk, Alex, a Linkerd maintainer, will briefly cover the collection of Gateway APIs, what policy attachment represents and how it works in practice, and show how Linkerd’s authorization policies have been revised with the policy attachment pattern in mind. Policy attachment, as outlined by the SIG Network community, allows platform-level policies, such as timeouts, retries, and custom health checks, to attach to any arbitrary Kubernetes type. This lets users create custom policies that extend and plug into the API instead of being a concrete part of it.
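The policy attachment pattern is easiest to see in the shape of the resource itself: the policy carries a `targetRef` naming the object it attaches to, rather than being embedded in that object. In the hypothetical sketch below, a Linkerd AuthorizationPolicy attaches to an HTTPRoute; all names are invented for illustration:

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: authors-get-policy
  namespace: demo
spec:
  # Policy attachment: this policy points at the object it governs...
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: authors-get-route
  # ...and states which clients are authorized to use it.
  requiredAuthenticationRefs:
    - kind: ServiceAccount
      name: webapp
```

Swapping the `targetRef` is all it takes to attach the same kind of policy to a different resource type, which is exactly the extensibility the pattern is designed for.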
In this session, Linkerd maintainer Eliza Weisman will discuss the Linkerd team's experience using Rust, why they chose it for their data plane, and, most recently, how Linkerd has extended the use of Rust into the control plane as well. The Rust programming language has rapidly grown in popularity. It offers several features that help developers write reliable, fault-tolerant, and efficient software — all desirable properties for a Kubernetes controller. Linkerd, the graduated CNCF service mesh, has been using Rust for its data plane proxies since the release of Linkerd 2 in 2018. The data plane has to be as fast and secure as possible, so Rust was a natural choice. However, like much of the Kubernetes ecosystem, the Linkerd control plane — which manages the behavior of the data plane — has generally been implemented in Go. Linkerd 2.11 introduced the new policy controller, Linkerd's first control plane component implemented in Rust. Join this session as Eliza shares the team's challenges, benefits, and lessons learned using Rust.
In this talk, maintainers from the Linkerd project will present an overview of the project and an update on upcoming releases. They’ll cover what Linkerd is and how it compares to other service meshes; what the latest features and functionality are; what to expect in upcoming releases; and how you can get involved in one of the CNCF’s most talked-about projects. This talk will cover Linkerd's recent adoption of the Gateway API and the many new features that move unlocks.
Container Network Interface (CNI) plugins such as Calico or Cilium are typically used to provide container network connectivity and network policy. However, service meshes such as Linkerd and Istio also use CNI plugins to configure the networking rules that allow their sidecar proxies to intercept incoming and outgoing traffic. This means that it is increasingly common to have more than one CNI plugin installed at a time, which can lead to race conditions where the CNI plugins overwrite each other's configuration. In this talk, Alex Leong will demonstrate how to detect and resolve these problems and suggest a set of best practices for CNI plugins to ensure compatibility with other plugins. Alex will also explore some potential changes to the CNI plugin specification that could solve these problems at a structural level.
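To see why these races happen, recall that chained CNI plugins share a single `.conflist` file on each node: the runtime invokes each entry of the `plugins` array in order, and each plugin's installer edits that file to add itself. If two installers regenerate the file independently, one can clobber the other's entry. The fragment below is a simplified, illustrative shape (fields trimmed), not a complete configuration:

```json
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    { "type": "calico", "ipam": { "type": "calico-ipam" } },
    { "type": "linkerd-cni", "log_level": "info" }
  ]
}
```

Here the primary plugin sets up pod connectivity first, and the mesh's plugin runs afterward to install its traffic-redirection rules; if an installer rewrites the file with only its own entry, the other plugin silently stops running for new pods.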
That's a lot of great Linkerd content! We hope you enjoyed it as much as we did. For more Linkerd, sign up for Buoyant's Service Mesh Academy!