KubeCon Detroit 2022 Wrapup

Flynn

Nov 30, 2022

KubeCon North America in Detroit wrapped a few weeks ago, and we’ve been pretty much full steam ahead ever since with everything we took away from it! We were once again fortunate to have a lot of great Linkerd content presented at the conference – if you missed it, or weren’t able to catch all the Linkerd talks, we have everything you need to get caught up below.

Part of the team getting ready for day one!

KubeCon is also a great opportunity to talk not just with Linkerd users, but also with our colleagues from across the industry, and to get a better sense of what folks have on their minds. Some common threads from my point of view:

  • Security and zero trust as a concept came up a lot. On the one hand, this is nothing new: there are a number of folks at KubeCon every year from industries where security is a big deal. On the other hand, I feel like there are new external pressures (for example, the US Federal zero-trust mandate) that are placing a bit more focus here than in years before.
  • I continue to feel that we, as an industry, could - and should! - be doing a better job of outreach. I talked to more than a few newcomers to Kubernetes who seemed more than a little bewildered by the learning curve they were suddenly facing, and I feel like we should be able to help make that experience less painful.
  • Operational pain was a recurring theme. I talked to people who complained about it, and wanted it to go away, and I saw a lot of booths pitching products that claimed to remove it, usually for specific things, like logging or alerting. There are some real traps and opportunities here: it’s very easy to make functionally correct things that are painful to actually use, and it’s very difficult - but incredibly rewarding! - to build things that are truly graceful to use. Avoiding the traps always starts with really understanding the point of view of the users, and I think we as an industry can always be doing better there.
  • I was repeatedly struck by how effectively CNCF projects can provide something greater than the sum of their parts when the maintainers put a bit of effort into thinking about how the projects can fit together, and collaborating to make it happen. This is always a joy to see when it happens.
  • I saw a lot of attention paid to what I’m going to call workload lifecycle management: GitOps, CI/CD, etc. I think this is great, though I think it’s critical that we remember the four-person startup use case too! We need to be thinking not just about the perfect world where everything is managed to a fare-thee-well by a team of wonderful SREs, but also about how to get there from the messy duct-tape-and-baling-wire world that many of our favorite projects started from.
  • Finally, it was truly wonderful to get to catch up in person with my colleagues from Buoyant - and from the Emissary-ingress, Envoy Gateway, and GAMMA projects! - after mostly seeing them via Zoom. This is always one of my favorite parts of KubeCon.

New in Detroit 

You might’ve noticed something new on the KubeCon stage: instead of the co-chairs presenting project updates, graduated projects were invited to produce a short video and tell the audience directly about what's new with their projects. Linkerd’s update - hosted by Linkerd’s mascot Linky, with Buoyant CEO William Morgan assisting! - got rave reviews. If you missed it, be sure to check out the video!

And at the Linkerd booth, Buoyant debuted our first-ever limited edition KubeCon Linky stickers:

First ever Linky limited edition sticker! 

We’re looking forward to continuing this tradition with something new at every KubeCon – be sure to swing by our booth in Amsterdam before the second-edition stickers run out!

Looking Ahead

KubeCon Detroit is finished, but of course, KubeCon Amsterdam feels like it’s just around the corner – we’ve already submitted talk abstracts and started planning ahead for next spring. Hope to see you there! 

Linkerd Talks

Building a Scalable, Compliant, Multi-Cloud Bank with a Service Mesh - Kasper Nissen, Lunar

Kasper Nissen, Lead Platform Architect at Lunar, will share how Lunar built a scalable, multi-cloud bank with cloud native tech, allowing for rapid product iteration while simplifying compliance with strict regulatory requirements. The flexible technical setup also allows them to rapidly absorb newly acquired startups, ensuring those startups start generating value for the bank quickly. Lunar started by centralizing its log and release management tooling in a single cluster connected to multiple Kubernetes clusters across GCP, Azure, and AWS – all connected through a service mesh. This allowed them to remove state and complexity from edge clusters and manage infra services centrally while exposing those central services to edge clusters. This transformation is part of a strategy to treat the platform as a product and provide the same set of platform features across cloud providers. Attendees will learn how Lunar implemented multi-cluster communication across clouds and how it all fits together with GitOps as a multi-cloud management layer: maintaining an audit trail of all changes to comply with regulations, following the principle of least privilege, and retaining the ability to perform cluster failovers.
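
Lunar’s exact configuration isn’t shown here, but to make “multi-cluster communication across clouds” a bit more concrete: with Linkerd’s multicluster extension, a service is exported to linked clusters simply by labeling it, and it then appears in each linked cluster as a mirrored service that can be called like any local one. A minimal sketch (the names are illustrative):

```yaml
# Export a Service to linked clusters with Linkerd multicluster.
# The label below is what the default `linkerd multicluster link`
# configuration watches for.
apiVersion: v1
kind: Service
metadata:
  name: payments
  namespace: prod
  labels:
    mirror.linkerd.io/exported: "true"   # mirror into linked clusters
spec:
  selector:
    app: payments
  ports:
    - port: 8080
      targetPort: 8080
```

In each cluster linked to this one, the service shows up as payments-&lt;cluster-name&gt;, and calls to it transit the multicluster gateway over mutual TLS.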

Whose Packet Is It Anyway? Life of a Packet Through a Service Mesh - Kevin Leimkuhler & Doug Jordan, Airbnb

In this talk, Kevin and Doug will trace a packet through its journey between a meshed client and server. They'll explore how the path of a packet changes after installing a service mesh, the additional hops it introduces, and which networking changes ensure the application's behavior isn't affected. First, they'll examine the networking rule changes that allow a proxy to intercept traffic – understanding how a packet travels through the kernel makes it much easier to observe in the steps that follow. Next, they'll dive into the Kubernetes networking debugging space: how do you properly use debug containers to observe traffic between other containers, and once you have debugging capabilities, which tools can you use to watch that traffic? With these tools in hand, attendees will understand what happens behind the scenes of a service mesh and how a packet travels within it.
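
As a taste of the debugging technique the abstract points at: since every container in a Pod shares the Pod's network namespace, an ephemeral debug container can run tcpdump and see all of the pod's traffic, proxy hops included. Below is roughly the fragment that `kubectl debug -it <pod> --image=nicolaka/netshoot --target=app` adds to a Pod spec (a sketch; the image and container names are illustrative):

```yaml
# EphemeralContainer added via the Pod's ephemeralcontainers subresource.
spec:
  ephemeralContainers:
    - name: debugger
      image: nicolaka/netshoot      # common network-debugging toolbox image
      targetContainerName: app      # also share the app container's process namespace
      stdin: true
      tty: true
```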

Hands-on Workshop: Zero Trust Networking in Practice with a Service Mesh - Jason Morgan, Buoyant & Ashley Davis, Jetstack

In this hands-on workshop, participants will learn the basics of adopting a zero-trust approach to Kubernetes network security using a service mesh. Topics will include encryption, authentication, and authorization of traffic within the cluster; PKI considerations and setup for in-cluster and cross-cluster mutual TLS; applying a deny-by-default, least-privilege approach to authorization; the relationship between zero trust and perimeter security; and more. Participants will learn the elements of overall Kubernetes security that must be in place before a service mesh can be effective, including a basic threat model for Kubernetes clusters as a whole. This workshop will use Linkerd, cert-manager, and Kyverno, but the techniques will be applicable to many other projects.
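
To make “deny-by-default” concrete, here’s a minimal sketch using Linkerd’s policy resources, assuming the cluster’s default inbound policy has already been set to deny (for example via the config.linkerd.io/default-inbound-policy annotation); the emojivoto names are illustrative:

```yaml
# A Server describes a port on a set of pods...
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: web-http
  namespace: emojivoto
spec:
  podSelector:
    matchLabels:
      app: web-svc
  port: http
  proxyProtocol: "HTTP/1"
---
# ...an AuthorizationPolicy attaches to that Server and requires authentication...
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: web-http-mtls-only
  namespace: emojivoto
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: web-http
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: web-clients
---
# ...and the MeshTLSAuthentication lists which mesh identities are allowed.
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: web-clients
  namespace: emojivoto
spec:
  identities:
    - "vote-bot.emojivoto.serviceaccount.identity.linkerd.cluster.local"
```

Everything not explicitly authorized is refused – the principle of least privilege applied to in-cluster traffic.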

Emissary + Linkerd Resilience Patterns: Rate Limits, Retries & Timeouts - Flynn, Buoyant & Daniel Bryant, Ambassador Labs

In this talk, members of the Emissary-ingress and Linkerd teams will show the painless way to get four CNCF projects (Emissary, Linkerd, Kubernetes, and Envoy) running smoothly together to provide resilience and reliability for both end-user requests and service-to-service application calls. They'll guide you through the best practices for using Linkerd and Emissary to give you capabilities like rate limiting, retries, and timeouts. Join the talk for: 1) a tour of each project and a discussion of how they complement each other and make a great addition to your production infrastructure stack; 2) an overview of best practices and antipatterns related to resilience (for example, retry budgets are essential within a deep microservice call chain); and 3) a live demonstration of a reliability-focused reference architecture for running Linkerd and Emissary together.
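
Retry budgets deserve a moment, since they're what keeps retries from amplifying an outage: rather than giving each request a fixed retry count, the mesh caps retries as a fraction of live traffic. In Linkerd this lives in the ServiceProfile; a minimal sketch (names and values illustrative):

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: web-svc.emojivoto.svc.cluster.local   # must be the service's FQDN
  namespace: emojivoto
spec:
  routes:
    - name: GET /api/list
      condition:
        method: GET
        pathRegex: /api/list
      isRetryable: true        # safe to retry: an idempotent read
      timeout: 300ms           # fail the request if no response in time
  retryBudget:
    retryRatio: 0.2            # retries may add at most 20% extra load
    minRetriesPerSecond: 10    # floor so low-traffic services can still retry
    ttl: 10s                   # window over which the ratio is calculated
```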

Flagger, Linkerd, And Gateway API: Oh My! - Jason Morgan, Buoyant & Sanskar Jaiswal, Weaveworks

In this session, you’ll learn about Flagger, Linkerd, and the Gateway API specification, and how to use Flagger and Linkerd together to enable automated progressive delivery. The Gateway API specification is gaining momentum in the Kubernetes space as it attempts to change how users manage traffic. Both Flagger and Linkerd have standardized on the Gateway API, simplifying how their users define traffic management within, and between, their clusters. Join Jason and Sanskar to discuss how each project independently implemented the Gateway API, how those implementations benefitted their respective projects, and how this allowed the two projects to work together without any explicit configuration.
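
The shared vocabulary that makes this work is the Gateway API's HTTPRoute: a progressive-delivery controller such as Flagger can shift traffic simply by adjusting backendRef weights, and the mesh honors them. A minimal sketch of a weighted route (names illustrative, with a Service as the parentRef per the GAMMA initiative's mesh pattern):

```yaml
# An HTTPRoute splitting traffic 90/10 between primary and canary.
# A canary controller automates the rollout by walking these weights.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: podinfo
  namespace: test
spec:
  parentRefs:
    - group: ""        # core group: the parent is a Service (GAMMA mesh pattern)
      kind: Service
      name: podinfo
  rules:
    - backendRefs:
        - name: podinfo-primary
          port: 9898
          weight: 90
        - name: podinfo-canary
          port: 9898
          weight: 10
```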

What We Learned From the Gateway API: Designing Linkerd’s New Policy CRD - Alex Leong, Buoyant

Since the introduction of the new Gateway APIs, created by the SIG Network community, Linkerd maintainers have been working on leveraging a new pattern known as policy attachment in Linkerd’s authorization mechanism. In this talk, Alex, a Linkerd maintainer, will briefly cover the collection of Gateway APIs, explain what policy attachment is and how it works in practice, and show how Linkerd’s authorization policies have been revised with the policy attachment pattern in mind. Policy attachment, as outlined by the SIG Network community, allows platform-level policies, such as timeouts, retries, and custom health checks, to attach to any arbitrary Kubernetes type. This enables users to create custom policies that extend, and plug into, the API instead of being a concrete part of it.
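
The key idea is that the policy points at its target, rather than the target embedding the policy. As a sketch of the shape GEP-713 describes – note that this RetryPolicy, its group, and its fields are invented purely for illustration – a policy CRD attaches via a targetRef:

```yaml
# Hypothetical policy CRD illustrating the GEP-713 policy-attachment shape;
# the group, kind, and retry fields below are invented for illustration.
apiVersion: policy.example.io/v1alpha1
kind: RetryPolicy
metadata:
  name: web-retries
  namespace: emojivoto
spec:
  targetRef:                      # the resource this policy attaches to
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: web-route
  retry:
    maxRetries: 3
    retryOn:
      - "5xx"
```

Linkerd's real AuthorizationPolicy uses this same targetRef shape to attach authorization rules to its targets.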

Lightning Talk: Writing Service Mesh Control Planes in Rust - Eliza Weisman, Buoyant

In this session, Linkerd maintainer Eliza Weisman will discuss the Linkerd team's experience using Rust: why they chose it for their data plane and, most recently, how Linkerd has extended the use of Rust into the control plane as well. The Rust programming language has rapidly grown in popularity, and it offers several features that help developers write reliable, fault-tolerant, and efficient software – all desirable properties for a Kubernetes controller. Linkerd, the graduated CNCF service mesh, has been using Rust for its data plane proxies since the release of Linkerd 2 in 2018; the data plane has to be as fast and secure as possible, so Rust was a natural choice. However, like much of the Kubernetes ecosystem, the Linkerd control plane – which manages the behavior of the data plane – has generally been implemented in Go. Linkerd 2.11 introduced the new policy controller, Linkerd's first control plane component implemented in Rust. Join this session as Eliza shares the challenges, benefits, and lessons the team learned using Rust.

Overview And State Of Linkerd - Alex Leong, Buoyant

In this talk, maintainers from the Linkerd project will present an overview of the project and an update on upcoming releases. They’ll cover what Linkerd is and how it compares to other service meshes; what the latest features and functionality are; what to expect in upcoming releases; and how you can get involved in one of the CNCF’s most talked-about projects. This talk will cover Linkerd's recent adoption of the Gateway API and the many new features that move unlocks.

Stretching CNI Boundaries with Service Meshes, a Roadmap for the Future - Alex Leong, Buoyant

Container Network Interface (CNI) plugins such as Calico or Cilium are typically used to provide container network connectivity and network policy. However, service meshes such as Linkerd and Istio also use CNI plugins to configure the networking rules that allow their sidecar proxies to intercept incoming and outgoing traffic. This means that it is increasingly common to have more than one CNI plugin installed at a time, which can lead to race conditions where the CNI plugins overwrite each other's configuration. In this talk, Alex Leong will demonstrate how to detect and resolve these problems and suggest a set of best practices for CNI plugins to ensure compatibility with other plugins. Alex will also explore some potential changes to the CNI plugin specification that could solve these problems at a structural level.

That's a lot of great Linkerd content! We hope you enjoyed it as much as we did. For more Linkerd, sign up for Buoyant's Service Mesh Academy!
