The Creators of Linkerd
Real-world enterprise service mesh use is increasing, and early adopters of Linkerd say its user-friendliness has been key to their success so far with the complex tech.
The JVM-based Linkerd 1, first launched by Buoyant.io in 2016, was the first service mesh product available, and its creators actually coined the ‘service mesh’ term.
But enterprise interest in service mesh, a high-scale network architecture that distributes policy enforcement among data proxies, only really took off with the rise of Kubernetes over the last three years. Service mesh lends itself well to microservices environments that run on containers and require detailed network management far beyond that required for VM-based workloads.
Kubernetes creator Google jumped on this trend when it, IBM and Lyft launched Istio in 2017. Linkerd 2, which arrived in 2018, began to catch up to Istio’s level of Kubernetes support, but by then Istio had captured much of the buzz around service mesh. The race was on.
Istio especially dazzled users in early discussions with more technically sophisticated features for container environments, such as built-in mutual TLS for security. It further captured attention via a connection with the Envoy proxy, a darling of the CNCF world that is widely regarded as the industry standard. Linkerd, by contrast, uses its own proxy.
In terms of company size, Buoyant is tiny compared to Google and IBM – the privately held company, founded in 2015, has raised $24 million in funding, according to Crunchbase. IBM’s annual report for 2019 recorded $77.1 billion in revenue; Google’s parent company Alphabet reported $162 billion.
The size discrepancy between the commercial backers of each service mesh project is reflected in the community participation they’ve mustered so far. Linkerd has the backing of the CNCF, and the Linkerd website says the project has more than 100 contributors. But Stackalytics analysis shows Buoyant engineers are by far the most active contributors, with 991 commits, or 80.4% of the total; independent contributors follow with 204 commits, or 16.5%.
Istio’s contributor base is broader: Google is the top contributor in Stackalytics analysis, with 1,705 commits, but that represents only 53% of the total. Independent contributors made 25.3% of commits, followed by Red Hat and IBM, with 7.4% and 7.1%, respectively.
Still, this year, Istio’s dominance of those mostly theoretical early industry discussions began to falter, largely because Google chose not to donate the project to the CNCF or any other established open source foundation. The Istio project did broaden its steering committee in September to include four new members from companies other than Google and IBM.
But also, as service mesh went from the demo stage to real-world deployments, Linkerd swayed early adopters with ease of use, even before it could match all of Istio’s features.
“We started out with containers and Kubernetes on GKE and deployed Istio two and a half years ago, mostly for our machine learning and data science back ends,” said Matt Young, principal cloud architect at online insurance marketplace EverQuote in Cambridge, Mass. “It worked as advertised … but the complexity of running it was pretty steep.”
EverQuote grew from an emerging company to a $1 billion valuation after its IPO in 2018. It now has a sprawling environment and more than 150 developers, but a team of only about five engineers to manage its cloud-native infrastructure.
When the infrastructure team began experimenting with Istio, it managed a single Kubernetes cluster in GKE. That has since grown into a multi-cluster environment that includes Amazon EKS and Kubernetes multi-tenancy managed through HashiCorp Terraform infrastructure as code.
Still, continuing with Istio even at a much smaller scale two years ago would have required at least five more infrastructure engineers, or a comparable increase in professional services costs from Google Cloud, Young estimated. That’s when he began to investigate Linkerd.
“There were fewer custom resource definitions and a much more straightforward deployment model,” Young said. “Istio’s like a Bugatti – you need a couple of them because one’s always in the garage. And we just needed to get groceries down a dirt road.”
Specifically, EverQuote needed gRPC load balancing as its network traffic grew, eventually more than eightfold. Because gRPC multiplexes requests over long-lived HTTP/2 connections, connection-level load balancers tend to pin all of a client’s traffic to a single backend; balancing individual requests requires a Layer 7 proxy of the kind a service mesh provides.
“We actually didn’t get through deploying all of Istio,” Young said. “We were only using the ingress gateway and just enough of the control plane to manage load balancing, but we cut bait when we realized this would take a lot more learning – even diagnosing and troubleshooting was still emerging, and we quickly realized it was not going to be sustainable.”
Another IT pro at a growing company in Europe, fintech software provider Finleap Connect, had a similar experience during the early days of Kubernetes service mesh in 2017.
“We installed Istio when what would become Linkerd wasn’t released yet,” said Christian Hüning, director of cloud technology at Finleap Connect. “But we found out that developers would have to do too many things in order to get Istio working.”
Istio has become simpler since then, including a shift to a monolithic control plane architecture from the more complex microservices version as of Istio 1.5. But Linkerd has also had time to develop its own differentiated features, such as built-in observability tools.
As EverQuote’s service mesh use expanded from relatively simple load balancing into fine-grained network latency management, Young found Linkerd’s easy integration with open source observability tools such as Prometheus, including built-in dashboards, helpful as well.
“The native integration Linkerd has running Prometheus in its control plane and exposing metrics out of the box gave us a huge productivity boost,” Young said.
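Linkerd’s control plane bundles a Prometheus instance, and its proxies expose metrics in Prometheus’s plain-text exposition format. As a rough illustration of what “exposing metrics out of the box” means in practice, here is a minimal sketch of parsing one sample line of that format; the metric name is modeled on Linkerd’s request counters, and the label values are made up for illustration:

```python
import re

def parse_metric(line):
    """Parse one sample line of the Prometheus text exposition format:
        name{label="value",...} numeric_value
    Returns (metric_name, labels_dict, value)."""
    m = re.match(r'^(\w+)(?:\{(.*)\})?\s+([0-9.eE+-]+)$', line)
    if not m:
        raise ValueError(f"not a sample line: {line!r}")
    name, raw_labels, value = m.groups()
    # Pull out label="value" pairs from inside the braces, if any
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels or ""))
    return name, labels, float(value)

# A request counter of the kind Linkerd's proxy exports; the label
# values here are hypothetical.
name, labels, value = parse_metric(
    'request_total{direction="inbound",authority="web.default"} 1723'
)
```

Dashboards like Linkerd’s are essentially aggregations and rate calculations over streams of samples like this one, which is why having the scrape pipeline preconfigured saves so much setup work.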
Then there’s mutual TLS (mTLS), Istio’s initial marquee advantage over Linkerd. It’s a mechanism by which server and client endpoints authenticate each other over a network; historically, TLS authentication meant clients verifying the server’s certificate, but not vice versa.
Administering mutual authentication is one of the chief selling points for service mesh in container environments, which have many clients and complex network connections that make mTLS both necessary and extremely difficult to deploy manually. Automated mTLS first appeared in experimental form in Linkerd 2.2 in early 2019.
In the 18 months since that release, Linkerd has caught up in mTLS. It’s now built into every connection within the Linkerd service mesh by default, without requiring manual setup (Istio has also enabled mTLS automatically since version 1.4). In version 2.9, which shipped in November, Linkerd’s mTLS extends beyond HTTP and gRPC connections to support the raw TCP traffic used by stateful applications and databases.
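The difference between ordinary TLS and mTLS comes down to whether the server also demands a certificate from the client. A minimal sketch using Python’s standard ssl module shows the two sides of that handshake configuration; the certificate file names in the comments are hypothetical placeholders, not anything Linkerd uses:

```python
import ssl

# Server side: ordinary TLS only presents the server's certificate.
# For mutual TLS, the server additionally demands and verifies a
# certificate from the client.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients with no valid cert
# server_ctx.load_cert_chain("server.crt", "server.key")   # hypothetical paths
# server_ctx.load_verify_locations("mesh-root-ca.crt")     # trust anchor for client certs

# Client side: verifies the server as usual (the default for a client
# context), and would also load its own certificate so the server can
# authenticate it in return.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# client_ctx.load_cert_chain("client.crt", "client.key")   # hypothetical paths
# client_ctx.load_verify_locations("mesh-root-ca.crt")
```

A service mesh’s contribution is automating the hard part: issuing, distributing and rotating the certificates behind the commented-out lines for every workload in the cluster, so no application ever handles them directly.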
Meanwhile, over the last six months, a new crop of service mesh products has emerged, including an open source service mesh project for Kubernetes launched by Microsoft in August and, in October, newly stable versions of service meshes such as Kong’s Kuma and F5’s NGINX Service Mesh. Kuma and Microsoft’s Open Service Mesh both tout open source governance under the CNCF and the ease of use that Istio lacks – value propositions Linkerd has been using for years.
“This is where we see everybody going,” said Brad Casemore, analyst at IDC. “As [vendors] move into this early mainstream era for the market, they want to make this a lot more consumable and simpler to manage.”
While Linkerd has opened a lead in the ease of use category, it’s still refining some of its more sophisticated features for large-scale environments. For example, Hüning’s team must wait for a pull request it submitted, which would allow Finleap to customize Prometheus resource limits, to be merged before it can upgrade beyond Linkerd 2.7. Until then, Linkerd defaults pose performance challenges for Finleap’s large, densely packed Kubernetes clusters.
Buoyant CEO and founder William Morgan tends to bristle when the subject of Envoy comes up. Prospective service mesh customers often bring up Linkerd’s lack of integration with the CNCF proxy project when they do competitive assessments, but he believes strongly this comes from a lack of understanding, rather than a real need for Linkerd to integrate with the Envoy proxy.
“There’s a set of people who say they need Envoy, but it’s not a real requirement,” he said in an interview last month. “Part of the confusion around service mesh is that there’s so much noise around implementation details and not enough focus on the actual problems people are trying to solve.”
Like mTLS, Linkerd’s proxy has been refined over time. Its maintainers published a detailed look in July at Linkerd2-proxy, which is written in the Rust programming language for performance, and Linkerd version 2.9 last month also added multi-core proxy support.
Some industry watchers believe trying to compete with Envoy at this point is a lost cause, but Morgan strongly disagreed.
“In the end, Linkerd’s requirements around resource footprint and security were simply too restrictive for Envoy to be a realistic choice,” he wrote in a July blog post. “Envoy was a Swiss Army knife, when what we needed was a needle.”
Analysts are split on the Envoy question when it comes to Linkerd. IDC’s Casemore said he’s still not sure it will be worthwhile for Linkerd to fight Envoy’s momentum as service mesh goes mainstream, even if it has superior tech.
“It’s hard to differentiate at the data plane layer – customers aren’t really noticing relatively minor differences in one or the other,” Casemore said. “It’ll be interesting whether they can really show, ‘Hey, we’re so much better than Envoy,’ especially when Envoy has such a tremendous amount of support, and not only in the Istio community.”
Conversely, Gartner analyst Fintan Ryan expects the Envoy debate to fade as service mesh deployments grow.
“Most organizations don’t need to be aware of the complexity under the hood,” he said.