What is a service mesh?

William Morgan

Feb 9, 2015

A service mesh is a configurable infrastructure layer that transparently adds security, reliability, and observability to the communication between microservices, without requiring any code changes.

As modern IT organizations move to the cloud, the fundamentals of how their applications must be architected and operated change dramatically. The meteoric rise of the cloud native stack, with Docker, Kubernetes, and the service mesh at its core, reflects the tools and best practices necessary to meet this shift head-on.

In this new, cloud native world, the communication between microservices (sometimes called “east-west traffic”) happens at a scale that was rarely approached before. The runtime behavior and security of an application are now tightly coupled to this network communication, as each request must wend its way through a complex application topology at runtime. Especially when coupled with a polyglot application stack, this leads to significant challenges in operability and manageability.

The service mesh solves these challenges by providing a uniform layer of security, reliability, and observability across the application, regardless of language, framework, or runtime environment. By installing a “data plane” of ultralight, low-latency, transparent proxies, the service mesh provides critical features such as mutual TLS (mTLS), request retries, gRPC load balancing, and “golden metrics” instrumentation in a way that can be centrally managed and controlled.
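
To make the data-plane idea concrete, here is a minimal sketch in Go of what one of these transparent proxies does conceptually: it forwards requests to an upstream service, retries transient failures, and records basic “golden metrics.” It is an illustration only, not how Linkerd is implemented; mTLS and load balancing are omitted, and the upstream address, listen port, and retry count are arbitrary assumptions.

```go
// A hypothetical, stripped-down data-plane proxy: it transparently fronts a
// service, retries failed requests, and records basic "golden metrics".
// Real mesh proxies also terminate mTLS and balance load; those are omitted
// here, and the upstream address, listen port, and retry count are
// illustrative assumptions.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
	"time"
)

var requestCount int64 // golden metric: request volume

// retryingTransport retries requests that fail or return a 5xx response.
// (Only safe for requests without bodies, e.g. GETs.)
type retryingTransport struct {
	inner   http.RoundTripper
	retries int
}

func (t *retryingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	var resp *http.Response
	var err error
	for attempt := 0; attempt <= t.retries; attempt++ {
		resp, err = t.inner.RoundTrip(req)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if attempt < t.retries && resp != nil {
			resp.Body.Close() // discard the failed attempt before retrying
		}
	}
	return resp, err
}

func main() {
	upstream, _ := url.Parse("http://127.0.0.1:8080") // the service being fronted
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	proxy.Transport = &retryingTransport{inner: http.DefaultTransport, retries: 2}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		atomic.AddInt64(&requestCount, 1)
		proxy.ServeHTTP(w, r)
		// Golden metrics such as success rate and latency would normally be
		// exported to a metrics system; here we just log per-request latency.
		log.Printf("request %d: %s %s took %s",
			atomic.LoadInt64(&requestCount), r.Method, r.URL.Path, time.Since(start))
	})

	log.Fatal(http.ListenAndServe(":9090", handler)) // traffic enters via the proxy
}
```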

The service mesh gives you features that are critical for running modern server-side software in a way that’s uniform across your stack and decoupled from application code.

What can you do with a service mesh?

A service mesh can solve a variety of challenges in operating cloud native applications.

For ops, DevOps, and SRE teams: the service mesh’s ability to provide zero-config “golden metrics” dashboards, uniform instrumentation, and sophisticated canarying and blue-green traffic-shifting tools offers powerful mechanisms for reliability and observability, with no code changes required (a brief sketch of traffic shifting follows this list).

For security teams: the service mesh’s ability to provide mutual TLS between services, cryptographically secured service identity, and access policy enforcement provides fundamental building blocks for adopting a zero-trust model of application security.

For architects: the service mesh’s ability to provide seamless cross-cluster communication and failover mechanisms allows for a variety of hybrid and multi-cloud approaches.

For developers: the service mesh frees developers to focus on business logic by removing the necessity to build these features into the application itself.
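
As promised above, here is a minimal, hypothetical sketch in Go of the weighted routing that underlies canarying and blue-green traffic shifting: each request is sent to a service version in proportion to its configured weight. The service names and the 90/10 split are assumptions for illustration; in a real mesh the control plane pushes such weights to the data-plane proxies rather than this logic living in application code.

```go
// A hypothetical sketch of weighted traffic shifting, the mechanism behind
// canary and blue-green rollouts: each request is routed to a service
// version in proportion to its weight. The service names and the 90/10
// split are arbitrary values chosen for the example.
package main

import (
	"fmt"
	"math/rand"
)

// backend pairs a service version with its share of traffic.
type backend struct {
	name   string
	weight int
}

// pick selects a backend with probability proportional to its weight.
func pick(backends []backend) string {
	total := 0
	for _, b := range backends {
		total += b.weight
	}
	n := rand.Intn(total)
	for _, b := range backends {
		if n < b.weight {
			return b.name
		}
		n -= b.weight
	}
	return backends[len(backends)-1].name
}

func main() {
	// Send 10% of traffic to the canary version.
	split := []backend{{"web-stable", 90}, {"web-canary", 10}}
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pick(split)]++
	}
	fmt.Println(counts) // roughly 9000 stable / 1000 canary
}
```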

Whether you own the platform and need to provide resiliency and security for your applications, or you’re a developer who needs a better way to test and debug them, a service mesh makes it easier to control and manage microservices.

Why Linkerd?

Open source, open governance, and CNCF hosted.

Linkerd is the leading open source, open governance service mesh that runs anywhere Kubernetes does, whether on-prem or in the cloud, whether on one cluster or one hundred clusters. Linkerd is ultralight, ultra-fast, and built for security from the ground up.

Being open, secure, and easy to use makes Linkerd the ideal service mesh for the new cloud native world. To learn more about Linkerd, read Buoyant founder William Morgan’s meshifesto: https://servicemesh.io/
