What is a service mesh?

A service mesh is a dedicated, pre-configured infrastructure layer that allows services to talk to each other, handling data exchange and consistency across the application lifecycle. It manages communication between microservices through lightweight proxies that run alongside each container. Built to be easily set up and deployed, a service mesh unlocks the value of microservices, enabling businesses to easily discover new services and manage them as API products.

To implement a service mesh, you need a modern integration strategy fit for the digital era. It must be agile, foster innovation, and meet business requirements; efficiency matters as well, but it is not the top priority.

Benefits of service mesh

Standardization of microservices-based applications. The behavior of a distributed application varies depending on the network that supports it, and those differences can create a challenge for a configuration management system. A service mesh abstracts away the quirks of each data center, making it look less cumbersome to the orchestrator.

Monitoring and improving the behavior of distributed applications. A good service mesh places highly requested components where the application control plane can reach them most easily, helping those components operate smoothly and efficiently. Because the mesh shares the data it collects, developers can see what needs to be improved in the next iteration.

Increased transparency into complicated interactions. Often, it’s difficult to follow the flow of information in a cloud-native environment. A service mesh brings transparency to the way in which vital application services are delivered, enabling you to track their behavior.

Encryption. A service mesh manages keys, certificates, and TLS configuration to ensure continuous, reliable encryption. Developers no longer need to implement encryption or manage certificates themselves; these responsibilities move from the application developer to the framework layer.

How does a service mesh architecture work?

A service mesh provides a collection of lightweight proxies that work alongside containers. Each proxy acts as a gateway to interactions that occur between the containers. The proxy facilitates the request across the service mesh to the appropriate downstream containers that service the request—essentially taking the logic governing service-to-service communication out of individual services and abstracting it to a layer of infrastructure.

To do this, a service mesh is built into an app as a collection of network proxies. The controller in the control plane orchestrates the connections between proxies, provides access to control policies, and collects metrics from containers. In a service mesh, requests are routed between microservices through proxies in their own infrastructure layer. For this reason, the individual proxies that make up a service mesh are sometimes called “sidecars,” since each one runs alongside a service rather than within it. Together, these sidecar proxies, decoupled from each service, form a mesh network.
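The sidecar pattern described above can be sketched in a few lines of Python. This is a toy model, not any real mesh API: the class and service names are illustrative, and a real control plane configures proxies rather than routing requests itself.

```python
# Toy model of sidecar proxies: each service gets a proxy, and every
# service-to-service call flows proxy-to-proxy via the mesh, never directly.

class Service:
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        return f"{self.name} handled {request!r}"

class SidecarProxy:
    """Runs alongside one service; owns routing and metrics for it."""
    def __init__(self, service, mesh):
        self.service = service
        self.mesh = mesh
        self.requests_seen = 0   # a stand-in for collected metrics

    def call(self, target_name, request):
        # Outbound: the local proxy forwards to the target's proxy.
        self.requests_seen += 1
        return self.mesh.route(target_name, request)

    def receive(self, request):
        # Inbound: the proxy hands the request to its local service.
        self.requests_seen += 1
        return self.service.handle(request)

class ControlPlane:
    """Orchestrates connections between proxies and collects metrics."""
    def __init__(self):
        self.proxies = {}

    def register(self, service):
        proxy = SidecarProxy(service, self)
        self.proxies[service.name] = proxy
        return proxy

    def route(self, target_name, request):
        return self.proxies[target_name].receive(request)

mesh = ControlPlane()
checkout = mesh.register(Service("checkout"))
payments = mesh.register(Service("payments"))

reply = checkout.call("payments", "charge $10")
```

Note that neither service knows the other's address: the communication logic lives entirely in the proxy and control-plane layer, which is the abstraction the paragraph above describes.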

Components of a service mesh

Service discovery

Proxies provide the route for communication between microservices and other applications. Discovery happens dynamically as replicas are added or removed.
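A minimal sketch of that dynamic discovery, assuming a simple in-memory registry (real meshes typically back this with the platform's own service catalog):

```python
# Toy service registry: proxies consult it to find live replicas,
# and its entries change dynamically as replicas come and go.

class ServiceRegistry:
    def __init__(self):
        self.replicas = {}   # service name -> set of addresses

    def add_replica(self, service, address):
        self.replicas.setdefault(service, set()).add(address)

    def remove_replica(self, service, address):
        self.replicas.get(service, set()).discard(address)

    def discover(self, service):
        # Sorted for deterministic output; an empty list means no replicas.
        return sorted(self.replicas.get(service, set()))

registry = ServiceRegistry()
registry.add_replica("inventory", "10.0.0.1:8080")
registry.add_replica("inventory", "10.0.0.2:8080")
registry.remove_replica("inventory", "10.0.0.1:8080")   # replica scaled down
```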

Service mesh routing

The lightweight proxies in a service mesh have built-in smart routing mechanisms, which help select the best route for each request. Routing is done dynamically between services.
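As one concrete example of such a mechanism, here is a round-robin router in Python; it is only a sketch, and real mesh proxies support richer strategies (least-request, weighted, locality-aware) than this.

```python
class RoundRobinRouter:
    """Rotates requests across whatever replicas exist at call time."""
    def __init__(self):
        self.counters = {}   # service name -> next replica index

    def pick(self, service, replicas):
        if not replicas:
            raise LookupError(f"no replicas for {service}")
        i = self.counters.get(service, 0)
        self.counters[service] = i + 1
        return replicas[i % len(replicas)]

router = RoundRobinRouter()
replicas = ["10.0.0.1", "10.0.0.2"]
picks = [router.pick("orders", replicas) for _ in range(4)]
```

Because the replica list is passed in on every call, the routing stays dynamic: if discovery adds or removes a replica, the very next request sees the new set.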

Service mesh observability

Modern service meshes deploy components in the control plane that help with logging, tracing of request and response calls between services, monitoring, and alerting. Failure patterns are detected through dashboards.
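The failure-pattern detection mentioned above can be illustrated with a small telemetry aggregator; the class, threshold, and service names here are hypothetical.

```python
class MeshTelemetry:
    """Control-plane component that aggregates per-service call outcomes."""
    def __init__(self, error_rate_alert=0.5):
        self.calls = {}   # service -> list of (ok, latency_ms) samples
        self.error_rate_alert = error_rate_alert

    def record(self, service, ok, latency_ms):
        self.calls.setdefault(service, []).append((ok, latency_ms))

    def error_rate(self, service):
        outcomes = self.calls.get(service, [])
        if not outcomes:
            return 0.0
        failures = sum(1 for ok, _ in outcomes if not ok)
        return failures / len(outcomes)

    def alerts(self):
        # Services whose error rate crosses the alert threshold,
        # i.e. the failure patterns a dashboard would surface.
        return [s for s in self.calls
                if self.error_rate(s) >= self.error_rate_alert]

t = MeshTelemetry()
t.record("payments", True, 12)
t.record("payments", False, 500)
t.record("search", True, 8)
```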

Service mesh security

Service meshes provide authentication, authorization, and encryption of communication between services.
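A sketch of the authentication and authorization half of that sentence, with an identity string standing in for an mTLS client certificate (real meshes verify certificates cryptographically; everything named here is illustrative):

```python
class AuthPolicy:
    """Mesh-enforced policy: who may call whom, service to service."""
    def __init__(self):
        self.allowed = set()   # set of (caller, callee) pairs

    def allow(self, caller, callee):
        self.allowed.add((caller, callee))

    def check(self, caller_identity, callee):
        # Authentication: the caller must present an identity at all.
        if caller_identity is None:
            raise PermissionError("unauthenticated caller")
        # Authorization: the (caller, callee) pair must be allowed.
        if (caller_identity, callee) not in self.allowed:
            raise PermissionError(f"{caller_identity} may not call {callee}")
        return True

policy = AuthPolicy()
policy.allow("frontend", "orders")
```

Because every request passes through a sidecar proxy, the mesh can enforce a check like this on all traffic without any service implementing it itself.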

Providing an optimized experience

When creating a service mesh, you want to build separate, fit-for-purpose apps that align the design, function, and capabilities of the app with the workflows of individual personas and modalities. This provides the opportunity to create purpose-built applications that are easier to design, develop, and deploy. Additionally, the application can be more focused on the channel that the user prefers when performing a task, whether it’s a web-based application running on a laptop, a mobile app on a smartphone, or a conventional interface.

Designing a service mesh that supports multiple functions and experiences requires a flexible backend to support the different capabilities and workflows of every application in use. The backend must offer a continuous experience, so when implementing, it needs to be composed to support the specific needs of an optimized, fit-for-purpose app. One particular type of backend, Backends for Frontends (also known as BFF), supports custom workflows for these optimized apps. The services also need to align with a specific UX. This model enables development teams to rapidly implement new frontends to support new personas or devices without impacting other apps or services.
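The BFF pattern can be sketched as follows, with a stubbed downstream service and two hypothetical per-frontend backends; none of these functions correspond to a real API.

```python
def product_service(product_id):
    # Shared downstream service (stubbed with fixed data for illustration).
    return {"id": product_id, "name": "Widget", "price": 9.99,
            "description": "A very long description...", "stock": 42}

def web_bff(product_id):
    # The web frontend's backend returns the full record for a rich page.
    return product_service(product_id)

def mobile_bff(product_id):
    # The mobile frontend's backend trims the payload to its workflow.
    p = product_service(product_id)
    return {"id": p["id"], "name": p["name"], "price": p["price"]}
```

Each frontend gets a backend shaped for its persona and device, yet both compose the same shared service, so a new frontend can be added without touching the others.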

Use cases for service mesh

Blue-green deployments

When it comes to websites, especially ones meant for eCommerce, every second of downtime has a direct revenue impact. With blue-green deployments, you can perform complex updates to your applications without creating expensive service outages. While there are many types of blue-green deployments, they all follow the same pattern:

  1. Website Blue is running with live traffic.
  2. An updated version of the website (Website Green) is deployed and tested while traffic is still going to Website Blue.
  3. Deployment begins: a small amount of live traffic is diverted to Website Green while you observe that everything is functioning correctly.
  4. The amount of traffic to Website Green is gradually increased as the traffic going to Website Blue is decreased. This is continued until all traffic goes to Website Green.
  5. Website Blue is taken down.
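The gradual traffic shift in steps 3 and 4 amounts to a weighted split, which a mesh applies at the proxy layer. A minimal Python sketch (the class name and step size are illustrative, and a seeded generator is used so the behavior is deterministic):

```python
import random

class TrafficSplitter:
    """Routes a fraction of requests to the green deployment."""
    def __init__(self, green_weight=0.0, rng=None):
        self.green_weight = green_weight           # share of traffic to green
        self.rng = rng or random.Random(0)         # seeded for reproducibility

    def route(self):
        return "green" if self.rng.random() < self.green_weight else "blue"

    def shift(self, step=0.25):
        # Gradually increase green's share until it takes all traffic.
        self.green_weight = min(1.0, self.green_weight + step)

splitter = TrafficSplitter()
# Step 1-2: green is deployed but weight is 0, so all traffic stays on blue.
assert all(splitter.route() == "blue" for _ in range(100))

# Steps 3-4: shift traffic in increments, observing health between shifts.
for _ in range(4):
    splitter.shift()
```

Rolling back at any point is just lowering `green_weight` back toward zero, which is why this pattern is so much safer than a hard cutover.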

By gradually increasing traffic to the new site instead of updating everything at once, you give your operations team a chance to roll back changes before there are system-wide consequences. This is especially useful in cases where the service being updated has complex interdependencies with other services.

A service mesh is an especially well-suited technology to perform blue-green deployments because it has control over all inter-service traffic and a centralized place to manage deployments and observe global system health.

Optimize communication

Every new service added to an app, or new instance of an existing service running in a container, complicates the communication environment and introduces new points of failure. Within a complex microservices architecture, it can become cumbersome and nearly impossible to locate where problems have occurred without a service mesh.

A service mesh captures every aspect of service-to-service communication as performance metrics. Over time, the data made visible by the service mesh can be applied to the rules for interservice communication, resulting in more efficient and reliable service requests.

For example, if a given service fails, a service mesh can collect data on how long it took before a retry succeeded. As data on failure times for a given service aggregates, rules can be written to determine the optimal wait time before retrying that service, ensuring that the system does not become overburdened by unnecessary retries.
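That feedback loop can be sketched as follows; the median-based rule and all names here are illustrative, standing in for whatever policy an operator would actually derive from the mesh's metrics.

```python
import statistics

class RetryTuner:
    """Derives a retry wait from observed time-to-recovery samples."""
    def __init__(self):
        self.recovery_times_ms = {}   # service -> recovery-time samples

    def observe(self, service, recovery_ms):
        # Fed by the mesh: how long the service took to recover after failing.
        self.recovery_times_ms.setdefault(service, []).append(recovery_ms)

    def wait_before_retry(self, service, default_ms=100):
        samples = self.recovery_times_ms.get(service)
        if not samples:
            return default_ms
        # Wait roughly as long as the service typically takes to recover,
        # so we neither hammer it with retries nor wait needlessly.
        return int(statistics.median(samples))

tuner = RetryTuner()
for ms in (80, 120, 100):
    tuner.observe("payments", ms)
```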