A service mesh for microservice components
Exchange
One of the disruptive innovations of recent years, which shifted into focus only in the wake of cloud computing and containers, is undoubtedly the microservice approach to computing (see the "Finally Agile" box). Developers of microservice applications, however, are faced with the task of operating all of the separate microservices and components and maintaining them as part of their application.
Finally Agile
When only virtual machines (e.g., KVM, Qemu) were available for virtualization, it would have been unwise to operate microcomponents on individual virtual systems. The resource overhead that would have resulted from many almost empty virtual machines (VMs) would have wasted CPU and RAM capacity on a grand scale.
Containers, however, do not burn up CPU cycles just because they are running. At the end of the day, a container is only a combination of several isolation techniques at the Linux kernel level that enclose an application in a virtual environment. Because the Linux kernel is running anyway, additional resources are hardly needed.
Developers welcomed the container approach, which lets them work in a more agile way than before. When the task of the day was bundling as much workload as possible into a VM, it did not make sense to distribute the load across many small components.
Containers, though, make it possible to bundle the individual components of an application into separate containers and to operate and manage them there. Where developers independently maintain and design the components of an app, the concept of micro-releases comes into its own.
Release early, release often – a requirement that was often forgotten in the context of large monoliths – is now realizable. The components can be replaced regularly at runtime without causing problems. After all, high availability is an implicit part of the design of modern container applications and not something that is subsequently grafted on by the cluster manager.
Microservice applications are also easier to troubleshoot: After all, developers only have to deal with the code they write and maintain, not with a jumble of different parts that have to be integrated somehow. Mini-releases also make it easier to detect errors creeping in early – not months later, when the giant application makes its way onto production systems in the scope of a maintenance window.
This is where the Istio developers enter the scene. They promise container application developers a central network layer over which the parts of an application talk to each other. The required services (i.e., load balancing, filtering, discovery of existing services, etc.) are provided by Istio. To be able to use all these features, you only have to integrate Istio into your application.
Communication Needed
For the individual parts of a microservice application to work together successfully, communication between components is essential. How can developers make life easier for themselves when working on a store application, for example, in which a customer navigates through different parts of the program during a purchase and has to be forwarded from one component to the next?
The subject of connecting containers is not trivial per se. Because a container usually runs in its own namespace, it is not so easy to access from the main network. Kubernetes, for example, comes with an on-board proxy server, without which connections from the outside world to applications would simply not be possible. Istio extends an existing Kubernetes setup based on pods by adding APIs and central controls.
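In practice, this integration typically works by letting Istio inject a sidecar proxy into each pod of a labeled namespace. A minimal sketch of what that could look like – the namespace name `shop` is a hypothetical example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    # Tells Istio's admission webhook to inject its sidecar proxy
    # into every pod created in this namespace.
    istio-injection: enabled
```

Once the label is set, pods started in this namespace get the Istio proxy alongside their application container, and their traffic flows through the mesh without any change to the application code.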
What is a Service Mesh?
Istio [1] forms a "service mesh" between the individual components of a microservice application by removing the need for the admin to plan the network connections between the individual components. In the Istio context, service mesh means the totality of all components that are part of a microservice application.
In the previous example of the web store, this could mean all the components involved in the ordering process. The application that provides the website, the one that monitors the queue of incoming orders, the one that generates email for orders, and the one that dispatches the email – there are basically no limits to what you could imagine the mesh doing. What all these components have in common is that they communicate with each other on a network. In the worst case, wiring up this communication by hand involves a huge amount of development overhead that isn't actually necessary.
Clearly developers need to deal with the many aspects of connectivity, starting with the simple question of how the application that dispatches email knows how to reach the application that generates email. In a legacy environment, both functions would be part of the same program – or the dispatcher would simply be connected to the mail generator through a static IP address.
This arrangement is not possible in container applications based on a microservice architecture. After all, the individual parts of the application need to be able to scale seamlessly to reflect the load. When all hell breaks loose on the web store, more email senders and email generators need to be fired up, but they will all have dynamic IP addresses. So how do they discover each other?
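The discovery problem described above can be sketched in a few lines: instances with dynamic addresses register under a logical service name, and clients look up that name instead of hard-coding an IP address. The class and the endpoint addresses are hypothetical illustrations, not part of any real product:

```python
# Minimal sketch of service discovery: instances register under a
# logical name; consumers query the name, not a static IP address.
class Registry:
    def __init__(self):
        self.services = {}

    def register(self, name, endpoint):
        # A newly scaled-up instance announces itself.
        self.services.setdefault(name, []).append(endpoint)

    def deregister(self, name, endpoint):
        # An instance that shuts down removes itself again.
        self.services[name].remove(endpoint)

    def discover(self, name):
        # Consumers always see the current set of instances.
        return list(self.services.get(name, []))

registry = Registry()
# Two mail-sender instances come up with dynamic addresses:
registry.register("mail-sender", "10.0.3.14:25")
registry.register("mail-sender", "10.0.7.2:25")

# The mail generator asks the registry instead of using a static IP:
endpoints = registry.discover("mail-sender")
print(endpoints)
```

When load spikes and more senders are fired up, they simply register as well; consumers pick up the larger instance list on the next lookup.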
Discovering Services
Other solutions to this problem, such as Consul or Etcd, have service directories in which services register to shout, "Here I am!" to other services on the network. If you integrate Consul or Etcd into your application, you have solved the autodiscovery problem, but that's not your only worry.
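To make the Consul case concrete, the following sketch builds the JSON body that a service instance would send to Consul's agent API (`PUT /v1/agent/service/register`) to announce itself, including a health check so dead instances get dropped. The service name, address, and port are hypothetical examples:

```python
import json

def consul_registration(name, instance_id, address, port):
    # Payload for Consul's service registration endpoint; the health
    # check lets Consul remove instances that stop responding.
    return {
        "Name": name,
        "ID": instance_id,
        "Address": address,
        "Port": port,
        "Check": {
            "HTTP": f"http://{address}:{port}/health",
            "Interval": "10s",
        },
    }

payload = consul_registration("mail-generator", "mail-gen-1",
                              "10.0.0.7", 8080)
print(json.dumps(payload, indent=2))
```

An instance would send this payload to its local Consul agent at startup; other services then find it by querying Consul for the `mail-generator` name.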
How is load balancing possible across all instances of a particular microservice? If routing is required, what implements the corresponding rules? Will there be any traffic that you want to stop? If so, how do you do that? All these questions have separate answers: Load balancing is handled by HAProxy and filtering by nftables; routing rules can be defined manually. However, these services can more easily be provided by Istio.
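The load-balancing half of the problem can likewise be sketched in a few lines: a client-side round-robin balancer that spreads requests over the instances of one microservice – exactly the kind of plumbing Istio's proxies take off the developer's hands. The endpoint addresses are hypothetical:

```python
import itertools

class RoundRobinBalancer:
    """Hand each caller the next instance in turn."""

    def __init__(self, endpoints):
        # cycle() loops over the endpoint list forever.
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.3.14:25", "10.0.7.2:25"])
picks = [lb.pick() for _ in range(4)]
print(picks)  # alternates between the two instances
```

Doing this by hand means every service needs its own copy of such logic, kept in sync with the discovery data; a service mesh moves it into the infrastructure instead.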