Cloud Foundry realizes a service mesh
Crossroads
The main benefit of the Cloud Foundry service mesh is the weighted routing option. Routes can now be assigned to more than one application, a feature many members of the Cloud Foundry community have had on their wish list for several years. The growing popularity of the service mesh concept created the critical mass that finally got the feature implemented. (See also the "Microservices and the Service Mesh" box.) The function can be used to define a weighting that distributes the load among the applications involved. The weighting factor of each application statistically determines how many incoming requests it receives on average.
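The statistical effect of the weighting factors can be illustrated with a small simulation. This is only a sketch of the idea, not Cloud Foundry code: two hypothetical applications share one route with weights of 80 and 20, and each incoming request is assigned to one of them with probability proportional to its weight.

```python
import random
from collections import Counter

# Hypothetical destinations sharing one route, with weights summing to 100.
destinations = {"app-blue": 80, "app-green": 20}

def pick_destination(rng):
    """Choose a destination with probability proportional to its weight."""
    apps = list(destinations)
    weights = list(destinations.values())
    return rng.choices(apps, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded so that the run is reproducible
hits = Counter(pick_destination(rng) for _ in range(10_000))
for app, count in hits.items():
    print(f"{app}: {count / 100:.1f}% of requests")
```

Over 10,000 simulated requests, app-blue receives roughly 80 percent and app-green roughly 20 percent, with the small random fluctuations you would also see in production traffic.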
Microservices and the Service Mesh
Many developers are now involved with microservice architectures. The new paradigm is causing a departure from monolithic applications and movement toward distributed application systems.
One of the advantages of microservice architectures is that developers of individual applications can now freely select both the programming language and, for example, the database on a case-by-case basis and in line with the purpose of the application. This capability creates local autonomy, leads to better design decisions, and ultimately promotes software quality. On the flip side, whereas previously a single application depended on a central database, you now might have to support a dozen applications and several databases, message queues, and analytics servers. This significantly greater system complexity drives corresponding increases in operating costs.
Not coincidentally, the spread of the microservice architecture approach has coincided with the advent of cloud computing: These trends are mutually dependent. Furthermore, the symbiosis of the two technologies also favors the emergence of service meshes such as Istio [1] and Envoy [2], which support robust separation of responsibilities. Functions such as load balancing, automatic failover, encryption, authentication, and intelligent routing can now be controlled outside the application, if required. Service mesh technologies allow for centralized management of rules and data, which benefits application developers and platform operators alike.
Cloud Foundry
For historical reasons, Cloud Foundry is often equated with the Cloud Foundry Application Runtime (CFAR) platform (and its famous cf push user experience), which includes many projects accumulated under the umbrella of the Cloud Foundry Foundation.
CFAR distinguishes itself from the Cloud Foundry Container Runtime (CFCR), a BOSH-powered variant of Kubernetes. BOSH is an orchestration tool independent of the infrastructure and operating system that allows automation of stateful, distributed applications throughout the application life cycle. It is suitable for coping with large application loads using CFAR or CFCR, as well as for the automated operation of database systems.
Compared with newcomer Kubernetes, Cloud Foundry is almost a prehistoric beast among modern application platforms. The efficient operation of a large number of applications has always been a priority for Cloud Foundry. For example, it was a service mesh before the term became popular: Cloud Foundry has always sent incoming requests for an application through a central router, which distributes them to the application instances assigned to the request. Without such a routing function, self-healing of failed application instances in distributed container clusters (aka Diego cells) would be impossible.
Assigning requests to application instances requires a regular exchange between the router and the container hosts. Applications in Cloud Foundry have always been reachable at internal and external URLs that abstract the underlying application instances and automatically distribute the load of incoming requests. These application URLs also allow clean communication between the services of a microservice architecture, a setup that already resembles service discovery. Centralized storage of log output is one of Cloud Foundry's native capabilities; other tools are only needed to merge the information from application and data services.
Cloud Foundry with Service Mesh
Although CFAR is successful and proven, other technologies continue to advance. The most prominent representative is certainly Kubernetes, which has adopted the service mesh concept quickly and profoundly. Kubernetes makes using Istio and Envoy child's play: They can be deployed easily in the form of sidecar containers to support applications. This capability increases the pressure on Cloud Foundry to offer corresponding service mesh functionality, as well. What is surprising about the introduction of Istio and Envoy in Cloud Foundry is that it is only happening now; the service mesh subsystem is currently still in beta.
The service mesh in Cloud Foundry is designed as an optional extension and can be added to a CFAR as needed. The installation adds a routing subsystem alongside the existing ones for HTTP and TCP traffic. New system domains for Istio (*.istio) and mesh routing (*.mesh) are added. As with all Cloud Foundry routing systems, a load balancer must be installed upstream to distribute incoming requests to the router instances. The load is distributed in two stages: by classic, more static load balancers and by dynamic routers with app awareness (Figure 1).
Integration into the Cloud Foundry command-line interface is still incomplete; cf commands are not available for all functions. Instead, users have to run curl commands against the APIs. Because the Cloud Foundry service mesh is still in beta, not every Cloud Foundry provider supports it yet. However, if you operate your own Cloud Foundry environment, you can test the system. The service mesh developers are happy to receive feedback.
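To give an idea of the kind of API call involved, the following sketch shows how weighted destinations for a route are expressed against the v3 Cloud Controller API. The GUID placeholders, and possibly the exact payload format, will differ in your environment, and because the feature is in beta, the authoritative format is whatever the current API documentation says. The weights of all destinations on a route are expected to sum to 100:

```json
{
  "destinations": [
    { "app": { "guid": "<guid-of-blue-app>" },  "weight": 80 },
    { "app": { "guid": "<guid-of-green-app>" }, "weight": 20 }
  ]
}
```

A document like this would be sent with curl (or cf curl) against the destinations endpoint of the route in question.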
Weighted Routing
Weighted routes open up a wide range of new use cases for developers, including blue-green deployments, A/B testing, and canary releases. Blue-green deployments let you deploy an application twice. A separate route is assigned to each deployment, and the assigned weighting factors can be used to distribute the traffic among the applications. One of the two installations can be removed temporarily from load distribution, such as for updates. During the update, the other application installation serves the incoming requests. After the update, the traffic is switched to the new installation. In the case of an unexpected error, however, you can immediately switch back to the unchanged, older application version. If the system behaves as expected, you can then update the second application.
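The weight shifts of such a blue-green update can be sketched as a simple sequence of stages. The set_weights function below is a placeholder for whatever API call updates the route in a real environment, not an actual Cloud Foundry client function:

```python
# Illustrative sketch of a blue-green rollout expressed as weight shifts.
# set_weights() is a hypothetical stand-in for the real route-update call.
def set_weights(blue: int, green: int) -> dict:
    assert blue + green == 100, "weights must sum to 100"
    return {"app-blue": blue, "app-green": green}

stages = [
    set_weights(100, 0),   # all traffic on the old (blue) version
    set_weights(90, 10),   # trickle traffic to the new (green) version
    set_weights(50, 50),   # split traffic while monitoring green
    set_weights(0, 100),   # green serves everything; blue stays for rollback
]

for step, weights in enumerate(stages, 1):
    print(f"stage {step}: {weights}")
```

If green misbehaves at any stage, reverting is just another weight change back to the previous stage, which is exactly the rollback property that makes blue-green deployments attractive.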
Similarly, distributing traffic to multiple application installations can be used for multivariate testing. With two variants, this becomes an A/B test, which presents different user groups with different application versions so that conclusions about acceptance can be drawn from the observed user behavior. In simple tests (e.g., where two different back-end implementations compete against each other), weighted routes can help. However, they only partially cover the requirements of such A/B tests: Many cases require a deliberate assignment of users to the individual variants, for which other tools are needed.
In the case of canary releases, a new function is not made available to the entire user community at once. As in A/B testing, the user base is split: One section of the community gathers initial experiences that the testers can evaluate before everyone benefits from the new version.