Keeping container updates under control
Hazardous Goods
Containers Are More Complex
The operating concept you choose for your setup is of great importance, and many of your options depend on the components you need to integrate. Basically, a distinction needs to be made between cloud-ready applications designed for operation in the cloud and conventional applications that can be operated in containers but require a certain amount of external protection. A simple example quickly makes this clear.
Practically every installation today has a database somewhere at its core. After all, even applications with a microservices architecture ultimately process external data that needs to be stored in some way. When it comes to databases, most application developers and admins opt for tried-and-trusted components with familiar behavior that can be evaluated. However, these components might not be designed for operation in the cloud, at least in their original form, or may come with restrictions, which is something to keep in mind when you roll out applications in the form of containers: A single container running nothing but MariaDB is not a good idea.
If you have to restart the database for security reasons, the rest of the setup is left in the lurch. For example, a container might be restarted, not at your behest, but because the provider needs to reboot the node for a security update (e.g., of the kernel).
For an admin running workloads in the cloud, there is no way around implicit redundancy at the application level, and that redundancy is impossible without clustering. In terms of the database example, this means MariaDB never runs alone as a single containerized application; instead, the admin always rolls it out as a cluster with the Kubernetes API (a minimal sketch follows below). One of the (at least) three database nodes can then restart at any time without any noticeable downtime for end users.
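The following is only a sketch of such a rollout, assuming a Galera-capable MariaDB image and an existing headless service; the names, image tag, and the omitted volume and cluster-bootstrap settings are illustrative, not a complete production configuration. A StatefulSet keeps three database pods running, and a PodDisruptionBudget tells Kubernetes that at most one of them may be evicted at a time, for example, when a node is drained for a kernel update.

```yaml
# Sketch only: a three-node MariaDB (Galera) cluster managed by Kubernetes.
# Real deployments also need persistent volumes and Galera bootstrap settings.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb-galera
spec:
  serviceName: mariadb-galera    # headless service assumed to exist
  replicas: 3                    # at least three nodes, so one can restart at any time
  selector:
    matchLabels:
      app: mariadb-galera
  template:
    metadata:
      labels:
        app: mariadb-galera
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.11     # illustrative tag
        ports:
        - containerPort: 3306
---
# Never let Kubernetes take down more than one database pod at a time.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mariadb-galera-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: mariadb-galera
```

With a budget of minAvailable: 2, draining the node that hosts one replica proceeds normally, but a second eviction is blocked until the restarted pod has rejoined the cluster.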
Cloud-Ready Applications
The situation is somewhat more relaxed when it comes to security updates for applications developed specifically for use in the cloud, which usually follow the microservices paradigm. In other words, a large number of small services communicate with each other over defined interfaces, and each performs only a single task. Because the design even envisages individual services disappearing and reappearing dynamically, replacing containers is a far more relaxed experience than with conventional solutions. In most cases, you can replace the containers with something more up to date without end users noticing, as the rolling update sketched below illustrates.
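As a sketch of what this looks like in practice (the service name, image, and probe path are made up for illustration), a Kubernetes Deployment can be told to start a new container version and wait until it reports ready before removing the old one:

```yaml
# Sketch: swap in a newer container image without visible downtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api              # hypothetical microservice
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never remove a pod before its replacement is ready
      maxSurge: 1                # start one extra pod during the update
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
    spec:
      containers:
      - name: billing-api
        image: registry.example.com/billing-api:1.4.2   # illustrative
        readinessProbe:
          httpGet:
            path: /healthz       # assumed health endpoint
            port: 8080
```

Updating then amounts to bumping the image tag (or running kubectl set image); Kubernetes replaces the pods one by one while the service keeps answering requests.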
Mesh Solutions Help
Mesh solutions, which handle communication in container-based environments, also make a contribution. The top dog, Istio, has already been the subject of several articles in ADMIN [1].
Solutions like Istio flexibly broker the connections between the individual components of an application, act as load balancers, and bring their own monitoring of the configured targets. If you use Istio in the MariaDB scenario described above, not only does the database continue to run smoothly while individual containers restart, but the instance currently being restarted also stops receiving client requests from the load balancer.
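As an illustration (the host name and thresholds are assumptions, not values from this article), an Istio DestinationRule with outlier detection ejects an endpoint from the load-balancing pool after a few failed connections, which is roughly what happens to a database instance while it restarts:

```yaml
# Sketch: temporarily eject a restarting backend from Istio's load-balancing pool.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mariadb-galera
spec:
  host: mariadb-galera.default.svc.cluster.local   # assumed service name
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 3    # for TCP services, connection failures count here
      interval: 10s              # how often endpoints are evaluated
      baseEjectionTime: 30s      # how long a failing endpoint stays out of the pool
      maxEjectionPercent: 34     # never eject more than one of three nodes
```

Once the restarted instance accepts connections again, it automatically returns to the pool after the ejection time expires.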