Keeping the software in Docker containers up to date
Inner Renewal
Automation Is Essential
A Dockerfile alone does not constitute a functioning workflow for handling containers in a meaningful way. Imagine, for example, that dozens of containers in a distributed setup are exposed to the public with a vulnerability like the SSL disaster Debian caused a few years ago [2]. Because the problem existed in the OpenSSL library itself, admins needed to update OpenSSL across the board – and containers would not be exempt, creating a huge amount of work.
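To get an overview of how many containers are affected in the first place, a short loop over the running containers does the trick. The following is a minimal sketch that assumes Debian- or Ubuntu-based containers with dpkg on board; it simply prints the installed OpenSSL packages for each running container:

# List the installed OpenSSL packages in every running container.
# Assumes Debian/Ubuntu-based containers that ship dpkg.
for id in $(docker ps -q); do
  echo "== $(docker inspect --format '{{.Name}}' "$id")"
  docker exec "$id" dpkg-query -W -f '${Package} ${Version}\n' openssl 'libssl*' 2>/dev/null
done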
In the worst case, admins face the challenge of juggling several different DIY containers within the setup, each reduced to the absolutely essential components. All containers might be based on Ubuntu, but one container has Nginx, another has MariaDB, and a third runs a custom application.
If it turns out that an important component has to be replaced in all of these containers, you would break a sweat despite having DIY containers based on Dockerfiles. Even if new containers with updated packages can be created ad hoc from the existing Dockerfiles, the process takes a long time when executed manually.
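To illustrate what the manual variant amounts to, the following sketch rebuilds each image from its Dockerfile with fresh base layers and packages; the directory names and the registry address are purely examples:

# Rebuild every DIY image from scratch so updated packages end up inside.
# --pull fetches the latest base image, --no-cache ignores stale layers.
for img in nginx mariadb myapp; do
  docker build --pull --no-cache -t registry.example.com/$img:$(date +%F) "$img/"
done

Multiply this by a few dozen images and build hosts, and it quickly becomes clear why clicking through the process by hand does not scale.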
Whether you are handling application updates or security updates, it makes more sense to create an automated workflow that generates Docker images at the push of a button. This approach was covered in a previous ADMIN article focusing on the community edition of GitLab [3]. GitLab has its own Docker registry, and with its continuous integration/continuous delivery (CI/CD) functions, the process of building containers can be fully automated. At the end of the process, GitLab provides the new images from the registry, so you just need to download them to the target system and replace the old containers with the new ones – done (Figures 3 and 4).
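At its core, such a pipeline boils down to a handful of Docker commands that GitLab executes on every commit. The following sketch shows the typical sequence; the CI_* variables are predefined by GitLab CI, whereas the tag scheme is just one possible convention:

# Log in to the GitLab registry, build the image, and push it.
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"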
Orchestration Is Even Better
If you handle considerable parts of your workflow with containers, you can hardly avoid a fleet manager like Docker Swarm or Kubernetes. It would be beyond the scope of this article to go into detail on how the various Kubernetes functions facilitate the process of updating applications in containers – in particular, the built-in redundancy functions that both Kubernetes and Swarm bring with them. However, the operation of large container fleets can soon become a pain without appropriate help, so you need to bear this topic in mind.
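As a small taste of why the orchestrators pay off here, rolling out a freshly built image to a running workload is essentially a one-liner in both worlds; the deployment, service, and image names below are purely illustrative:

# Kubernetes: swap in the new image and watch the rolling update.
kubectl set image deployment/myapp myapp=registry.example.com/myapp:2.1
kubectl rollout status deployment/myapp

# Docker Swarm: the equivalent rolling update for a service.
docker service update --image registry.example.com/myapp:2.1 myapp

Both tools replace the containers step by step and, if configured accordingly, keep enough replicas alive that users do not notice the update.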
Persistent Data and Rolling Updates
The last part of this article is reserved for a sensitive topic that, in my experience, many admins find difficult in everyday life – namely, persistent data from applications running in Docker containers. Persistent data also plays an important role in the context of application updates and, depending on the type of setup, either makes life much easier for the administrator or makes it unnecessarily difficult.
Technically, the details are quite clear: The simplest way to bind persistent storage to a Docker container is to use Docker volumes. In its default configuration, Docker creates new volumes as plain files and directories below /var/lib/docker. Created with the correct parameters, a Docker volume is persistent and survives the deletion of the container to which it originally belonged.
Different storage plugins can be used to connect other types of storage to Docker. For example, if you create a volume of the rbd-volume type in Docker, you can place it directly on a connected Ceph cluster (Figure 5). Even with the standard variant of local files, though, distribution updates can be considerably simplified, which I explain in the next section using MariaDB as an example.
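A minimal sketch of the principle – the container, volume, and password values are chosen purely for illustration: the named volume keeps the database files even if the MariaDB container itself is deleted and replaced by a newer one.

# Create a named volume and start MariaDB with its data directory on it.
docker volume create mariadb-data
docker run -d --name mariadb -e MARIADB_ROOT_PASSWORD=secret \
  -v mariadb-data:/var/lib/mysql mariadb:10.6

# Later: throw away the old container and start a newer image on the same data.
docker rm -f mariadb
docker run -d --name mariadb -e MARIADB_ROOT_PASSWORD=secret \
  -v mariadb-data:/var/lib/mysql mariadb:10.11

Whether the new version copes with the old data files is a separate question, of course, but the volume itself takes the persistence worry off your plate.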