Keeping the software in Docker containers up to date
Inner Renewal
Option 2: Fast Shot with Glue
Option 2, which I mention more as a cautionary example, builds directly on option 1: updating containers on the fly and then creating an additional container layer. Docker does offer this option: If you change a running container and want to save the changes permanently, the docker commit command creates a new local container image that can then be used to start a new container.
If you install updates in a container as described and then run docker commit, you have saved the changes permanently in the new image. Theoretically, this also avoids the problem of not being able to restart the service in the container without the container itself terminating: You stop the old container and start the new one, so the downtime is very short.
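As a rough sketch of the procedure (the container name myservice and the image tag patched are purely hypothetical), the sequence looks something like this:

# Install updates inside the running container
docker exec myservice apt-get update
docker exec myservice apt-get -y upgrade
# Persist the modified filesystem as a new local image
docker commit myservice myservice:patched
# Replace the old container with one started from the new image
docker stop myservice && docker rm myservice
docker run -d --name myservice myservice:patched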
From an administrative point of view, however, this scenario is a nightmare, because it makes container management almost completely impossible. An updated single container differs from the other containers, and in the worst case, it will behave differently. The advantage of a uniform container fleet that can be centrally controlled is therefore lost.
This situation gets even worse if you perform the described stunt several times: The result is a chaotic container conglomerate that can no longer be recreated from well-defined sources. The entire process runs totally contrary to the operational concept of containers because it torpedoes the principle of immutable infrastructure (i.e., the principle that the existing setup can be automated and cleanly reproduced from its sources at any time).
Although option 1 can still be used in emergencies, admins are well advised not even to consider option 2 in their wildest dreams.
Prebuilt Containers
Those of you who don't want to build your Docker containers yourselves – as described in option 3 – might find a useful alternative in the Docker containers offered by the large projects. Just as Canonical does for Ubuntu, providers of various programs now commonly publish their own work on the Internet as ready-to-use, executable Docker containers. Although not perfect, because the admin has to trust the authors of the respective software, this variant does save some work compared with doing it yourself.
Importantly, certain quality standards have emerged, and they are given due consideration in most projects. One example is the Docker container offered by the Prometheus project, which only contains the Prometheus service and can be controlled by parameters passed in at startup.
The sources, especially the Dockerfile on which the container is based, are available for download from the Prometheus GitHub directory. In case of doubt, you can download this Dockerfile, examine it, and, if necessary, adapt it to your requirements (Figure 1).
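To illustrate how such a parameter-driven container is used, a Prometheus instance based on the official prom/prometheus image could be started roughly like this (the port mapping and the path to the local configuration file are only example values):

docker run -d --name prometheus \
  -p 9090:9090 \
  -v /srv/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus --config.file=/etc/prometheus/prometheus.yml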
Admins should definitely keep their fingers off container images on Docker Hub whose origin cannot be determined. These infamous black boxes may or may not work, and they are practically impossible to troubleshoot. Anyone who relies on prebuilt containers needs to be sure of the quality of the developers' work before putting their images into production.
This approach has another drawback: Not all software vendors update their Docker images at the speed you might need. Although distributors roll out updates quickly, several days can pass before an updated Docker image is available for the respective applications.
Option 3: Build-Your-Own Container
If you want to be happy with your containers in the long term, you should think about your operating concept at an early stage. For both application and security updates, option 3 is then the ideal approach: building the containers yourself.
Creating a Docker container is not particularly complicated. The docker build command in combination with a Dockerfile is the only requirement. The syntax of the Dockerfile is uncomplicated: Essentially, it contains elementary instructions that tell Docker what the finished container should look like. The Docker guide is useful and explains the most important details [1].
While Docker is processing the Dockerfile, it executes the instructions it contains and creates various image layers, all of which will later be available in read-only mode. Changing the image later is only possible by building a new image or by resorting to the very nasty hack of option 2. When preparing Dockerfiles, you should therefore be very careful (Figure 2).
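A minimal Dockerfile for a self-built web server image might look like the following sketch; the base image tag and the nginx package are just illustrative choices:

# Start from the distribution's official base image
FROM ubuntu:22.04
# Install only the packages the container actually needs
RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*
EXPOSE 80
# Run the service in the foreground as the container's main process
CMD ["nginx", "-g", "daemon off;"]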
Big distributions can help: They usually offer their systems as base images, so if you want to build your own container based on Ubuntu, you don't have to install an Ubuntu base system with debootstrap and then squeeze it into a container. It is sufficient to point to the Ubuntu base image in the Dockerfile, and Docker does the rest on its own.
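Rebuilding the image is then a single command, and because it pulls the current base image and the latest packages from the distribution's repositories, the same step also covers security updates; the tag name here is again just an example:

docker build --pull --no-cache -t example/webserver:rebuild .
docker run -d --name web -p 80:80 example/webserver:rebuild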
Unlike with many images provided by Docker users on Docker Hub, you do not increase your security risk: If you trust a regular Ubuntu installation, you have no reason to mistrust the finished container packaged by Ubuntu – they are the same packages.
Another good reason to rely on base images from distributors is that they are optimized for the disk space they need, so only the necessary packages are included. If you use debootstrap, however, you automatically get many components that cannot be used sensibly in the context of a container. The difference between a basic DIY image and the official Ubuntu image for Docker can quickly run to several hundred megabytes.
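You can check the difference on your own system with docker images, which lists the size of every local image (the name of the self-built image is hypothetical here):

docker images ubuntu:22.04           # official base image
docker images my-debootstrap-base    # self-built image for comparison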