When are Kubernetes and containers not a sensible solution?
Curb Complexity
The Myth of Dependency Hell
The dependency hell that many admins hope to escape by deploying containers is actually a relic of the past (Figure 2). If you only use packages from the distributor, from properly maintained backport repositories, or (quite often) from the application vendors themselves, you will rarely face issues at installation and update time, because Red Hat, SUSE, Debian, and others have done their homework and today even support cross-version upgrades (e.g., from Ubuntu 20.04 to Ubuntu 22.04 at Canonical). If you run, and are familiar with, a defined set of services on your systems, you will reach your objectives far faster with classic packages than with a complete CI/CD environment.
Automation is a Good Thing
Automation is necessary, of course, but classic automation with Ansible and similar tools already covers most scenarios perfectly well. Another advantage that container advocates like to mention is that CI/CD systems offer great automation resources and that systems can be restored quickly if something goes catastrophically wrong. After all, so the story goes, all you need to do is roll out the container, including its configuration, on another system and start it there again.
Unfortunately, this story ignores the fact that comparable setups are perfectly possible without containers, even on bare metal. Physical hardware, for example, can easily be managed by open source software such as Foreman (Figure 3). As the administrator, you then have access to comprehensive life-cycle management features that let you remotely reboot, reinstall, and completely wipe machines. If you can, and want to, commit to a single vendor, Red Hat (Red Hat Satellite), SUSE (SUSE Manager), and Canonical (Landscape) also have boxed products on the shelf that can be rolled out in short order.
Besides, today's administrators have a full arsenal of well-developed, perfectly tested automation tools at their disposal. Whether you go for Puppet, Chef, Salt, or Ansible, all of these tools can handle simple tasks, such as installing a service and rolling out a template-based configuration file quickly – without detouring through CI/CD.
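To illustrate the point, a task like "install a service and roll out a template-based configuration file" takes just a few lines in Ansible. The following is a minimal sketch; the host group, the nginx package, and the nginx.conf.j2 template path are illustrative assumptions, not part of any particular setup:

```yaml
# Hypothetical playbook: install a service and deploy a templated config.
# "webservers", the nginx package, and nginx.conf.j2 are assumptions here.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx from the distribution repositories
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Roll out the template-based configuration file
      ansible.builtin.template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

One `ansible-playbook` run against a freshly installed host brings the service up in a defined state, with no container registry, image build, or CI/CD pipeline anywhere in the loop.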
If a server suffers a hardware failure, a combination of Foreman and Ansible will cleanly reinstall it, along with its required services and their configurations, in a matter of minutes and without any involvement from Docker or Podman. Of course, you have to establish a valid automation story, but even that will be faster and easier than working with Jenkins and the like, in many cases.
More Complicated Handling
Another factor that clearly speaks against containers is something I mentioned briefly at the beginning but that deserves closer consideration: Containers are unfamiliar territory for many sysadmins in their everyday work and are usually more complex to handle, precisely because the administrator is no longer dealing with the services directly but with the runtime environment for containers. Access to the services is only possible by way of this environment.
Your worries start here. If you use Red Hat Enterprise Linux 8 or one of its clones, you will only be able to access the classic Docker engine by roundabout paths and unofficial sources. Not only can this become a problem in terms of security and compliance, it's something that can really get on your nerves, because Red Hat has long since said goodbye to Docker and instead goes its own way with Podman, assuring users that it is fully compatible with Docker at the command line. In everyday life, however, you will quickly discover that the purported compatibility between Podman and Docker falls short in many places.
If you look at the other vendors, the situation is not much better: The Community Edition of Docker for SUSE, Debian, and Ubuntu might still be available unchanged, but Canonical would prefer its own users to rely on Snap (i.e., Canonical's own format). If you manage systems from Red Hat, Debian, and Canonical, you will have to deal with three different runtime environments, two completely different CLIs, and various compatibility problems.
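In practice, admins who script against this mixed landscape end up papering over the CLI split themselves. The snippet below is a minimal sketch of that workaround, assuming nothing beyond a POSIX shell; it only detects which CLI is installed and leaves the real compatibility problems (diverging subcommands and flags) untouched:

```shell
# Hypothetical wrapper: detect which container CLI a host actually ships,
# so scripts do not hard-code "docker" on systems that only offer Podman.
if command -v docker >/dev/null 2>&1; then
  CTR=docker
elif command -v podman >/dev/null 2>&1; then
  CTR=podman
else
  CTR=""   # neither runtime installed
fi
echo "container CLI: ${CTR:-none}"
# Scripts can then call, e.g., "$CTR ps" -- but only for the subcommands
# that Docker and Podman genuinely share.
```

That such glue code is needed at all underlines the point: The supposedly uniform container world adds a layer of per-distribution divergence that classic packages simply do not have.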