When are Kubernetes and containers not a sensible solution?
Curb Complexity
Containers, Kubernetes, Rancher, OpenShift, Docker, Podman – if you have not yet worked in depth with containers and Kubernetes (aka K8s), you might sometimes get the impression that you have to learn a completely new language. Container-based orchestrated fleets with Kubernetes have clearly proven that they are not a flash in the pan, but all of this omnipresent lauding of containers is getting on the nerves of many. Whichever provider portfolio you assess, it seems that the decisive factor for success is answering a simple question: How do I get my entire portfolio cloud-ready as quickly as possible?
The problems and challenges inherent in containers and K8s are too often forgotten. I address this article to all admins who still view the container hype with a healthy pinch of skepticism. In concrete terms, the question is what are the use cases in which containers do not offer a meaningful alternative to existing or new setups. When does it make more sense to avoid the technical overhead of migration because a particular case will (hardly) benefit from migrating anyway?
Ceph as a Negative Example
If you are in an environment far removed from a greenfield, you can face some disadvantages. Ceph, the distributed object storage solution, is one example. If you believe the vendor Red Hat, containerized deployment is now the best way to roll out Ceph. All of the Ceph components come in the form of individual containers, which cephadm then force-fits on systems.
This arrangement is all well and good, but if you are used to working with Ceph and then try to run a simple ceph -w at the command-line interface (CLI) to discover the cluster status, you are in for a nasty surprise: Command not found is what you will see in response. Logically, if all of the Ceph components are in containers, so are Ceph's CLI tools (Figure 1).
The cephadm shell command does let you access a virtual environment where you can run the appropriate commands, but userland software does not benefit. Programs that need the Librados API for low-level access to the RADOS service, or that depend on /etc/ceph/ceph.conf existing and containing the correct addresses for the MON servers, will not work.
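A minimal session sketch of the problem and the workaround (prompts and output abbreviated; the hostname is a placeholder, and the exact invocation can vary between Ceph releases):

```shell
# On a host deployed with cephadm, the Ceph CLI is not in the host's PATH:
$ ceph -w
bash: ceph: command not found

# Workaround: enter the containerized toolbox environment
$ cephadm shell
[ceph: root@host /]# ceph -w

# Or run a single command without an interactive shell
$ cephadm shell -- ceph -s
```

Note that this only helps interactive use; host-side programs linked against Librados still see no usable /etc/ceph/ceph.conf.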
In other words, Red Hat forces you to plumb the depths of containerization, whether you want to or not. Of course, from the manufacturer's point of view, this scenario makes perfect sense. Red Hat only has to maintain one version of Ceph per major release to have executable programs for Red Hat Enterprise Linux in all versions and their derivatives, and even for Ubuntu or SUSE. Wherever a runtime environment for containers is available, these containers will eventually work in an identical way. However, administrators are quite right to resist being told what's best for them.
Containers Are Not Always Needed
Up and down the IT space, vendors promote containers as the universal panacea for all problems. Applications must be cloud-ready; if you still don't rely on containers in your environment, you are – at the very least – behind the times, if not actively blocking innovation. Anyone who hasn't learned the ideal architecture of a Kubernetes cluster by rote must have been asleep for the last 25 years.
One fundamental problem with the entire container debate is that it blurs the distinction between technical strategies that don't necessarily belong together, even if they might complement each other. To begin, you need to distinguish clearly between containerization strategies, as such, and container orchestration.
Applications in containers may well have their uses outside of Kubernetes, and not just from a provider perspective; however, if all the terms and descriptions end up in one big pot and are applied arbitrarily, at the end of the day, no one knows what the debate is actually about. Therefore, this article looks at containerization and Kubernetes separately.
Containers Are Practical
Containers are practical in many ways. Time and time again, advocates of the principle argue that the dependency hell of classic package management is largely a thing of the past with containers. Ultimately, each container doesn't just come with the application itself, but also with its own userland, so it works autonomously. This setup also makes updating easier. If in doubt, the admin simply stops a running container, pulls the new version of the image, starts it with the old configuration and data, and the job is done.
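That update cycle can be sketched with Podman (the image name, volume, and paths here are placeholders; the Docker commands are analogous):

```shell
# Stop and remove the running container; data lives in a named volume
$ podman stop myapp && podman rm myapp

# Pull the new image version (hypothetical image and tag)
$ podman pull docker.io/library/myapp:2.0

# Start the new version with the old configuration and data
$ podman run -d --name myapp \
    -v myapp-data:/var/lib/myapp \
    -v /etc/myapp:/etc/myapp:ro \
    docker.io/library/myapp:2.0
```

Because state and configuration live outside the container, the swap is reversible: if version 2.0 misbehaves, the same run command with the old tag rolls back.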
Containers also offer an initial security advantage over conventionally installed applications. Under the hood, containers are ultimately no more than a combination of various Linux kernel tools that isolate processes from the rest of the system and from each other. If a vulnerable web server is running in a container and an attacker gains unauthorized access, they will be trapped in the container and not normally able to get out. This one additional layer of security, at least, is missing in applications without a container layer.
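The kernel mechanisms behind this isolation can be inspected directly on any Linux system: every process runs inside a set of namespaces, visible under /proc, and a container runtime simply gives its processes fresh namespaces instead of the host's. A quick look (the unshare line is commented out because it may require root or unprivileged user namespaces):

```shell
# Each entry is a namespace (pid, net, mnt, uts, ...) this shell lives in;
# processes inside the same container share these namespace inode numbers.
ls -l /proc/$$/ns

# util-linux's unshare sketches what a runtime does: start a process in
# new namespaces, e.g.:
#   unshare --uts --fork hostname

# The symlink target identifies the namespace, e.g. net:[4026531840]
readlink /proc/$$/ns/net
```

Two processes whose /proc/PID/ns links point to the same inodes see each other; different inodes mean they are isolated, which is exactly the boundary an attacker in a compromised container runs up against.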