Troubleshooting Kubernetes and Docker with a SuperContainer
Super Powers
We're often advised that containers should be as small as possible. Containers are designed for microservices – the decoupling of components into small, manageable applications. And by applications I mean processes. In other words, your containers should run one main process, with a few supporting child processes to assist the parent on its travels.
A smaller container means better security: less code, fewer packages, and a smaller attack surface. From a performance perspective, large, bloated, and unwieldy images lead to slower execution times. Another issue is image registry cost. If developers frequently alter chunky binaries within their builds – binaries that don't benefit from the image layering process, which is designed to store only the differences (diffs) between the last saved version of an image and the newly pushed one – then each subsequent tag update can add another couple of hundred megabytes to the image without much effort. That might sound trivial, but what about hundreds of developers pushing to a registry several times a day? Cloud storage costs soon add up.
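If you want to see where the bulk of an image actually comes from, Docker can report the size contributed by each layer (the image name below is just an example; substitute one from your own registry):

```shell
# Show each layer of an image along with the size it adds
docker history chrisbinnie/supercontainer:latest

# Show the total size of the image as stored locally
docker images chrisbinnie/supercontainer
```

A layer that grows by hundreds of megabytes on every push is usually visible at a glance in the docker history output.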
But what happens when you try to troubleshoot the container using your favorite admin tools? In this article I'm going to help you create what I'll call a SuperContainer. The following approach suits both Kubernetes pods running the Docker run time and, indeed, straightforward Docker containers.
The concept of this powerful container was inspired by the ideas of others. The idea for SuperContainers comes from a well-written blog post by Justin Garrison [1], who in turn attributes the mechanics to the frighteningly clever Justin McCormack [2] of Docker and LinuxKit fame. In this case, I'm using Docker as an example container run time for a SuperContainer. If you use this approach with Kubernetes, a good starting point is to run it directly on the minion (or node), which is running a troublesome container.
The SuperContainer described in this article will be able to access the filesystem, query the process table, and use the network stack of a neighboring container without tainting the target container.
Hopefully you agree that sounds intriguing and highly useful. A SuperContainer can give you full access to your everyday admin tools, without the need to install them inside your troubled container and without affecting your precious services. Bear in mind that SuperContainers are extremely powerful, and you have to use them carefully – you should definitely test them in a sandbox before using them in production (you've been warned!) – and don't leave them lying around when you are finished with them: hackers may take advantage of the admin tools.
Super User
When you are running the smallest containers that you can create, especially in enterprise environments, you're sometimes left in a difficult position if you need to troubleshoot a container without affecting its packages and codebase. Conventional wisdom says you shouldn't edit or add to a running container, because containers are ephemeral and short-lived. However, it is often necessary to debug an issue directly from inside a container to view a problem from the container's perspective.
The docker exec command lets you access a running container, but once you're inside, you might find that your favorite admin tools are missing because of a desire to keep the container's footprint as small as possible. To compound the issue, a command like

$ docker exec -it chrisbinnie-nginx sh

sometimes opens up an old-fashioned shell – not even a Bash shell. Sometimes I don't even find the ps command present to query the process table and check which processes are running.
Don the Cape
To make sure I have the tools I need, I constructed the Dockerfile shown in Listing 1.
Listing 1
SuperContainer Dockerfile
FROM debian:stable
LABEL author=ChrisBinnie
LABEL e-mail=chris@binnie.tld
RUN apt update && \
    apt install -y netcat telnet traceroute libcap-ng-utils curl \
    wget tcpdump ssldump rsync procps fping lsof nmap htop \
    strace net-tools && apt clean
CMD bash
If you look at Listing 1, I'm sure you'll spot all the usual suspects. Look out for a couple of details, too, such as my preference for Debian (even though Alpine would be sleeker) and the procps package, which provides access to the often-taken-for-granted ps command.
Also, if you look at the RUN command in the Dockerfile in Listing 1, I've kept the container's layers to a minimum by chaining the steps into a single instruction: the apt update on one line, the package installation on the next, and a double ampersand (&&) before the apt clean. Running everything in one RUN instruction avoids creating a separate layer for each step.
It should go without saying that you can add a heap of stuff to that Dockerfile if you feel the need; once built, the image from Listing 1 weighed in at a whopping 276MB the last time I checked. Simply prune it to your needs for quicker pulls and build times.
Save the Day
Inside the directory containing your Dockerfile, you can build the image from Listing 1 super-simply with the following command:
$ docker build -t supercontainer .
Next, tag and push the newly created image to a registry so that it is accessible later. My remote registry's repository is called chrisbinnie, so I use the following commands:

$ docker tag supercontainer chrisbinnie/supercontainer:latest
$ docker login
username: chrisbinnie
password: not-telling-you
$ docker push chrisbinnie/supercontainer:latest
You now have a troubleshooting SuperContainer (as long as you have network access to run a docker pull from the registry). The next step is to put that SuperContainer to good use.
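As a sketch of what that next step can look like: Docker lets a new container join another container's PID and network namespaces, which is the mechanism a SuperContainer relies on. The target container name chrisbinnie-nginx below is just an example; substitute your own troublesome container:

```shell
# Attach the SuperContainer to the target container's process
# and network namespaces; --rm cleans it up when you exit
docker run -it --rm \
    --pid=container:chrisbinnie-nginx \
    --net=container:chrisbinnie-nginx \
    --cap-add sys_ptrace \
    chrisbinnie/supercontainer:latest

# Inside the SuperContainer you can then, for example:
#   ps aux           # see the target container's processes
#   ss -tlnp         # inspect its listening sockets
#   ls /proc/1/root  # browse the target's filesystem via the
#                    # shared PID namespace (PID 1 is the
#                    # target's main process)
```

Because the target container is never modified, your admin session leaves no trace in its filesystem or package database. The sys_ptrace capability is added here because tools such as strace, and access to another process's /proc/<pid>/root, may be blocked without it; drop it if your tools don't need it.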