Isolate workloads from Docker and Kubernetes with Kata Containers
Sealed Off
Docker Tuning
Now that the prerequisites have been met, the container framework needs to be equipped with the new run time. Of course, the classic Docker first has to be installed. Listing 1 shows how to install the software on Ubuntu 18.04 LTS from the vendor repositories. The script should be called by a normal user, who uses sudo later on to escalate privileges for the install.
Listing 1
Installing Docker
#!/bin/sh echo "Configure Docker repo ..." sudo -E apt-get -y install apt-transport-https ca-certificates wget software-properties-common curl -sL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - arch=$(dpkg --print-architecture) sudo -E add-apt-repository "deb [arch=${arch}] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" sudo -E apt-get update echo "Install Docker ... "; sudo -E apt-get -y install docker-ce echo "Enable user 'ubuntu' to use Docker ... "; sudo usermod -aG docker ubuntu
Conscientious admins will want to compare the GPG checksum output from the first curl command with the reference from the Docker website. The script grants the ubuntu user the right to use Docker from that point on, which indirectly corresponds to assigning root privileges. If you want other users to be able to use Docker, they also need to be added to the docker group; then, the users have to log off and back on again or update their group memberships with newgrp docker.
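If you want to double-check the key and grant access to additional users, a minimal sketch along these lines works (the user alice is a hypothetical example; 0EBFCD88 is the key ID documented on the Docker website at the time of writing):

# Compare the fingerprint of the imported key with Docker's published one:
sudo apt-key fingerprint 0EBFCD88
# Grant another (hypothetical) user access to the Docker daemon:
sudo usermod -aG docker alice
# Pick up the new group membership without logging out:
newgrp docker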
If docker info does not report any errors, the container software is installed with the classic runC run time. Kata Containers can be installed in a similar way: The major distributions have installation packages, available from SUSE's build service. Listing 2 shows the installation steps. Again, you need to compare the GPG signature with the information on the Kata website to be on the safe side.
Listing 2
Installing Kata Containers
#!/bin/sh echo "Configure Kata repo ..." sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/release/xUbuntu_$(lsb_release -rs)/ /'>/etc/apt/sources.list.d/kata-containers.list" curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/release/xUbuntu_$(lsb_release -rs)/Release.key | sudo apt-key add - sudo -E apt-get update echo "Install Kata components ..." sudo -E apt-get -y install kata-runtime kata-proxy kata-shim echo "Create new systemd unit ..." sudo mkdir -p /etc/systemd/system/docker.service.d/ cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf [Service] ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime EOF echo "Restart Docker with new OCI driver ..." sudo systemctl daemon-reload sudo systemctl restart docker
Kata Containers comes in three packages: kata-runtime, kata-proxy, and kata-shim (Figure 1). The first handles communication with the container manager; kata-proxy translates commands coming from Docker so that the VM also understands them; and kata-shim is an intermediate layer that is responsible for many of the convenience features built into Docker, including, for example, managing the standard output as a logging mechanism or passing on signals (e.g., docker stop or docker kill).
Everything necessary for the use of Kata Containers is now present, and only the Docker subsystem has to be restarted, which can be done quite brutally by rebooting or in a targeted way via the system daemon (i.e., systemd in almost all current distributions). The lines in Listing 2 after [Service] add a user-defined systemd drop-in that overrides the default run time with kata-runtime. The script then restarts the Docker unit in its last two lines.
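As an alternative to the drop-in unit, the run time can also be registered in Docker's daemon.json; a minimal sketch, assuming no other settings live in that file yet (otherwise merge rather than overwrite, and do not combine it with the --default-runtime flag from the drop-in, because Docker refuses conflicting settings):

# Register kata-runtime and make it the default run time:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "default-runtime": "kata-runtime",
  "runtimes": {
    "kata-runtime": { "path": "/usr/bin/kata-runtime" }
  }
}
EOF
sudo systemctl restart docker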
To check whether the new run time is set up, simply enter:
# docker info | grep Runtime
Runtimes: kata-runtime runc
Default Runtime: kata-runtime
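The kata-runtime binary also brings along a self-test that checks whether the host offers the required virtualization support (KVM, VT-x/AMD-V); a quick sanity check looks like this:

# Verify that the host can run Kata Containers at all:
sudo kata-runtime kata-check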
If you now start a new container (e.g., with docker run -it ubuntu), it is running in a Kata container. If you want to start a container with the classic runC run time again,
docker run -it --runtime runc ubuntu
is all it takes.
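To see the three components described above at work, start a container and look for the accompanying processes on the host; a rough sketch (the container name kata-test is arbitrary):

# Start a detached test container with the Kata run time:
docker run -d --name kata-test --runtime kata-runtime nginx
# Each Kata container is accompanied by a lightweight VM (qemu),
# a kata-proxy, and a kata-shim process on the host:
ps -ef | grep -E 'qemu|kata-proxy|kata-shim' | grep -v grep
# Remove the test container again:
docker rm -f kata-test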
When the runC driver starts a new container, it prepares a directory from the image, creates an overlay filesystem, creates a new process, and applies new namespaces. The new container then starts up.
Kata, on the other hand, boots a separate kernel for each container. For this purpose, the installation packages contain a kernel and a VM image, both of which are reduced to the max and ultimately start the kata-agent process in the new VM, which in turn uses CMD or ENTRYPOINT to run the desired command defined in the image.
Performance Comparison
Startup performance is remarkably good. When I called dmesg in a Kata container while writing this article, kernel version 4.14.51 had started in significantly less than one second.
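An easy way to confirm that each container really brings along its own kernel is to compare the kernel versions inside and outside; something along these lines:

# Kernel of the host system:
uname -r
# Kernel of a throwaway Kata container (the minimal guest kernel):
docker run --rm --runtime kata-runtime ubuntu uname -r
# Last boot messages of the guest kernel, including timestamps:
docker run --rm --runtime kata-runtime ubuntu dmesg | tail -n 5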
I ran the test on the bare metal flavor physical.o2.medium on the Open Telekom Cloud, which provides two 8-core Broadwell-EP Xeon E5-2667 v4 CPUs at 3.2GHz and 256GB of RAM. This quite decent server can manage a large number of containers, which reduces rounding effects during measurement.
Important system parameters are memory requirements and CPU time. To compare how Kata Containers performs against the runC run time, the script in Listing 3 starts 100 containers from the nginx image and stores a short, individual static file in each, containing the number of the respective container. The script measures the time required for this step. For the results to be meaningful, the nginx image should already be in the local cache, so a test run before the actual measurement is recommended to warm up both the image cache and the Linux caches.
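Warming up can be as simple as pulling the image and launching one throwaway container before the measurement, for example:

# Pull the image so the benchmark does not measure download time:
docker pull nginx
# One throwaway run to warm up the image and filesystem caches:
docker run --rm nginx nginx -v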
Listing 3
Benchmark
#!/bin/bash

N=100

time for i in $(seq 1 $N); do
  CID=$(docker run --name server-$i -d nginx)
  docker exec server-$i /bin/sh -c "echo I am number $i > /usr/share/nginx/html/index.html"
done

# Check every container once:
time for i in $(seq 1 $N); do
  IP=$(docker inspect --format '{{.NetworkSettings.IPAddress}}' server-$i)
  curl http://${IP}/ &
done
After starting the containers, the script retrieves the stored file from each instance over HTTP to prove that the services in the containers really are available; this step is timed as well. If you add both measured values and divide them by the number of containers, you get an average provisioning time per web server in one container.
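A quick back-of-the-envelope calculation with hypothetical totals illustrates the math (the numbers are examples chosen to match the Kata average reported below):

# E.g., 100 seconds for starting plus 60 seconds for checking,
# divided by 100 containers, gives 1.6 seconds per instance:
awk 'BEGIN { printf "%.1f s per container\n", (100 + 60) / 100 }'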
For the Kata Containers, this was 1.6 seconds. The classic runC run time needs 0.6 seconds for the same task, and this value remains stable even if you vary the number of passes configured in the script with the N variable.
Upper limits for N are about 1,000 containers, because more than 1,024 IP addresses will not work with the virtual bridge interface that Docker uses. In the case of Kata Containers, 700 instances are the end of the line, because their lightweight VMs with 160MB of RAM have a far larger main memory requirement than a normal Linux process; no wonder, since each Kata container starts its own kernel.
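If necessary, the memory footprint of the guest VMs can be tuned; a sketch of where to look, assuming the default package paths (they may differ between versions):

# Default VM size of the Kata guests (default_memory, in MB):
grep -n "default_memory" /usr/share/defaults/kata-containers/configuration.toml
# Per-container memory limits still work the usual Docker way, e.g.:
docker run --rm -m 2G --runtime kata-runtime ubuntu head -n 1 /proc/meminfo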
If you want to recreate such a test yourself, you should do so on a dedicated system, because it requires considerable resources. On the test machine, however, neither the load nor the response behavior was a problem. Afterward, the containers are best discarded with a similar script.
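Such a cleanup run could look like this minimal sketch, mirroring the naming scheme of Listing 3:

#!/bin/bash
# Remove the benchmark containers created by Listing 3:
N=100
for i in $(seq 1 $N); do
  docker rm -f server-$i
done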
Special Cases
Installing Kata Containers is easy. Even if you already have a Docker environment in use, you can simply extend it with the additional run time and select it as required with the --runtime option. If you change the default permanently, all new containers automatically run in VMs.
Whether this makes sense for usual operations is a different matter. The additional memory requirement is noticeable, but the slightly longer startup time will only have an effect in very highly frequented microservice architectures. From a security point of view, VM isolation of course offers a completely different level of protection from that provided by namespaces and the like.
Kata Containers in the Docker framework is a good alternative in cases where an additional isolation layer has to be introduced, which can be useful, for example, when processing particularly sensitive personal or business data.
Infos
- Hypervisor-based run time for OCI: https://github.com/hyperhq/runv
- Intel Clear Containers: https://github.com/clearcontainers/runtime
- Kata Containers: https://katacontainers.io/
- Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/