Alternative container runtimes thanks to the Open Container Initiative
Power Runners
Even though containers in the Linux environment are not new and have been available in the form of Linux Containers (LXC) for more than a decade, the big hype about containers only started with the release of Docker in 2013. Docker was the first comprehensive solution for operating application containers. The engine was implemented as a monolithic API daemon with many tasks. Its primary job, of course, is to start and stop containers, but its scope also includes managing the images needed to operate the containers.
Cryptographic verification of container images was added to the list of the Docker engine's tasks in version 1.10. Because containers have an IP address, the daemon needs its own network segment from which IP addresses can be assigned to the containers. If you want to manage the containers and images with the docker command-line tool, you also have to communicate with the Docker daemon. If the service is not available at some point, the requests do not receive responses.
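How tightly the tooling is coupled to the daemon is easy to demonstrate programmatically. The following minimal Go sketch uses the Docker Engine's Go SDK (the github.com/docker/docker/client package; the import paths and option names reflect current upstream, not the 2013-era engine) to list all containers, much as docker ps -a would. If the daemon is down, the call simply fails:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// The SDK reads DOCKER_HOST and friends from the environment,
	// just like the docker command-line tool does.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	// Every action is an API request to the Docker daemon. If the
	// daemon is not running, this call fails instead of returning data.
	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
	if err != nil {
		log.Fatal(err) // e.g., "Cannot connect to the Docker daemon"
	}
	for _, c := range containers {
		fmt.Println(c.ID[:12], c.Image, c.Status)
	}
}
```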
All this already shows the big problem with the initial implementations of the Docker engine. It couldn't do anything without the Docker service. Even if this central approach might still seem sensible for the deployment of containers, it no longer meets modern requirements, in which issues such as process isolation or privilege separation also play a major role.
The problem was identified at an early stage, and various projects have been developed to establish alternative container runtimes. Docker itself has also gone through a certain development process, driven by the Open Container Initiative (OCI) [1] under the auspices of the Linux Foundation. This is a merger of well-known companies from the container environment, including Docker Inc., the company behind Docker. OCI has developed two specifications that define the exact tasks of the container runtime [2] and the format of the container images [3].
The OCI specifications therefore define which tasks the runtime must perform, how the container images are structured, and how they are made available to a container as a back-end store. The advantage of conforming to these specifications is that OCI-compliant images can be used by different runtimes. To stick with the example of Docker: The containerd daemon has been around since version 1.11, and it is responsible for providing the desired image to start a container. Docker now uses a runtime named runC to run the container. This prototype of an OCI-compliant container runtime [4] was developed by Docker; however, one major problem has still not been solved by the introduction of the OCI specifications: the integration of container runtimes into the Kubernetes orchestration framework. Thus, the well-known Docker service still exists in current Docker versions and communicates with containerd through a gRPC interface to trigger the desired actions, such as starting or stopping a container. Kubernetes, however, has its own service for this task, known as kubelet, which runs on every Kubernetes cluster node and starts the desired container if required.
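What this gRPC communication looks like from a client's perspective can be sketched with containerd's own Go client library (github.com/containerd/containerd; the package path and options shown here reflect current upstream and are my illustration, not part of the Docker setup described above). Every call the client makes, such as pulling an image, travels over the local containerd socket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// containerd listens on a local socket; every request is a gRPC call.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd scopes all resources (images, containers) to a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an OCI image; containerd stores and unpacks it for later use.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name())
}
```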
Docker has been integrated into Kubernetes for a long time now and was once the only container runtime supported by the orchestration framework. It was later followed by the rkt runtime developed by CoreOS, which served as one of the first alternatives to Docker. However, the integration of new runtimes into the Kubernetes framework always required a great deal of effort, because no uniform interface existed for a long time.
New Runtime Interface in Kubernetes
To solve this problem, the Container Runtime Interface (CRI) was released in parallel with Kubernetes v1.5. CRI is a unified plugin interface that container runtime developers can use to integrate with Kubernetes. This interface allows the Kubernetes kubelet service to use a CRI shim (actually, a simple gRPC server) to send instructions to the runtime (Figure 1). Thanks to the CRI shim, existing runtimes do not necessarily have to be completely redeveloped but can be adapted to the CRI requirements with an additional component.
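To get a feel for the kubelet's side of this conversation, the following Go sketch dials a CRI socket and issues the Version() call with which the kubelet identifies a runtime. It uses the generated CRI bindings from k8s.io/cri-api, a module that was split out of Kubernetes well after this article's time frame, so treat it as an illustration under current APIs; the socket path shown is CRI-O's default and may differ on your system:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// A CRI runtime listens on a local Unix socket; the kubelet is
	// simply a gRPC client on the other end.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Version is the handshake call the kubelet uses to identify the runtime.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```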
The first CRI-compatible runtime, CRI-O [5], saw the light of day in 2016. The crio daemon provides a gRPC server for communication with the kubelet service. The daemon also manages the container images with the use of two libraries: containers/image [6] and containers/storage [7]. These two Go libraries are also used by skopeo [8], a tool used on the Atomic host platform to work with container images that makes it possible, for example, to examine an image more closely without first having to load it from the registry.
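This division of labor becomes tangible if you use containers/image directly. The following Go sketch inspects an image's metadata straight from the registry without downloading its layers, much as skopeo inspect does (the v5 import paths are today's upstream layout, not necessarily what 2016-era code used):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containers/image/v5/docker"
	"github.com/containers/image/v5/types"
)

func main() {
	ctx := context.Background()
	sys := &types.SystemContext{}

	// The same trick skopeo uses: fetch the image's manifest and
	// config from the registry without pulling any layers.
	ref, err := docker.ParseReference("//docker.io/library/alpine:latest")
	if err != nil {
		log.Fatal(err)
	}
	img, err := ref.NewImage(ctx, sys)
	if err != nil {
		log.Fatal(err)
	}
	defer img.Close()

	info, err := img.Inspect(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("architecture:", info.Architecture)
	fmt.Println("created:", info.Created)
	for _, layer := range info.Layers {
		fmt.Println("layer:", layer)
	}
}
```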
If a container is to be started with the help of CRI-O, an OCI-compliant image must first be loaded from a registry to create a root filesystem for the container. With the OCI runtime tools [9], a JSON file is then created that describes the start procedure for the container (a minimal example follows this paragraph). Finally, the crio daemon calls the desired runtime to start the container. Although all OCI-compliant runtimes are supported in principle, tests are only performed against two runtimes: runC and Clear Containers [10]. Clear Containers is primarily developed by Intel and relies on virtual machines to operate containers. For network configuration, CRI-O uses the Container Network Interface (CNI) [11]. Another tool for managing containers and images is kpod. Unlike the docker command-line tool, kpod does not require an active daemon; instead, it uses the same components as the crio daemon.
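To give an idea of what such a JSON file contains, this Go sketch uses the official runtime-spec bindings (github.com/opencontainers/runtime-spec/specs-go) to emit a deliberately minimal config.json. The paths and command are placeholder values, and a real file generated by the OCI runtime tools carries many more fields (namespaces, mounts, capabilities):

```go
package main

import (
	"encoding/json"
	"log"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// A minimal OCI runtime configuration, similar in spirit to what
	// the OCI runtime tools generate as config.json.
	spec := specs.Spec{
		Version: specs.Version, // the runtime-spec version this config conforms to
		Root: &specs.Root{
			Path:     "rootfs", // the unpacked image filesystem
			Readonly: true,
		},
		Process: &specs.Process{
			Terminal: false,
			Cwd:      "/",
			Args:     []string{"/bin/sh", "-c", "echo hello from the container"},
		},
		Hostname: "demo",
	}

	// An OCI runtime such as runC consumes this document as config.json.
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(&spec); err != nil {
		log.Fatal(err)
	}
}
```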
Conclusions
Today, modern container runtimes have a modular structure. Thanks to the OCI specifications, basic standards are now available that developers of new runtimes must observe. If integration into Kubernetes is required, the framework is connected through the CRI interface. Which runtime is actually used on the individual nodes is then irrelevant, as long as it complies with the OCI specifications.
If you would rather try CRI-O instead of Docker in your Kubernetes cluster, you will find detailed instructions online [12]. In addition to CRI-O, another CRI-compliant implementation with an OCI runtime is now available in the form of cri-containerd [13]. An initial alpha version appeared shortly after I finished writing this article [and before press time, had transitioned from a standalone binary to a plugin within containerd -ed].
Infos
- [1] Open Container Initiative: https://www.opencontainers.org
- [2] OCI runtime specification: https://github.com/opencontainers/runtime-spec
- [3] OCI image specification: https://github.com/opencontainers/image-spec
- [4] runC container runtime: https://github.com/opencontainers/runc
- [5] CRI-O: https://cri-o.io
- [6] Container image library: https://github.com/containers/image
- [7] Container storage library: https://github.com/containers/storage
- [8] skopeo: https://github.com/projectatomic/skopeo
- [9] OCI runtime tools: https://github.com/opencontainers/runtime-tools
- [10] Intel Clear Containers: https://clearlinux.org/containers
- [11] CNI networking: https://github.com/containernetworking/cni
- [12] CRI-O in Kubernetes: https://github.com/kubernetes-incubator/cri-o/blob/master/kubernetes.md
- [13] CRI-compliant containerd runtime: https://github.com/containerd/cri