CRI-O and Kubernetes Security
Critical Tool
Container runtimes continue to evolve at a fast rate, and Red Hat has shifted their focus toward their own Podman container engine and away from Docker. Red Hat's open source contributions to Kubernetes also mean that CRI-O has been adopted by their OpenShift enterprise platform as the runtime, and Red Hat Enterprise Linux will offer Podman for user interaction with containers in the future.
Although Docker continues to be used as the near-universal container runtime, the popular Kubernetes container orchestrator adopted CRI-O as a lightweight runtime alternative a number of versions ago. CRI-O, an implementation of the Kubernetes Container Runtime Interface (CRI) that uses OCI-conformant runtimes, began life at Red Hat in 2016 and was contributed to the Cloud Native Computing Foundation in 2019. Red Hat described the runtime when it was launched as benefiting from a "narrow focus [that] drives stability, performance and security features down the stack, allowing the cloud native ecosystem to reliably focus at the Kubernetes layer and above" [1].
Because CRI-O is likely to be entrenched in Kubernetes for the foreseeable future, learning about this young but critical component is a useful exercise. For further information on alternative Kubernetes runtimes, you can refer to the container runtimes document on the Kubernetes website [2].
In this article, I look at CRI-O in more detail and how the use of a troubleshooting tool in combination with an unexpected debugging companion enables developers to access Kubernetes resources without having to make changes directly to the Kubernetes cluster itself.
Jigsaw Puzzle
The precise definition of a container runtime is surrounded by confusion and ambiguity. The CRI-O runtime [3] is an Open Container Initiative (OCI)-compliant container runtime. Both runC and Kata Containers are currently supported as the lower level OCI runtimes that CRI-O invokes to provide the functionality required to run Pods, and both components are often referred to as runtimes in their own right. One of the key benefits of CRI-O for Kubernetes is that its development cycle is tied closely to both the major and minor releases of Kubernetes itself, which makes upgrade compatibility much cleaner.
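To give a flavor of how those lower level runtimes plug in, here is a minimal sketch of the runtimes section of CRI-O's configuration file (commonly /etc/crio/crio.conf); the binary paths shown are assumptions for illustration and will vary by distribution:

$ cat /etc/crio/crio.conf
# ...abbreviated; only the runtime registration is shown...
[crio.runtime]
default_runtime = "runc"

[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"

[crio.runtime.runtimes.kata]
runtime_path = "/usr/bin/kata-runtime"
runtime_type = "oci"

Registering more than one runtime in this way is what lets a single CRI-O installation hand Pods to either runC or Kata Containers.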
The components that comprise CRI-O include:
- OCI-compatible runtime
- Storage
- Image
- Networking
- Container monitoring
All have their own GitHub repositories [4]. The CRI-O site has more information on these repositories, as well.
CRI-O is kept deliberately minimal in size, so you cannot interact with it directly in the same way you might with Docker. Instead, a number of applications (Table 1) interface with CRI-O to build images, for example, or run Pods.
Table 1
CRI-O Container Tools
| Tool | Capability |
|---|---|
| runC | Run containers. You might be familiar with this binary because it is used in many container runtimes and usually is the last component in the runtime chain. |
| Podman | Stop, start, attach, enter, and run containers. Daemonless by design, no container engine is needed to use Podman to run Pods or containers. |
| Buildah | Build, push, and sign images. |
| Skopeo | Copy, remove, inspect, or sign images. |
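To give a sense of how those tools divide the labor, here are a few illustrative one-liners (a sketch, assuming the tools are installed; the image and tag names are placeholders):

$ podman run --rm -it docker.io/library/alpine sh           # run a container without a daemon
$ buildah bud -t example/myimage:latest .                   # build an image from a Dockerfile
$ skopeo inspect docker://docker.io/library/alpine:latest   # inspect a remote image without pulling it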
Red Hat resources [5] [6] can help you understand fully how CRI-O is used within Kubernetes and OpenShift. Note the useful graphics that explain the difference between a user interacting with containers directly and Kubernetes introducing CRI-O into the mix; look out for the ubiquitous runC in both examples. The "Standing Alone" box explains how to install CRI-O as a standalone installation.
Standing Alone
CRI-O is pretty easy to install as a standalone test installation (or with minikube [7] instead of a full Kubernetes cluster). My installation was v1.17 on Ubuntu 18.04 (with Linux Mint 19 on top):
$ CRIO_VERSION=1.17
$ . /etc/os-release
Because I'm using Linux Mint and avoiding Ubuntu naming clashes, I manually add a line to the /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list file:
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/ /
Next, you need to trust the package repository just added to the Apt package manager:
$ wget https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/Release.key -O- | sudo apt-key add -
The OK response is all you need to know that it has been added successfully. Next, update your packages and install the CRI-O package (abbreviated output follows):
$ apt update
$ apt install cri-o-${CRIO_VERSION}
The following NEW packages will be installed
  cri-o-1.17
0 to upgrade, 1 to newly install, 0 to remove and 0 not to upgrade.
Need to get 17.3 MB of archives.
After this operation, 86.0 MB of additional disk space will be used.
As the output demonstrates, you are only adding 86MB of disk footprint when installing CRI-O. As mentioned, however, you are not going to get much mileage out of the runtime on its own. The status command reveals that it's not running:
$ crio-status info
Get http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info: dial unix /var/run/crio/crio.sock: connect: no such file or directory
Also, systemd shows the service is disabled:
$ systemctl status -l crio
* crio.service - Container Runtime Interface for OCI (CRI-O)
   Loaded: loaded (/usr/lib/systemd/system/crio.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: https://github.com/cri-o/cri-o
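If you want to bring the standalone daemon to life at this point (it is not needed for the rest of this article), enabling and starting the systemd unit should be all it takes; a quick sketch:

$ sudo systemctl enable --now crio
$ crio-status info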
The crio --help command output describes the implementation:
$ crio --help
crio is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of crio is tied to the scope of the CRI.
1. Support multiple image formats including the existing Docker and OCI image formats.
2. Support for multiple means to download images including trust & image verification.
3. Container image management (managing image layers, overlay filesystems, etc).
4. Container process lifecycle management.
5. Monitoring and logging required to satisfy the CRI.
6. Resource isolation as required by the CRI.
Cybernetics
Instead of continuing with CRI-O locally, I will look at it in action in an exceedingly sophisticated browser-based laboratory environment that saves you from creating a potentially complex Kubernetes cluster. Once you have made sure that your environment is running, you can use the helpful crictl tool to test CRI-O Pods running in the cluster and get a better idea of how to debug them.
The lab cluster comes courtesy of the clever Katacoda website [8] run by O'Reilly. You might need to add an email address and password to access the Getting Started with Kubeadm Crio scenario. Do so now, if required. The scenario is described as offering the ability to "Learn how to deploy a CRI-O based Kubeadm cluster."
To proceed, click the Start Scenario button on the first screen and follow the prompts throughout the tutorial to spin up a Kubernetes cluster with Kubeadm. Then, you can query the CRI-O runtime directly with the clever crictl tool.
Figure 1 is the first screen asking for commands to execute in the lab. Katacoda offers a genuinely exceptional interface in which to run tests (and, by design, to learn from). Not only can you type into the command-line interface (CLI) in the right window, but you can also click the suggested commands in the tutorial in the left window to execute them without having to cut and paste them into the CLI.
In the left pane, the note under Task explains that you need to restart the CRI-O runtime, to work around a bug, with the command:
$ systemctl restart crio
After you click that command or enter it manually in the CLI, the next task, again courtesy of Katacoda's documentation, is to initialize the cluster to get started:
$ kubeadm init --cri-socket=/var/run/crio/crio.sock --kubernetes-version $(kubeadm version -o short)
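The tail of a typical kubeadm init run asks you to run three commands to give kubectl access to the cluster. They usually look something like the following sketch, but copy the exact lines from your own output:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config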
Some comments appear in the screen output from the cluster initialization. You are interested in using the debugging tool, but to interact with the cluster in a sane way, first make sure you enter those three commands so you can access the cluster properly. To see if any Docker containers are running, enter:
$ docker ps
As suspected, no containers are visible to Docker Engine. Now, try the crictl help command to see what you can do with the tool (Listing 1). If you require debugging output, add -D to the command. Before continuing, make sure you can access the running Pods in the cluster with the command:
$ kubectl get pods --all-namespaces
Listing 1
Abbreviated crictl Help Output
COMMANDS:
   attach         Attach to a running container
   create         Create a new container
   exec           Run a command in a running container
   version        Display runtime version information
   images         List images
   inspect        Display the status of a container
   inspecti       Return the status of an image
   inspectp       Display the status of a pod sandbox
   logs           Fetch the logs of a container
   port-forward   Forward local port to a pod sandbox
   ps             List containers
   pull           Pull an image from a registry
   runp           Run a new pod sandbox
   rm             Remove a container
   rmi            Remove an image
   rmp            Remove a pod sandbox
   pods           List pod sandboxes
   start          Start a created container
   info           Display information of the container runtime
   stop           Stop a running container
   stopp          Stop a running pod sandbox
   update         Update a running container
   config         Get and set crictl options
   stats          List container(s) resource usage statistics
   completion     Output bash shell completion code
   help, h        Show a list of commands or help for one command
You should be able to see Pods running for all the usual suspects: Etcd to store your configuration, the API server with which kubectl interacts, the controller, the proxy, the DNS, and the scheduler that allocates Pods to nodes with spare capacity when they are spun up.
The lab cluster looks happy, so you can now proceed with your troubleshooting tool: check the CRI-O configuration settings in this file,
$ cat /etc/crictl.yaml
runtime-endpoint: /var/run/crio/crio.sock
and perform the simple query command to check for Pods (Figure 2):
$ crictl pods
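The output will look something like the following illustrative sketch (your Pod IDs, names, and timings will differ):

POD ID          CREATED         STATE           NAME                      NAMESPACE     ATTEMPT
37366ad82a73d   2 minutes ago   SANDBOX_READY   kube-proxy-abcde          kube-system   0
a9cf54601fd3b   2 minutes ago   SANDBOX_READY   kube-apiserver-master01   kube-system   0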
In the STATE column, you can see that each Pod is shown as SANDBOX_READY. A really useful GitHub crib sheet [9] can assist with other commands if you want to learn more. The page shows how to list and inspect images with the crictl images command. The relatively familiar Dockeresque output lists the name, tag, and hash ID.
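For example, you can expect something along these lines (an illustrative sketch; the exact images depend on your cluster version):

$ crictl images
IMAGE                     TAG       IMAGE ID        SIZE
k8s.gcr.io/kube-proxy     v1.14.0   5cd54e388abaf   83.7MB
k8s.gcr.io/pause          3.1       da86e6ba6ca19   747kB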
To display any running containers (not just Pods), the usual command applies Docker-style by using ps rather than the pods subcommand (see Listing 1):
$ crictl ps
Appending -a at the end of the command will show stopped containers, as well. You can see how the tool is configured with the info command (Listing 2).
Listing 2
crictl info
$ crictl info
{
  "status": {
    "conditions": [
      {
        "type": "RuntimeReady",
        "status": true,
        "reason": "",
        "message": ""
      },
      {
        "type": "NetworkReady",
        "status": true,
        "reason": "",
        "message": ""
      }
    ]
  }
}
To check the current version, use:
$ crictl version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.9.10-dev
RuntimeApiVersion: v1alpha1
Back with Katacoda, carry on through the learning scenario by clicking the commands in the left window to set up your Kubernetes cluster. However, on page 4 of 6, I found a glitch in the docs. (I reported it to Katacoda like a good citizen, so it may be fixed by the time you read this.) The command you should run is:
$ kubectl apply -f /opt/weave-kube
In other words, you should not use /opt/weave-kube.yaml, as was originally suggested. Running the correct command sets up Weave networking [10] to complete the cluster build.
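To confirm that the networking Pods come up before you move on, a quick check along these lines should suffice:

$ kubectl get pods --all-namespaces | grep weave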
Now you want to create a sandbox Pod, starting with an example similar to the one shown in the Kubernetes documentation [11] (Listing 3).
Listing 3
Sample Sandbox Pod Config
{
  "metadata": {
    "name": "debug-sandbox-pod",
    "namespace": "default",
    "attempt": 1,
    "uid": "hdishd83djaidwnduwk28bcsb"
  },
  "logDirectory": "/tmp",
  "linux": {
  }
}
By saving the content shown in Listing 3 in a file called debug.json, you can now run the command to create a sandbox Pod, so you can test from inside the cluster with that Pod:

$ crictl runp debug.json
4ecc5504f0453fa8dbd3f5990de980fa67d56ef629063a6ab4d47de05a2905a7
The hash ID of the sandbox Pod is returned, denoting success. Note that eventually Kubernetes will apparently delete this Pod after some housekeeping.
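If you want to double-check the sandbox before Kubernetes tidies it away, the inspectp subcommand from Listing 1 returns its status as JSON (the ID prefix here comes from the runp output above; yours will differ):

$ crictl inspectp 4ecc5504f0453f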
One advantage of crictl when troubleshooting is that you can even query a container with the runc companion tool when CRI-O is stopped:
$ runc ps 4ecc5504f0453fa8dbd3f5990de980fa67d56ef629063a6ab4d47de05a2905a7
UID    PID    PPID    C    STIME    TTY    TIME    CMD
The runc runtime command lists the running processes inside the sandbox container and can be used on other containers, as well.
A Red Hat CRI-O guide [12] encourages you to switch off the crio systemd service to see if you can still query potential Pod issues:
$ systemctl stop crio
When you check the service's status (Listing 4), you can see the service is "dead" or "inactive," just as it was when I installed it locally and didn't start it up.
Listing 4
systemctl status crio
$ systemctl status crio
* crio.service - Open Container Initiative Daemon
   Loaded: loaded (/usr/local/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Sun 2020-09-20 17:01:22 UTC; 14s ago
     Docs: https://github.com/kubernetes-incubator/cri-o
  Process: 4786 ExecStart=/usr/local/bin/crio $CRIO_STORAGE_OPTIONS $CRIO_NETWORK_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 4786 (code=exited, status=0/SUCCESS)
Now you can query another Pod, even without the CRI-O runtime being active. Scroll back up Katacoda's terminal history, choose another Pod (e.g., this time the kube-proxy Pod), and paste its hash into the command (Listing 5). You have confirmed that CRI-O is unavailable. To check the runc perspective, enter the command in Listing 6.
Listing 5
crictl ps
$ crictl ps | grep 37366ad82a73d
2020/09/20 17:09:58 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/crio/crio.sock: connect: no such file or directory"; Reconnecting to {/var/run/crio/crio.sock <nil>}
FATA[0000] listing containers failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Listing 6
runc list
$ runc list | grep 37366ad82a73d
37366ad82a73d4158b1c7f99762b185ee1296f1aebd30861ca790401fe4c281a  5643  running  /run/containers/storage/overlay-containers/37366ad82a73d4158b1c7f99762b185ee1296f1aebd30861ca790401fe4c281a/userdata  2020-09-20T16:27:27.618813008Z  root
Now that you've identified the storage location of your kube-proxy Pod from the output (the userdata directory), you can interrogate its configuration with the command:
$ ls -al /run/containers/storage/overlay-containers/37366ad82a73d4158b1c7f99762b185ee1296f1aebd30861ca790401fe4c281a/userdata
Listing 7 shows the contents of that directory. Only one file, config.json, is worth investigating (the others are empty, barring the PID placeholder). Listing 8 is the heavily abbreviated output for the kube-proxy Pod that shows, among other configuration settings, some of the environment variables held internally for the Pod's service, which could be useful for troubleshooting.
Listing 7
Content of userdata
total 40
drwx------ 2 root root   120 Sep 20 16:27 .
drwx------ 3 root root    60 Sep 20 16:27 ..
srwx------ 1 root root     0 Sep 20 16:27 attach
-rw-r--r-- 1 root root 33708 Sep 20 16:27 config.json
prw-r--r-- 1 root root     0 Sep 20 16:27 ctl
-rw-r--r-- 1 root root     4 Sep 20 16:27 pidfile
Listing 8
config.json File Content
{
  "ociVersion": "1.0.0",
  "process": {
    "user": {
      "uid": 0,
      "gid": 0
    },
    "args": [
      "/usr/local/bin/kube-proxy",
      "--config=/var/lib/kube-proxy/config.conf"
    ],
    "env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "TERM=xterm",
      "HOSTNAME=master01",
      "KUBE_DNS_PORT_53_TCP_PROTO=tcp",
      "KUBE_DNS_PORT_53_TCP_PORT=53",
      "KUBERNETES_SERVICE_PORT_HTTPS=443",
      "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443",
      "KUBERNETES_PORT_443_TCP_PROTO=tcp",
      "KUBE_DNS_SERVICE_HOST=10.96.0.10",
      "KUBE_DNS_PORT_53_UDP=udp://10.96.0.10:53",
If the content shown in Listing 8 had not been abbreviated, you would see seemingly endless information about what the Pod knows about Kubernetes and vice versa.
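If jq happens to be installed on the host, it makes slicing the environment variables out of config.json painless; a minimal sketch using the kube-proxy container's path from Listing 6:

$ jq '.process.env' /run/containers/storage/overlay-containers/37366ad82a73d4158b1c7f99762b185ee1296f1aebd30861ca790401fe4c281a/userdata/config.json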
The End Is Nigh
I have barely scratched the surface of the CRI-O runtime and the way it interacts with Kubernetes. Under usual circumstances, you might be hard-pressed to find someone working with the technology who can coherently explain all of the nuances of the container runtimes available today, but it is worth learning the basics, at the very least.
Exploring the options in the useful crictl tool and looking at how runc can be used to diagnose issues is a valuable exercise in itself. In your next hour of need, when a Pod or a cluster is misbehaving, turn to these applications.
Infos
- [1] Red Hat contributes CRI-O: https://www.redhat.com/en/blog/red-hat-contributes-cri-o-cloud-native-computing-foundation
- [2] Container runtimes: https://kubernetes.io/docs/setup/production-environment/container-runtimes
- [3] CRI-O: https://cri-o.io
- [4] Containers on GitHub: https://github.com/containers
- [5] CRI-O container engine: https://docs.openshift.com/container-platform/3.11/crio/crio_runtime.html
- [6] CRI-O and Podman: https://www.redhat.com/en/blog/why-red-hat-investing-cri-o-and-podman
- [7] minikube: https://kubernetes.io/docs/tasks/tools/install-minikube
- [8] Katacoda: https://www.katacoda.com/courses/kubernetes/getting-started-with-kubeadm-crio
- [9] crictl user guide: https://github.com/containerd/cri/blob/master/docs/crictl.md
- [10] Installing Weave Net: https://www.weave.works/docs/net/latest/kubernetes/kube-addon
- [11] Sandbox Pod example: https://kubernetes.io/docs/tasks/debug-application-cluster/crictl
- [12] Using the CRI-O container engine: https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/cri-o_runtime/use-crio-engine