OCI containers with Podman
Group Swim
Over the years, Docker container software has caused a metaphorical tsunami among software developers and, along with agile principles, improved productivity and DevOps working practices to speed code releases. Docker has had an unprecedented global effect on how applications are built and deployed.
Thanks in part to an eye-watering rate of evolution in the industry, however, Docker is no longer the default container software. Some members of the commentariat have already, and unfairly, read the company its last rites. In this article, I don't ring the death knell for Docker's fantastic container software, because in my opinion it is far from receiving its coup de grâce. Instead, I look at an alternative to Docker called Podman, its feature set, and some notable differences in its container run time relative to Docker.
Docker Dominance
The first version of Docker [1] was released March 20, 2013. Although containers had already been available on Unix-like systems for a few years, Docker's software allowed small units of code to focus on one specific process to run a single service or perform a single task. By deploying these small code units, software developers improved the portability, consistency, and therefore the predictability of the services and tasks they were running.
As Docker Inc. rode on the crest of their very large wave, containers were invariably the talk of the town. Containers helped shape how architects designed, created, and then deployed their software amid an unremitting level of cloud adoption.
As concepts evolved further, microservices gained consensus as the best way forward, which meant modern software applications were broken down into small, modular, and independently deployable chunks. Soon after, cloud native became the way to discuss complex infrastructure based on microservices deployed in containers with an orchestrator (to help manage multiple containers running at once) in the cloud.
At this point, Google drew on an unparalleled 15 years of production experience running a containerized cluster system called Borg and released the open source Kubernetes orchestrator. Riding an exponential surge in popularity akin to Docker's stratospheric rise, Kubernetes soon became the orchestrator of choice and, in turn, began shaping the way that container software was to be used.
Now the Docker container software, despite its pioneering strides, was no longer the default run time used within the orchestrator, which led Docker to focus more on the professional services side of its business.
In this article for the purposes of simplicity, I'll assume a container run time is the container software required for actually running containers directly. (See the "Oh, I See … OCI" box.)
Oh, I See … OCI
Briefly, I'll focus on the container component that comes up often, namely the "run time." This term is highly confusing because of the large number of run times in use today, some of which offer more functionality than others.
The Cloud Native Computing Foundation (CNCF) [2], which was started in 2015 by the Linux Foundation, heralded containers, in hand with microservices, as cloud adoption rocketed. At roughly the same time, Kubernetes appeared, too, which now favors the CRI-O run time [3], thanks to its optimization for the orchestrator. (See the article on OpenShift 4 in this issue for more on CRI-O.)
CRI-O runs as a lightweight run time that should pass Kubernetes tests and include functionality such as allowing container images to be pulled from compliant image registries and having the ability to run any OCI-compliant container. Then, as CNCF pursued the need to define industry-wide container standards, the Open Container Initiative (OCI) was established in June 2015 by Docker and other leaders in the container industry. The OCI defined two specifications: "the Runtime Specification (runtime-spec) and the Image Specification (image-spec)" [4].
Essentially, once the OCI standard was established and formalized, those container run times still being developed opted for being OCI format-compliant, which helped sort the wheat from the chaff and meant that container run-time developers focused on the various functions required to be compliant.
Rougher and Tougher
The Podman website [5] greets you with a logo of marine mammals, representing the Kubernetes "pod" (one or more containers), and, hence, the run time's name.
At first glance, Podman is intriguing because it is daemonless; that is, it has no background service, and the run time is executed only when requested. From a security perspective, that's good news: Podman presents a smaller attack surface, because it's not running around the clock.
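As a rough illustration (the image name is just an example, and the exact process list will vary by version and configuration), you can launch a throwaway container and then check that no resident container daemon remains afterward:

$ podman run --rm docker.io/library/alpine echo hello
$ pgrep -a podman   # typically nothing resident once the container exits

Depending on the Podman version, a small per-user helper process may linger, but there is no equivalent of Docker's always-on dockerd service.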
Additionally, Podman is OCI-friendly, helping to propagate the results of hard-fought standards battles in an effort to normalize what containers look like and how they act.
To my mind, however, the most valuable addition to Podman's credentials relates to something else. Unlike Docker, which has made great leaps in the right direction but historically struggled to keep all users happy with security, Podman lets users fire up the run time without being the root superuser. When Podman is run as a less privileged user, it sets up and then enters a user namespace. Once inside, Podman runs as a privileged user (but only with privileges inside that namespace) and has permission to mount certain filesystems and create a container. From a DevSecOps perspective, this means developers can run containers without having superuser permissions.
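If you want to see that user namespace at work, Podman's unshare subcommand (assuming your Podman version ships it) drops you into the same namespace Podman uses for rootless containers:

$ id -u                                     # your normal, unprivileged UID (e.g., 1000)
$ podman unshare id -u                      # reports 0: root, but only inside the user namespace
$ podman unshare cat /proc/self/uid_map     # how host UIDs are mapped into the namespace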
Believe me when I say that this provides a far higher degree of comfort when it comes to leaving containers running on a system – development and production environments alike – 24 hours a day. By design, Linux has a number of sophisticated mechanisms to help isolate critical functionality and keep it away from less privileged users to ensure that only the root user can affect such services.
Podman also supports other valuable features: image verification, so you know you're using a trusted container image; container image layer management; pod management (managing groups of containers); and resource isolation between containers.
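To give a feel for the pod side of that list, a minimal sketch (the pod name, image, and port are purely illustrative) might look like this:

$ podman pod create --name web -p 8080:80           # create an empty pod and publish a port
$ podman run -d --pod web docker.io/library/nginx   # run a container inside that pod
$ podman pod ps                                      # list pods and their container counts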
For the non-Linux aficionados among us, a number of other features are in the planning stages, such as connecting into remote Mac and Windows systems.
Bigger and Bolder
I'm using the Debian derivative Linux Mint, which is based on Ubuntu 18.04. Detailed Podman installation instructions can be found on GitHub [6]. The instructions offer a number of options, as shown in Figure 1, which details the packages required to build Podman from scratch.
For my install, I take the path of least resistance and use the Personal Package Archive (PPA) as root or with sudo:

$ apt-get update -qq
$ apt-get install -qq -y software-properties-common uidmap
$ add-apt-repository -y ppa:projectatomic/ppa
$ apt-get update -qq
$ apt-get -qq -y install podman
You're helpfully advised to check the build and run dependencies on the install page [7] if you encounter any issues.
If you're concerned about what those commands are doing to your machine, then simply remove the -y (automatic "yes" to prompts) and -qq (quiet output) options for more details. To check that all is well following those commands, run:
$ podman --help
If you see lots of output and the minor complaint
WARN[0000] unable to find /etc/containers/registries.conf
don't worry, because I'll fix that problem next.
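If you're curious, the fix amounts to telling Podman which registries to search for unqualified image names. A minimal /etc/containers/registries.conf sketch (the registry choices here are just an example) looks something like:

[registries.search]
registries = ['docker.io', 'quay.io']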