Lead Image © Oleksiy Mark, 123RF.com

Application virtualization with Docker

Order in the System

Article from ADMIN 29/2015
Product half-lives on today's virtualization market are becoming shorter and shorter. This year, the buzz is all about Docker.

Applications are no longer developed and run only on local machines; in the cloud era, this also takes place in virtual cloud environments. Such platform-as-a-service (PaaS) environments offer many benefits: Scalability, high availability, consolidation, multitenancy, and portability are just a few requirements that can now be implemented even more easily in a cloud than on conventional bare metal systems.

However, classic virtual machines, regardless of the hypervisor used, no longer meet the above requirements. They have simply become too inflexible and are too much of a burden. Developers do not want to mess around with first installing an operating system, then configuring it, and then satisfying all the dependencies that are required to develop and operate an application.

Containers Celebrate a Comeback

Container technology has existed for a long time – just think of Solaris Zones, BSD Jails, or Parallels Virtuozzo; however, it has moved back into the spotlight in the past two years and is now rapidly gaining popularity. Containers keep a computer's resources isolated (e.g., memory, CPU, network and block storage).

Applications run inside a container completely independently of one another, and each has its own view of the existing processes, filesystem, or network. Only the host system's kernel is shared between the individual containers, because most features in such an environment are provided by a variety of functions in the kernel. Because virtualization takes place only at the operating system level and no emulation of the hardware or a virtual machine is necessary, the container is available immediately; it is not necessary to run through the BIOS or perform system initialization (Figures 1 and 2).

Figure 1: A system's complete hardware is emulated in virtualization and made available within a virtual machine.
Figure 2: Containers draw on resources from the host system. Resources are isolated within the containers using kernel functions.

LXC (Linux Containers) provides a solution for Linux that relies on the kernel's container functions. LXC can provide both system containers (Figure 4) – that is, completely virtualized operating systems – and application containers (Figure 3) for virtualizing a particular application, including its run-time environment. However, LXC configuration is quite extensive, particularly if you need to provision application containers, and it requires much manual work, which is why the software never achieved a major breakthrough. This changed abruptly with the appearance of Docker.

Figure 3: Application containers usually share the operating system with the host system and only virtualize a single application.
Figure 4: With system containers, the run-time environment can differ completely between a host's individual containers. It is thus possible to have different operating systems on a single host.
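
To give an idea of the manual work involved, the following commands sketch how a system container and an application container are created with LXC. This assumes the LXC userspace tools and a suitable distribution template are installed; the container names and the application command are freely chosen placeholders:

  # Create a system container from a distribution template and start it
  lxc-create -n webserver -t fedora
  lxc-start -n webserver

  # Run a single application in a short-lived application container
  lxc-execute -n app01 -- /usr/sbin/httpd -DFOREGROUND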

Docker focuses on packaging applications in portable containers that can then be run in any environment. The container also includes a run-time environment – that is, the operating system along with the dependencies the application requires. These containers can then be easily transported between different hosts; thus, it's easy to make a complete environment, consisting of an operating system and an application, available within the shortest possible time. The annoying overhead – which still exists in virtual machines – is completely eliminated here.
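
The following commands illustrate this portability as a brief sketch; they assume Docker is installed on both machines, that the fedora image is available from the public registry, and that otherhost is a placeholder for a reachable second host:

  # Run a command in a container based on the fedora image
  docker run --rm fedora cat /etc/os-release

  # Save the image as a tar archive, copy it to another host, and load it there
  docker save -o fedora.tar fedora
  scp fedora.tar otherhost:
  ssh otherhost docker load -i fedora.tar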

Container 2.0 with Docker

Docker is a real shooting star. The first public release only arrived on the market in March 2013, but today it is supported by many well-known companies. It is, for example, possible to run Docker containers within the Amazon Elastic Beanstalk cloud or on Google Kubernetes. Red Hat has made a standalone operating system available in the form of Project Atomic, which serves as a pure host system for Docker containers, and Docker is also supported in the new Red Hat Enterprise Linux 7. The Kubernetes project, started by Google and since joined by many other companies, aims to make Docker containers deployable on all private, public, and hybrid cloud solutions. Even Microsoft already provides support for Docker within the Azure platform.

The first Docker release used LXC as the default execution environment so that it could use the Linux kernel's container API. Since version 0.9, however, Docker has operated independently of LXC: The libcontainer library, written in the Go programming language, now communicates directly with the kernel's container API to provide the functions needed to operate a Docker container.

These functions essentially cover three basic components of the Linux kernel: namespaces, control groups (cgroups), and SELinux. Namespaces have the task of abstracting a global system resource and making it available as an isolated instance to the processes within the namespace. Cgroups are responsible for isolating system resources such as memory, CPU, or network resources for a group of processes and for limiting and accounting for the use of these resources.
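
In practice, such limits are set when a container is started and end up in the corresponding cgroup controllers. The following call is a sketch that assumes an httpd image is available locally; the exact cgroup path depends on the distribution's cgroup layout:

  # Limit the container to 256MB of RAM and a reduced CPU share
  docker run -d --name web -m 256m --cpu-shares 512 httpd

  # The memory limit reappears in the container's memory cgroup
  # (path shown for a systemd-based host; <ID> stands for the container ID)
  cat /sys/fs/cgroup/memory/system.slice/docker-<ID>.scope/memory.limit_in_bytes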

SELinux helps isolate containers from one another, making it impossible for one container to access another container's resources. This is ensured by giving each container process its own security label; an SELinux policy defines which resources a process with a specific label can access. Docker uses the well-known SELinux sVirt implementation for this purpose.
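
On a Red Hat-type host with SELinux enabled, these labels can be inspected directly; as a rough sketch, the container processes typically run in the svirt_lxc_net_t domain on such systems, each with its own MCS category pair:

  # Show the SELinux labels of running container processes
  ps -eZ | grep svirt_lxc_net_t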

The Linux kernel distinguishes between the following namespaces:

  • UTS (host and domain names): This makes it possible for each container to use its own host and domain name.
  • IPC (interprocess communication): Containers use IPC mechanisms independently of one another.
  • PIDs (process IDs): Each container uses a separate space for the process IDs. Processes can therefore have the same PID in different containers, even though they run on the same host system. Each container can, for example, contain a process with an ID of 1 (see the example after this list).
  • NS (filesystem mountpoints): Processes in different containers have an individual "view" of the filesystem and can thus access different objects. This functionality is comparable to the well-known Unix chroot.
  • NET (network resources): Each container can have its own network device with its own IP address and routing table.
  • USER (user and group IDs): Processes inside and outside a container can have user and group IDs that are independent of one another. This is useful because a process that has a non-privileged ID outside the container can run with ID 0 (root) inside the container; it thus has complete control of the container but not of the host system. However, user namespaces have only been available since kernel version 3.8 and are therefore not yet included in all Linux distributions.
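
The effect of the PID namespace is easy to reproduce even without Docker by using the unshare tool from util-linux; this sketch assumes a sufficiently recent kernel and root privileges:

  # Start a shell in new PID and mount namespaces with its own /proc
  sudo unshare --fork --pid --mount-proc bash

  # Inside the new namespace, the shell appears as PID 1
  ps aux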

Image-Based Containers

A Docker image is a static snapshot of a container configuration at a given time. The image can consist of multiple layers, which can usually only be read, not changed. Only when a container is started is an additional writable layer added in which changes can be made. A Docker container usually consists of a platform image containing the run-time environment, plus further layers for the application and its dependencies.
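
The layer structure can be examined with on-board Docker tools; the following sketch assumes a locally available fedora image and uses a freely chosen container name:

  # List the individual read-only layers the image is built from
  docker history fedora

  # A started container adds a writable layer on top;
  # docker diff lists the changes made in that layer
  docker run --name layertest fedora touch /tmp/hello
  docker diff layertest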

The Dockerfile text file describes the exact structure of an image and is used whenever you want to create a new image. As a storage back end, Docker uses the AuFS overlay filesystem, which, however, is not included in the standard Linux kernel and therefore requires special support from the distribution. Alternatively, Docker can also work with Btrfs. (Note that this filesystem does not support SELinux labels and thus has to do without this protection; I expressly advise against this approach.) Because of all these problems, Red Hat uses its own Docker storage back end based on Device Mapper. A thin provisioning module (dm-thinp) is used here to create a new image from the individual layers of a Docker container.
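
A minimal Dockerfile might look like the following sketch, which assumes a Fedora base image and an Apache web server as the application to be packaged (the image tag, file names, and port mapping are freely chosen):

  FROM fedora:21
  RUN yum install -y httpd && yum clean all
  COPY index.html /var/www/html/index.html
  EXPOSE 80
  CMD ["/usr/sbin/httpd", "-DFOREGROUND"]

The image is then built and started from the directory containing the Dockerfile and the index.html file:

  docker build -t my-httpd .
  docker run -d -p 8080:80 my-httpd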
