Opportunities and risks: Containers for DevOps
Logistics
OverlayFS
One of the great disadvantages of classic container deployment is that each container ships with a complete Linux system of its own. Although a container might not be a standalone system per se, with a complete Linux distribution in the background, it becomes another element in need of maintenance. The disk space overhead can be ignored, because storage is cheap; however, the maintenance overhead hurts when you consider that the system housing the (usually multiple) containers is itself a full Linux installation.
The problem can be solved elegantly by using OverlayFS (a competitor of AuFS, which you might recognize from its use in Knoppix). Programs access the original host system files through OverlayFS. If the admin creates new files or modifies existing ones on an OverlayFS mount, the changes are not written back to the host filesystem; instead, OverlayFS keeps the differences from the host in a layer of its own. This is a union mount: The administrator sees both the host content and the changes applied to the OverlayFS mount (Figure 2).
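To illustrate the principle, the following sketch shows a manual OverlayFS mount; the directory names are placeholders, and the syntax is that of the mainline overlay module (kernel 3.18 and later):

# /srv/base stands for the host tree the containers will share (the read-only
# lower layer); upperdir receives all new and changed files; workdir is
# scratch space the overlay driver requires; /srv/merged is the combined view.
mkdir -p /srv/upper /srv/work /srv/merged
mount -t overlay overlay \
  -o lowerdir=/srv/base,upperdir=/srv/upper,workdir=/srv/work \
  /srv/merged

Everything written below /srv/merged ends up in /srv/upper, while reads fall through to /srv/base for anything that has not been changed.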
Anyone who opts for running containers on OverlayFS therefore needs to invest quite a bit of work in the operating system installed on the host. The first step is to create a solid basis: Besides the essential elements, the host system for OverlayFS must provide the components that the containers will require later. If, for example, the containers primarily run PHP applications, the required PHP modules should be available on the host. Also note that the more components present on the host system, the smaller the respective containers need to be.
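A sketch for a Debian/Ubuntu host with PHP 5-era package names (the package list is only an example and depends on the applications actually being deployed):

# Preinstall the PHP components on the host so that the containers can
# pick them up through the OverlayFS lower layer.
apt-get update
apt-get install -y php5-cli php5-fpm php5-mysql php5-curl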
The maintenance overhead is considerably reduced because the OverlayFS driver notices changes to the host and passes them on to its clients: If the administrator installs a new SSL or C library on the host, the union mount principle ensures that the change also reaches all OverlayFS instances. In the end, the respective containers just need to be restarted to apply the changes.
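Assuming LXC containers and OpenSSL as the updated component (the package name libssl1.0.0 matches Ubuntu 14.04 and is only an example), the workflow could look like this:

# Update the shared library on the host ...
apt-get install -y --only-upgrade libssl1.0.0
# ... then restart the running containers so their OverlayFS mounts
# expose the new lower-layer files.
for c in $(lxc-ls --active); do
    lxc-stop -n "$c"
    lxc-start -d -n "$c"
done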
Use of a current kernel is recommended for OverlayFS. Anyone using a long-term support (LTS) version of Ubuntu has a good start, because Canonical also provides backports of kernels from later Ubuntu releases to the current LTS version. New kernel versions became available this way starting with Ubuntu 14.04.2 [1], and newer kernels continued to ship by default up to kernel 4.4 in 14.04.5; the 16.04 LTS release then launched with the same kernel version. Once Ubuntu reaches 16.04.2, it too will receive kernel updates by default throughout its lifetime. Note that other distributions with long-term support follow different strategies regarding kernel updates.
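A quick check shows whether the running kernel offers OverlayFS at all; the module and filesystem names can differ slightly between kernel versions and distributions:

uname -r                           # kernel version in use
modprobe overlay 2>/dev/null || modprobe overlayfs
grep -i overlay /proc/filesystems  # should list "overlay" (or "overlayfs")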
OverlayFS on the host side is only half the battle when it comes to managing containers efficiently: The container solution has to handle the other half. Docker has known how to deal with OverlayFS for several versions now, but one issue naturally arises: To reap this benefit, you also have to buy into the whole morass of Docker functionality. Therefore, a look at alternatives couldn't hurt.
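For those who do stay with Docker, switching the daemon to its OverlayFS storage driver is a matter of configuration. A minimal sketch follows; the overlay2 driver assumes a reasonably recent Docker release and kernel, and older setups use overlay instead:

# Select the OverlayFS storage driver in the daemon configuration ...
echo '{ "storage-driver": "overlay2" }' > /etc/docker/daemon.json
# ... then restart the daemon and verify which driver is active.
systemctl restart docker
docker info | grep -i 'storage driver'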
OverlayFS and LXC
Linux Containers (LXC) also has an OverlayFS driver (Figure 3) and is hardly less potent than Docker in terms of functionality. Docker and LXC use the same functions in the Linux kernel (e.g., cgroups and namespaces). LXC is the older of the two solutions and served as a model for Docker in many respects. Anyone who doesn't want to follow the Docker path can therefore take a look at the LXC solution instead.
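A minimal sketch of the approach with the LXC command-line tools; the download template arguments and the overlayfs backing store are assumptions to check against the LXC version in use (lxc-copy replaced the older lxc-clone in LXC 2.x):

# Create a base container once ...
lxc-create -n base -t download -- -d ubuntu -r trusty -a amd64
# ... then create lightweight snapshot clones whose root filesystems are
# OverlayFS layers on top of the base container's filesystem.
lxc-copy -n base -N web01 -s -B overlayfs
lxc-start -n web01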
Admins aren't spared at least one task with LXC: They do need to prepare the required containers so they are useful for developers. In Docker, this function is included in the Docker tools. Anyone who uses LXC can either create the required containers manually or automate their provisioning. Larger scale container setups need such automation anyway, because it must always be possible to expand the setup with more hypervisor hosts for new containers if the existing setup runs out of resources.
Ansible is the automation solution of the moment and has well-developed LXC functionality. The complete provisioning of LXC containers can be accomplished quickly with Ansible (Figure 4); on request, dozens of containers can be started at the same time, even from a single Ansible playbook.
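A minimal sketch of such a playbook, written out here with a heredoc for illustration; the lxc_container module, its backing_store option, and the inventory and host group names are assumptions to verify against the Ansible version in use:

# Write a small playbook that creates and starts several containers ...
cat > lxc-containers.yml <<'EOF'
---
- hosts: containerhosts            # hypothetical inventory group
  become: true
  tasks:
    - name: Create and start application containers
      lxc_container:
        name: "{{ item }}"
        template: ubuntu
        backing_store: overlayfs   # OverlayFS-backed container rootfs
        state: started
      with_items:
        - web01
        - web02
        - web03
EOF
# ... and run it against the container hosts.
ansible-playbook -i inventory lxc-containers.yml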
Transparent
A container setup based on LXC meets the needs of both administrators and developers. As with a typical deployment approach, developers remain free to make changes within the container at their own discretion. Because each individual container is strictly separated from the main system, changes made inside a container are not reflected on the host on which it resides. Unlike the typical scenario, administrators are familiar with the basis of the container and can even push changes to its filesystem via the OverlayFS route. The container is thus no longer a black box; instead, it is based on defined standards.
Admittedly, this scenario doesn't prevent developers from ripping open a security gap inside a container through their own changes, one that a general update on the host system cannot repair. However, the vast majority of developers will settle for what they find in their container via OverlayFS, so if manual updates are still required, the number of affected installations will be small.