Opportunities and risks: Containers for DevOps
Reproducible Containers
As elegant as the OverlayFS solution may be, it does not fit every situation beyond a single standard setup. Anyone who has to support several versions of Ubuntu, or add CentOS to the mix, has two options: procure enough hardware to provide separate OverlayFS systems for the different distributions, or deploy fully fledged containers.
The good news is: Docker has real strengths here, and anyone who uses Docker sensibly can get around the black box problem. The magic word is reproducibility: Containers that can be re-created easily at any time cause few problems in everyday admin life.
The discussion is reminiscent of the pets-versus-cattle comparison used extensively in the cloud context: Pets are classic old-school systems that are barely automated, if at all, and require a lot of manual care. Cattle stand for virtual machines that are rolled out automatically, all in the same way, and can be replaced at any time, even by a modified version.
The reproducibility factor solves the problem of the sheer number of containers with which administrators have to deal in case of problems: If it is possible to recreate many containers automatically at the push of a button, time-consuming repair of individual instances is no longer needed.
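As a minimal sketch of this idea, assuming an image built from a Dockerfile such as the one shown below, repair then boils down to replacement (the image and container names myapp and myapp-instance are chosen here purely for illustration):

# Rebuild the image from its Dockerfile, then replace the broken instance
docker build -t myapp .
docker rm -f myapp-instance
docker run -d --name myapp-instance myapp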
Docker Tools
Docker cannot be faulted for the huge numbers of pets that admins have built. In fact, it is possible to create clean containers with Docker. A simple example shows this:
# Build on the official Ubuntu image from Docker Hub
FROM ubuntu
# Install the VNC server, a virtual framebuffer X server, and Firefox
RUN apt-get update && apt-get install -y x11vnc xvfb firefox
# Store the VNC password (the mkdir target now matches the path used below)
RUN mkdir ~/.vnc
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart Firefox in the shell spawned inside the new X session
RUN bash -c 'echo "firefox" >> /.bashrc'
EXPOSE 5900
# Start the VNC server, which creates an Xvfb display on demand
CMD ["x11vnc", "-forever", "-usepw", "-create"]
This short Dockerfile creates a container in which both X11 and Firefox run and that allows external access via VNC. On the basis of this Dockerfile, identical containers can be created reproducibly on any host. The process builds on the official Ubuntu image from Docker Hub, which is maintained by the provider.
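By way of illustration, this is how such an image might be built and started; the tag firefox-vnc is an arbitrary choice for this example, not part of the original listing:

# Reproducible build from the Dockerfile in the current directory
docker build -t firefox-vnc .
docker run -d -p 5900:5900 firefox-vnc
# Then connect a VNC viewer to port 5900; the password set above is 1234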
Admittedly, complex web applications cannot be set up with so few commands, and Dockerfiles become unreadable once they stretch over several screens, so the second basic component of reproducible containers is the use of an automation solution.
Puppet, Chef, Ansible, …
Whether Puppet, Chef, Ansible, or another solution is used for automation is of secondary importance from an administrator's perspective. The only important point is that a fresh container based on an official image supplies the desired function after the tool is called.
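As a minimal sketch of this approach, assuming Ansible as the tool and a hypothetical playbook site.yml that installs and configures the application, the Dockerfile itself can hand the bulk of the work to the automation solution:

FROM ubuntu
# Install the automation tool in the fresh container
RUN apt-get update && apt-get install -y ansible
# site.yml is a hypothetical playbook that installs and configures the app
COPY site.yml /tmp/site.yml
RUN ansible-playbook -i localhost, -c local /tmp/site.yml
# Hypothetical start command that the playbook is assumed to provide
CMD ["/usr/local/bin/start-app"]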
For a change, the developer, not the admin, is responsible for the bulk of the work here. In many cases, admins aren't in a position to build the containers themselves, because they simply aren't familiar enough with all the applications to be rolled out.
The developer's mantra, "it works in the container on my machine," isn't enough in this case: The container must be reproducible anywhere. Of course, nothing stops admins from helping developers implement the automation, and such cooperation saves both sides a lot of work that might otherwise be required later. The admin's task is to ensure compliance: If a developer asks for a live container deployment, there's no harm in taking a quick look first. A modified image of the chosen distribution, instead of the bare official image, is a no-go.
This mode of operation might require patience and the will to cooperate on all sides, but ultimately, it establishes ground rules from which administrators and developers can benefit: Admins get maintainable containers, and developers get development environments off the ground much more quickly than they could starting from scratch.