Tips and Tricks for Containers
Extending a Container
Sometimes the user of a container will want to add something to it to create a customized container. This addition could be another package or another tool or library – something needed in the container. Note that putting datasets in containers is really not recommended unless they are very small and for the specific purpose of testing or baselining the container application.
Adding applications to containers is generically termed extending them and implies that users can install something into a container and save the container somewhere (e.g., Docker Hub or Singularity Hub).
The next two sections discuss how to extend Docker and Singularity containers – that is, the containers themselves, not the underlying images.
Extending a Docker Container
A Docker container is writable by default. However, if you ran it with --rm, the container stops and is erased when you exit, and any changes are lost. If you did not use that option, the container is preserved on the host after you exit. You can take advantage of this situation with a Docker command to save the stopped container as an image in a local repository. The docker commit command, run outside of the container, commits the container’s changes to a new image.
The overall process of extending a Docker container is not too difficult, as the next example illustrates. The images on my local repository on my desktop are:
$ docker images
REPOSITORY    TAG                     IMAGE ID       CREATED         SIZE
nvidia/cuda   10.1-base-ubuntu18.04   3b55548ae91f   4 months ago    106MB
hello-world   latest                  fce289e99eb9   16 months ago   1.84kB
Running the nvidia/cuda image without the --rm option,
$ docker run --gpus all -ti nvidia/cuda:10.1-base-ubuntu18.04
root@c31656cbd380:/#
makes it possible to extend the container. Recall that --rm automatically removes the container when it exits (although it does not remove the image on which the container is based). Had you used this option, all of your changes would disappear on exiting the container.
At this point, you can install whatever you want in the running container. An example I use in explanation is installing Octave or Scilab into the container. A word of caution: Before using the appropriate package manager to install anything, synchronize and update the package repository. (I speak from experience.) Below is the abbreviated output for updating the repository for Ubuntu (the container operating system):
root@c31656cbd380:/# apt-get update
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
...
Fetched 18.0 MB in 9s (1960 kB/s)
After the package repositories are synced, I can install Octave:
# apt-get install octave
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
...
done.
done.
Processing triggers for install-info (6.5.0.dfsg.1-2) ...
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.36.11-2) ...
Now, I'll make sure Octave is installed:
root@c31656cbd380:/# octave
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
GNU Octave, version 4.2.2
Copyright (C) 2018 John W. Eaton and others.
This is free software; see the source code for copying conditions.
There is ABSOLUTELY NO WARRANTY; not even for MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  For details, type 'warranty'.

Octave was configured for "x86_64-pc-linux-gnu".

Additional information about Octave is available at http://www.octave.org.

Please contribute if you find this software useful.
For more information, visit http://www.octave.org/get-involved.html

Read http://www.octave.org/bugs.html to learn how to submit bug reports.
For information about changes from previous versions, type 'news'.
Finally, exit from the container.
Although I have exited the container, it still exists on the host, as you can see with docker ps -a:
$ docker ps -a
CONTAINER ID   IMAGE                               COMMAND       CREATED          STATUS                      PORTS   NAMES
c31656cbd380   nvidia/cuda:10.1-base-ubuntu18.04   "/bin/bash"   11 minutes ago   Exited (0) 39 seconds ago           my_image
The next step is to use docker commit and the container ID to save the stopped container to a new image. You also need to specify the name of the new image:
$ docker commit c31656cbd380 cuda:10.1-base-ubuntu19.04-octave
sha256:b01ee7a9eb2d4e29b9b6b6e8e3664442813f858d14307a09263f3322f3e5732e
The container ID corresponds to the stopped container you want to put into the Docker repository – the local repository. After saving it locally, you might want to push it to a more permanent repository to which you have access.
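The push step can be sketched with docker tag and docker push. In this sketch, the account name myuser is a placeholder – substitute your own registry account:

```shell
# Sketch: push the committed image to a registry such as Docker Hub.
# "myuser" is a placeholder account name -- substitute your own.
SRC=cuda:10.1-base-ubuntu19.04-octave
DEST=myuser/cuda:10.1-base-ubuntu19.04-octave
echo "tagging $SRC as $DEST"
# Tag and push only when the docker CLI is actually available:
if command -v docker >/dev/null 2>&1; then
    docker tag "$SRC" "$DEST"    # add a registry-qualified name
    docker push "$DEST"          # requires a prior 'docker login'
fi
```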
To make sure the new image is where you expect, use docker images:
$ docker images
REPOSITORY    TAG                            IMAGE ID       CREATED          SIZE
cuda          10.1-base-ubuntu19.04-octave   b01ee7a9eb2d   47 seconds ago   873MB
nvidia/cuda   10.1-base-ubuntu18.04          3b55548ae91f   4 months ago     106MB
hello-world   latest                         fce289e99eb9   16 months ago    1.84kB
Extending a Singularity Container
As mentioned earlier, Singularity containers are read-only and immutable by default because Singularity uses SquashFS, a read-only filesystem. When Singularity runs a container, a few filesystems that likely need read-write access are mounted read-write from the host into the container:
- $HOME
- /tmp
- /proc
- /sys
- /dev
- $PWD
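You can see this split in behavior directly: a write to the container's root filesystem fails, while a write to a bind-mounted path such as $HOME succeeds. A minimal sketch, assuming the example image built below is present:

```shell
# Sketch: the container root filesystem is read-only, but bind-mounted
# host paths such as $HOME remain writable. Runs only where singularity
# and the example image are available.
IMG=cuda_10_1-base-ubuntu18_04.simg
if command -v singularity >/dev/null 2>&1 && [ -f "$IMG" ]; then
    singularity exec "$IMG" touch /usr/newfile \
        || echo "root filesystem is read-only"
    singularity exec "$IMG" touch "$HOME/newfile" \
        && echo "bind-mounted HOME is writable"
fi
```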
Being read-only creates some issues for extending an immutable Singularity image. However, you can extend an image with a brute-force process that modifies the original container definition file and builds a new image. What do you do if you do not have the definition file? Fortunately, when a Singularity image is created, the definition file is embedded in the image, and an inspect option lists it, allowing you to edit it to create your new image.
For example, begin by creating a Singularity image from a Docker image:
$ singularity build cuda_10_1-base-ubuntu18_04.simg docker://nvidia/cuda:10.1-base-ubuntu18.04
INFO:    Starting build...
Getting image source signatures
Copying blob 7ddbc47eeb70 done
Copying blob c1bbdc448b72 done
Copying blob 8c3b70e39044 done
Copying blob 45d437916d57 done
Copying blob d8f1569ddae6 done
Copying blob 85386706b020 done
Copying blob ee9b457b77d0 done
Copying config a6188358e1 done
Writing manifest to image destination
Storing signatures
2020/05/02 07:47:53  info unpack layer: sha256:7ddbc47eeb70dc7f08e410a6667948b87ff3883024eb41478b44ef9a81bf400c
2020/05/02 07:47:54  info unpack layer: sha256:c1bbdc448b7263673926b8fe2e88491e5083a8b4b06ddfabf311f2fc5f27e2ff
2020/05/02 07:47:54  info unpack layer: sha256:8c3b70e3904492c753652606df4726430426f42ea56e06ea924d6fea7ae162a1
2020/05/02 07:47:54  info unpack layer: sha256:45d437916d5781043432f2d72608049dcf74ddbd27daa01a25fa63c8f1b9adc4
2020/05/02 07:47:54  info unpack layer: sha256:d8f1569ddae616589c5a2dabf668fadd250ee9d89253ef16f0cb0c8a9459b322
2020/05/02 07:47:54  info unpack layer: sha256:85386706b02069c58ffaea9de66c360f9d59890e56f58485d05c1a532ca30db1
2020/05/02 07:47:54  info unpack layer: sha256:ee9b457b77d047ff322858e2de025e266ff5908aec569560e77e2e4451fc23f4
INFO:    Creating SIF file...
INFO:    Build complete: cuda_10_1-base-ubuntu18_04.simg
Going forward, the image cuda_10_1-base-ubuntu18_04.simg will be used in this example. You can inspect the image for its definition file with the -d option:
$ singularity inspect -d cuda_10_1-base-ubuntu18_04.simg
bootstrap: docker
from: nvidia/cuda:10.1-base-ubuntu18.04
Because the starting point was a Docker image, the definition file is very simple. The point is that every Singularity container has a definition file embedded in the image, and it can be extracted with a simple command, which allows anyone to reconstruct the image.
The process for extending a Singularity image is simply to take the embedded definition file, modify it to add the needed libraries or tools, and rebuild the image. Simple.
As an example, start with the extracted definition file in the example and make sure Octave is not already installed:
$ singularity shell cuda_10_1-base-ubuntu18_04.simg
Singularity> octave
bash: octave: command not found
Singularity> exit
exit
Now, take the definition file and modify it to install Octave:
BootStrap: docker
From: nvidia/cuda:10.1-base-ubuntu18.04

%post
    . /.singularity.d/env/10-docker*.sh

%post
    cd /
    apt-get update

%post
    cd /
    apt-get install -y octave
With this updated definition file, you can create a new image. In this case, just name it test.simg. After it is created, shell into it and try the octave command:
$ singularity shell test.simg
Singularity> octave

(process:13030): Gtk-WARNING **: 11:04:09.607: Locale not supported by C library.
        Using the fallback 'C' locale.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Singularity> exit
exit
The GUI pops up on the screen, so you know it is installed. Singularity has no direct command to extend an image, so you have to get the definition file from the existing image, update it, and create a new image.
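The whole extract-edit-rebuild cycle can be sketched in a few commands. The file name octave.def is illustrative, the definition here is a simplified version of the one above, and the build step runs only where Singularity is installed (it typically also needs root or the --fakeroot option):

```shell
# Sketch: rebuild an extended image from a modified definition file.
# Write a simplified definition that adds Octave to the base image:
cat > octave.def <<'EOF'
BootStrap: docker
From: nvidia/cuda:10.1-base-ubuntu18.04

%post
    apt-get update
    apt-get install -y octave
EOF
if command -v singularity >/dev/null 2>&1; then
    # Recover the original embedded definition for reference:
    singularity inspect -d cuda_10_1-base-ubuntu18_04.simg
    # Build the new, extended image (may require root or --fakeroot):
    singularity build --fakeroot test.simg octave.def
fi
```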
As I have mentioned in past container articles, HPCCM is a great, easy-to-use tool for creating Dockerfiles or Singularity definition files because it contains many building blocks for common HPC components, such as Open MPI or the GCC or PGI toolchains. HPCCM recipes are written in Python and are usually very short.
HPCCM makes creating Dockerfiles or Singularity definition files very easy; therefore, I tend to use it almost exclusively. However, I would like to store the recipe in the image just as Singularity stores its definition file. Fortunately, Singularity allows you to add metadata to your image in a %labels section of the definition. After HPCCM creates the Singularity definition file, you can then just add a %labels section that contains the recipe.
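One way to do this is sketched below. Because a label value must fit on a single line, the multi-line recipe is base64 encoded before it is stored; the file names recipe.py and Singularity.def, and the label key hpccm.recipe.base64, are illustrative choices, not fixed conventions:

```shell
# Sketch: embed an HPCCM recipe in the image metadata by appending a
# %labels section to the generated definition file. base64 encoding
# keeps the multi-line recipe on the single line a label requires.
# (recipe.py, Singularity.def, and the label key are illustrative.)
printf 'Stage0 += baseimage(image="nvidia/cuda:10.1-base-ubuntu18.04")\n' > recipe.py
printf 'BootStrap: docker\nFrom: nvidia/cuda:10.1-base-ubuntu18.04\n' > Singularity.def
{
    printf '\n%%labels\n'
    printf '    hpccm.recipe.base64 %s\n' "$(base64 recipe.py | tr -d '\n')"
} >> Singularity.def
# Show the appended metadata:
grep -A1 '%labels' Singularity.def
```

After building from this definition, singularity inspect on the image would show the label, and decoding its value recovers the recipe.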