Building and Running Containers
Singularity
A Singularity container can be run four ways: (1) as a native command, (2) with the run subcommand, (3) with the exec subcommand, or (4) with the shell subcommand. Probably the easiest way to run the container is as a native command (i.e., execute the container’s runscript):
$ ./hello-world_latest.sif
This command runs the script in the %runscript section of the container. An important point is that you don’t need to be root to run the container.
The %runscript section of a Singularity container will be executed when the container is run. (You define the script in the Singularity specification file.) A simple example of a %runscript in a specification file is,
%runscript
    echo "Container was created $NOW"
    echo "Arguments received: $*"
    exec echo "$@"
which echoes two lines to stdout, as well as the command-line arguments. In the command to run the container, you can pass arguments after the container name, which are used in the %runscript section.
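For example, assuming the container was built from that definition and that $NOW was set and exported at build time (e.g., in the %post section), passing arguments looks something like this (output illustrative):
$ ./hello-world_latest.sif Hello World
Container was created <build date>
Arguments received: Hello World
Hello World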
The second way Singularity can run a container is with the run subcommand:
$ singularity run hello-world_latest.sif
This command runs the %runscript section of the container as with the previous command. To run a container from a remote hub (for example, Singularity Hub), enter:
$ singularity run shub://singularityhub/hello-world
You can also use a Docker container:
$ singularity run docker://godlovedc/lolcow
Singularity pulls the Docker image from Docker Hub and converts it to a Singularity image: the Docker layers are pulled and combined into a single layer in a final Singularity image, which is then executed. Normally the %runscript section of a Singularity container is executed; however, Docker does not have this feature, so Singularity instead runs the script in the ENTRYPOINT of the Docker container.
The third way to run a Singularity container is with the exec subcommand, which lets you run a command different from those in %runscript:
$ singularity exec hello-world_latest.sif cat /etc/os-release
This command “executes” the container, which then runs the command cat /etc/os-release.
The fourth way to run a Singularity container is to use the shell subcommand, which starts a Singularity container and a shell in the container:
$ singularity shell lolcow.sif
Singularity lolcow:~> whoami
eduardo
Singularity lolcow:~> hostname
eduardo-laptop
Note that if you use the shell subcommand for a Docker container, once you exit the container, it disappears. The converted container is ephemeral by default unless you do something to save it.
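If you want to keep the converted image around, one approach (a sketch, not the only option) is to pull it explicitly, which writes a SIF file to disk that you can then shell into as often as you like:
$ singularity pull docker://godlovedc/lolcow
$ singularity shell lolcow_latest.sif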
Some important options can be used with Singularity when running containers. The first is for using GPUs. For Nvidia GPUs, add the option --nv to the command line:
$ singularity exec --nv hello-world.sif /bin/bash
For Singularity versions 3.0+, this option causes Singularity to look for Nvidia libraries on the host system and automatically bind mount them to the container.
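A quick sanity check (a sketch, assuming the Nvidia driver and nvidia-smi are installed on the host) is to run nvidia-smi inside the container with the --nv option; the host's GPUs should be listed:
$ singularity exec --nv hello-world.sif nvidia-smi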
Singularity can “bind mount” files or directories from the host system to mountpoints in the container. A best practice is not to store data inside the container, which can make it very large. Instead, it is highly recommended that you leave the data outside the container and bind mount it into the container from the host system.
With Singularity, the bind option has the form:
--bind src[:dest[:opts]]
The options (opts) are pretty simple: ro, read-only, and rw, read-write (the default); for example:
--bind /scratch:/mnt/scratch:rw
If you need to bind mount several files or directories, you can use a comma-delimited string of bind path specifications, or you can just use a number of --bind options on the command line.
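Both forms look something like the following (the paths and the mycontainer.sif image name are hypothetical):
$ singularity exec --bind /scratch:/mnt/scratch:rw,/data/sets:/data:ro mycontainer.sif ls /data
$ singularity exec --bind /scratch:/mnt/scratch:rw --bind /data/sets:/data:ro mycontainer.sif ls /data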
Running an encrypted Singularity image is the same as running an unencrypted image: You can use run, shell, or exec and either a passphrase, a PEM file, or an environment variable. You just use the appropriate option on the command line.
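As a sketch, assuming Singularity 3.4 or later (which added encryption support) and a hypothetical encrypted.sif image and key path, running with a passphrase or a PEM key might look like:
$ singularity run --passphrase encrypted.sif
$ singularity run --pem-path ~/rsa_key.pem encrypted.sif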
You can also use the --fakeroot option when running a Singularity image with the following subcommands:
- shell
- exec
- run
- instance start
- build
The --fakeroot option also works when Docker containers are used.
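For example (a sketch, assuming fakeroot is configured for your user on the host and lolcow.def is your definition file), an unprivileged build looks like:
$ singularity build --fakeroot lolcow.sif lolcow.def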
Docker
Docker is a bit different from Singularity when running images, because it has a single run subcommand with a number of options.
Before getting into the details of running a container, I should mention some preliminaries. To run a Docker container, you should be part of the docker group, which is inherently insecure because any user who can send commands to the Docker engine can escalate privilege and run root user operations. But such is the nature of Docker.
A good example of best practices for the docker run command is:
$ sudo docker run --gpus all -it --name testing \
    -u $(id -u):$(id -g) -e HOME=$HOME -e USER=$USER \
    -v $HOME:$HOME --rm nvcr.io/nvidia/digits:17.05
Running a Docker image requires root access, for which this example uses sudo.
The option --gpus, which tells Docker which GPU devices to add to the container, is followed by the option all, which tells Docker to use all of the GPUs available. Note that you don’t have to use this option if the applications in the container don’t use GPUs.
To select a specific GPU by its device name, enter:
$ sudo docker run -it --gpus device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a ...
Another option for GPUs is to call them out by device numbers, beginning with zero. A simple comma-delimited list is used to specify the devices:
$ sudo docker run -it --gpus '"device=0,2"'
This command tells the container to use the first GPU, device=0, and the third GPU, device=2. (The nested quoting is needed so the shell passes the comma-separated device list to Docker as a single option.)
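One way to check which GPUs the container actually sees (a sketch, assuming the NVIDIA Container Toolkit is installed; the nvidia/cuda image tag is just an example, so use whatever CUDA base tag is available to you) is to run nvidia-smi in a throwaway container:
$ sudo docker run --rm --gpus '"device=0,2"' nvidia/cuda:11.0-base nvidia-smi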
The -it option is really two options that are typically used when you need interactive access to the container. The -i option tells Docker you want an interactive session, and -t allocates a pseudo-TTY for the session. You almost always see these options together. If you don’t need interactive access to the Docker container, you don’t need them.
When running a Docker container, it is handy if the container has a name so you can locate that specific container easily. Otherwise, Docker assigns a generated name, making it difficult to identify a specific container, especially when several containers run from the same image. The --name testing option gives the container the name testing. You can specify whatever you like for the name; for example, you could create a name that includes $USER, $GROUP, or both. A best practice is to always assign the container a name that is distinct from the image name.
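For example, a shortened sketch of a name that embeds the username, so several users on a shared system can tell their containers apart:
$ sudo docker run -it --rm --name digits-$USER nvcr.io/nvidia/digits:17.05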
The next options in the sample command line allow you to set the username and group:
-u $(id -u):$(id -g)
The general form of the option is:
[name|uid]:[group|gid]
This best practice sets the GID and UID in the container to match those on the host.
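A quick way to confirm the mapping (a sketch, using a stock ubuntu:20.04 image as a placeholder) is to run id inside the container; the numeric UID and GID should match the output of id on the host, although the names may be missing because the user does not exist in the container’s /etc/passwd:
$ sudo docker run --rm -u $(id -u):$(id -g) ubuntu:20.04 id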
Docker also allows you to set environment variables as part of the docker run command. For the sample command, the home directory and the username are both set in the container by environment variables:
-e HOME=$HOME -e USER=$USER
This best practice for running Docker containers sets the user’s home directory and username in the container.
You can define container environment variables on the command line with the -e option:
-e VARIABLE='value'
VARIABLE is the environment variable you want to set in the container, and value is its value.
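For example (MYAPP_MODE is a made-up variable and ubuntu:20.04 is just a placeholder image), you can verify that the variable is visible inside the container:
$ sudo docker run --rm -e MYAPP_MODE='debug' ubuntu:20.04 env | grep MYAPP
MYAPP_MODE=debug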
The -v option lets you bind mount files or directories from the host system into the container. As mentioned in the previous article, you should really never put data into your container unless you are done with the container and will be archiving it. The recommended approach is to bind mount the data from the host system to the container:
-v (host file/directory):(container mount point):(option)
The first part after the switch is the file or directory on the host, followed by a colon and the mountpoint in the container. The :(option) is an optional field to specify details about the mount, such as ro (read-only) or rw (read-write).
In the sample docker run command, the user’s home directory on the host is mounted in the container:
-v $HOME:$HOME
A definite best practice is to mount the directories you need in the container. To make things easy, I always mount my /home directory. If your home directory doesn’t contain the data you need, then a second mount option might be needed to mount the data directory into the container, for example:
-v /data/testcase1:/data
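Putting the pieces together, a sketch with both the home directory and a read-only data directory mounted (the data path is hypothetical and ubuntu:20.04 is a placeholder image) might look like:
$ sudo docker run -it --rm -u $(id -u):$(id -g) \
    -v $HOME:$HOME -v /data/testcase1:/data:ro ubuntu:20.04 ls /data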
The last best practice is to use the --rm option, which automatically removes the container when it exits. It does not remove the image on which the container is based; remember that containers are merely instances of an image. If you don’t use this option, stopped containers will hang around, and you will have to remove them yourself.
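If you do forget --rm, standard Docker housekeeping commands let you list the leftover containers and clean them up afterward:
$ sudo docker ps -a
$ sudo docker rm testing
$ sudo docker container prune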