MPI Apps with Singularity and Docker
Running the Containers
In this section, I show how to run containerized MPI applications with both Singularity and Docker.
Singularity
Rather than show the output from the MPI application for each command-line option, Listing 7 shows sample output from a single Singularity container run that is representative of all of them.
Listing 7: Singularity Container Run
POISSON_MPI - Master process:
  FORTRAN90 version
  A program to solve the Poisson equation.
  The MPI message passing library is used.

  The number of interior X grid lines is 9
  The number of interior Y grid lines is 9
  The number of processes is 2

INIT_BAND - Master Cartesian process:
  The X grid spacing is 0.100000
  The Y grid spacing is 0.100000

INIT_BAND - Master Cartesian process:
  Max norm of boundary values = 0.841471

POISSON_MPI - Master Cartesian process:
  Max norm of right hand side F at interior nodes = 1.17334

Step    ||U||       ||Unew||    ||Unew-U||

   1  0.403397    0.497847    0.144939
   2  0.570316    0.604994    0.631442E-01
   3  0.634317    0.651963    0.422323E-01
   4  0.667575    0.678126    0.308531E-01
   5  0.687708    0.694657    0.230252E-01
   6  0.701077    0.705958    0.185730E-01
   7  0.710522    0.714112    0.154310E-01
   8  0.717499    0.720232    0.131014E-01
   9  0.722829    0.724966    0.111636E-01
  10  0.727009    0.728716    0.955921E-02
  11  0.730356    0.731745    0.825889E-02
  12  0.733083    0.734229    0.726876E-02
  13  0.735336    0.736293    0.641230E-02
  14  0.737221    0.738029    0.574042E-02
  15  0.738814    0.739502    0.514485E-02
  16  0.740172    0.740762    0.461459E-02
  17  0.741339    0.741849    0.414240E-02
  18  0.742348    0.742791    0.372162E-02
  19  0.743225    0.743613    0.334626E-02
  20  0.743993    0.744333    0.301099E-02
  21  0.744667    0.744967    0.271109E-02
  22  0.745261    0.745526    0.244257E-02
  23  0.745787    0.746022    0.220180E-02
  24  0.746254    0.746463    0.198567E-02
  25  0.746669    0.746855    0.179151E-02
  26  0.747039    0.747205    0.161684E-02
  27  0.747369    0.747518    0.145969E-02
  28  0.747665    0.747799    0.131971E-02
  29  0.747930    0.748050    0.119370E-02
  30  0.748168    0.748276    0.107971E-02
  31  0.748382    0.748479    0.976622E-03

POISSON_MPI - Master process:
  The iteration has converged

POISSON_MPI:
  Normal end of execution.
You can execute an MPI application in a Singularity container in two ways; both run the application, which is the ultimate goal. Note that neither case requires a privileged user (e.g., root).
The first way to run the MPI code in the Singularity container is:
$ singularity exec poisson.sif mpirun --mca mpi_cuda_support 0 -n 2 /usr/local/bin/poisson_mpi
In this case, any support for CUDA is turned off explicitly with the --mca mpi_cuda_support 0 option, which is also used in the next method.
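As a quick sanity check (not part of the original workflow), and assuming the container uses Open MPI as the --mca syntax suggests, you can ask ompi_info inside the container whether its MPI build includes CUDA support at all:

$ singularity exec poisson.sif ompi_info --parsable --all | grep cuda_support

If the reported value is false, the --mca mpi_cuda_support 0 option is redundant but harmless; if true, the option explicitly disables the CUDA code paths at run time. The exact parameter name can vary between Open MPI versions, which is why the grep pattern is kept loose.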
The second way to execute MPI code in a Singularity container is:
$ mpirun -verbose --mca mpi_cuda_support 0 -np 2 singularity exec poisson.sif /usr/local/bin/poisson_mpi
Running the MPI code with mpirun uses the Singularity container as an application, just like any other application.
If you examine the command a little more closely, you will notice that mpirun is used “outside” the container and runs in the host operating system (OS). Make sure (1) the mpirun command is in $PATH and (2) the MPI version on the host OS is compatible with the MPI library in the container. Because mpirun runs on the host, MPI initialization happens in the host OS, not in the container. The command line looks just as it would for any other MPI application.
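A simple way to check that compatibility (a hedged sketch, not from the original article) is to compare the MPI version reported on the host with the one reported inside the container:

$ mpirun --version
$ singularity exec poisson.sif mpirun --version

In practice, keeping the two installations in the same release series avoids most launch surprises.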
Overall, this approach could make the container more portable: As long as the MPI version on the host is compatible with the MPI library in the container, you don’t have to worry about exactly which version of the MPI implementation is inside the container. It all just works.
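The same pattern extends to multiple nodes. The sketch below is hypothetical (the hostfile name, node names, and slot counts are assumptions, not from the original article) and presumes that poisson.sif sits at the same path on every node, for example on a shared filesystem:

$ cat hosts
node01 slots=2
node02 slots=2
$ mpirun --hostfile hosts --mca mpi_cuda_support 0 -np 4 singularity exec poisson.sif /usr/local/bin/poisson_mpi

Because mpirun still runs on the host, Singularity simply launches one containerized process per MPI rank on whichever node the rank lands.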