17%
21.04.2016
to Greg about his background and some of his projects in general and about his latest initiative, Singularity, in particular. (Also see the article on Singularity.)
Jeff Layton: Hi Greg, tell me a bit
16%
17.05.2017
improve application performance and the ability to run larger problems. The great thing about HDF5 is that, behind the scenes, it is performing MPI-IO. A great deal of time has been spent designing
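As a rough sketch of what that looks like from Python (assuming an h5py build with parallel HDF5 support and mpi4py installed; the file and dataset names here are purely illustrative), each rank writes its own slice of one shared dataset and h5py hands the collective I/O to MPI-IO underneath:

from mpi4py import MPI
import numpy as np
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nranks = comm.Get_size()

n_local = 1000  # rows owned by each rank (illustrative size)

# Open the file with the MPI-IO driver so all ranks share one file
with h5py.File("parallel_example.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("data", (nranks * n_local,), dtype="f8")
    start = rank * n_local
    # Each rank fills only its own contiguous slice of the dataset
    dset[start:start + n_local] = np.full(n_local, rank, dtype="f8")

Launched with something like mpirun -np 4 python parallel_write.py, all ranks write into the same HDF5 file without a single explicit MPI-IO call in the user code.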
15%
14.10.2019
of classic HPC tools, such as MPI for Python (mpi4py), a Python binding to the Message Passing Interface (MPI). Tools such as Dask focus on keeping code Pythonic, and other tools support the best performance
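As a small illustration of how Pythonic the mpi4py side can look (the numbers and the split of work are made up for the example), each rank computes a partial sum and comm.reduce() combines them on rank 0, much like MPI_Reduce in C but with plain Python objects:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank sums its own chunk of the overall range
partial = sum(range(rank * 100, (rank + 1) * 100))

# Collective reduction; only rank 0 receives the combined result
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"global sum across {comm.Get_size()} ranks: {total}")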
15%
10.09.2013
domains. Assuming that your application is scalable or that you might want to tackle larger data sets, what are the options to move beyond OpenMP? In a single word, MPI (okay, it is an acronym). MPI
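As a rough picture of that first step beyond a single shared-memory node, the usual starting point is the MPI "hello" program, sketched here with mpi4py only to stay consistent with the other examples (the C and Fortran versions look much the same):

from mpi4py import MPI

comm = MPI.COMM_WORLD

# Each process reports its rank, the total rank count, and its host
print(f"hello from rank {comm.Get_rank()} of {comm.Get_size()} "
      f"on {MPI.Get_processor_name()}")

Run with, for example, mpirun -np 8 python hello_mpi.py across two nodes, and ranks report from more than one hostname, which is exactly what OpenMP alone cannot do.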
15%
06.10.2023
deprecated the ADIOS package and introduced ADIOS2.
Introduced two OpenMPI variants: one with PMIX support and one without.
The OpenMPI variant with PMIX support is used in the Slurm-based
15%
06.11.2012
options, and you notice that some simple options are a choice of MPI and BLAS libraries. Of course, you also need to choose a compiler. The task seems simple enough until you lay out the possible choices
14%
12.09.2018
, it offers the possibility of a shared filesystem using SSH, which can help with security because only port 22 needs to be open (which you need for MPI application communications, anyway). SSHFS also uses SFTP
14%
18.07.2012
In the first two Warewulf articles, I finished the configuration of Warewulf so that I could run applications and do some basic administration on the cluster. Although there are a plethora of MPI
14%
16.07.2015
Programmers use a number of compilers, libraries, MPI libraries/tools, and other tools to write applications. For example, someone might code with OpenACC, targeting GPUs and Fortran, whereas another person
14%
08.12.2020
service (DVS)
Nonuniform memory access (NUMA) properties
Network topology
Message passing interface (MPI) communication statistics (currently you have to use Intel MPI or MVAPICH2)
Power