12%
17.05.2017
improve application performance and the ability to run larger problems. The great thing about HDF5 is that, behind the scenes, it is performing MPI-IO. A great deal of time has been spent designing
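The practical upshot is that a program only has to ask HDF5 for the MPI-IO file driver; the library then issues the MPI-IO calls itself. A minimal sketch in C (not code from the article; the file name is a placeholder) looks like this:

#include <mpi.h>
#include <hdf5.h>

int main(int argc, char *argv[])
{
   MPI_Init(&argc, &argv);

   /* File access property list that selects the MPI-IO driver */
   hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
   H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

   /* All ranks collectively create the same file; HDF5 performs
      the underlying MPI-IO operations behind the scenes */
   hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

   /* ... create datasets and write collectively here ... */

   H5Fclose(file);
   H5Pclose(fapl);
   MPI_Finalize();
   return 0;
}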
12%
21.08.2012
Listing 6: Torque Job Script
[laytonjb@test1 TEST]$ more pbs-test_001
#!/bin/bash
###
### Sample script for running MPI example for computing PI (Fortran 90 code)
###
### Jeff Layton
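The excerpt stops at the header comments. A hedged sketch of how the remainder of such a Torque script typically looks is shown below; the job name, resource requests, and executable name are placeholders, not the article's actual listing:

#!/bin/bash
#PBS -N mpi_pi_fortran90
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:10:00
#PBS -j oe

cd $PBS_O_WORKDIR
NPROCS=$(wc -l < $PBS_NODEFILE)
mpirun -np $NPROCS -machinefile $PBS_NODEFILE ./mpi_pi_fortran90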
12%
10.09.2013
domains. Assuming that your application is scalable or that you might want to tackle larger data sets, what are the options to move beyond OpenMP? In a single word, MPI (okay, it is an acronym). MPI
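For readers coming from OpenMP, the basic shape of an MPI program is worth seeing once. The C sketch below is a generic illustration, not code from the article: every process starts the MPI runtime, learns its rank and the total process count, does its share of the work, and shuts the runtime down.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, nprocs;

   MPI_Init(&argc, &argv);                 /* start the MPI runtime */
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* how many processes total? */

   printf("Hello from rank %d of %d\n", rank, nprocs);

   MPI_Finalize();
   return 0;
}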
11%
01.08.2012
mpi/mpich2/1.5b1 modulefile
#%Module1.0#####################################################################
##
## modules mpi/mpich2/1.5b1
##
## modulefiles/mpi/mpich2/1.5b1. Written by Jeff
11%
01.08.2012
mpi/mpich2/1.5b1-open64-5.0 modulefile
#%Module1.0#####################################################################
##
## modules mpi/mpich2/1.5b1-open64-5.0
##
## modulefiles/mpi/mpich2/1.5b1
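Both excerpts show only the header comments of the modulefiles. The body that typically follows such a header is sketched below; the installation prefix is a placeholder assumption, not the path from the article:

proc ModulesHelp { } {
   puts stderr "Sets up the environment for MPICH2 1.5b1"
}
module-whatis "MPICH2 1.5b1 MPI library"

# Placeholder installation prefix
set topdir /opt/mpich2/1.5b1

prepend-path PATH            $topdir/bin
prepend-path LD_LIBRARY_PATH $topdir/lib
prepend-path MANPATH         $topdir/share/man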
11%
06.10.2023
Deprecated the ADIOS package and introduced ADIOS2.
Introduced two OpenMPI variants, one with PMIx support and one without.
The OpenMPI variant with PMIx support is used in the Slurm-based
11%
06.11.2012
options, and you notice that some simple options are a choice of MPI and BLAS libraries. Of course, you also need to choose a compiler. The task seems simple enough until you lay out the possible choices
11%
18.07.2012
In the first two Warewulf articles, I finished the configuration of Warewulf so that I could run applications and do some basic administration on the cluster. Although there are a plethora of MPI
11%
13.06.2018
.5 laptop, these examples won’t involve any GPUs.
Example 1
The first example is very simple: just a base OS along with the GCC compilers (GCC, G++, and GFortran). The HPCCM recipe is basically trivial
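As a concrete illustration, a recipe along those lines is only a couple of lines of Python; the base image below is an assumption, and the article's exact recipe may differ:

# HPC Container Maker (hpccm) recipe: base OS plus the GNU compilers
Stage0 += baseimage(image='ubuntu:18.04')   # assumed base image
Stage0 += gnu()                             # installs gcc, g++, and gfortran

Running hpccm --recipe recipe.py --format docker turns the recipe into a Dockerfile; --format singularity produces a Singularity definition file instead.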
11%
30.01.2013
as well), but you might also have users who need previous versions of these packages. This problem is compounded by having multiple compilers and multiple MPI libraries, resulting in a large number