Warewulf – Part 4

Listing 6: Torque Job Script

[laytonjb@test1 TEST]$ more pbs-test_001
#!/bin/bash
###
### Sample script for running an MPI example that computes pi (Fortran 90 code)
###
### Jeff Layton
### 8/5/2012

### Set the job name
#PBS -N mpi_pi_fortran90

### Run in the queue named "batch"
#PBS -q batch

### Specify the number of CPUs for your job. This example will allocate 3 cores
### on 1 node.
#PBS -l nodes=1:ppn=3

### Tell PBS the anticipated run time for your job, where walltime=HH:MM:SS
#PBS -l walltime=0:10:00

### Load needed modules here
. /etc/profile.d/modules.sh
module load compilers/open64/5.0
module load mpi/mpich2/1.5b1-open64-5.0

### Switch to the working directory; by default, TORQUE launches processes
### from your home directory.
cd $PBS_O_WORKDIR
echo Working directory is $PBS_O_WORKDIR

# Calculate the number of processors allocated to this run.
NPROCS=`wc -l < $PBS_NODEFILE`

# Calculate the number of nodes allocated.
NNODES=`uniq $PBS_NODEFILE | wc -l`

### Display the job context
echo "Running on host `hostname`"
echo "Start time is `date`"
echo "Directory is `pwd`"
echo "Using ${NPROCS} processors across ${NNODES} nodes"

mpirun -np 3 ./mpi_pi < file1 > output.mpi_pi

echo "End time is `date`"
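
For reference, a job script like this is submitted to TORQUE with qsub from the directory that contains the mpi_pi binary and its file1 input, and its progress can be checked with qstat. The short session below is only a sketch; the prompt, user name, and file names simply follow the listing above, and the job ID that qsub prints will differ on your system:

[laytonjb@test1 TEST]$ qsub pbs-test_001      # submit the job; qsub prints the job ID
[laytonjb@test1 TEST]$ qstat -u laytonjb      # check the job's state (Q = queued, R = running)
[laytonjb@test1 TEST]$ more output.mpi_pi     # view the program output after the job completes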

Related content

  • openlava – Hot Resource Manager

    HPC systems are designed to be shared by many users. One way to share them is through a software tool called a resource manager. Openlava is an open source version of the commercial scheduler LSF. It shares the robustness of LSF while being freely available, very scalable, and easy to install and customize.

  • Warewulf 4 – Python and Jupyter Notebooks

    Interactive HPC applications written in languages such as Python play a very important part today in high-performance computing. We look at how to run Python and Jupyter notebooks on a Warewulf 4 cluster.

  • Warewulf 4 – GPUs

    Install NVIDIA GPU drivers on the head and compute nodes.

  • Warewulf Cluster Manager – Completing the Environment

    Installing and configuring Warewulf on the master node and booting the compute nodes creates a basic cluster installation; however, a little more configuration of the master node remains, and a few other tools must be installed and configured for the Warewulf cluster to become truly useful for running HPC applications.

  • Building an HPC cluster with Warewulf 4

    Warewulf installed with a compute node is not really an HPC cluster; you need to ensure precise timekeeping and add a resource manager.