ClusterHAT
First Boot
Booting is pretty simple: You plug in the HDMI cable to the monitor and then plug in the power cable to boot the RPi3 master node, which should boot into the Pixel desktop. The first time I booted the ClusterHAT, I didn't have it plugged in to the network, because my router acts as a DHCP server and assigns IP addresses to the compute nodes, which can sometimes cause problems.
Once the master node has booted, it is a good idea to check that the node looks correct: in particular, make sure the two filesystems are NFS exported and that gfortran, mpich, and pdsh are working.
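A few quick commands confirm this; the following is a minimal check (the tools match the packages installed on the ClusterHAT image):

pi@controller:~ $ showmount -e localhost
pi@controller:~ $ which gfortran mpicc mpiexec pdsh
pi@controller:~ $ gfortran --version
pi@controller:~ $ mpiexec --version

The first command lists the NFS exports; the rest verify that the compilers and the MPI launcher are on the PATH and report their versions.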
The ClusterHAT images come with a very useful tool to start and stop the compute nodes. The clusterhat tool is a simple Bash script that uses gpio (General Purpose Input/Output) pin commands to control the power to the compute nodes. It lets you turn nodes on and off individually, in groups, or all at once, inserting a two-second delay between the command for each node. For example, to turn on all of the compute nodes, you run:
pi@controller:~ $ clusterhat on all
Turning on P1
Turning on P2
Turning on P3
Turning on P4
pi@controller:~ $
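Under the hood, the idea is simply to toggle one GPIO pin per node with a pause in between. The following is only a rough sketch of that approach, not the actual ClusterHAT script, and the pin numbers are placeholders:

#!/bin/bash
# Sketch only: power on the compute nodes one at a time.
PINS=(5 6 13 19)              # hypothetical GPIO pins, one per Pi Zero
for pin in "${PINS[@]}"; do
    gpio -g mode "$pin" out   # configure the pin as an output
    gpio -g write "$pin" 1    # drive it high to power on that node
    sleep 2                   # two-second delay between nodes
done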
People have also taken the spirit of the clusterhat tool and created something a little different. For example, clusterctl lets you turn the compute nodes on and off, but it also lets you check the status of a node, cut its power, and even run a command across all of the compute nodes.
The first time, it’s probably a good idea to boot the cluster nodes one at a time. For example, to boot the first node, run:
pi@controller:~ $ clusterhat on p1
Turning on P1
pi@controller:~ $
Booting nodes one at a time allows each to be checked to make sure everything is installed and has booted properly.
Remember that the master node NFS-exports two filesystems to the compute nodes. Because the Pi Zeros use a bridged network over USB 2.0, network performance is not expected to be very good, so the filesystems will take a little longer to mount. One suggestion is to ping the node (ping p1.local) until it responds. If the filesystems don't mount for some reason, you can use the clusterhat tool to turn the node off and then on again.
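For example, a short loop along these lines waits for p1 to answer and then lists its NFS mounts (this assumes passwordless SSH to the pi user on the compute nodes):

pi@controller:~ $ until ping -c 1 -W 1 p1.local > /dev/null; do sleep 2; done
pi@controller:~ $ ssh pi@p1.local mount -t nfs,nfs4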
After testing each node independently and ensuring that everything works correctly, you can then reboot all of the nodes at once. Now you can test the cluster by running some MPI code on it.
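Before running a real application, a quick smoke test shows whether MPI can launch processes on every node. The machinefile name and host names below are examples; adjust them to match your setup:

pi@controller:~ $ cat machines
controller:4
p1.local
p2.local
p3.local
p4.local
pi@controller:~ $ mpiexec -f machines -n 8 hostname

If each host name appears in the output, MPI can reach all of the nodes.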
MPI Code Example
I'm not going to cover "benchmarks" on the ClusterHAT, but it is important to illustrate some real MPI code running on the cluster. Rather than run HPL, the high-performance Linpack benchmark, and argue over tuning options to get the "best" performance, I find it’s better to run the NAS Parallel Benchmarks (NPB), which are fairly simple benchmarks that cover a range of algorithms, primarily focused on CFD (computational fluid dynamics). They stress the processor, memory bandwidth, and network bandwidth; are easy to build and compile; and come in several flavors, including MPI. Also, different problem sizes or “classes” scale from very small to very large systems.
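Building the MPI flavor of NPB is straightforward. The sketch below assumes an NPB 3.x MPI source tree, in which you copy a make.def template into config/ (adjusting the compiler settings if necessary) and then build each benchmark for a given class and process count:

# Inside the NPB-MPI source tree (the directory name depends on the NPB version)
cd NPB3.3-MPI
cp config/make.def.template config/make.def   # set MPIF77/MPICC here if needed
make cg CLASS=A NPROCS=4                      # produces bin/cg.A.4
make cg CLASS=A NPROCS=8                      # produces bin/cg.A.8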
Because the ClusterHAT is a small cluster, I used only the class A test. In the interest of brevity, I only used the cg (conjugate gradient, irregular memory access and communication), ep (embarrassingly parallel), is (integer sort, random memory access), and lu (lower-upper Gauss-Seidel solver) applications with four and eight processors. The four-processor runs included two cases: (1) Pi Zeros only and (2) RPi3 only. The eight-processor case used the RPi3 plus the Pi Zeros (a total of eight cores).
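Each case is launched by pointing mpiexec at a machinefile that describes the node set; the machinefile names below are examples:

# Four MPI processes on the RPi3 only (machines_rpi3 contains controller:4)
pi@controller:~ $ mpiexec -f machines_rpi3 -n 4 bin/cg.A.4
# Four MPI processes, one per Pi Zero (machines_zeros lists p1.local through p4.local)
pi@controller:~ $ mpiexec -f machines_zeros -n 4 bin/cg.A.4
# Eight MPI processes across the RPi3 and all four Pi Zeros
pi@controller:~ $ mpiexec -f machines_all -n 8 bin/cg.A.8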
For all four applications, performance, measured in millions of operations per second (MOPS), was recorded from the output for the entire MPI group and for each process in the MPI group. These results are tabulated in Table 1.
Table 1: NPB Results
Test | Class | No. of Cores | Total MOPS (RPi3 Only) | MOPS/Process (RPi3 Only) | Total MOPS (Pi Zeros Only) | MOPS/Process (Pi Zeros Only) | Total MOPS (Pi Zeros + RPi3) | MOPS/Process (Pi Zeros + RPi3) |
CG | A | 4 | 198.98 | 49.75 | 38.77 | 9.69 | — | — |
CG | A | 8 | — | — | — | — | 71.98 | 9 |
EP | A | 4 | 25.8 | 6.45 | 6.93 | 1.73 | — | — |
EP | A | 8 | — | — | — | — | 13.92 | 1.74 |
IS | A | 4 | 43.85 | 10.96 | 3.99 | 1 | — | — |
IS | A | 8 | — | — | — | — | 6.71 | 0.84 |
LU | A | 4 | 425.36 | 106.34 | 197.88 | 49.47 | — | — |
LU | A | 8 | — | — | — | — | 396.22 | 49.53 |