Desktop Supercomputers: Past, Present, and Future
Desktop supercomputers put compute power directly in the hands of individual users, who can run applications locally whenever they choose.
Rethinking RAID (on Linux)
Configure redundant storage arrays to boost overall data access throughput while maintaining fault tolerance.
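As a taste of the approach, here is a minimal sketch of building a striped-and-mirrored array with mdadm; the device names are placeholders, the exact level and layout depend on your workload, and these commands destroy any data on the listed disks:

```shell
# Sketch: create a RAID-10 array, which stripes for throughput and
# mirrors for fault tolerance. /dev/sdb..sde are assumed spare disks.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md0        # put a filesystem on the new array
sudo mount /dev/md0 /mnt/array
cat /proc/mdstat               # confirm the array is active and syncing
```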
How Linux and Beowulf Drove Desktop Supercomputing
Open source software and tools, the Beowulf Project, and communities changed the face of high-performance computing.
A Brief History of Supercomputers
This first article of a series looks at the forces that have driven desktop supercomputing, beginning with the history of PC and supercomputing processors through the 1990s into the early 2000s.
Remora – Resource Monitoring for Users
Remora combines profiling and system monitoring to collect per-node and per-job resource utilization data, helping you understand how an application performs on the system.
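Typical use is to wrap the normal application launch line; a rough sketch, where the sampling interval, MPI launcher, and application path are all assumptions for your site:

```shell
# Sketch: collect resource data for a job by prefixing the launch
# command with remora. REMORA_PERIOD sets the sampling interval.
export REMORA_PERIOD=10          # sample every 10 seconds (assumed setting)
remora mpirun -n 64 ./my_app     # results land in a remora_* directory
```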
mpi4py – High-Performance Distributed Python
Tap into the power of MPI to run distributed Python code on your laptop at scale.
Why Good Applications Don’t Scale
You have parallelized your serial application, but as you use more cores you are not seeing any improvement in performance. What gives?
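One classic culprit, sketched here for intuition, is Amdahl's law: any serial fraction of the runtime caps the achievable speedup no matter how many cores you add (whether that is the limit in your particular application is a separate question):

```python
# Sketch: Amdahl's law. If a fraction s of the runtime is serial,
# the speedup on p cores is 1 / (s + (1 - s) / p), capped at 1/s.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with only 5% serial work, 64 cores deliver well under 64x,
# and no core count can ever beat 1/0.05 = 20x.
print(amdahl_speedup(0.05, 64))   # ~15.4x
```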
SMART Devices
Most storage devices have SMART capability, but can it help you predict failure? We look at ways to take advantage of this built-in monitoring technology with the smartctl utility from the Linux smartmontools package.
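A quick sketch of the basic queries, assuming /dev/sda is the drive you care about (most smartctl operations require root):

```shell
# Sketch: inspecting a drive's SMART data with smartctl.
sudo smartctl -i /dev/sda        # identity info and SMART support
sudo smartctl -H /dev/sda        # overall health self-assessment
sudo smartctl -A /dev/sda        # attribute table (reallocated sectors, etc.)
sudo smartctl -t short /dev/sda  # start a short offline self-test
```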
Caching with CacheFS
For read-heavy workloads, CacheFS is a great caching mechanism for NFS and AFS.
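On a modern Linux client this is implemented through FS-Cache; a minimal setup sketch, where the server name and paths are placeholders and cachefilesd is assumed to be installed:

```shell
# Sketch: route NFS reads through a local disk cache with FS-Cache.
sudo systemctl enable --now cachefilesd   # start the cache backend daemon
# The 'fsc' mount option tells the NFS client to use the cache:
sudo mount -t nfs -o fsc server:/export /mnt/data
```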