Introducing parity declustering RAID
Silverware
Fault tolerance has been at the forefront of data protection since the dawn of computing. To this day, admins still struggle to find efficient and reliable methods of keeping stored data consistent, whether locally or remotely on a server (or in a cloud storage pool), and they continue to search for the best way to recover from a failure, however disastrous that failure might be.
Some of the methods still in use are considered ancient by today's standards, but why replace something that continues to work? One such technology is RAID. Initially, the acronym stood for redundant array of inexpensive disks; it was later redefined as redundant array of independent disks.
The idea of RAID was first conceived in 1987. The primary goal was to combine multiple drives and present them to the host as a single pool of storage. Depending on how the drives were structured, you also gained a performance or redundancy benefit. (See the box titled "RAID Recap.")
RAID Recap
RAID allows you to pool multiple drives and present them as a single volume. Depending on how the drives are organized, you unlock certain advantages. Performance can improve dramatically when data is striped and balanced across multiple drives, removing the bottleneck of funneling all read and write operations through a single disk. The RAID type also determines how the storage capacity can grow.
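To see how striping removes the single-disk bottleneck, here is a minimal Python sketch of the block mapping a RAID 0-style layout performs. The drive count, chunk size, and the locate() helper are hypothetical values chosen for illustration; real implementations such as the Linux md driver differ in detail.

# Minimal sketch: how striping maps logical blocks onto drives.
# NUM_DRIVES and CHUNK_BLOCKS are hypothetical values for illustration.

NUM_DRIVES = 4      # drives in the array
CHUNK_BLOCKS = 128  # blocks per chunk (the stripe unit)

def locate(logical_block):
    """Return (drive index, physical block) for a logical block."""
    chunk, offset = divmod(logical_block, CHUNK_BLOCKS)
    drive = chunk % NUM_DRIVES    # consecutive chunks rotate across drives
    stripe = chunk // NUM_DRIVES  # full stripes completed so far
    return drive, stripe * CHUNK_BLOCKS + offset

# Large sequential I/O touches every drive, so all four spindles
# service the request in parallel instead of just one.
for block in (0, 128, 256, 384, 512):
    print(block, locate(block))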
Most RAID algorithms offer some form of redundancy that can withstand a finite number of drive failures. When a drive fails, the array continues to operate in a degraded mode until you recover it by rebuilding the failed drive's data onto a spare.
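To make that redundancy concrete, the following minimal sketch shows XOR parity, the scheme behind RAID 5-style arrays: the parity block is the XOR of all the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. The xor_blocks() function and the sample data are hypothetical, for illustration only.

# Minimal sketch of XOR parity. The parity block is the XOR of the
# data blocks, so any one missing block equals the XOR of the rest.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # stored on the parity drive

# Simulate losing drive 1: rebuild its block from the rest plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]            # the failed block is recovered

This is the property a rebuild exploits: reading every surviving member of the stripe and XORing the blocks together reconstructs the missing data onto the spare.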
Most traditional RAID implementations use some form of striping or mirroring.