Tuning SSD RAID for optimal performance

Flash Stack

RAID 1

The commonly used RAID 1 provides resiliency through mirroring. Read access benefits because the two SSDs can be accessed simultaneously. Both SWR and HWR confirm this assumption, achieving around 107,000 and 120,000 random read IOPS, respectively (4KB block size) – more than 1.4 times the performance of a single SSD. The RAID array should execute random writes as fast as a single SSD; however, the tests reveal performance penalties of about 30 percent for both SWR and HWR in random writes and 50/50 mixed workloads.
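
The article does not show how its SWR arrays were built, but assuming Linux md software RAID, such a mirror could be assembled with mdadm along the following lines; this is a minimal sketch, and the device names /dev/md0, /dev/sdb, and /dev/sdc are placeholders, not the test setup:

# Sketch: mirror two SSDs as software RAID 1 (placeholder device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Watch the initial synchronization of the mirror
cat /proc/mdstat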

The latency test for the SWR shows some pleasing results: in all three workloads, the latencies are only minimally above those of a single SSD. With HWR, the RAID controller adds latency – a constant overhead of approximately 0.04ms across all workloads (Figure 1).

Figure 1: The overhead for latency (LAT) in HWR is about 0.04ms. In RAID 1 with SWR, the latencies increase minimally compared with a single SSD.
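
The benchmark invocation is not given in the article, but numbers like these can be reproduced with fio; the following sketch runs a 4KB random read test against the array (the device path, queue depth, and job count are assumptions):

# Caution: raw device test – destroys any data on /dev/md0
fio --name=randread --filename=/dev/md0 --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
# For the 50/50 mixed workload, use: --rw=randrw --rwmixread=50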

RAID 5

RAID level 5 is characterized by its use of parity data, which adds overhead to write operations: every small random write triggers a read-modify-write cycle – the old data and old parity are read, then the new data and new parity are written – so one logical write costs four physical I/Os. This overhead is clearly reflected in the IOPS results. A RAID 5 with three SSDs achieved 4KB random write IOPS on a par with a single SSD. Anyone who believes that simply adding another SSD will significantly increase RAID 5 write performance is wrong. Intel analyzed this situation in detail in a whitepaper [3]: only with eight SSDs in RAID 5 do write operations achieve double the IOPS of three SSDs. With three SSDs, HWR and SWR are on par; the setup with four SSDs in RAID 5 is more favorable for SWR. Mixed and write-only workloads at 4KB reveal a performance of approximately 4,000 IOPS for the HWR. Figure 2 summarizes the results for a single SSD and for HWR and SWR with three and four SSDs in RAID 5.

Figure 2: Note the small IOPS numbers for random write in RAID 5. When comparing HWR and SWR, the workload plays a role.
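
For reference, a four-SSD software RAID 5 like the one tested could be created as follows; this is a sketch with placeholder device names, and the 64KB chunk size is an assumption, not a value from the article:

# Sketch: four SSDs in software RAID 5 (placeholder devices, assumed chunk size)
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 /dev/sd[b-e]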

The more read-heavy the workload becomes, the less noticeable the parity calculations are. With four SSDs in a RAID 5 setup, the HWR achieves almost 180,000 IOPS – more than double the performance of a single SSD. SWR does not quite reach this number, just exceeding the 150,000 IOPS mark. The latency test illustrates the difference between read and write access: in the HWR, latency grows by about 0.12ms with each step from read-only, through the 65/35 mix, to write-only; in the SWR, the increase is about 0.10ms per workload.

RAID 5 also impresses with its read throughput for 1,024KB blocks. The HWR with four SSDs in RAID 5 handles more than 1.2GBps, whereas the SWR lags slightly behind at 1.1GBps. Write throughput is limited by the 80GB SSD model, which is rated at 100MBps according to the specs. The four SSDs in RAID 5 achieve around 260MBps in the HWR tests and 225MBps in the SWR tests.
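
A matching fio throughput test with 1,024KB blocks might look like this (again a sketch; the device path and queue depth are assumptions):

# Sequential read throughput with 1,024KB blocks (destroys data on /dev/md0)
fio --name=seqread --filename=/dev/md0 --direct=1 --ioengine=libaio \
    --rw=read --bs=1024k --iodepth=16 --runtime=60 --time_based
# Substitute --rw=write to measure write throughput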

RAID 10

As the name implies, RAID 10 is a combination of RAID 1 and RAID 0 (sometimes also referred to as RAID 1+0). In setups with four SSDs, a direct performance comparison with RAID 5 makes sense. Only 50 percent of the raw capacity is usable in RAID 10, but with many random writes, it shows major benefits compared with RAID 5. Figure 3 clearly shows the weakness of RAID 5 in terms of write IOPS; likewise, the increased latency from RAID 5's parity calculations is noticeable. RAID 10, by contrast, achieves latencies on a par with those of a single SSD. If you are not worried about write performance and latencies, you can safely stick with RAID 5.

Figure 3: The performance of RAID 5 and RAID 10 depends strongly on the intended use. The IOPS figures for random read are significantly higher in RAID 5. The more random writes, the better RAID 10 fares. But latencies on a par with a single SSD are only achieved by RAID 10.
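
A four-SSD RAID 10 for such a comparison could be set up as follows (a sketch with placeholder device names; mdadm's default near layout is assumed):

# Sketch: four SSDs in software RAID 10 (placeholder devices)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]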
