Tuning SSD RAID for optimal performance
Flash Stack
Effective Tuning Measures
When using hardware RAID controllers for SSD RAID, the following recommendation applies: Use the latest possible generation of RAID controller! Firmware optimizations for SSDs are significantly better in newer RAID controllers than in older models. The Avago MegaRAID controllers (e.g., the new 12Gb SAS controllers), for example, now integrate a feature (FastPath) for optimizing SSD RAID performance out of the box.
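If you are unsure which firmware a controller is running, the storcli utility that Avago/LSI ships for MegaRAID controllers can report and update it. The following is a minimal sketch; the controller index (/c0) and the firmware file name are placeholders, and the exact syntax can vary between storcli versions:

# Query controller 0, including model and firmware version
storcli64 /c0 show

# Flash a newer firmware package downloaded from the vendor
storcli64 /c0 download file=mr_fw_package.rom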
To improve the performance of hard disk drive (HDD) RAID arrays, hardware RAID controllers were equipped with a cache at a very early stage. Current controllers have 512MB to 1GB of cache memory for use in both write access (write cache) and read access (read ahead). For SSD RAID, however, you will want to steer clear of this cache memory, for the reasons discussed below.
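To see how much cache a given controller has on board and how each virtual drive currently uses it, you can query the controller; again a sketch with storcli, assuming controller /c0:

# Show controller properties, including on-board memory size
storcli64 /c0 show all

# Show the cache settings (read ahead, write policy) of all virtual drives
storcli64 /c0/vall show all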
Disabling Write Cache
With HDD RAID, write caching (write-back policy) offers a noticeable performance boost. The RAID controller cache can optimally buffer small and random writes in particular before writing them out to the comparatively slow hard disks. The potential write performance thus increases from a few hundred to several thousand IOPS. Additionally, latency drops noticeably – from 5-10ms to less than 1ms. In RAID 5 and 6 with HDDs, the write cache offers another advantage: Without cache memory, the read operation required for parity computation (read-modify-write) would slow down the system further. Thanks to the cache, these operations can be staggered over time. To ensure that no data is lost during a power failure, a write cache must be protected with a battery backup unit (BBU) or a flash protection module (e.g., LSI CacheVault or Adaptec ZMCP).
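On MegaRAID controllers, the write policy is set per virtual drive. As a sketch (virtual drive /v0 on controller /c0 assumed), write-back caching for an HDD array could be enabled like this:

# Enable write-back caching on virtual drive 0; with wrcache=wb the
# controller falls back to write through while the BBU or flash
# module is missing or discharged
storcli64 /c0/v0 set wrcache=wb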
With SSDs, everything is different: Their performance is already so high that a write cache only slows things down because of the associated overhead. This is confirmed by the measurement results in Figure 4: In a RAID 5 with four SSDs, the write IOPS decreased from 14,400 to 3,000 with write back activated – nearly 80 percent fewer IOPS. The only advantage of the write cache is low write latencies for RAID 5 sets. Because latencies are even lower with RAID 1 (which involves no parity calculations and needs no write cache), even this advantage quickly loses its relevance. The recommendation is thus: Use SSD RAID without a write cache (i.e., write through instead of write back). This also means savings of about $150-$250 for the BBU or flash protection module.
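Accordingly, switching an SSD virtual drive to write through is a one-liner; the following sketch again assumes virtual drive /v0 on controller /c0:

# Set write through on virtual drive 0, as recommended for SSD RAID
storcli64 /c0/v0 set wrcache=wt

# Verify the active cache policy
storcli64 /c0/v0 show all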
Disabling Read Ahead
Read ahead – reading data blocks that reside behind the currently requested data – is also a performance optimization that only offers genuine benefits for hard disks. In RAID 5 with four SSDs, activating read ahead slowed down the read IOPS in these tests by 20 percent (from 175,000 to 140,000 read IOPS). In terms of throughput, you'll see no performance differences when reading with 64KB or 1,024KB blocks. In our lab, read ahead only offered a throughput benefit with 8KB blocks, and then only in single-threaded read tests (e.g., with dd on Linux). However, both access patterns are atypical in server operations. My recommendation is therefore to run SSD RAIDs without read ahead.
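Read ahead is disabled in the same place as the write policy. A sketch, again assuming virtual drive /v0 on controller /c0; the fio command line is just one plausible way to verify the effect under a multi-threaded random read load, and the device path /dev/sdb is a placeholder:

# Disable read ahead on virtual drive 0
storcli64 /c0/v0 set rdcache=nora

# Check the result with a multi-threaded random read benchmark
fio --name=randread --filename=/dev/sdb --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting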