Review: Accelerator card by OCZ for ESX server

Turbocharger for VMs

Writing Is Poison

Thus far, I have only considered read benchmarks, and for good reason: Although VXL also caches writes, they are only acknowledged to the host once they have been committed to the physical media, to ensure consistency. The cache therefore offers virtually no speed benefit for writes; instead, the cached writes occupy space that parallel read operations can no longer use.
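
To illustrate the write-through behavior described above, here is a minimal sketch of a read cache that acknowledges writes only after the backing store has committed them. It is a simplified model for illustration; the class name, latency figures, and structure are assumptions, not OCZ's VXL implementation.

import time

BACKING_LATENCY = 0.005   # assumed per-I/O latency of the iSCSI array (seconds)
CACHE_LATENCY = 0.0001    # assumed per-I/O latency of the PCIe flash card (seconds)

class WriteThroughCache:
    """Toy model: reads may be served from flash, writes always wait for the array."""

    def __init__(self):
        self.cache = {}     # block number -> data held on the flash card
        self.backing = {}   # block number -> data on the iSCSI array

    def read(self, block):
        if block in self.cache:
            time.sleep(CACHE_LATENCY)      # cache hit: answered from flash
            return self.cache[block]
        time.sleep(BACKING_LATENCY)        # cache miss: fetched from the array
        data = self.backing.get(block)
        self.cache[block] = data           # populate the cache for later reads
        return data

    def write(self, block, data):
        self.cache[block] = data           # the block also lands in the cache ...
        time.sleep(BACKING_LATENCY)        # ... but the host is acknowledged only
        self.backing[block] = data         # after the array has committed the write

if __name__ == "__main__":
    c = WriteThroughCache()
    c.write(1, b"payload")   # pays the full backing-store latency
    c.read(1)                # subsequent read is a fast cache hit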

The effect is that workloads with as little as 50 percent write operations more or less wipe out the performance benefit on the cache-accelerated volume. Fortunately, write operations do not drag the system below uncached performance, but the speed gains are lost. For more information, see the "How We Tested" box.
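A rough back-of-the-envelope calculation makes the point. Assuming reads are accelerated by a factor of five (the order of magnitude reported in the conclusions below) and writes are not accelerated at all, an Amdahl-style estimate shows how quickly the overall gain shrinks with the write share; the real-world numbers are worse still, because writes also displace useful read data in the cache:

def overall_speedup(write_fraction, read_speedup=5.0):
    """Amdahl-style estimate when only the read fraction is accelerated."""
    read_fraction = 1.0 - write_fraction
    return 1.0 / (write_fraction + read_fraction / read_speedup)

for w in (0.0, 0.1, 0.25, 0.5):
    print(f"{int(w * 100):>3}% writes -> {overall_speedup(w):.1f}x overall speedup")
# 0% writes yields the full 5.0x; 50% writes already cuts it to about 1.7x.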

How We Tested

The ESX server used was a proven ProServ II machine by ExuSData. It came with two Xeon X5667 quad-core processors (3GHz) on an Intel S5520HC mainboard equipped with 16GB of RAM. In one of its five PCI Express slots, I inserted the Z-Drive R4 SSD storage card by OCZ. Two RAID systems were connected to the server: a transtec 600 Premium SCSI array with ten 400GB SATA drives, which housed the virtual system disks of the VMs, and the iSCSI system I wanted to accelerate (Figure 4), a NASdeluxe NDL-21085T by starline equipped with ten 2TB SATA drives. The latter system had two Gigabit Ethernet interfaces (in addition to USB and eSATA ports).

Figure 4: The iSCSI system to be accelerated was a NASdeluxe NDL-21085T by starline.

Of course, there are many read-heavy workloads, especially in today's buzz disciplines like big data. Even in everyday situations, such as the simultaneous booting of many VMs, read operations outweigh writes. This is where the VXL cache finds its niche and where it contributes significant performance gains and better utilization of the ESX server, whose storage can thus tolerate a much greater number of parallel VMs.

The VXL cache is also useful in environments where write operations can be confined to specific volumes. Newer versions of the software allow admins to partition the SSD so that only part of it is used as cache, while another part acts as a normal SSD. Write-intensive operations can then be routed to the SSD volume, whereas the cache on the same card is used mainly for reading parts of a database (e.g., certain indexes). The vendor's benchmarks with long-running data warehouse queries on MS SQL Server 2012 show speed gains of up to 1,700 percent with the VXL software [1].
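
To make the partitioning idea concrete, the following sketch shows one possible split of a flash card between a read cache and a plain SSD volume. The capacity, the split, and the workload assignments are purely hypothetical; the actual configuration is done in the VXL software and depends on the environment.

# Hypothetical split of a 1.2TB card: most of it caches the read-heavy
# index volume, the rest is exposed as a plain SSD for write-heavy logs.
flash_card_gb = 1200

layout = {
    "read_cache": {"size_gb": 800, "backs": "iSCSI volume holding database indexes"},
    "plain_ssd":  {"size_gb": 400, "backs": "write-intensive transaction log volume"},
}

for role, cfg in layout.items():
    print(f"{role:>10}: {cfg['size_gb']:4d}GB -> {cfg['backs']}")

# Sanity check: the partitions must not exceed the card's capacity.
assert sum(part["size_gb"] for part in layout.values()) <= flash_card_gb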

Conclusions

When read and write operations are fairly balanced, cannot be separated, and use the same medium, this caching solution does not make sense. However, when reads outweigh writes, or can at least be separated from the write operations, and many VMs or processes are involved, the OCZ solution accelerates data transfer by a factor of five or more, reduces the traffic to and from the SAN, and allows VMware's ESX server to support more virtual machines in parallel, provided it otherwise has sufficient resources. The cache can also serve multiple hosts or be mirrored to a second Z-Drive, and it supports migration of VMs between virtualization hosts.

Of course, these benefits come at a price. For the tested version, the cost was EUR 2,560 for a VXL license, including one year of support but no additional features (e.g., cache mirroring), plus EUR 4.5 per gigabyte of flash on the cache card, which is available in sizes from 1.2 to 3.2TB.
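
As a rough guide, the quoted prices work out as follows, assuming the per-gigabyte price applies to the nominal capacity (1TB taken as 1,000GB); an actual quote may differ:

LICENSE_EUR = 2560        # VXL license incl. one year of support
FLASH_EUR_PER_GB = 4.5    # price of flash on the cache card

for capacity_tb in (1.2, 3.2):
    flash_cost = capacity_tb * 1000 * FLASH_EUR_PER_GB
    total = LICENSE_EUR + flash_cost
    print(f"{capacity_tb}TB card: EUR {flash_cost:,.0f} flash + "
          f"EUR {LICENSE_EUR:,} license = EUR {total:,.0f}")

# Roughly EUR 7,960 for the smallest card and EUR 16,960 for the largest.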
