Energy efficiency in the data center
Turn It Down
More Software-Defined Storage
A software-based layer decouples storage functions from the underlying hardware and thus avoids functional dependencies on the hardware side. However, it also creates a large number of new virtual instances that tend to grow quickly and become complex to manage, back up, and monitor. Proactive infrastructure monitoring is therefore important for containers and microservices, combined with a high degree of automation and tools such as AutoQoS with flash.
Dependencies are shifting, and new tools and APIs for containers in hybrid cloud deployments need to be considered when planning the right storage and application environment. In the software-defined data center (SDDC), from core to edge, all storage services are shifted to an independent logical software control plane. The control plane handles management tasks such as provisioning, logical unit number (LUN) configuration, replication, deduplication, compression, snapshots, access to attached storage resources, and so on.
If storage and CPU resources are combined in a hyperconverged system, for example, a centralized software-defined storage (SDS) solution can co-manage these systems through APIs, even if the systems reside in the cloud. The result is an SDDC with a focus on storage, such as VMware vSAN.
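To make the idea of API-driven provisioning more concrete, the following Python sketch creates a volume through a purely hypothetical REST endpoint of an SDS control plane; the URL, payload fields, and token are invented for illustration and do not reflect the API of any specific product such as vSAN.

```python
# Hypothetical example: provisioning a volume via an SDS control-plane REST API.
# Endpoint, fields, and authentication are assumptions for illustration only.
import requests

SDS_API = "https://sds-controller.example.com/api/v1"
TOKEN = "replace-with-real-token"

payload = {
    "name": "db-volume-01",
    "size_gb": 512,
    "replication_factor": 2,     # replicate across two nodes
    "compression": True,         # enable inline compression
    "deduplication": True,       # enable deduplication
}

resp = requests.post(
    f"{SDS_API}/volumes",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Created volume:", resp.json())
```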
Flash – All About the Data Carrier
Increasing data volumes continue to make hard disk drive (HDD) storage essential. The hard drive power requirement was estimated at around 14W/HDD in 2005. Since then, this value has dropped by around 40 percent to about 8.5W/HDD in 2015 and currently stands at about 4.3-5.0W for a 16-18TB HDD. Greater energy efficiency means that more performance and capacity are available for the energy consumed, which in turn reduces the total cost of ownership.
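A quick back-of-the-envelope Python calculation shows how growing capacities push down power per terabyte even faster than power per drive; the capacities assigned to each era are assumptions for illustration.

```python
# Approximate per-drive power (cited above) and assumed typical capacities.
drives = {
    "2005 HDD": {"watts": 14.0, "capacity_tb": 0.4},   # assumed ~400GB drive
    "2015 HDD": {"watts": 8.5,  "capacity_tb": 8.0},   # assumed 8TB drive
    "today HDD": {"watts": 4.7, "capacity_tb": 18.0},  # 16-18TB drive, ~4.3-5.0W
}

for name, d in drives.items():
    watts_per_tb = d["watts"] / d["capacity_tb"]
    print(f"{name}: {d['watts']:.1f} W/drive, {watts_per_tb:.2f} W/TB")
```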
Besides drive performance, another key figure for determining energy efficiency is drive efficiency, defined as performance per watt. With a random read/write 4K/16Q profile, the power consumption of an HDD is between 5 and 10W at a realistic level of 300-400 input/output operations per second (IOPS). For SSDs, energy consumption depends on the read/write ratio, queue depth, block size, throughput rate, and drive writes per day.
A simple paper comparison of storage media without a genuine application profile is of limited value; the real workload always needs to be taken into account in the design. Since 2010, the wattage of an SSD has remained relatively constant at about 6W/drive, whereas the wattage per terabyte has fallen as capacities have grown. Currently, a high-performance NVMe SSD achieves more than 90,000 IOPS/W for 4K random reads, with latencies of about 70µs (read) and 10µs (write) at around 1.5 million IOPS/SSD.
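As a rough sketch of drive efficiency, the following Python snippet computes IOPS per watt from the figures above; the SSD power draw of about 16W under full random-read load is an assumption, chosen so that the result lands near the more than 90,000 IOPS/W cited.

```python
# Rough drive-efficiency comparison (IOPS per watt) using the figures above.
hdd_iops, hdd_watts = 350, 7.5          # 300-400 IOPS at 5-10W (midpoints)
ssd_iops, ssd_watts = 1_500_000, 16.0   # ~1.5M IOPS; ~16W assumed under load

hdd_eff = hdd_iops / hdd_watts
ssd_eff = ssd_iops / ssd_watts
print(f"HDD: {hdd_eff:,.0f} IOPS/W")
print(f"SSD: {ssd_eff:,.0f} IOPS/W  (~{ssd_eff / hdd_eff:,.0f}x the HDD)")
```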
In terms of the performance parameters mentioned previously (i.e., cost of access and energy efficiency), the SSD beats any fast HDD in a direct comparison for transaction-oriented profiles. As far as total cost of ownership is concerned, however, other factors also play a role: Besides rack space and drive capacity, these include data reduction through deduplication and compression, which together determine the annual electricity costs per application and storage system.
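One simple way to fold energy into such a TCO calculation is to convert average power draw into annual kilowatt-hours and multiply by the electricity price, as in the following sketch; the wattage, PUE factor, and price per kilowatt-hour are placeholder assumptions.

```python
# Annual electricity cost of a storage system, a minimal TCO-style sketch.
# avg_watts, pue, and price_per_kwh are illustrative assumptions.
avg_watts = 450.0        # average draw of the storage system under load
pue = 1.5                # data center power usage effectiveness (cooling etc.)
price_per_kwh = 0.30     # electricity price per kWh in your currency

kwh_per_year = avg_watts * pue * 24 * 365 / 1000
annual_cost = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:,.0f} kWh/year -> {annual_cost:,.2f} per year")
```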
The question, of course, is whether a company wants to plan for performance or capacity. The computations will usually look different for file and object storage, depending on capacity and access profile. For capacity and cost reasons, high-capacity HDD systems are still attractive on a server storage network.
Determining Energy Efficiency
Several approaches can be used to determine the energy efficiency of storage systems. On the one hand is I/O performance efficiency: Admins often assume that storage servers are optimally utilized when they operate at about 60-80 percent of their total I/O capacity. Software-defined scale-out systems allow operations closer to the load limit because more servers can be added; however, this scenario is of course not possible in all application environments.
I/O performance efficiency is expressed in IOPS per US dollar or euro. As has already been seen, SSDs have an advantage over HDDs. However, the difference can be very small or can even turn negative given permanently high levels of write operations (think NAND wear-out). SSDs also consume more power if the operations are mainly writes, which is why I/O workload profiles continue to play a role in the energy-efficient use of storage systems.
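Expressed as a formula, I/O performance efficiency is simply sustained IOPS divided by system cost; the prices and IOPS figures in this short sketch are placeholders.

```python
# I/O performance efficiency as IOPS per currency unit (placeholder figures).
systems = {
    "HDD array":  {"iops": 20_000,  "price": 25_000.0},
    "NVMe array": {"iops": 800_000, "price": 60_000.0},
}

for name, s in systems.items():
    print(f"{name}: {s['iops'] / s['price']:.1f} IOPS per currency unit")
```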
Admins also often talk about capacity performance efficiency (i.e., stored capacity per watt). This parameter is defined as the number of stored capacity units per watt of system power consumption. Efficiency technologies such as deduplication and compression can more than double it. Storage systems with an efficiency factor greater than 1.0 can therefore store more data than they have available in terms of raw capacity.
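Both metrics are easy to compute once the effective capacity after data reduction is known; in the following sketch, the raw capacity, system power draw, and data reduction ratio are assumed values.

```python
# Capacity performance efficiency (TB stored per watt) and efficiency factor.
# raw_tb, system_watts, and data_reduction_ratio are illustrative assumptions.
raw_tb = 500.0
system_watts = 1_200.0
data_reduction_ratio = 2.5   # e.g., deduplication plus compression

effective_tb = raw_tb * data_reduction_ratio
capacity_per_watt = effective_tb / system_watts
efficiency_factor = effective_tb / raw_tb   # >1.0 means storing more than raw

print(f"{capacity_per_watt:.2f} TB/W effective, efficiency factor {efficiency_factor:.1f}")
```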