Energy efficiency in the data center

Turn It Down

Article from ADMIN 76/2023
Storage systems are one of the biggest factors in power consumption, so data storage can make a massive difference in operating costs. We look at how you can achieve savings through technologies such as flash, tiered storage, or even cloud-native container environments.

The days of cheap electricity are over and, despite all the technical advances, increasing amounts of energy are being consumed as the number of new installations around the world grows. According to Bitkom [1], Germany alone has more than 3,000 data centers with greater than 40 kW of IT connection capacity and at least 10 server racks, as well as around 47,000 smaller IT installations, and the trend is rising. Electricity demand for these data centers was an estimated 16 billion kilowatt-hours (kWh) in 2022 and is expected to increase by about five percent per year through 2030. For comparison's sake: Five years ago, electricity consumption was 2.8 billion kWh.

Spending on power and cooling has increased by more than a factor of 10 over the last decade, and the IT environment of a hyperscale cloud data center now draws between 20 and 50 MW – over a year, that's as much electricity as around 35,000 homes consume. This increase is driven not least by the rise in demand for cloud computing services, digitalization, and data-intensive applications such as artificial intelligence and big data analytics, both in the core and on the edge.

The share of renewable energy, meanwhile, was still in the single-digit percentage range as late as 2020. Data centers are estimated to account for up to three percent of electricity consumption worldwide and are expected to exceed a four percent share by 2030. This hunger for energy looks likely to keep growing with new technologies such as high-density architectures. At the same time, the volume of data keeps climbing, driven largely by inactive and unstructured data (more than 80 percent of the total), which makes improving energy efficiency and reducing electricity consumption a high priority from an operator's perspective – for reasons of cost, for compliance with EU regulations, and simply to stay competitive.

In the US, an Environmental Protection Agency report on data center efficiency stated that, "if state-of-the-art technology were adopted … energy efficiency could be improved by as much as 70 percent" and went on to say that even a modest 10 percent total improvement would save 10.7 billion kWh/year [2].

Adjusting Screws

Operating data centers more efficiently requires leveraging every opportunity. Basically, you have two options: implement new technologies, or adapt existing processes and procedures to optimize energy consumption and storage capacity. IT infrastructure means not just servers, hard drives, solid-state drives (SSDs), and network devices, but also cooling, power distribution, backup batteries and generators, lighting, fire protection, and more – so the potential for optimization spans many factors.

Power usage effectiveness (PUE) – the ratio of the total energy entering the data center to the energy actually delivered to the IT equipment – is a common metric for the efficiency of an infrastructure. With a theoretical PUE of 1.0, the IT components would receive 100 percent of the energy used in the data center. According to research, the current average is 1.67 [3].
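As a quick illustration of the arithmetic, here is a minimal Python sketch; the 1,000 kWh IT load is an assumed example value, and only the average PUE of 1.67 comes from the survey cited above.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Assumed example: 1,000 kWh of IT load in a facility at the surveyed average PUE of 1.67
it_load_kwh = 1_000.0
total_kwh = it_load_kwh * 1.67

print(f"PUE: {pue(total_kwh, it_load_kwh):.2f}")                                      # 1.67
print(f"Overhead (cooling, UPS, lighting, ...): {total_kwh - it_load_kwh:.0f} kWh")   # 670 kWh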

Today, enterprise storage and highly virtualized hyperconverged systems with software-defined storage (SDS) platforms are therefore no longer economical to operate without integrated efficiency measures such as data deduplication, compression, thin provisioning, delta snapshots, erasure coding, SSD NAND chips, and tape in the archive environment. In this context, the use of energy-saving technologies with the aid of optimized cooling processes and the reuse of waste heat is an important step toward greater energy efficiency. Government players have also committed themselves to reducing energy consumption in the ICT sector, for example with the Federal Republic of Germany's Green IT Initiative [4] [5].
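To get a feel for how data reduction affects the power footprint of storage, consider the following back-of-the-envelope Python sketch; the capacity, wattage, and reduction ratios are assumed, illustrative values rather than vendor figures.

def watts_per_usable_tb(raw_tb: float, watts: float,
                        dedup_ratio: float, compression_ratio: float) -> float:
    """Power per logically usable terabyte after deduplication and compression."""
    usable_tb = raw_tb * dedup_ratio * compression_ratio
    return watts / usable_tb

raw_capacity_tb = 100.0   # raw capacity of the array (assumed)
array_power_w = 1_500.0   # steady-state power draw of the array (assumed)

baseline = watts_per_usable_tb(raw_capacity_tb, array_power_w, 1.0, 1.0)
reduced = watts_per_usable_tb(raw_capacity_tb, array_power_w, 2.0, 1.5)  # 2:1 dedup, 1.5:1 compression

print(f"Without data reduction: {baseline:.1f} W/TB")
print(f"With 3:1 overall reduction: {reduced:.1f} W/TB")

The same array power now backs three times the logical capacity, which is exactly why the reduction features listed above have become table stakes for energy-conscious storage.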

Storage as a Large Consumer

According to a study by the Lawrence Berkeley National Laboratory, storage systems accounted for eight percent of total data center energy consumption in 2014 [6]; by 2016, the share had risen to 11 percent. Efficiency in data use is therefore a critical issue. The biggest challenge, however, remains the power consumption of server systems, including cooling, which has increased by 266 percent since 2017 and continues to be the dominant consumption block in the data center [7].

In this light, physical separation of storage and compute platforms makes sense. Until now, much of the energy consumed by increasingly densely packed storage systems has gone into cooling, and this will probably not change as long as conventional hard drives dominate enterprise storage in the petabyte range. Only the consistent use of NAND flash and, in the long term, other semiconductor memories – combined with DNA-based data storage for archives – would theoretically have the potential to limit resource consumption sustainably over the life cycle.
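To illustrate why the media mix matters, the following Python sketch compares the energy one petabyte consumes over five years on hard drives versus high-density flash; the capacities and wattages are assumed, rounded illustrative values, not datasheet figures.

def five_year_kwh_per_pb(tb_per_device: float, watts_per_device: float) -> float:
    """Energy to keep 1 PB of raw capacity powered for five years."""
    devices = 1_000 / tb_per_device                     # devices needed for 1 PB
    return devices * watts_per_device * 24 * 365 * 5 / 1_000

hdd_kwh = five_year_kwh_per_pb(tb_per_device=20, watts_per_device=8.0)     # assumed nearline HDD
flash_kwh = five_year_kwh_per_pb(tb_per_device=60, watts_per_device=10.0)  # assumed QLC flash module

print(f"HDD:   ~{hdd_kwh:,.0f} kWh per petabyte over five years")
print(f"Flash: ~{flash_kwh:,.0f} kWh per petabyte over five years")

Under these assumptions, the denser flash configuration needs fewer devices per petabyte and consequently less power, before any cooling savings are even counted.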

Cloud and Containerization

Enterprises need to look for the most energy-efficient hardware and cooling methods, both on-premises and in the cloud. This applies to high-transaction tier 0/1 applications such as database systems as well as to server and storage systems (e.g., high-density quad-level cell (QLC) flash storage). If it makes sense from an application perspective, physical servers should ideally always be highly virtualized and operated with the greatest possible number of virtual machines (VMs).

Containerization with Kubernetes and the like has even more potential to reduce CO2 emissions from cloud-native resources or local IT infrastructures, because Kubernetes has a direct, positive effect on server utilization and thus on CO2 emission intensity. Of course, the move to hyperscale environments has been accompanied by innovations in server utilization through consolidation and software management systems, such as hypervisors and container technologies, but this is not true everywhere. To date, most servers hardly ever run at full capacity: even the most efficient environments typically reach a utilization rate of around 50 percent, and in most cases it is just 10 to 25 percent.

However, server capacity utilization has always been one of the most important factors in determining energy efficiency. Cloud-native, containerized approaches mean that a significantly larger number of applications can run on the same hardware, which automatically improves utilization. An organization can run identical application workloads with a smaller number of VMs than would be possible without Kubernetes. The cloud also eliminates the need for new IT infrastructures for short-term projects.
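To make the utilization argument concrete, here is a minimal Python sketch; the load, server size, wattage, and utilization figures are assumed, illustrative values.

import math

def servers_needed(total_load_cores: float, cores_per_server: int,
                   target_utilization: float) -> int:
    """Servers required to carry the load at a given average utilization."""
    return math.ceil(total_load_cores / (cores_per_server * target_utilization))

def annual_kwh(servers: int, avg_watts_per_server: float) -> float:
    """Annual energy consumption of the server fleet."""
    return servers * avg_watts_per_server * 24 * 365 / 1_000

load_cores = 400.0   # aggregate compute demand in cores (assumed)
cores = 64           # cores per server (assumed)
watts = 450.0        # average draw per server (assumed)

for utilization in (0.15, 0.25, 0.50):
    n = servers_needed(load_cores, cores, utilization)
    print(f"{utilization:.0%} utilization: {n:3d} servers, ~{annual_kwh(n, watts):,.0f} kWh/year")

Under these assumptions, moving from 15 to 50 percent average utilization cuts the required fleet – and its annual energy consumption – by more than two thirds.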

Last but not least, fewer servers mean a smaller footprint, which at the end of the day translates to lower energy requirements and greater operational efficiency. The CO2 emissions of applications in cloud operations are another issue: the emissions and costs of a Kubernetes environment usually vary with the region in which the cloud provider runs it. That said, container technologies make it relatively easy to move cloud resources to lower cost locations and to place workloads flexibly according to the power mix. Because Kubernetes is innately portable as a cloud-native platform, the technical requirements are already met.
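A minimal Python sketch of that placement idea follows; the region names and grid carbon intensities are made-up illustrative values, not provider data.

# Assumed, hypothetical regions with made-up grid carbon intensities (gCO2e per kWh)
regions = {
    "region-north": 120.0,    # assumed: high share of wind and hydro
    "region-central": 380.0,  # assumed: mixed grid
    "region-south": 450.0,    # assumed: fossil-heavy grid
}

def greenest_region(intensities: dict[str, float]) -> str:
    """Return the candidate region with the lowest grid carbon intensity."""
    return min(intensities, key=intensities.get)

workload_kwh = 250.0  # estimated energy for a movable batch job (assumed)
target = greenest_region(regions)
best_kg = workload_kwh * regions[target] / 1_000
worst_kg = workload_kwh * max(regions.values()) / 1_000

print(f"Place the workload in {target}: ~{best_kg:.1f} kg CO2e instead of ~{worst_kg:.1f} kg")

In practice, such a decision would be fed by real grid or provider data and wired into the scheduler or an autoscaling policy, but the principle is the same: because the workload is portable, placement itself becomes an energy and emissions lever.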
