Many analogies have been used to depict the amount of electricity consumed by data centers: some estimates put the world's data centers at about 3% of the planet's total energy consumption, while others compare the energy consumption of a mid-size data center to that of a small town. Needless to say, power is a data center's scarcest resource, and making efficient use of it is a must.

In data center jargon, power usage effectiveness (PUE) is the ratio of the total energy used by a data center facility (IT equipment plus cooling) to the energy delivered to the IT equipment (servers, storage, and networking). To put things in perspective, a data center PUE of 2.0 means that the power consumed by the IT equipment is equal to the power consumed by the cooling systems required to keep that equipment at operating temperature. In other words, for every 1 watt consumed by the IT equipment, another 1 watt is used by the cooling system. Using round numbers, a large data center with 100,000 kW of total power consumption, where 50,000 kW powers the IT equipment and the other 50,000 kW is used by the cooling system, has a PUE of 100,000/50,000 = 2.0. The following table shows how PUE maps to the level of power efficiency in a data center:

PUE     Level of Efficiency
3.0     Very Inefficient
2.5     Inefficient
2.0     Average
1.5     Efficient
1.2     Very Efficient

The lower the PUE, the better the power utilization. Using our previous example, if the same 100,000 kW data center operates at a PUE of 1.2 (very efficient), the split shifts to roughly 83,000 kW powering the IT equipment that keeps the data center running and roughly 17,000 kW for cooling.
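To make the arithmetic concrete, here is a minimal Python sketch (using the same illustrative figures as the example above, not measurements from any real facility) that computes PUE from total facility power and IT power, and splits a total power budget between IT and cooling for a target PUE:

    # Minimal PUE arithmetic sketch; all figures are illustrative, not real facility data.

    def pue(total_facility_kw, it_kw):
        """PUE = total facility power / IT equipment power."""
        return total_facility_kw / it_kw

    def split_for_target_pue(total_facility_kw, target_pue):
        """Given a total power budget and a target PUE, return (IT power, cooling/overhead power)."""
        it_kw = total_facility_kw / target_pue
        return it_kw, total_facility_kw - it_kw

    # Example from the text: 100,000 kW total, 50,000 kW of it for IT -> PUE of 2.0
    print(pue(100_000, 50_000))             # 2.0

    # The same facility at a PUE of 1.2: roughly 83,000 kW for IT, 17,000 kW for cooling
    it_kw, cooling_kw = split_for_target_pue(100_000, 1.2)
    print(round(it_kw), round(cooling_kw))  # 83333 16667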

Over the years, lowering PUE has been a major target for every data center operator in order to maximize the utilization of every watt and hence reduce operating costs and environmental impact. While major improvements were achieved prior to 2012, progress since then has come in baby steps, with industry-average PUE nearly flat, as depicted in Figure 1 (Source: Uptime Institute).

Figure 1: Data center PUE improvements over time

Furthermore, in addition to limited space in data centers, operators are constrained from adding too much equipment to a rack by the power-per-rack (kW/rack) density levels planned and engineered when the data center's power distribution and HVAC infrastructure was built. In fact, while most data centers (65%) support power density levels of up to 10 kilowatts per rack (kW/rack), only 12% support density levels exceeding 20 kW/rack. Needless to say, for any data center operator, reducing total power consumption and increasing efficiency by shrinking the share of power used for cooling have become paramount.
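As a rough illustration of how an engineered rack power budget caps the amount of equipment per rack, the short Python sketch below simply divides the budget by a per-server power draw; the 0.5 kW figure is an assumption for illustration only, not vendor data:

    # Hypothetical illustration of a rack power budget; per-server draw is an assumed figure.

    def servers_per_rack(rack_budget_kw, server_draw_kw):
        """Number of servers that fit within a rack's engineered power budget."""
        return int(rack_budget_kw // server_draw_kw)

    # A 10 kW/rack facility vs. a 20 kW/rack facility, assuming ~0.5 kW per server
    print(servers_per_rack(10, 0.5))   # 20 servers
    print(servers_per_rack(20, 0.5))   # 40 servers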

Data center operators are coming up with creative ways to reduce the power consumption of cooling and lighting units.

For IT equipment, there has been significant progress in reducing power consumption by leveraging the latest breakthroughs in silicon technology, advanced software, and hardware miniaturization. For example, servers (compute resources) can now process larger amounts of data in smaller footprints and with lower power consumption through server virtualization (vs. dedicated servers), right-sizing capacity to workloads (e.g., automating the resizing of clusters and/or deactivating/reactivating servers dynamically in response to changing loads), load balancing, and many other practices.
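As a simplified sketch of the right-sizing idea described above (a generic scaling policy for illustration, not any specific product's algorithm), the following Python snippet estimates how many servers need to stay powered on to serve the current load at a target utilization, letting the rest be deactivated:

    # Simplified right-sizing sketch: keep just enough servers active for the current load.
    # Capacities, load, and the utilization target are illustrative assumptions.

    import math

    def servers_needed(current_load_rps, per_server_capacity_rps, target_utilization=0.7):
        """Active servers required to serve the load at the target utilization."""
        return max(1, math.ceil(current_load_rps / (per_server_capacity_rps * target_utilization)))

    # Example: 12,000 requests/s, each server handling ~1,000 requests/s
    print(servers_needed(12_000, 1_000))   # 18 active servers instead of a fixed, fully powered fleet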

But servers aren't the only area where data center power consumption improvements are occurring. In the next blog, we will discuss how optical networking equipment for data center interconnect is also leveraging the latest advancements in coherent optical engine technology, digital signal processing, and intelligent design to consistently deliver power consumption reductions per gigabit of transmission. You can get a quick glimpse by visiting https://www.infinera.com/compact-modular.

Stay tuned.
