
Overcoming the Problems of the Power-Hungry Data Center

Article Featured in Networks Europe Magazine
By: Gordon Johnson, Senior CFD Manager

In this insightful article, Gordon Johnson, Senior CFD Manager at Subzero Engineering, delves into the evolving landscape of the data center industry. He highlights the emergence of hyperscale and edge data centers, comparing their functions and designs. Gordon also emphasizes the industry’s growing focus on Environmental, Social, and Governance (ESG) concerns, linking ESG improvements to better financial outcomes. Additionally, he addresses the risk of data centers becoming stranded assets due to their power-intensive nature and cooling capacity challenges, shedding light on key industry trends and challenges.

Synopsis

Gordon Johnson, Senior CFD Manager at Subzero Engineering, offers insights into the evolving landscape of the data center industry. In this article, he highlights key aspects such as the emergence of hyperscale and edge data centers, the sector’s approach to ESG concerns, and the risk of power-hungry data centers becoming stranded assets.

Gordon discusses how some of the major players in the industry actively strive to improve their Power Usage Effectiveness (PUE) ratio by adopting a combination of hardware and software developments. He compares hyperscale and edge data centers, with hyperscale data centers handling large-scale processing across a wide footprint, and edge data centers designed for localized data processing to reduce latency and improve data transfer speeds.

The data center industry’s increased awareness and engagement with Environmental, Social, and Governance (ESG) concerns is a significant theme in Gordon’s article. ESG performance improvement aligns with better financial outcomes, motivating data centers to prioritize energy efficiency and sustainability.

Gordon also addresses the potential risk of data centers becoming stranded assets due to their power-intensive nature. As data centers increasingly handle High Performance Computing (HPC), facilities are challenged when it comes to having enough cooling capacity. It’s an expensive problem that prevents facilities from meeting design capacity and from contributing to sustainability measures.

Standfirst

Gordon Johnson’s insights into the evolving landscape of the data center industry highlight several critical aspects that are shaping its trajectory.

Let’s break down the key points:

Hyperscale and Edge Data Centers:

Hyperscale data centers and edge data centers are shaping the future data landscape in different ways, while simultaneously complementing each other.

Traditional hyperscale HPC data centers, typically used by tech giants such as Amazon, Google, and Microsoft to power their cloud services, global user bases, and massive workloads, can cover a large physical footprint serving hundreds of thousands of servers. Edge data centers, by contrast, are not designed to be large network hubs. Instead, they’re often small-scale sites designed to process and compute data closer to where it’s generated or consumed. This results in lower latency for data processing, since applications can process data closer to where it’s needed (at the edge), which in turn means less data is transferred back and forth to large cloud and HPC sites. This distinction is important as it reflects the industry’s efforts to optimize data processing for various use cases and locations.

With so many Internet of Things (IoT) applications creating data, the ability to pre-process data at the edge removes the need to transfer everything back to the cloud; only the data that is actually required makes the trip. Indeed, rather than competing with, or trying to replace, HPC data centers, the edge will continue to complement HPC data centers and make them more efficient by ensuring they process only the data that’s necessary.
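
To make the pre-processing idea concrete, here is a minimal Python sketch (with hypothetical names and thresholds, not any specific product’s logic) of an edge node filtering IoT temperature readings and forwarding only out-of-range samples upstream, so the central cloud or HPC site receives a fraction of the raw data:

```python
# Minimal sketch of edge pre-processing: keep routine readings local and
# forward only the samples a central site actually needs to act on.
# Thresholds and data are illustrative assumptions.

TEMP_LOW_C = 18.0    # assumed acceptable lower bound
TEMP_HIGH_C = 27.0   # assumed acceptable upper bound

def filter_at_edge(readings_c):
    """Return only the readings worth sending to the cloud/HPC site."""
    return [t for t in readings_c if t < TEMP_LOW_C or t > TEMP_HIGH_C]

if __name__ == "__main__":
    raw = [21.5, 22.0, 29.3, 21.8, 17.2, 22.4]   # one polling interval
    to_cloud = filter_at_edge(raw)
    print(f"raw samples: {len(raw)}, forwarded upstream: {len(to_cloud)}")
    # Only the 29.3 and 17.2 readings leave the edge; the rest is handled locally.
```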

Power Usage Effectiveness (PUE) Ratio:

The PUE metric is used to determine the energy efficiency of a data center. While not a perfect measure, PUE is calculated by dividing the total amount of power entering a data center by the power used to run the IT equipment within it. PUE is expressed as a ratio and is useful for benchmarking data center efficiency. A lower PUE value is desirable because it indicates that a higher percentage of the energy consumed is being used for actual computing, rather than for supporting infrastructure. Ideally, a perfectly efficient data center would have a PUE of 1.0, meaning that all energy is used for IT equipment. In practice, however, achieving a PUE of 1.0 is extremely difficult due to the energy required for cooling and other non-computing functions. All the same, data centers aim to reduce their PUE as much as possible to improve energy efficiency, reduce operational costs, and lower their carbon footprint.
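
As an illustration of the ratio described above, and using made-up figures rather than measurements from any particular facility, the calculation might look like this in Python:

```python
# PUE = total facility power / IT equipment power (per the definition above).
# The figures passed in below are illustrative examples only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

print(pue(1500.0, 1000.0))  # 1.5 -> 500 kW goes to cooling, power delivery, etc.
print(pue(1100.0, 1000.0))  # 1.1 -> closer to the ideal (but unreachable) 1.0
```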

It’s important to note that reducing PUE is not only an environmental goal but also a cost-saving measure. With data centers among the largest consumers of electricity, improving efficiency can result in significant operational cost savings. Improvements in PUE can often be achieved through a combination of hardware and software developments, such as better cooling systems, hot/cold aisle containment, server consolidation, environmental sensors to collect data, and intelligent monitoring and management of resources.
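
As a hedged sketch of what “environmental sensors plus intelligent monitoring” could look like in practice, the snippet below computes a rolling PUE from paired power samples and flags drift above a target; the readings and the target are hypothetical assumptions, not figures from any real facility:

```python
# Sketch: sample facility and IT power over time, compute a rolling PUE,
# and flag drift above an assumed internal efficiency target.

from statistics import mean

PUE_TARGET = 1.4  # assumed target for illustration

def rolling_pue(facility_kw: list[float], it_kw: list[float]) -> float:
    """Average the per-sample PUE over a window of paired readings."""
    return mean(f / i for f, i in zip(facility_kw, it_kw))

facility_samples = [1480.0, 1510.0, 1550.0, 1600.0]  # hourly readings (kW), illustrative
it_samples       = [1000.0, 1005.0,  998.0, 1002.0]

current = rolling_pue(facility_samples, it_samples)
if current > PUE_TARGET:
    print(f"PUE {current:.2f} above target {PUE_TARGET}: check cooling setpoints and containment")
```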

Many of the leading players in the data center industry, such as Amazon Web Services (AWS), Apple, Cologix, CyrusOne, DataBank, Digital Realty, Equinix, Google, H5 Data Centers, IBM, Microsoft, Oracle, Switch and T5 Data Centers, are actively working to improve their PUE ratio. They have made substantial commitments to reduce their carbon footprint by transitioning to renewable energy sources. These efforts align with broader sustainability goals and the growing importance of environmentally responsible business practices.

As technology continues to advance, Gordon believes we can expect even more innovative solutions and approaches to further improve data center energy efficiency and reduce environmental impact.

Risk of Stranded Assets:

Data centers, particularly those handling HPC, are at constant risk of becoming stranded assets. Stranded assets occur when facilities do not meet their designed capacity, are no longer economically viable due to changes in technology or business needs, or fail to contribute effectively to sustainability measures. In the context of data centers, this often comes down to cooling challenges and to cooling capacity that can’t be used by the IT equipment (ITE).

Every data center has an invisible ceiling that limits the amount of ITE it can cool. With the shift towards HPC data centers, facilities are even more challenged when it comes to having enough cooling capacity to match (and slightly exceed) ITE cooling demand. As data centers become more power-intensive due to HPC requirements, the cooling infrastructure must keep pace.
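
A back-of-the-envelope sketch of that invisible ceiling, using purely illustrative numbers, compares usable cooling (installed capacity minus what is assumed lost to bypass and recirculation) against the ITE load:

```python
# Illustrative check of cooling headroom vs. ITE demand.
# All values and the delivery-efficiency assumption are hypothetical.

installed_cooling_kw = 1000.0
delivery_efficiency = 0.80      # assumed fraction of cooling that reaches the ITE intakes
ite_load_kw = 850.0

usable_cooling_kw = installed_cooling_kw * delivery_efficiency
stranded_kw = installed_cooling_kw - usable_cooling_kw

print(f"usable cooling: {usable_cooling_kw:.0f} kW, stranded: {stranded_kw:.0f} kW")
if usable_cooling_kw < ite_load_kw:
    print("ITE demand exceeds usable cooling; the facility hits its ceiling "
          "before reaching design capacity")
```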

Inadequate cooling can lead to increased operational costs, reduced efficiency, and environmental concerns. It prevents data centers from meeting design capacity and from becoming sustainable and energy efficient, and it’s an expensive problem: wasted cooling energy contributes nothing to the actual cooling of the ITE.

To avoid becoming a stranded asset, organizations often conduct thorough assessments when building or upgrading data center facilities. These include considering future scalability, energy efficiency, and the potential for technology obsolescence. Additionally, some organizations opt for colocation or cloud-based solutions to minimize the risk of stranded assets by outsourcing their data center needs to providers that specialize in maintaining infrastructure.

In summary

Gordon’s insights shed light on the dynamic and constantly evolving nature of the data center industry. The interplay between technological advancements, environmental considerations, and the need to balance efficiency with scalability is shaping the future of data center operations.

Addressing these challenges is vital for ensuring the industry remains both sustainable and economically viable while laying the financial foundation for the long-term future of the data center.