
Is Liquid Cooling Becoming Non-Negotiable?

Air cooling alone can’t keep up with the thermal output of modern CPUs and GPUs, yet even with advanced direct-to-chip (DTC) liquid cooling, approximately 25% of IT equipment heat still needs to be removed by air.

Cold and hot aisle containment is a tried-and-tested climate control strategy that keeps the two airstreams separate while improving energy efficiency. With energy savings that can’t be ignored, should hot/cold aisle containment be considered a necessity in hyperscale data centers?

By Gordon Johnson, Senior CFD Manager at Subzero Engineering

Introduction

AI workloads are driving unprecedented compute demand. Not only is the demand intensifying rather than decreasing, it is also altering the economics and structure of computing at all levels.

AI is expected to be the primary driver of the anticipated doubling of worldwide data center power demand between 2022 and 2026 and, unless offset, increased compute means increased emissions.

With GPUs drawing up to 700W each and power densities exceeding 80–100 kW per rack, thermal management has become one of the most critical challenges in hyperscale environments. Conventional air-cooling techniques can no longer keep up with the thermal densities of contemporary AI workloads, and liquid cooling is no longer just a viable option for the future: it has become the new norm.

Direct Liquid Cooling (DLC), and specifically Direct-to-Chip (DTC), is now essential for controlling heat. However, about 25% of the heat produced by IT equipment still needs to be expelled through the air, especially from secondary parts such as memory subsystems, storage and power delivery circuits. This residual heat cannot be overlooked, and that’s where traditional airflow strategies are still needed, albeit in a supporting role.
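
As a rough illustration of that split, the sketch below apportions a rack’s heat between the liquid loop and the air path (the 100 kW rack load and the 75% liquid capture ratio are assumed figures for illustration, not measurements):

    # Back-of-envelope split of rack heat between the DTC liquid loop and the air path.
    # Assumed figures: a 100 kW AI rack and a 75% liquid capture ratio (~25% rejected to air).

    def split_rack_heat(rack_power_kw: float, liquid_capture_ratio: float = 0.75):
        """Return (heat into the liquid loop, heat left for air cooling) in kW."""
        to_liquid = rack_power_kw * liquid_capture_ratio
        to_air = rack_power_kw - to_liquid
        return to_liquid, to_air

    liquid_kw, air_kw = split_rack_heat(100.0)
    print(f"Liquid loop: {liquid_kw:.0f} kW, residual air load: {air_kw:.0f} kW per rack")
    # Even at 25%, a row of ten such racks still leaves ~250 kW for the airflow system.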

Challenges

Hyperscale operators are seeing a sharp rise in OPEX from both power and cooling, and it’s becoming one of their most pressing financial and operational challenges.

In recent years, power and cooling have become strategic levers and margin killers in hyperscale operations. If you’re operating at scale, your P&L is directly tied to your power and cooling intelligence. Those who get it right will widen their advantage. Those who don’t could find AI infrastructure becoming financially unsustainable.

Efficiency is no longer just best practice

Many hyperscalers have already achieved PUEs of 1.1–1.2, leaving little headroom for further efficiency gains. As compute capacity grows, absolute power usage therefore keeps rising even if relative efficiency stays the same. In high-density environments, even marginal improvements in airflow containment can lead to significant energy savings.
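
To see why a flat PUE still means rising absolute consumption, consider a simple facility-power model (a minimal sketch; the IT load figures are assumptions chosen purely for illustration):

    # Facility power = IT load x PUE. Holding PUE at 1.2 while the IT load grows
    # shows absolute consumption climbing even though relative efficiency is unchanged.

    pue = 1.2
    it_loads_mw = [50, 100, 200]           # hypothetical IT load growth as AI capacity is added

    for it_mw in it_loads_mw:
        facility_mw = it_mw * pue
        overhead_mw = facility_mw - it_mw  # cooling, power distribution, lighting, etc.
        print(f"IT {it_mw:>3} MW -> facility {facility_mw:.0f} MW (overhead {overhead_mw:.0f} MW)")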

Despite the growth of renewable energy resources, AI is effectively slowing down decarbonization. Advances in AI model performance frequently mean larger models and more inference, which drives up data center energy use, raises energy costs and adds to sustainability challenges. AI needs to get more efficient, not just more powerful.

Air-Cooling Limits

Training large-scale AI models such as GPT-4, Gemini or Claude-class systems requires millions of kWh of electricity, and the scale of this energy consumption is one of the key concerns of the modern AI era. Once deployed, these models require an enormous amount of inference infrastructure to process countless queries every day, and this can exceed the energy used for training.

Modern AI GPUs (like the NVIDIA H100 or AMD MI300X) now draw upwards of 500 watts per chip. Hyperscale data centers that once operated in the 10–30 kW/rack range are now pushing 80–120 kW/rack to support AI training and inference. With air cooling limited to about 30–40 kW/rack, air simply cannot carry the heat away quickly enough, even with optimal containment and supply airflow.
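
The limit follows from basic sensible-heat arithmetic: the volumetric airflow needed scales linearly with rack power. The sketch below uses standard air properties and an assumed 12 K supply-to-return delta-T (a typical containment value, not a figure from this article):

    # Airflow needed to remove rack heat: P = rho * cp * dT * V  =>  V = P / (rho * cp * dT).
    # Assumed values: air density 1.2 kg/m^3, specific heat 1005 J/(kg*K), 12 K delta-T.

    RHO_AIR = 1.2          # kg/m^3
    CP_AIR = 1005.0        # J/(kg*K)
    M3S_TO_CFM = 2118.88   # cubic metres per second -> cubic feet per minute

    def required_airflow_cfm(rack_kw: float, delta_t_k: float = 12.0) -> float:
        m3_per_s = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_k)
        return m3_per_s * M3S_TO_CFM

    for rack_kw in (30, 40, 100):
        print(f"{rack_kw:>3} kW rack -> ~{required_airflow_cfm(rack_kw):,.0f} CFM")
    # Around 100 kW the per-rack airflow requirement becomes impractical to deliver and contain.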

Direct Liquid Cooling (DLC)

DLC, and specifically DTC, gives hyperscale operators higher compute density, increased energy efficiency, and more reliable thermal control at the component level. It is a practical means of keeping contemporary CPUs and GPUs within their safe thermal operating range. DLC also permits higher incoming air temperatures, reducing reliance on traditional HVAC systems and chillers.

In addition, Direct-to-Chip (DTC) cooling can reduce overall cooling energy (and hence PUE) by up to 40% compared with traditional air systems, by targeting cooling directly at the hottest components. However, even the most advanced DLC/DTC systems do not eliminate the need for air cooling: non-critical components, cabinet pressurization and residual heat evacuation still require airflow.
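
To put that in PUE terms, here is a minimal sketch. The baseline PUE of 1.5 and the split of overhead between cooling and everything else are assumptions for illustration only:

    # Rough PUE impact of cutting cooling energy by 40%.
    # Assumed baseline: PUE 1.5, with cooling contributing 0.4 of the 0.5 overhead
    # per unit of IT load and the remaining 0.1 covering distribution losses, lighting, etc.

    it_load = 1.0     # IT load normalised to 1
    cooling = 0.4     # cooling overhead per unit of IT load (assumed)
    other = 0.1       # non-cooling overhead (assumed)

    baseline_pue = (it_load + cooling + other) / it_load
    dtc_pue = (it_load + cooling * (1 - 0.40) + other) / it_load
    print(f"Baseline PUE {baseline_pue:.2f} -> with DTC {dtc_pue:.2f}")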

Hot/Cold Aisle Containment

Hot/cold aisle containment is a proven architectural strategy that separates the hot exhaust air from the cold intake air. By keeping the two airstreams from mixing, containment ensures that colder supply air reaches the servers more directly, reducing the cooling load and improving thermal predictability. The resulting efficiency gains can cut cooling energy by 10–30%. Containment is essential for optimizing the performance of air-cooled systems in legacy settings.
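
For a sense of scale, the sketch below converts a mid-range saving into annual energy and cost figures (the 10 MW cooling load, 20% reduction and $0.10/kWh electricity price are assumed values, not figures from this article):

    # Annual savings from a containment-driven reduction in cooling energy.
    # Assumed figures: 10 MW average cooling load, a 20% reduction (mid-range of the
    # 10-30% cited above), and an electricity price of $0.10/kWh.

    cooling_load_mw = 10.0
    reduction = 0.20
    price_per_kwh = 0.10
    hours_per_year = 8760

    saved_kwh = cooling_load_mw * 1000 * hours_per_year * reduction
    print(f"~{saved_kwh / 1e6:.1f} GWh saved per year, roughly ${saved_kwh * price_per_kwh / 1e6:.1f}M at the assumed tariff")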

Raised flooring, hot/cold aisles, and containment systems are becoming increasingly important in transitional or hybrid (liquid plus air cooled) environments. These airflow techniques help separate AI-specific and legacy infrastructure in mixed-use data centers. However, in modern AI racks, air cooling is the supporting act rather than the main attraction.

The Case for Containment

For operators managing tens or hundreds of megawatts of IT load, hot/cold aisle containment is one of the most cost-effective and space-saving solutions available.

Even with DTC-intensive systems, containment is not obsolete. Modest improvements to airflow containment can result in large-scale energy savings in high-density settings. By capturing and diverting the leftover heat from partially liquid-cooled equipment, containment improves airflow to secondary components. And by stabilizing temperature zones and reducing fluctuation, it improves cooling system responsiveness while lowering chiller load and supporting energy-reuse initiatives.

Hot/cold aisle containment is no longer just a best practice; it is becoming a critical optimization layer in tomorrow’s high-performance, high-efficiency data centers.

Conclusion

As hyperscale operators transition to liquid-cooled infrastructure, the expectation might be that airflow strategies will become irrelevant. But the reverse is happening. In the cooling stack, air management is changing from a primary to a supporting, yet essential, system. The industry is refining and re-integrating traditional techniques alongside cutting-edge liquid systems rather than discarding them.

Hyperscalers are under constant scrutiny to meet net-zero targets. In addition to complying with energy efficiency regulations, the hybrid approach offers data center operators a way to transition from conventional air-cooled facilities to liquid readiness without requiring complete overhauls.

With high-density AI workloads, air cooling just cannot keep up. It’s a limitation of physics. Hybrid methods that combine regulated airflow with DLC are now the engineering benchmark for scalable, effective, and future-ready data centers.

About the writer 

Gordon Johnson is the Senior CFD Manager at Subzero Engineering, responsible for planning and managing all CFD-related projects in the US and worldwide.

He has over 25 years of experience in the data center industry, which includes data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified US Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from the New Jersey Institute of Technology.