Is Air Cooling Still Vital in the Liquid Cooling Transition?
Gordon Johnson, Subzero Engineering’s Senior CFD Manager, examines how air cooling remains critical for data center memory, storage, and containment systems
By Amber Jackson, as published on DataCentreMagazine.com
Why liquid cooling is becoming essential for AI-driven data centers—while air cooling still plays a critical role
As AI drives immense compute demand, it is expected to be the main cause of data center power demand doubling worldwide between 2022 and 2026, unless operators can tackle sustainability head-on and curb rising emissions.
One of the main topics of conversation is thermal management, as GPUs continue to draw significant power densities. Cooling servers quickly has become one of the most pressing challenges in a hyperscale data center environment, particularly as traditional air cooling systems can no longer keep up with the demands of AI workloads.
To dig into this further, Subzero Engineering’s Senior CFD Manager, Gordon Johnson, shares his analysis that liquid cooling has quickly become the new norm for data centers of the future—but that air cooling is still required.
“Direct Liquid Cooling (DLC), and specifically Direct-to-Chip (DTC), is now essential for controlling heat,” he says. “However, about 25% of the heat produced by IT equipment still needs to be expelled through the air, especially from secondary parts such as memory subsystems, storage, and power delivery circuits.
“It is impossible to overlook this heat residue, and that’s where traditional airflow strategies are still needed, albeit in a supporting role.”
Confronting hyperscale cooling challenges
Gordon explains that hyperscale operators are seeing a sharp rise in OPEX from both power and cooling, which has become one of their most significant challenges.
“In recent years, power and cooling have become strategic levers and margin killers in hyperscale operations,” he says. “If you’re operating at scale, your P&L is directly tied to your power and cooling intelligence.
“Those who get it right will widen their advantage. Those who don’t could find AI infrastructure becoming financially unsustainable.”
He argues that efficiency is no longer just best practice: despite the rise of renewable energy resources, AI is effectively slowing down decarbonization.
“Data centers’ energy usage is driven by the fact that advancements in AI model performance frequently result in larger models and more inference, raising energy costs and contributing to sustainability challenges,” he explains.
“AI needs to get more efficient, not just more powerful.”
Energy consumption remains a key concern as AI continues to boom. Once deployed, these models require an enormous amount of inference infrastructure to process countless queries every day.
“Modern AI GPUs are now drawing upwards of 500 watts per chip,” Gordon says. “Hyperscale data centers that once operated in the 10–30 kW/rack range are now pushing 80–120 kW/rack to support AI training and inference.
“With air cooling limited to about 30–40 kW/rack, the air just cannot carry the created heat quickly enough, even with optimal containment and supply airflow.”
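To make the arithmetic concrete, here is a minimal Python sketch that splits a rack’s heat load using the roughly 25% residual-air figure and the approximate 30–40 kW/rack air-cooling ceiling quoted above. All numbers are illustrative assumptions, not measurements.

```python
# Illustrative heat budget for a liquid-cooled AI rack, using the ballpark
# figures quoted in the article (assumptions, not vendor data).

AIR_COOLING_CEILING_KW = 35.0   # practical air-cooling limit (~30-40 kW/rack)
AIR_HEAT_FRACTION = 0.25        # ~25% of IT heat still rejected to air with DTC

def rack_heat_budget(rack_power_kw: float) -> dict:
    """Split a rack's heat load between direct-to-chip liquid and residual air."""
    air_kw = rack_power_kw * AIR_HEAT_FRACTION
    liquid_kw = rack_power_kw - air_kw
    return {
        "liquid_kw": liquid_kw,
        "air_kw": air_kw,
        # Even the residual air load must stay within what containment
        # and supply airflow can realistically remove.
        "air_within_ceiling": air_kw <= AIR_COOLING_CEILING_KW,
    }

for rack_kw in (30, 80, 120):
    budget = rack_heat_budget(rack_kw)
    print(f"{rack_kw} kW rack -> liquid {budget['liquid_kw']:.0f} kW, "
          f"air {budget['air_kw']:.0f} kW, "
          f"air side manageable: {budget['air_within_ceiling']}")
```

At 120 kW per rack, the residual air load alone approaches the air-cooling ceiling, which is why airflow strategy still matters even in heavily liquid-cooled halls.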
Embracing air cooling in direct liquid cooling
DLC, and specifically DTC, gives hyperscale operators higher compute density, increased energy efficiency, and more reliable thermal control at the component level.
Gordon says: “It is a practical means of maintaining the safe thermal working range of contemporary CPUs and GPUs. DLC also permits higher incoming air temperatures, reducing reliance on traditional HVAC systems and chillers.”
However, he argues that advanced DTC systems do not eliminate the need for air cooling.
“Air cooling is necessary even with the most sophisticated DTC systems,” he says. “The cooling of non-critical components, cabinet pressurization, and residual heat evacuation still require airflow.”
Additionally, hot and cold aisle containment systems have been proven to separate the hot exhaust air from the cold intake air effectively.
“Efficiency increases can result in a cooling energy decrease of 10–30%,” Gordon says. “Containment is essential for optimizing the performance of air-cooled systems in legacy settings.
“Raised flooring, hot/cold aisles, and containment systems are becoming progressively more crucial in environments that are transitional or hybrid (liquid + air cooled).
“These airflow techniques aid in the separation of AI-specific and older infrastructure in mixed-use data centers. However, in modern AI racks, air cooling is the supporting act rather than the main attraction.”
For operators managing tens or hundreds of megawatts of IT load, hot/cold aisle containment is one of the most cost-effective and space-saving solutions available. Gordon explains that even slight improvements to airflow containment can ultimately result in large-scale energy savings in high-density settings.
“By stabilizing temperature zones and lowering fluctuation, this improves cooling system responsiveness while lowering chiller load and encouraging energy-reuse initiatives,” he explains.
“Hot/cold aisle containment is no longer just a best practice—it is becoming a critical optimization layer in tomorrow’s high-performance, high-efficiency data centers.
“For operators managing hundreds of megawatts of IT load, hot/cold aisle containment is still one of the most cost-effective, space-efficient tools available.”
Where does the industry go from here?
As the data center industry continues to transition toward liquid cooling adoption, Gordon is eager for operators to understand that air cooling will remain relevant.
He explains that air management in the cooling stack is changing from a primary to a supporting, yet essential, system—highlighting that the industry is improving and re-integrating traditional tactics alongside cutting-edge liquid systems rather than discarding them.
“Hyperscalers are under constant scrutiny to meet net-zero targets. In addition to complying with energy efficiency regulations, the hybrid solution offers data center operators a way to transition from conventional air-cooled facilities to liquid-readiness without requiring complete overhauls,” he says.
“With high-density AI workloads, air cooling just cannot keep up. It’s a physical limitation. Hybrid methods that combine regulated airflow with DLC are now the engineering benchmark for scalable, effective, and future-ready data centers.”
Gordon Johnson, Senior CFD Manager for Subzero Engineering, outlines the need for sustainability in data center operations and highlights how optimized strategies can reduce energy waste and environmental impact.
By Gordon Johnson, Senior CFD Manager at Subzero Engineering, as published in Data Centre & Network News
An Environmental Cost
As essential as data centers are to our increasingly digital lives, they come at a huge environmental cost to our planet.
It doesn’t help that much of the energy required to power them is still sourced from fossil fuels. It’s one of the reasons that the industry has been identified as a major contributor to climate change.
Given the growing environmental concerns, it is now an urgent necessity to transition to sustainable, renewable energy sources, energy-efficient technologies, and recyclable materials. To underscore the importance of net zero, governments and regulatory bodies worldwide are seeking to implement stricter environmental policies to meet global climate goals.
The adoption of sustainable design ensures adherence to these regulations. Furthermore, as sustainability becomes a crucial component of corporate social responsibility (CSR) for many organizations, and as more consumers and businesses favor companies with strong environmental commitments, a strong sustainability policy can yield a competitive advantage in a tough marketplace.
Transitioning from White to Green
White space, as it relates to data centers, is the space inside a building devoted to IT hardware, such as servers, storage, and networking components. It is a highly controlled environment with restricted access, monitored for temperature, humidity, and other factors critical to maintaining the health of IT systems.
Increasing demand for data center performance and capacity while at the same time reducing operating costs requires an efficient use of white space. What could the transformation from white space to green building offer? And can it still deliver on operational excellence?
Incorporating renewable, natural energy sources such as wind or solar raises operational efficiency, reduces cooling requirements, and significantly cuts CO₂ emissions.
In addition, construction using recycled and recyclable materials also supports global initiatives in combating climate change, reducing waste, and lowering greenhouse gas emissions.
Green Building Certifications
According to the US Office of Energy Efficiency and Renewable Energy, data centers are one of the most energy-intensive building types, consuming 10 to 50 times the energy per floor space of a typical commercial office building. This energy consumption is only expected to increase with high-intensity emerging technologies such as artificial intelligence (AI), blockchain, and cryptocurrency.
Global green building certifications, such as Leadership in Energy and Environmental Design (LEED), are heralding a new era of environmentally sustainable practices. These certifications set a framework for integrating recycled and recyclable materials with measurable benchmarks for sustainability, energy efficiency and environmental stewardship.
Globally recognized green building certifications and standards that evaluate the environmental impact and performance of buildings are essential in promoting environmentally conscious design in contemporary infrastructure. Internationally recognized indicators give data centers the means to demonstrate their commitment to minimizing environmental impact, and set a bar for best practice in sustainable construction and operation. This encourages industry-wide adoption, opening the door for a more sustainable future.
Balancing Costs and Sustainability
Transitioning to greener materials and practices offers significant environmental benefits, but it also raises questions about cost. Does the investment in recyclable, green materials balance the return on investment?
Upfront costs of adopting green building practices are indeed high, particularly in legacy data centers, but the long-term financial benefits are indisputable. Over time, utilizing energy-efficient designs and systems can lead to a lower total cost of ownership (TCO) by reducing power and operational expenses.
Integrating renewables can also decrease organizations’ reliance on fossil fuels, helping them to better manage any future energy challenges. Additionally, data centers that actively pursue net-zero initiatives can enhance their brand perception while complying with regulations, a benefit that is difficult to quantify.
These benefits justify the initial investment. When evaluating costs concerning TCO, the argument for both financial and environmental sustainability is compelling.
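As a purely illustrative sketch of that TCO argument, the Python snippet below compares a hypothetical green-build premium against assumed annual savings. Every figure is a made-up assumption for demonstration, not data from the article.

```python
# Hypothetical payback calculation for a green retrofit. All figures are
# illustrative assumptions, not Subzero Engineering data.

green_premium_usd = 2_000_000        # assumed extra upfront cost of green design
annual_energy_savings_usd = 450_000  # assumed lower power/cooling OPEX per year
annual_other_savings_usd = 50_000    # assumed maintenance/waste savings

annual_savings = annual_energy_savings_usd + annual_other_savings_usd
simple_payback_years = green_premium_usd / annual_savings
print(f"Simple payback: {simple_payback_years:.1f} years")

# Over an assumed 15-year facility life, the TCO argument compounds:
lifetime_savings = annual_savings * 15 - green_premium_usd
print(f"Net lifetime savings: ${lifetime_savings:,.0f}")
```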
The Power of Collective Responsibility
While data centers have an unavoidable influence on the environment, the industry is quickly establishing itself as a leader in environmental sustainability by implementing a variety of net zero strategies. However, all industry stakeholders need to play a role in the collective accountability for an environmentally friendly future.
Partnerships are integral to this collaborative approach. From operators adopting renewable energy sources to designers innovating with eco-friendly materials, and from investors funding sustainability projects to policymakers incentivizing green practices, we are all accountable for accelerating sustainable operations.
Setting an Example
Taking decisive action is the first step toward sustainability. Choosing to lead on sustainability yields benefits beyond the environment; it sets off a chain reaction of positive change, a ripple effect that inspires and influences every sector and market.
Adopting this role of responsibility builds a legacy of accountability and investment in sustainability, with a long-lasting positive impact on the planet to be enjoyed by the next generation of technology entrepreneurs.
Gordon Johnson is the Senior CFD Engineer at Subzero Engineering, responsible for planning and managing all CFD-related jobs in the US and worldwide.
He has over 25 years of experience in the data center industry which includes data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified US Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from New Jersey Institute of Technology.
Shane Kilfoil of Subzero Engineering On The 5 Best Ways to Drive Product Growth
An interview with Shane Kilfoil by Rachel Kline, as published on medium.com
In the realm of business, particularly with regard to tech products, growth is the key to success. However, navigating the journey from ideation to expansion presents its own unique set of challenges. How does one devise a strategy to ensure sustained growth of a product in a competitive marketplace? What are the best practices, strategies, and methodologies to accomplish this? In this interview series, we would like to speak to experienced professionals who have successfully driven product growth. As part of this series, we had the distinct pleasure of interviewing Shane Kilfoil.
Shane Kilfoil, currently serving as President of Subzero Engineering and Simplex, brings to the role a wealth of experience gained over 25 years in leadership positions across the Industrial and IT sectors on a global scale. Notably, he served as Senior Vice President of Global Sales and Marketing for Tripp Lite, showcasing his strategic prowess. With an 11-year tenure at Eaton, including a role as Managing Director of Africa, Shane’s versatility extends from sales to product management. He holds a National Diploma in Electrical Engineering from Nelson Mandela University and a Postgraduate Diploma in Business Management.
At the helm of Subzero Engineering, a global leader in critical environment solutions, Shane drives sustainability, efficiency, and innovation. Leading with a customer-centric approach, he actively engages major providers, ensuring Subzero Engineering remains a frontrunner by developing cutting-edge, next-generation solutions. Shane Kilfoil’s leadership continues to propel Subzero Engineering to success in the dynamic and evolving global critical environments market, solidifying its status as an industry leader.
Thank you so much for joining us in this interview series! Before diving in, our readers would love to learn more about you. Can you tell us a little about yourself?
My name is Shane Kilfoil. I was born, raised, and educated in South Africa, and I take great pride in my heritage. Throughout my career, I’ve had the privilege of working in various countries around the world. I’ve lived in the UK for several years, worked in the US, and returned to South Africa as an expat for an assignment. My work experience is extensive and diverse, encompassing roles in field service, engineering, sales, product marketing, product management, and general management, which is my current focus.
What led you to this specific career path?
I studied electrical engineering but quickly realized it wasn’t my forte, so I transitioned to the commercial side. I was fortunate to have mentors who were far more knowledgeable than I was at the time, and they helped guide me down a different career path. Over the years, I discovered my passion for general management. I began to focus more on this area and seized opportunities that guided me down this path. That’s how I arrived at where I am today — initially being guided by others and gradually becoming more decisive as I identified my passion for leading teams and companies.
Can you share the most exciting story that has happened to you since you began at your company?
The most exciting aspect for me has been witnessing the company’s transformation over the past 36 months. We’ve evolved from a niche player in containment to a team capable of supporting our customers’ needs in a fast-paced and ever-changing data center environment. This transformation has allowed the organization to blossom and grow. Our team now tackles opportunities head-on, driving and fulfilling custom products while still meeting our core business needs. I don’t think we could have achieved this 36 months ago. Seeing the organization’s development is incredibly exciting for me.
You’re a successful business leader. What are three traits about yourself that you feel helped fuel your success? Can you share a story or example for each?
I firmly believe in building a robust and diverse team. A group of experts who collaborate effectively is far more powerful than relying on a single person. At Subzero Engineering, our rapid growth is driven by a strong leadership team that promotes our shared values. My role is to remove any barriers they face. Having a team of skilled individuals not only enhances our company’s success but also improves how our customers perceive our capabilities.
In my position, it is important to see the big picture and help the organization translate that vision into actionable tactics. All too often, teams embark on a project that is not aligned with their company’s goals. In these instances, you must evaluate the project and either confirm that it adds significant value or halt it. Companies have finite resources, and unfortunately you cannot take every project on. It is my job to help the team understand which projects help us meet our corporate goals and which do not. When changing a project’s direction or ending it, I ensure that teams understand the reasoning behind the decision. Understanding why a decision is made makes it easier to accept, even if we don’t always agree.
I am passionate about the businesses I work in and the customers we serve. However, I know this energy needs to be tempered at times. Not everyone is motivated the same way, and I need to ensure that I don’t overwhelm the teams with my ideas. However, during tough times, passion and energy can help pull a team together and motivate individuals to get them through the rough patch.
Do you have any mentors or experiences that have particularly influenced you?
Mentors come in many different forms. I like to think that my team mentors me daily, helping me become a more successful leader. Throughout my career, several influential people have guided me at various stages, each of them fundamentally shaping who I am today.
I was once told that if you can trust your team and allow them to guide your leadership style, their open and honest feedback can make you a better leader. I’ve tried to live by this advice for the past decade. It can be humbling because you might think you’re doing well, only to learn from your team that you’re not performing as well as you thought. However, if open dialogue and feedback are maintained and you’re willing to act on it, you can improve. This has been a significant learning curve, one that my mentors have strongly encouraged me to embrace.
What have been the most effective tactics your organization has used to accelerate product growth?
Our organization has been evolving. For many years, we were known as innovators, but over a three-to-five-year period, we stagnated. It wasn’t that we didn’t want to innovate; we were just so focused on day-to-day operations and executing incoming business that innovation took a back seat.
In the past two years, we’ve addressed this. It wasn’t a specific tactic but rather a recognition that we needed to do more to stay relevant. We identified key individuals and created teams around them that are dedicated solely to innovation. Some focus on driving innovation with specific customers, while others concentrate on the broader business. These teams wake up every day thinking about innovation, allowing them to avoid distractions from daily operations. This dedicated focus has significantly accelerated our innovation mentality and processes within the organization.
What do you see as the biggest challenge with respect to scaling a product-led business?
Not believing in your business plan or strategy and being distracted by the “new shiny object”! At the start of the year, businesses set a budget and strategy, but it’s easy to get distracted by new opportunities that occur during the year. While these opportunities should be considered, deviating from the original business plan to pursue a different direction can severely impact annual performance if not correctly thought through. It’s crucial to balance seizing new opportunities with staying focused on the end goal.
What, in your view, is a good litmus test to screen for a skilled and effective growth manager?
Initially, it’s important to look for someone with a track record of developing and bringing similar products to market. Throughout my career, I’ve hired people with different skill sets to drive growth, depending on the business’s maturity cycle and the type of development or growth needed.
However, it’s also crucial to ensure that whoever you’re hiring can fit within your company culture, regardless of their experience. If someone looks great on paper but doesn’t fit within that culture, there can be a clash. A highly successful person can become combative or unsuccessful if they don’t align with the culture. So, you must ask yourself: despite their technical capabilities, does the hire have the right personality to fit within the organization?
Of course, you might need someone to proactively change the company culture, and that’s a different hire. However, if you have a business that is trying to accelerate and you believe you’re doing all the right things elsewhere, then fitting within that culture is vitally important.
Can you describe a product growth tactic you or your team has used that was more effective than you anticipated? What was the goal, how did you execute, and what was the outcome?
The most effective product growth strategies often come from listening to customers and solving a problem that they have. If one customer has a problem that you can solve, you might be able to solve other customers’ problems too.
One transformative opportunity came from a customer who reached out through our website, asking if we could develop a solution for them. At the time, this request wasn’t our focus, and although they were a large customer, we might have ordinarily walked away. However, the timing worked out as we were looking to reboot our product development cycle. The customer was passionate and helped us understand the potential benefits, not just for us but for the wider industry. We took a risk and spent a year developing a product solution without any promise of a purchase order. Now, 24 months later, this has led to significant transformation in a sector of our business that we had not anticipated participating in.
Customers can provide the most beneficial ideas because they have challenges that need resolution. As an organization, we have refocused our efforts on taking a mindful approach to solving customer issues proactively. We believe that this is what sets us apart from our competitors.
Thank you for all of that. Here is the main question of our interview. Based on your experience, what are your “5 Best Ways to Drive Product Growth”? If you can, please share a story or an example for each.
1. Is it core to the business? The starting point is always: is this core to our business? Is it a natural adjacency? Does it add to something we’ve already got that strengthens our existing base business?
2. Payback. If it is core to our business, should we do it? Is the payback worth the investment? Do we have the resources that can support that investment?
3. Resources. If the idea is good, do you have the resources?
4. Finding resources. If resources are lacking, how hard would it be to get them? Can they be hired? Can they be bought? Can those resources be acquired to aid the success of the project?
5. Commitment. Once the required resources have been identified, are we truly committed? While we’re not a small organization, we’re not a large one either, and there is only so much we can do at any given time. This is probably the most important step, because if we are committed, then everything that precedes it is worth it. If we’re not committed but still start the project, it has a high likelihood of failing.
What is the number one mistake you see product marketers make that may actually be hurting their growth outcomes?
Not enough people halt failing projects. In any engineering team or project management team, once you start a project, it feels like your child — you feel personally connected and responsible for it. However, sometimes during development you realize the project isn’t going to meet the customer or project requirements. Teams are typically reluctant to kill a project at this stage, especially when significant financial and emotional investment has already been committed.
Companies need to create an environment where it’s okay to be wrong. Things change, and as a result, the solution or initiative may no longer be relevant or won’t provide the expected return. It’s not necessarily a failure on the project team; you just don’t always get it right.
This is one of the biggest lessons I’ve learned and one of the main struggles I’ve seen product marketing teams face. We need to regularly ask ourselves if the projects being worked on are still relevant. If not, then we need to be OK with reallocating resources to other, more important or strategic projects that help the company realize its vision. Having a robust process for this ensures that you are always maximizing your company’s resources.
Thank you so much for this. This was very inspirational, and we wish you only continued success!
The Rise of AI Data Center Models and the Decline of the General-Purpose Data Center
We’re slowly beginning to see a shift toward purpose-built AI data centers, especially from hyperscalers like AWS, Google, and Microsoft. With the latest announcement from the White House, the AI Action Plan, the US Government has formalized plans to export American AI across the world and promote the rapid buildout of data centers and AI-focused infrastructure, making this shift simpler to implement.
These aren’t just scaled-up legacy setups, however. They’re designed from the ground up for AI workloads and require specific infrastructure, particularly for power delivery and cooling. But what do they need that separates them from the ‘standard’ facility, and what makes it so challenging to retrofit legacy data centers for these workloads?
By Gordon Johnson, Senior CFD Manager at Subzero Engineering
AI infrastructure demands
AI is quickly becoming the dominant consumer of compute, and traditional infrastructure just can’t keep up. The industry is shifting to fundamentally new architectures, and data centers that don’t adapt will be left behind. This is where the White House’s AI Action Plan is helping to accelerate the development of AI infrastructure across the country: its focus on the rapid buildout of AI-ready data centers, the export of AI technology, and the infrastructure required to support this shift reflects what hyperscalers are already doing.
It is hard to overlook the infrastructure constraints of traditional data centers as AI workloads grow in complexity and scale. We’re entering a new era in data infrastructure, one that legacy data centers weren’t built to handle. To address the specific requirements of large-scale artificial intelligence, top hyperscalers such as AWS, Google, and Microsoft are spearheading the evolution by building a new class of data center from the ground up: AI-native infrastructure.
Why can’t legacy facilities handle AI’s demands?
Legacy data centers were designed for general-purpose computing. They were built to account for predictable workloads, with moderate power usage and flexible hardware.
AI has different needs, and many legacy data centers are unsuitable for the task given the scope and intricacy of its requirements.
AI workloads are vastly more power-intensive than traditional workloads. They require three to ten times as much electricity per rack, so merely adding extra GPUs to the same old racks is not an option. The extreme heat produced by CPUs, GPUs, and TPUs cannot be controlled using conventional air-cooling methods, so liquid cooling infrastructure such as direct-to-chip becomes necessary to meet the requirements of contemporary AI.
AI training requires unpredictable bursts of energy and ultra-fast connections between thousands of nodes, and the need for densely populated, high-performance clusters conflicts with the sprawl of traditional data halls. Long cable runs and low-density racks can increase latency, reducing performance for large AI jobs. This kind of infrastructure concern is exactly why the AI Action Plan couldn’t have come at a better time, with targets to fund next-generation, resilient digital infrastructure and collaboration between public and private organizations on AI system reliability and performance.
Legacy Data Centers Built for Yesterday
Legacy data centers were built for yesterday’s workloads. AI isn’t just demanding more, it’s demanding different. Hyperscalers know it, and they’re not waiting around. The future of digital infrastructure is being redefined by the emergence of the purpose-built AI data center era.
Retrofitting an existing data center for AI isn’t easy. Typically, data centers will need to leverage their existing investments in air cooling while selectively deploying liquid cooling where needed. Although infrastructure can be reworked and redesigned, concessions will always need to be made, and these compromises could come at the expense of performance and efficiency. Power availability (typically capped at the site level), cooling capacity (particularly in raised-floor environments), rack weight and floor loading, and ceiling-height restrictions that constrain airflow design are all physical limits on most older sites. Add in the layout obstructions and interconnect distances that cause latency bottlenecks, and this could be a compromise too far.
What Makes AI Workloads Different
Traditional data centers tend to average 5–20 kW per rack, whereas AI workloads can push power draw to 30–100 kW per rack or higher. Infrastructure needs to be approached very differently to support this degree of power density, as on-site substations, busways, and high-capacity PDUs are increasingly the standard rather than the exception.
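A quick capacity-planning sketch shows why this power density reshapes infrastructure: under a fixed site power budget, AI-class racks exhaust the budget with far fewer racks. The budget and per-rack figures below are illustrative assumptions within the ranges quoted above.

```python
# Rough capacity planning: how many racks fit under a fixed site power cap?
# All figures are illustrative assumptions, not real site data.

SITE_POWER_BUDGET_MW = 20.0   # hypothetical IT power available at the site

def racks_supported(rack_kw: float, budget_mw: float = SITE_POWER_BUDGET_MW) -> int:
    """Whole racks that fit within the site's IT power budget."""
    return int(budget_mw * 1000 // rack_kw)

for label, rack_kw in (("traditional", 10), ("dense traditional", 20),
                       ("AI", 60), ("high-end AI", 100)):
    print(f"{label:>17} ({rack_kw:>3} kW/rack): {racks_supported(rack_kw):>5} racks")
```

The same 20 MW that powers 2,000 traditional racks supports only 200 racks at 100 kW each, which is why power delivery, not floor space, becomes the binding constraint.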
AI workloads are inherently unforgiving of infrastructure failures. While traditional workloads can often handle transient faults or recover from minor slowdowns, AI training that runs for days or weeks requires near-perfect uptime, clean compute environments, and dependable performance. Even small variations or inconsistencies in hardware, firmware, or thermal performance can be catastrophic; not because the job can’t recover, but because the cost of failure is so high, with every crash potentially costing hours (or days) of lost compute, wasted energy, and missed opportunity.
Designing for AI
To unlock the full value of AI, the AI data center infrastructure must evolve. AI demands infrastructure that’s not just fault-tolerant, but fault-predictive and self-healing.
When embarking on a new data center build, you must consider:
High-Density Power and Cooling: Custom power paths must be able to handle 80–100 kW racks or more, while air cooling, the mainstay of legacy facilities, will not be enough to cool these high-density racks. Advanced thermal strategies such as liquid cooling and direct-to-chip solutions must be integrated into the infrastructure of the AI data center.
Architecture: Physical CPU/GPU/TPU cluster layouts need to be optimized to ensure latency is minimized and training throughput maximized. Consolidated floorplans and thermal awareness allow for increased efficiency, faster deployment, and future expansion.
AI-Centric Design: Real-time predictive failure monitoring and telemetry should be used on every component, from temperature to power draw. Machine learning-based fault prediction isn’t optional anymore; it’s how downtime can be preempted and uptime optimized.
Sustainability: Carbon-neutral power sources, energy storage, recycling and reusing waste heat, and using alternative building materials can all support sustainable, environmentally friendly policies and strategies. Adherence to green strategies not only improves a facility’s efficiency but can provide competitive advantage.
Legacy data centers were designed for flexible, general-purpose compute. However, AI clusters depend on ultra-low-latency interconnects between accelerators. That changes everything from the physical layout to how cable trays are built. New facilities need to be dense, compact, and often modular, designed to reduce data-movement friction.
Bigger and Better
AI data centers aren’t just bigger; they’re different by design. Larger footprints are not a luxury but a necessity to accommodate the density, specialized layout, thermal management, and performance characteristics of AI environments.
With the White House calling for more land and more power, hyperscalers are starting to plan and construct data centers in pod-based, modular designs that are tailored for AI workloads and optimized for independent cooling, powering, and scaling. AI clusters do not distribute workloads evenly throughout the data center. Rather, they require concentrated compute pods (hundreds to thousands of GPUs in a tightly integrated fabric), calling for larger real estate to accommodate GPU/TPU cages or liquid-cooled racks, zoning to isolate workloads and effectively manage thermal loads, and more whitespace per cluster to accommodate power, cooling, and cabling routes.
Space is needed to manage the significant heat produced by high-density AI workloads, for heat exchange devices, immersion tanks, liquid cooling loops, and greater hot/cold aisle separations, often with isolated or enclosed cooling corridors.
Each pod needs short, direct power paths, larger substations, and dedicated power rooms. They also require extra room for redundant switchgear, transformers, and UPS systems, as well as increased floor loads and reinforced infrastructure to support denser, heavier racks.
A Fundamental Rethink
The White House’s AI Action Plan reinforces what industry leaders are starting to adopt: the companies already leading in AI-native infrastructure are paving the way forward, and this transformation needs to be seen across the industry. Hyperscalers aren’t building these new AI-driven data centers because they’re trendy or because they want the biggest facility; they’re building them because it’s necessary. AI is not an experimental upgrade cycle. It’s fundamental infrastructure. And as with any foundational shift in computing, it demands a matching evolution in physical and digital architecture.
Companies that continue trying to run next-generation AI on last-generation infrastructure will find themselves bottlenecked in performance, efficiency and ultimately competitiveness. The AI-native future is rapidly overshadowing the computing era for which legacy data centers were constructed.
For this reason, hyperscalers are designing data centers that embrace and prioritize purpose-built AI infrastructure. These are not merely scaled-up facilities. They are precision-engineered, offering the performance, resilience, and AI acceleration that will define the next decade.
About the writer
Gordon Johnson is the Senior CFD Engineer at Subzero Engineering, responsible for planning and managing all CFD-related jobs in the US and worldwide.
He has over 25 years of experience in the data center industry which includes data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified US Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from New Jersey Institute of Technology.
Is Liquid Cooling Becoming Non-Negotiable?
Direct Liquid Cooling (DLC), especially Direct-to-Chip Cooling (DTC), is now essential for high-density AI racks. DLC enables higher compute densities and better energy efficiency.
Air cooling alone can’t keep up with the thermal output of modern CPUs and GPUs, but even with advanced DTC, approximately 25% of ITE heat still needs air cooling.
Cold and hot aisle containment is a tried-and-tested climate control strategy that separates the two airflows while improving energy efficiency. For energy savings that can’t be ignored, should hot/cold aisle containment be considered a necessity in hyperscale data centers?
By Gordon Johnson, Senior CFD Manager at Subzero Engineering
Introduction
AI workloads are driving unprecedented compute demand. Not only is the demand intensifying rather than decreasing, it is also altering the economics and structure of computing at all levels.
AI is expected to be the primary cause of the anticipated doubling of data center power demand worldwide between 2022 and 2026 and, unless offset, increased compute = increased emissions.
With GPUs drawing up to 700W each and power densities exceeding 80–100 kW per rack, thermal management has become one of the most critical challenges in hyperscale environments. Conventional air-cooling techniques can no longer keep up with the thermal densities of contemporary AI workloads and liquid cooling is no longer just a viable option for the future. It has become the new norm.
Direct Liquid Cooling (DLC), and specifically Direct-to-Chip (DTC), is now essential for controlling heat. However, about 25% of the heat produced by IT equipment still needs to be expelled through the air, especially from secondary parts such as memory subsystems, storage and power delivery circuits. It is impossible to overlook this heat residue, and that’s where traditional airflow strategies are still needed, albeit in a supporting role.
Challenges
Hyperscale operators are seeing a sharp rise in OPEX from both power and cooling, and it’s becoming one of their most pressing financial and operational challenges.
In recent years, power and cooling have become strategic levers and margin killers in hyperscale operations. If you’re operating at scale, your P&L is directly tied to your power and cooling intelligence. Those who get it right will widen their advantage. Those who don’t could find AI infrastructure becoming financially unsustainable.
Efficiency is no longer just best practice
Many hyperscalers have already hit PUEs of 1.1–1.2, leaving limited room for further efficiency gains. This means that absolute power usage now rises with IT load even if relative efficiency stays the same. In high-density environments, even marginal improvements in airflow containment can lead to significant energy savings.
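A short sketch of the PUE arithmetic behind that point, using an assumed 50 MW IT load: once PUE approaches 1.1, almost all remaining growth in facility power comes from the IT load itself.

```python
# Why PUE of 1.1-1.2 leaves little headroom: total facility power scales
# with IT load, so absolute consumption climbs even as PUE stays flat.
# The IT load figures are illustrative assumptions.

def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Facility power = IT power x PUE (PUE = total power / IT power)."""
    return it_load_mw * pue

it_load = 50.0  # assumed MW of IT load
for pue in (1.5, 1.2, 1.1):
    overhead = facility_power_mw(it_load, pue) - it_load
    print(f"PUE {pue}: {facility_power_mw(it_load, pue):.1f} MW total, "
          f"{overhead:.1f} MW cooling/overhead")

# Doubling IT load at a fixed PUE of 1.1 still doubles absolute power:
print(f"100 MW IT at PUE 1.1 -> {facility_power_mw(100.0, 1.1):.0f} MW total")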
Despite the rise of renewable energy resources, AI is effectively slowing down decarbonization. Data centers’ energy usage is driven by the fact that advancements in AI model performance frequently result in larger models and more inference, raising energy costs and contributing to sustainability challenges. AI needs to get more efficient, not just more powerful.
Air-Cooling Limits
Training large-scale AI models like GPT-4, Gemini, or Claude-class systems requires millions of kWh of electricity. The scale of this energy consumption is one of the key concerns of the modern AI era. Once deployed, these models require an enormous amount of inference infrastructure to process countless queries every day, and this can exceed the energy used for training.
Modern AI GPUs (like NVIDIA H100 or AMD MI300X) are now drawing upwards of 500 watts per chip. Hyperscale data centers that once operated in the 10–30 kW/rack range are now pushing 80–120 kW/rack to support AI training and inference. With air cooling limited to about 30–40 kW/rack, the air just cannot carry the created heat quickly enough, even with optimal containment and supply airflow.
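The physics behind that ceiling can be sketched with the standard sensible-heat relation for air (q ≈ 1.08 × CFM × ΔT in imperial units). The 20°F supply/return split below is an assumption; the point is that the required airflow scales linearly with rack power until it becomes impractical.

```python
# Why air alone can't carry 80-120 kW racks: the airflow volume required
# grows linearly with heat load. Uses the standard sensible-heat relation
# q[BTU/hr] ~= 1.08 x CFM x dT(F); the dT below is an assumed value.

BTU_PER_HR_PER_KW = 3412.0

def required_cfm(heat_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to remove heat_kw at a given supply/return dT."""
    return heat_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

for rack_kw in (10, 40, 100):
    print(f"{rack_kw:>3} kW rack needs ~{required_cfm(rack_kw):,.0f} CFM at 20F dT")
```

A 100 kW rack would need roughly 15,800 CFM of supply air, far beyond what server fans and containment aisles can practically deliver, which is the physical case for moving the bulk of the heat into liquid.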
Direct Liquid Cooling (DLC)
DLC, and specifically DTC, gives hyperscale operators higher compute density, increased energy efficiency, and more reliable thermal control at the component level. It is a practical means of maintaining the safe thermal working range of contemporary CPUs and GPUs. DLC also permits higher incoming air temperatures, reducing reliance on traditional HVAC systems and chillers.
In addition, Direct-to-Chip (DTC) can reduce overall cooling energy (and hence PUE) by up to 40% compared with traditional air systems by targeting cooling directly at the hottest components. However, even the most advanced DLC/DTC systems do not eliminate the need for air cooling: non-critical component cooling, cabinet pressurization, and residual heat evacuation still require airflow.
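To see what a cooling-energy reduction of up to 40% could mean for PUE, here is a toy calculation with assumed facility numbers; it deliberately ignores non-cooling overheads such as power-distribution losses.

```python
# Illustrative effect of DTC on cooling overhead, using the up-to-40%
# reduction quoted above. Baseline figures are assumptions, and other
# facility overheads are ignored for simplicity.

it_load_mw = 50.0
baseline_cooling_mw = 20.0                       # assumed air-cooled overhead
dtc_cooling_mw = baseline_cooling_mw * (1 - 0.40)  # 40% cooling-energy cut

baseline_pue = (it_load_mw + baseline_cooling_mw) / it_load_mw
dtc_pue = (it_load_mw + dtc_cooling_mw) / it_load_mw
print(f"Baseline PUE ~{baseline_pue:.2f}, with DTC ~{dtc_pue:.2f}")
print(f"Cooling energy saved: {baseline_cooling_mw - dtc_cooling_mw:.1f} MW")
```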
Hot/Cold Aisle Containment
Hot/cold aisle containment is a proven architectural strategy that separates hot exhaust air from cold intake air. Containment keeps the two air streams from mixing, ensuring that cold air reaches the servers more directly, decreasing cooling load, and enhancing thermal predictability. The resulting efficiency gains can cut cooling energy by 10–30%. Containment is essential for optimizing the performance of air-cooled systems in legacy settings.
Raised flooring, hot/cold aisles, and containment systems are becoming progressively more crucial in environments that are transitional or hybrid (liquid + air cooled). These airflow techniques aid in the separation of AI-specific and older infrastructure in mixed-use data centers. However, in modern AI racks, air cooling is the supporting act rather than the main attraction.
The Case for Containment
For operators managing tens or hundreds of megawatts of IT load, hot/cold aisle containment is one of the most cost-effective and space-saving solutions available.
Even with DTC-intensive systems, containment is not obsolete. Modest improvements to airflow containment can result in large-scale energy savings in high-density settings. By absorbing and diverting leftover heat from partially liquid-cooled equipment, containment enhances airflow circulation to secondary components. By stabilizing temperature zones and lowering fluctuation, this improves cooling system responsiveness while lowering chiller load and encouraging energy-reuse initiatives.
Hot/cold aisle containment is no longer just a best practice; it is becoming a critical optimization layer in tomorrow’s high-performance, high-efficiency data centers. For operators managing hundreds of megawatts of IT load, hot/cold aisle containment is still one of the most cost-effective, space-efficient tools available.
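A back-of-envelope estimate using the 10–30% range above shows why containment savings matter at scale; the cooling load and electricity price below are assumed, hypothetical values.

```python
# Back-of-envelope containment savings using the 10-30% range quoted above.
# Facility figures and the electricity price are illustrative assumptions.

HOURS_PER_YEAR = 8760
cooling_load_mw = 15.0          # assumed average cooling power draw
price_usd_per_mwh = 80.0        # assumed electricity price

for savings_frac in (0.10, 0.30):
    saved_mwh = cooling_load_mw * savings_frac * HOURS_PER_YEAR
    print(f"{savings_frac:.0%} containment saving -> "
          f"{saved_mwh:,.0f} MWh/yr (~${saved_mwh * price_usd_per_mwh:,.0f})")
```

Even at the bottom of the range, a facility of this assumed size saves on the order of a million dollars a year, which is why containment remains a cost-effective lever alongside liquid cooling.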
Conclusion
As hyperscale operators transition to liquid-cooled infrastructure, the expectation might be that airflow strategies will become irrelevant. But the reverse is happening. In the cooling stack, air management is changing from a primary to a supporting, yet essential, system. The industry is improving and re-integrating traditional tactics alongside cutting-edge liquid systems rather than discarding them.
Hyperscalers are under constant scrutiny to meet net-zero targets. In addition to complying with energy efficiency regulations, the hybrid solution offers data center operators a way to transition from conventional air-cooled facilities to liquid-readiness without requiring complete overhauls.
With high-density AI workloads, air cooling just cannot keep up. It’s a physical limitation. Hybrid methods that combine regulated airflow with DLC are now the engineering benchmark for scalable, effective, and future-ready data centers.
About the writer
Gordon Johnson is the Senior CFD Engineer at Subzero Engineering, responsible for planning and managing all CFD-related jobs in the US and worldwide.
He has over 25 years of experience in the data center industry which includes data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified US Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from New Jersey Institute of Technology.
Optimizing Containment for Sustainable Data Center Goals: Enhancing Efficiency and Reducing Environmental Impact
Discover how optimized containment strategies and sustainable Composite AisleFrame (CAF) systems can reduce data center carbon emissions by up to 4,299 kg CO₂ per frame while improving operational efficiency and cutting costs.
By Andy Conner, Channel Director EMEA at Subzero Engineering
Introduction
Environmental consciousness is not just a trend. We can’t rapidly mend the hole in the ozone layer, and climate change concerns won’t be resolved by ever-evolving technology any time soon. Humankind has a collective responsibility to reduce our carbon emissions, lower waste, and change the way we create and use energy in our day-to-day lives. However, it’s a constant challenge for organizations to balance scalability, operational efficiency, and power resourcefulness with sustainability objectives.
Data centers are hugely energy-intensive buildings. Handling the ever-growing capacity and complexity of AI and high-performance computing (HPC) means they consume an enormous amount of power. It’s imperative that we minimize the environmental impact of these buildings, reducing the power they consume while maximizing the useful work from the energy they do use. New strategies need to be implemented, sustainable materials deployed, and mindsets changed to get to net zero and stay there.
Goals and Objectives
Reduce, recycle, and reuse policies should be integrated into every organization’s core values; however, longer-term environmental goals that support a sustainable infrastructure built with energy-efficient technologies and renewable energy sources must be considered when redesigning or extending legacy data center facilities, or building new ones.
One of the best strategies to accomplish sustainability objectives in these buildings is by utilizing optimized containment. Optimizing containment is a vital first step toward achieving a sustainable data center and can significantly reduce unfavorable environmental consequences.
After the ITE, cooling is the biggest consumer of a facility’s energy resources. Containment strategies decrease energy waste and efficiently regulate airflow. This enables data centers to boost operational efficiency while minimizing their impact on the environment.
Utilizing containment keeps the hot and cold air streams separate, maintaining consistent temperatures, enabling a regulated airflow environment, and increasing the efficiency of the cooling systems. This way, data centers can conserve energy rather than consume more.
Traditionally, an AisleFrame containment system is made of steel. This provides an integral floor-supported structure that physically separates the cooled and expelled hot air. With excellent sustainability credentials, steel is 100% recyclable and can be melted down and reused time and again without deterioration. Through closed-loop recycling, every ton of steel scrap recovered can replace one ton of primary steelmaking while retaining its properties and performance. In addition, steel’s long lifespan and minimal maintenance requirements strengthen its sustainability case. On the flip side, however, decarbonizing steel remains a challenge and a global priority, and steelmaking currently contributes around 8% of the world’s total carbon emissions.
The Alternative
Composite AisleFrame (CAF) is a frame-based, floor-supported structure for IT/HPC deployments made entirely from alternative, sustainable materials. Used in the construction industry for more than 20 years in many proven applications, such as airplane tail structures, outdoor utility/telephone poles, and transportation bridges, this composite material has now been refined for specific use in data centers to be denser, stronger, and with additional fireproof properties.
CAF has many benefits compared with a Steel AisleFrame system. Every element in a data center has an intrinsic cost that needs to be accounted for, and steel is a heavy material. This translates to high transit costs and increased installation times that must be factored into the build.
In comparison, CAF material is 50% lighter than steel alternatives. It can be installed swiftly without the need for powder coating and is easily reconfigurable as requirements change, offering more flexibility and easier scalability. It can be reused multiple times and has a longer lifespan than steel, supporting waste reduction and net-zero initiatives and leading to a lower Total Cost of Ownership (TCO). It can also be flat-packed, allowing more product to be shipped in the same physical footprint, which lowers transportation emissions and costs, offering up to 4,299 kg CO₂ savings per frame compared with non-recycled steel and up to 429 kg CO₂ savings per frame compared with recycled steel.
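Scaled across a deployment, the per-frame figures above add up quickly. The frame count in this sketch is a hypothetical assumption, not a real project size.

```python
# Scaling the per-frame CO2 figures quoted above to a full deployment.
# The frame count is an illustrative assumption.

CO2_SAVED_VS_NEW_STEEL_KG = 4299   # per frame vs non-recycled steel
CO2_SAVED_VS_RECYCLED_KG = 429     # per frame vs recycled steel

frames = 200  # hypothetical aisle-frame count for a mid-size hall
print(f"vs non-recycled steel: {frames * CO2_SAVED_VS_NEW_STEEL_KG / 1000:,.1f} t CO2")
print(f"vs recycled steel:     {frames * CO2_SAVED_VS_RECYCLED_KG / 1000:,.1f} t CO2")
```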
CAF’s strength per linear meter and its seismic compliance enable multi-level data centers to have CAF systems running throughout each building floor without the additional financial risk of having to strengthen weight-bearing floors. Its higher tensile and flexural attributes, with a better compressive strength-to-weight ratio than steel, mean CAF is more efficient structurally.
While steel is resource-heavy, CAF is far less resource-intensive to deploy, which means a CAF system can be delivered and installed quickly. A steel structure can potentially take months to be shipped, but CAF could conceivably be delivered in weeks.
Conclusion
Optimizing cooling by separating the hot and cold air can ensure stable and consistent temperature distribution. By improving energy efficiency and overall cooling effectiveness, this can deliver on significant energy savings.
As the industry shifts to using greener technology, the development of a sustainable infrastructure built with energy-efficient technologies and recycled materials continues to be a key strategy in the next generation of high-performance data centers.
Data centers are renowned for being hugely power-intensive buildings, so operators must constantly investigate strategies and technologies to lower their TCO. Whether restructuring, redesigning, or building from scratch, the cost savings attributed to CAF can contribute to a much quicker return on investment in data center infrastructure. And it’s a win-win when you’re lowering operational costs and optimizing the facility’s high performance and reliability at the same time as achieving long-term global environmental objectives.
Deep Dive: Gordon Johnson, Senior CFD Manager, Subzero Engineering
Interview with Gordon Johnson for www.intelligentdatacentres.com
Data Center Industry Interview
What would you describe as your most memorable achievement in the data center industry?
It’s hard to settle on just one memorable event, but I fondly recall working on a project for a large data center customer that was still in the design phase. My modeling showed that with some simple design changes, the customer could increase supply temperatures and lower airflow, resulting in annual savings of approximately US$400,000. The customer made the recommended changes, and about two years after the facility was up and operating, I received a call telling me that my energy savings and annual operating cost estimates had been on the conservative side.
They estimated they had saved closer to US$500,000 in operating costs in just one year and had simultaneously made a major impact in reducing their CO₂ footprint. This is not a one-time event, either. I’ve been in the industry for over 30 years, and I can honestly say that any time I can help data center operators and managers understand and reduce their TCO by operating their data center as efficiently as possible is a memorable moment for me.
What first made you think of a career in technology/data centers?
I started doing disaster recovery work in data centers but soon found my passion in the design and operation of energy-efficient data centers. This led me to obtain various certifications to help with the goal of making our industry as sustainable and green as possible.
What style of management philosophy do you employ with your current position?
I believe that a successful manager needs to possess effective communication and listening skills, and I try to apply these skills whether I’m managing or being managed. In addition, I strongly believe in the importance of showing respect to others, which includes valuing others’ beliefs, contributions, and ideas. I’ve seen firsthand that this results in increased productivity, improved employee and team morale, and reduced turnover. Well-respected employees are happier, more productive, and tend to work harder with a greater sense of pride in their work. You can never go wrong when it comes to listening, communicating, and showing respect to others.
What do you think is the current hot talking point within the data center space?
One hot topic is AI (Artificial Intelligence) and the expectation that some form of liquid cooling will be needed to cool our next-generation data centers. While AI requires significant computing power, the reality is that it currently represents a small fraction of ITE’s global energy consumption, although that’s expected to change and increase in the next five years. Therefore, to offset the environmental impact of AI, greater control over data center energy consumption will increasingly become a top priority. We’re also going to want to look at our data center designs, and especially our cooling, holistically as opposed to the current one-size-fits-all perspective.
How do you deal with stress and unwind outside the office?
Volunteer work has always been an important part of my life, and I find any opportunity to work with and help others to be one of the greatest stress relievers available. I also believe that exercise on a regular basis is a big stress reliever, so I try to stay as active as possible during down time. I love to play tennis, run a few local 5K events each year, and nothing helps me unwind more than daily long walks with my dog.
What do you currently identify as the major areas of investment in your industry?
We’re going to need to get serious and give more attention to sustainability in our industry. This includes avoiding the ‘rip and replace’ mentality where we’re constantly replacing our cooling and ITE every few years. Smart investing and positive sustainability practices include properly specifying our hardware and cooling to last at least 10 years. In addition, if we’re planning on moving towards some form of liquid cooling during that time period, we’ll want to focus on what type of technology works best for our business case, both now and in the future.
What are the region-specific challenges you encounter in your role?
Since our industry is constantly looking to reduce energy usage, including through free cooling and the use of renewable power, one region-specific challenge I encounter is deciding where to build new data centers. While many factors are involved in this decision, including the availability of land, power, and water, the challenge is to find and maximize the use of colder climates for new builds. These colder climates are naturally favorable since they reduce, or sometimes even eliminate, the reliance on energy-intensive conventional cooling systems. In cooler climates we may be able to use naturally available cold air or chilled water to reduce temperatures at the ITE, significantly lowering both capital costs and operational expenses. A simple way to compare candidate sites on this basis is sketched below.
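To make the free-cooling point concrete, here is a minimal sketch of how one might screen candidate sites by counting economizer-friendly hours in a year of hourly temperature data. The temperature profiles, setpoint, and approach values are hypothetical placeholders for illustration, not real climate records or any Subzero Engineering tool.

```python
import random

def free_cooling_hours(hourly_temps_c, supply_setpoint_c=24.0, approach_c=4.0):
    """Count hours when outdoor air is cold enough for an air-side
    economizer to meet the supply setpoint without mechanical cooling."""
    threshold = supply_setpoint_c - approach_c  # heat-exchanger approach penalty
    return sum(1 for t in hourly_temps_c if t <= threshold)

# Hypothetical hourly temperature profiles (8,760 hours) for two sites.
random.seed(0)
cold_site = [random.gauss(5, 8) for _ in range(8760)]        # mean 5 C
temperate_site = [random.gauss(16, 7) for _ in range(8760)]  # mean 16 C

for name, temps in [("cold site", cold_site), ("temperate site", temperate_site)]:
    hours = free_cooling_hours(temps)
    print(f"{name}: ~{hours} free-cooling hours/year ({hours / 8760:.0%})")
```

In practice a site study would use measured weather data and account for humidity and water availability, but even this crude count shows how sharply colder climates can cut the hours a mechanical plant must run.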
What changes to your job role have you seen in the last year and how do you see these developing in the coming months?
Data centers are increasingly adopting greener practices to be more sustainable and energy efficient, and now we’re starting to see more products inside the white space being made from greener, more environmentally friendly materials. One of my roles recently has been to research and quantify the GWP (Global Warming Potential) benefits of using recyclable and composite materials for both cold and hot aisle containment products. Besides further lowering the GWP and carbon footprint in the white space, these products help data centers obtain LEED (Leadership in Energy and Environmental Design) certification, which demonstrates that they have reduced environmental impact, comply with regulations, and operate with enhanced efficiency.
From Concept to Completion: The Five Phases of Cleanroom Excellence
Your Guide to Critical Environment Protection
Do you have a critical environment that demands protection through a cleanroom solution, but don’t know where to begin? Subzero Engineering has developed an educational whitepaper that walks you through the entire process of designing and building a Simplex modular cleanroom.
Unraveling the Cleanroom Construction Process
Our comprehensive whitepaper breaks down the complex journey into five manageable phases, giving you clarity and confidence at every step:
“From the initial consultation and solution concepting to manufacturing and final installation, we’re right there with you every step of the way. Subzero Engineering is committed to delivering the highest quality solutions, installation and service.”
The Five Phases of Cleanroom Construction
1. Consulting Phase
Our engineers collaborate with you to identify and understand your critical environment’s unique needs, work processes, and goals. This initial consultation sets the foundation for everything that follows.
2. Design and Layout Phase
Our experienced engineers create detailed design drawings of your cleanroom, focusing not just on a sterile environment but on a space that integrates seamlessly with your equipment and unique workflow requirements.
3. Client Review, Modification, and Approval Phase
You’ll review the plans, provide feedback, and ultimately approve the design. We believe collaboration is fundamental to ensuring our custom modular cleanrooms perfectly address your specific needs.
4. Manufacturing Phase
At our state-of-the-art 155,000-square-foot facility, our skilled team meticulously builds every customized component of your cleanroom solution to exacting standards.
5. Installation Phase
You have options: our skilled site services team can manage all aspects of installation, or you can take charge. This is where your cleanroom becomes fully operational and equipped to meet your specific requirements.
Why Choose Subzero Engineering for Your Cleanroom Needs
Custom Solutions: Tailored to your unique critical environment requirements
Expert Guidance: Support at every step from consultation to installation
State-of-the-Art Manufacturing: Built in our 155,000-square-foot facility
Flexible Installation Options: Choose our installation team or manage it yourself
Collaborative Approach: Your input shapes the final product
Ready to Protect Your Critical Environment?
Whether you’re a seasoned industry professional or just exploring your options, our educational whitepaper offers valuable insights into the intricacies of each phase of cleanroom construction.
The Five-Phase Guide to Designing Perfect Cleanrooms
Simplifying Cleanroom Design and Implementation
Designing and building a top-notch cleanroom doesn’t have to be complicated. Subzero Engineering has distilled the entire process into five straightforward yet crucial phases in our latest whitepaper.
Your All-Access Guide to Cleanroom Excellence
Our educational whitepaper breaks down the complex world of cleanroom engineering into accessible knowledge that helps you:
Understand the complete cleanroom development process
Identify the critical requirements for your specific needs
Navigate the journey from concept to completion
Ensure quality at every step of implementation
“Subzero Engineering is committed to the highest quality of product, installation and service to all of our customers. Throughout the five-phase start-to-finish process, we continuously consult with you to ensure that your cleanroom is precisely engineered to your specific needs and requirements.”
The Five-Phase Approach to Cleanroom Success
Our whitepaper details each phase of creating the perfect cleanroom environment:
Assessment & Planning: Determining your specific requirements and constraints
Design & Engineering: Creating the optimal solution for your unique needs
Manufacturing & Quality Control: Building components to exacting standards
Installation & Integration: Expert implementation with minimal disruption
Testing & Verification: Ensuring all specifications are met or exceeded
Modular Solutions for Critical Environments
Learn how our modular approach delivers advantages that traditional construction simply can’t match:
Faster implementation timeline
Superior quality control
Enhanced flexibility for future modifications
Consistent performance across installations
Reduced on-site disruption during construction
Building a Partnership You Can Trust
This whitepaper represents the first step in demonstrating our commitment to your success. At Subzero Engineering, we believe in building partnerships based on trust, transparency, and continuous consultation.
Download The Educational Whitepaper Today
Take the first step toward your perfectly engineered cleanroom solution.
Unlocking Data Center Sustainability: Financial Gains Meet Environmental Responsibility
The Hidden Environmental Impact of Data Centers
In our digital-first world, data centers have become critical infrastructure—but at what cost to our environment?
Data centers account for nearly 2% of global energy consumption
Modern facilities offer untapped potential for significant sustainability improvements
Businesses face increasing pressure to meet environmental compliance standards
“Did you know data centers account for nearly 2% of global energy consumption? As an industry, we have an opportunity to align financial gains with environmental responsibility.”
Actionable Sustainability Strategies
Our comprehensive whitepaper reveals how your data center—especially if built in the last five years—can become a powerful leverage point for sustainability initiatives that benefit both the planet and your bottom line.
What You’ll Learn:
Practical steps to reduce your data center’s carbon footprint
How to unlock substantial operational cost savings through green initiatives
Ways to align your facility with evolving environmental compliance requirements
Strategic approaches to green building standards for data centers
Methods for measuring and reporting your environmental impact
The Business Case for Green Data Centers
Going green isn’t just good for the environment—it’s good for business. Our whitepaper demonstrates how sustainability initiatives create a powerful dual impact:
Reduced operational costs through energy efficiency
Enhanced brand reputation with environmentally conscious stakeholders
Improved compliance with evolving regulations
Future-proofing against rising energy costs
Competitive advantage in the marketplace
Revolutionize Your Approach to Sustainability
Your data center can play a critical role in building a sustainable digital future. Our whitepaper provides the insights and strategies you need to transform your facility into an environmental and financial asset.
Download the Whitepaper Today
Ready to seize the green opportunity? Get immediate access to our comprehensive guide on data center sustainability.