The Decline of the General-Purpose Data Center

AI data centers aren’t just scaled-up legacy setups. They’re designed from the ground up for AI workloads and demand specific infrastructure, particularly for power delivery and cooling. But what do they need that separates them from the ‘standard’ facility, and what makes it so challenging to retrofit legacy data centers for them?

By Gordon Johnson, Senior CFD Manager at Subzero Engineering

AI infrastructure demands

AI is quickly becoming the dominant consumer of compute, and traditional infrastructure just can’t keep up. Data centers that don’t adapt will be left behind as the industry transitions to fundamentally new architectures and designs.

It is hard to overlook the infrastructure constraints of traditional data centers as AI workloads grow in complexity and scale. We’re entering a new era in data infrastructure, one that legacy data centers weren’t built to handle. To address the specific requirements of large-scale artificial intelligence, top hyperscalers such as AWS, Google, and Microsoft are spearheading the evolution by building a new class of data center from the ground up: AI-native infrastructure.

Why can’t legacy facilities handle AI’s demands?

Legacy data centers were designed for general-purpose computing: predictable workloads, moderate power usage, and flexible hardware.

AI has different needs, and the scope and intricacy of those requirements leave many legacy data centers unsuitable for the task.

AI workloads are vastly more power-intensive than traditional workloads, requiring three to ten times as much electricity per rack, so merely adding extra GPUs to the same old racks is not an option. The extreme heat produced by GPUs and TPUs cannot be managed with conventional air-cooling methods, so meeting the requirements of contemporary AI makes liquid cooling infrastructure, such as direct-to-chip, necessary.
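
To see why air cooling runs out of headroom, consider the sensible-heat relationship: the airflow needed to remove P watts at a supply-to-return temperature rise of ΔT is Q = P / (ρ · cp · ΔT). The sketch below applies it with illustrative rack powers; the figures are assumptions, not measurements from any particular facility.

```python
# Illustrative sketch: airflow needed to air-cool a single rack.
# Physics: Q = P / (rho * cp * dT), with air properties at roughly
# sea-level conditions. Rack powers are assumed example densities.

RHO_AIR = 1.2         # kg/m^3, approximate air density
CP_AIR = 1005.0       # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88  # 1 m^3/s expressed in cubic feet per minute

def required_airflow_cfm(power_w: float, delta_t_k: float) -> float:
    """Airflow (CFM) needed to remove power_w watts at delta_t_k rise."""
    m3_per_s = power_w / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * M3S_TO_CFM

for kw in (10, 30, 80):  # legacy rack vs. AI rack densities (assumed)
    print(f"{kw} kW rack, 12 K delta-T: "
          f"{required_airflow_cfm(kw * 1000, 12):,.0f} CFM")
```

At nearly 12,000 CFM for a single 80 kW rack, air delivery alone becomes impractical, which is why direct-to-chip liquid cooling enters the picture.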

AI training requires ultra-fast connections between thousands of nodes and draws power in unpredictable bursts, and the need for densely populated, high-performance clusters conflicts with the sprawl of traditional data halls. Long cable runs and low-density racks increase latency, reducing performance for large AI jobs.
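
The latency cost of sprawl can be approximated with simple propagation math: signals in fiber travel at roughly two-thirds the speed of light, about 5 ns per meter. The run lengths in this sketch are hypothetical, chosen only to contrast a sprawling hall with a dense pod.

```python
# Illustrative sketch: one-way propagation delay over fiber runs.
# Fiber propagation is roughly 5 ns per meter (light at ~2/3 c).
# Run lengths are hypothetical examples.

NS_PER_METER = 5.0  # approximate fiber propagation delay

def one_way_delay_ns(run_m: float) -> float:
    return run_m * NS_PER_METER

for label, run_m in (("sprawling hall", 150), ("dense pod", 15)):
    print(f"{label}: {run_m} m run -> "
          f"{one_way_delay_ns(run_m):.0f} ns one-way")
```

A few hundred nanoseconds per hop sounds negligible, but collective operations across thousands of GPUs cross many such hops on every training step, so the penalty compounds.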

Legacy Data Centers Built for Yesterday

Legacy data centers were built for yesterday’s workloads. AI isn’t just demanding more, it’s demanding different. Hyperscalers know it, and they’re not waiting around. The future of digital infrastructure is being redefined by the emergence of the purpose-built AI data center era.

Retrofitting an existing data center for AI isn’t easy. Typically, data centers will need to leverage their existing investments in air cooling while selectively deploying liquid cooling where needed. Although infrastructure can be reworked and redesigned, concessions will always need to be made, and these compromises can come at the expense of performance and efficiency. Power availability (typically capped at the site level), cooling capacity (particularly in raised-floor environments), rack weight and floor loading limits, and ceiling-height restrictions that constrain airflow design are physical constraints on most older sites. Add in layout obstructions and interconnect distances that cause latency bottlenecks, and retrofitting can become a compromise too far.

What Makes AI Workloads Different

Traditional data centers tend to average 5–20 kW per rack, whereas AI workloads can push power draw to 30–100 kW per rack or higher. Supporting this degree of power density requires a very different approach to infrastructure: on-site substations, busways, and high-capacity PDUs are increasingly the standard rather than the exception.
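
To make the density gap concrete, this sketch estimates per-rack current on a three-phase feed using I = P / (√3 · V · PF); the 415 V supply and 0.95 power factor are assumed, typical-looking values rather than any specific site’s design.

```python
import math

# Illustrative sketch: three-phase current draw per rack.
# I = P / (sqrt(3) * V_line_to_line * power_factor)
# 415 V and PF 0.95 are assumptions for illustration only.

V_LL = 415.0  # volts, line-to-line (assumed)
PF = 0.95     # power factor (assumed)

def rack_current_a(power_kw: float) -> float:
    return power_kw * 1000 / (math.sqrt(3) * V_LL * PF)

for kw in (10, 50, 100):  # legacy vs. AI rack densities (assumed)
    print(f"{kw} kW rack -> {rack_current_a(kw):.0f} A")
```

Delivering on the order of 150 A to every rack is precisely why busways and high-capacity PDUs displace conventional branch circuits.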

AI workloads are inherently unforgiving of infrastructure failures. While traditional workloads can often tolerate transient faults or recover from minor slowdowns, AI training runs lasting days or weeks require near-perfect uptime, clean compute environments, and dependable performance. Even small variations or inconsistencies in hardware, firmware, or thermal performance can be catastrophic, not because the job can’t recover, but because the cost of failure is so high: every crash potentially means hours (or days) of lost compute, wasted energy, and missed opportunity.
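
The economics of a crash follow from simple checkpoint arithmetic: on average, a failure lands halfway between checkpoints, so the expected loss is half the checkpoint interval plus restart overhead, multiplied across the cluster. Every figure in this sketch is a hypothetical example.

```python
# Illustrative sketch: expected compute lost per crash in a long
# training run. All figures are hypothetical examples.

GPUS = 1024                   # cluster size (assumed)
CHECKPOINT_INTERVAL_H = 2.0   # hours between checkpoints (assumed)
RESTART_OVERHEAD_H = 0.5      # reload weights + warm-up (assumed)

# On average, a crash lands halfway between two checkpoints.
lost_wall_clock_h = CHECKPOINT_INTERVAL_H / 2 + RESTART_OVERHEAD_H
lost_gpu_hours = lost_wall_clock_h * GPUS

print(f"~{lost_gpu_hours:,.0f} GPU-hours lost per crash")  # ~1,536
```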

Designing for AI

To unlock the full value of AI, data center infrastructure must evolve. AI demands infrastructure that’s not just fault-tolerant, but fault-predictive and self-healing.

When embarking on a new data center build, you must consider:

  • High-Density Power and Cooling
    Custom power paths must be able to handle 80–100 kW racks or more, while air cooling, the mainstay of legacy facilities, is not enough. Advanced thermal strategies such as liquid cooling and direct-to-chip solutions must be integrated into the infrastructure of the AI data center.
  • Architecture
    Physical GPU/TPU cluster layouts need to be optimized so that latency is minimized and training throughput maximized. Consolidated floorplans and thermal awareness allow for increased efficiency, faster deployment, and future expansion.
  • AI-Centric Design
    Real-time predictive failure monitoring and telemetry should cover every component, from temperature to power draw. Machine learning-based fault prediction isn’t optional anymore; it’s how downtime is preempted and uptime optimized (a minimal sketch of the idea follows this list).
  • Sustainability
    Carbon-neutral power sources, energy storage, recycling and reusing waste heat, and the use of alternative building materials can all support sustainable, environmentally friendly strategies. Going green not only improves the facility’s efficiency but can also provide a competitive advantage.
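
As referenced in the AI-centric design point above, the sketch below reduces telemetry-based fault prediction to its simplest form: flag a component whose readings drift abnormally far from their recent baseline. Production systems use trained models over many correlated signals; this rolling z-score check is only an illustration of the idea.

```python
from collections import deque
import statistics

# Minimal sketch of telemetry-based fault flagging: a rolling
# z-score over recent readings from one sensor. Illustrative only;
# production systems train models across many correlated signals.

class TelemetryMonitor:
    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # recent baseline
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it deviates abnormally."""
        anomalous = False
        if len(self.readings) >= 10:  # wait until a baseline exists
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            anomalous = (stdev > 0 and
                         abs(value - mean) / stdev > self.z_threshold)
        self.readings.append(value)
        return anomalous

# Hypothetical inlet-temperature stream: stable jitter, then a spike.
monitor = TelemetryMonitor()
baseline = [64.5, 65.0, 65.5, 66.0] * 8  # normal jitter (assumed)
for temp_c in baseline + [65.2, 82.0]:
    if monitor.observe(temp_c):
        print(f"anomaly flagged at {temp_c} C")  # fires on 82.0
```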

Legacy data centers were designed for flexible, general-purpose compute. AI clusters, however, depend on ultra-low-latency interconnects between accelerators. That changes everything from the physical layout to how cable trays are built. New facilities need to be dense, compact, and often modular, designed to reduce data-movement friction.

Bigger and Better

AI data centers aren’t just bigger; they’re different by design. Larger footprints are not a luxury but a necessity, required to accommodate the density, specialized layout, thermal management, and performance characteristics of AI environments.

Hyperscalers are starting to plan and construct data centers in pod-based, modular designs that are tailored for AI workloads and optimized for independent cooling, power, and scaling. AI clusters do not spread work evenly throughout the data center; rather, they require concentrated compute pods (hundreds to thousands of GPUs in a tightly integrated fabric). That calls for more real estate to accommodate the GPU/TPU cages or liquid-cooled racks, zoning to isolate workloads and manage thermal loads effectively, and more whitespace per cluster for power, cooling, and cabling routes.
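
To put a rough number on what a concentrated compute pod implies, this sketch totals the power of a hypothetical 1,024-GPU pod. The per-GPU draw, server overhead, and facility overhead factors are assumptions for illustration; real figures vary widely by hardware generation.

```python
# Illustrative sketch: rough power budget for one AI compute pod.
# All figures are assumptions; per-GPU draw, server overhead, and
# cooling/distribution overhead vary by hardware generation.

GPUS_PER_POD = 1024
WATTS_PER_GPU = 700      # assumed accelerator draw
SERVER_OVERHEAD = 1.5    # CPUs, memory, NICs, fans (assumed factor)
FACILITY_OVERHEAD = 1.3  # cooling + distribution, PUE-like (assumed)

it_load_mw = GPUS_PER_POD * WATTS_PER_GPU * SERVER_OVERHEAD / 1e6
total_mw = it_load_mw * FACILITY_OVERHEAD

print(f"IT load: ~{it_load_mw:.2f} MW, "
      f"facility total: ~{total_mw:.2f} MW")
```

A single pod approaching 1.4 MW is why each one warrants its own short power paths and dedicated electrical rooms.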

Space is also needed to manage the significant heat produced by high-density AI workloads: for heat-exchange equipment, immersion tanks, liquid-cooling loops, and greater hot/cold aisle separation, often with isolated or enclosed cooling corridors.

Each pod needs short, direct power paths, larger substations, and dedicated power rooms, plus extra room for redundant switchgear, transformers, and UPS systems, as well as reinforced floors and infrastructure to support denser, heavier racks.
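
Floor loading is easy to underestimate. This sketch converts an assumed liquid-cooled rack weight into a distributed load over a standard-looking footprint; both figures are hypothetical.

```python
# Illustrative sketch: distributed floor load under a heavy rack.
# Rack mass and footprint are assumed figures for illustration.

RACK_MASS_KG = 1600.0     # liquid-cooled AI rack, assumed
FOOTPRINT_M2 = 0.6 * 1.2  # typical rack footprint, assumed

load_kg_m2 = RACK_MASS_KG / FOOTPRINT_M2
print(f"~{load_kg_m2:,.0f} kg/m^2")  # ~2,222 kg/m^2
```

Loads in this range can exceed what many older raised floors were rated to carry, hence the call for reinforcement.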

A Fundamental Rethink

Hyperscalers aren’t building these new AI-driven data centers because it’s trendy or because they want the biggest facility; they’re building them because it’s necessary. AI is not an experimental upgrade cycle. It’s fundamental infrastructure. And as with any foundational shift in computing, it demands a matching evolution in physical and digital architecture.

Companies that continue trying to run next-generation AI on last-generation infrastructure will find themselves bottlenecked in performance, efficiency and ultimately competitiveness. The AI-native future is rapidly overshadowing the computing era for which legacy data centers were constructed.

For this reason, hyperscalers are designing data centers that embrace and give priority to purpose-built AI infrastructure. They are not merely scaled-up facilities. They are precision-engineered, offering the performance, resilience, and AI acceleration that will define the next decade.

About the writer 

Gordon Johnson is the Senior CFD Engineer at Subzero Engineering, responsible for planning and managing all CFD-related jobs in the US and worldwide. 

He has over 25 years of experience in the data center industry, including data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified US Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from New Jersey Institute of Technology.