
Airflow Management’s Role in Data Center Cooling Capacity

White Paper – Airflow Management’s Role in Data Center Cooling Capacity, by Larry Mainers

Summary

Airflow management (AFM) is changing the way data centers cool the IT thermal load. In the simplest terms, AFM is the science of separating the cool supply airflow from the hot return airflow.

AFM's impact on cooling capacity is huge. This is because a traditional cooling scenario, without full separation of supply and return airflow, requires as much as four times the cooling capacity to satisfy the same thermal load. Airflow inefficiency thus creates an unnecessary need for additional cooling units.

Data center managers can easily determine the percentage of inefficiency by totaling the tons of available cooling capacity and comparing that figure against the IT thermal load measured in kW.

For example, 40 tons of cooling capacity will cool 140.67 kW. Thus an average data center without AFM might have as much as 160 tons of cooling serving a 140 kW thermal load.
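
As a quick way to check this arithmetic at a given site, here is a minimal Python sketch; it assumes the common conversion of roughly 3.517 kW of heat removal per ton of refrigeration, and the 160-ton / 140 kW inputs simply restate the example above.

```python
# Minimal sketch: compare installed cooling capacity against the IT thermal load.
# Assumes the common conversion of roughly 3.517 kW of heat removal per ton of cooling.

KW_PER_TON = 3.517  # approximate kW of heat removal per ton of refrigeration

def cooling_utilization(cooling_tons: float, it_load_kw: float) -> dict:
    """Return capacity in kW and the ratio of capacity to measured IT load."""
    capacity_kw = cooling_tons * KW_PER_TON
    return {
        "capacity_kw": round(capacity_kw, 1),
        "capacity_to_load_ratio": round(capacity_kw / it_load_kw, 2),
        "excess_capacity_kw": round(capacity_kw - it_load_kw, 1),
    }

# The example above: 40 tons is roughly matched to 140 kW,
# while 160 tons serving the same load is roughly a 4:1 ratio.
print(cooling_utilization(cooling_tons=40, it_load_kw=140))
print(cooling_utilization(cooling_tons=160, it_load_kw=140))
```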

It is easy to conclude that if AFM can reduce this excess cooling capacity, operating cost can be reduced along with it.

There are three typical ways to reduce the energy use of cooling units:

1) Turning off cooling units
2) Reducing fan speeds
3) Increasing temperature and decreasing relative humidity set points.

It is important to note that there is a fundamental difference between what is required to change the volume of airflow and what is required to change the supply temperature and RH. To capture this difference, the engineers at Subzero Engineering coined the term 'Uniflow' in 2005. Uniflow describes air that moves in only one direction.

Data center airflow should flow from the cooling unit directly to the IT thermal load and back to the cooling unit intake. This unidirectional airflow should correspond to the volume of air that is required to carry the thermal load of the IT equipment.
Anything in excess of this airflow requires additional cooling energy to create the extra volume (CFM). Thus, any time leaks in the Uniflow are plugged, a corresponding reduction in fan speed can be made.
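
As a rough illustration of the volume an IT load actually requires, the sketch below uses the standard sensible-heat relation for air (BTU/hr ≈ 1.08 × CFM × ΔT°F); the 140 kW load and 20°F temperature rise are illustrative assumptions, not figures from this paper.

```python
# Back-of-the-envelope: airflow volume needed to carry a given IT heat load.
# Uses the standard sensible-heat relation for air near sea-level density:
#   BTU/hr = 1.08 * CFM * delta_T(F)   =>   CFM = (kW * 3412) / (1.08 * delta_T_F)

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """CFM of supply air needed to absorb it_load_kw at a given temperature rise (degF)."""
    btu_per_hr = it_load_kw * 3412.0
    return btu_per_hr / (1.08 * delta_t_f)

# Illustrative assumptions (not figures from the paper): 140 kW load, 20 degF rise.
print(round(required_cfm(140, 20)))  # roughly 22,000 CFM is doing useful work
# Any CFM delivered beyond this is leakage or bypass that the fans must still produce.
```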

As you can imagine, the energy saved by eliminating excess volume depends on how much leakage there is. It is here that some AFM companies have over-estimated the potential energy savings. It is common to hear that plugging your cable cutouts will save 30% of your energy. That is only true if you have leakage amounting to 30% of excess volume, and only if that volume can actually be adjusted.
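
To see why the headline leakage number and the facility-level saving differ, here is a minimal sketch assuming the ideal fan affinity laws (fan power varies with the cube of fan speed) and assuming fan speed can actually be turned down in proportion to the sealed leakage; the 10% fan share used at the end is taken from the Energy Star range cited later in this paper, and real VFD-equipped units will deviate from this ideal.

```python
# Sketch of why "seal 30% leakage" does not mean "save 30% of total energy".
# Assumes ideal fan affinity laws (fan power scales with the cube of fan speed)
# and that fan speed can actually be reduced in proportion to the sealed leakage.

def fan_energy_savings(leakage_fraction: float) -> float:
    """Fraction of *fan* energy saved if delivered airflow is cut by leakage_fraction."""
    remaining_speed = 1.0 - leakage_fraction   # seal 30% of volume -> run fans at 70%
    return 1.0 - remaining_speed ** 3          # cube law on fan power

leak = 0.30
fan_saving = fan_energy_savings(leak)

# Fans are only a slice of total facility energy (5-10% per the Energy Star figure
# cited later in this paper); assume 10% here for illustration.
facility_saving = fan_saving * 0.10
print(f"fan energy saved: {fan_saving:.0%}, facility energy saved: {facility_saving:.1%}")
```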

Note: In some cases additional energy is wasted when cold supply air bypasses the thermal load and returns directly to the cooling unit. In these cases the cooling unit is required to lower the RH of the return air.

The other part of the cooling efficiency equation is adjusting the cooling units' set points. This can only be accomplished when the intake temperature and RH across the face of the IT equipment are within the manufacturer's acceptable range.

This can be likened to the weakest link in a chain: cooling set points are fixed by the warmest IT intake, and the warmest IT intake is the weakest link in the chain.
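
A minimal sketch of this 'weakest link' idea follows, assuming a hypothetical list of measured intake temperatures; the 80.6°F allowable maximum used here is the commonly cited ASHRAE recommended upper limit, an assumption for illustration rather than a figure from this paper.

```python
# "Weakest link" sketch: the warmest IT intake limits how far set points can rise.
# The 80.6 F allowable maximum is the commonly cited ASHRAE recommended upper limit,
# used here only as an illustrative assumption.

ALLOWABLE_MAX_INTAKE_F = 80.6

def setpoint_headroom_f(intake_temps_f: list) -> float:
    """Degrees F the supply set point could rise before the hottest intake hits the limit."""
    return ALLOWABLE_MAX_INTAKE_F - max(intake_temps_f)

# Hypothetical intake readings (degF) across a row of cabinets.
readings = [68.2, 70.5, 69.8, 77.9, 71.0]
print(f"hottest intake: {max(readings)} F, headroom: {setpoint_headroom_f(readings):.1f} F")
# A single hot spot (77.9 F here) caps the set point for the entire room,
# which is why containment aims to make intake temperatures uniform.
```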

Understanding these fundamentals helps IT managers when they are presented with AFM solutions and their estimated energy savings.

How then can data center managers determine the true energy savings with each AFM best practice
solution? Which best practice solution delivers the most return on investment? What would the proper AFM road map look like?

Background

AFM is not new to data centers. Cable Cutout Covers (CCC), first introduced in the 1990s, were the industry's first experience with eliminating Uniflow leaks.

Another area addressed was the misplacement of perforated tiles. Previously, it was common to see perforated tiles placed between computer racks and the cooling units. This caused what is called a ‘short cycle’ where cooling air doesn’t pass through the thermal load before returning to the cooling unit’s return.

Today the industry fully embraces the need to properly place supply perforated tiles in the cold
aisle and eliminate leaks with cable cutout covers in the hot aisle.

Another common and longstanding AFM tool is the Blanking Panel (BP). The main purpose of the BP is to prevent hot return air from migrating within the cabinet into the cold supply aisle. Additionally, air moving from the cold aisle into the hot aisle without passing through a thermal load is another form of leakage whose volume must be made up with increased cubic feet per minute (CFM).

Still another AFM tool is the Velocity Adjustor (VA), invented by Subzero Engineering in 2006. The VA is used to balance subfloor air pressure, ensuring a consistent 'throw rate' (CFM) at each perforated tile. It also prevents two rivers of airflow from creating a subfloor eddy that can generate negative pressure and draw ambient air into the subfloor void. This tool can be used to lower CFM, or airflow volume, because it allows a consistent volume of air into the cold aisle.

Another AFM tool can be found in some IT cabinets. These panels are placed in the six common areas around the cabinet that would otherwise allow hot exhaust airflow to migrate back into the cool supply aisle. Like blanking panels, AFM within the cabinet plugs air leakage and lowers the volume of air required.

The most recent innovation in AFM is ‘Containment’.

The term 'Containment' is used in two ways. First, it can describe the doors, walls, and ceilings that physically 'contain' airflow. Second, it can describe all of the AFM tools that, combined, create Uniflow.

Containment, as it relates to doors, walls, and ceiling systems, is the final piece of the AFM puzzle. This is because consistent IT intake temperatures cannot be achieved merely by plugging leaks in the Uniflow. Containment fundamentally changes AFM by managing the largest part of the Uniflow.

AFM Tools

  1. Cable Cutout Covers
  2. Perforated Tile Placement
  3. Blanking Panels
  4. Velocity Adjustors
  5. Rack or Cabinet AFM
  6. Containment

AFM Methods and Returns

While all AFM methods contribute to the separation of supply and return airflow, what actual reduction in energy occurs with each method?

For instance – CCC. What cooling unit energy adjustments are made when all of the cable cutouts
are covered? The most obvious benefit is the reduction in the volume of air required. Airflow
volume can be reduced by turning off cooling unit fans or by slowing fan speed with variable frequency drives (VFDs).

CCCs plug leaks in the Uniflow, but they cannot prevent the mixing of supply and return airflow. Instead, CCCs should be seen as a building block toward full supply and return separation.

What about BP and cabinet airflow management?

These too are necessary components for plugging leaks in the Uniflow. The amount of open space correlates directly with the amount of air that leaks past without passing through the thermal load. Energy reduction is thus limited to the amount of leakage eliminated.

What about containment in the form of doors, walls, and roof systems?

Containment components address the largest open spaces in the Uniflow. For instance, the open end of a four-foot-wide aisle can account for as much as 30 square feet of open space where airflow mixing occurs. Next, add the open space above a four-foot aisle lined with 12 cabinets and you have an additional 96 square feet. Combining the overhead space and the two ends of the row amounts to 156 square feet of open space in a single 24-foot aisle.
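
These figures can be reproduced with simple geometry. The sketch below assumes a nominal cabinet height of about 7.5 feet and 24-inch-wide cabinets, assumptions chosen only so the arithmetic matches the approximate figures above.

```python
# Reproduce the open-area figures above with simple geometry.
# Assumptions for illustration: roughly 7.5 ft cabinet height and 24 in (2 ft) wide cabinets.

CABINET_HEIGHT_FT = 7.5
CABINET_WIDTH_FT = 2.0

def aisle_mixing_area(aisle_width_ft: float, cabinets_per_row: int) -> dict:
    """Approximate open area (sq ft) at the ends and top of an uncontained aisle."""
    end_area = aisle_width_ft * CABINET_HEIGHT_FT        # one open aisle end
    row_length_ft = cabinets_per_row * CABINET_WIDTH_FT  # length of the cabinet row
    overhead_area = row_length_ft * aisle_width_ft       # open space above the aisle
    return {
        "per_end_sqft": end_area,
        "overhead_sqft": overhead_area,
        "total_sqft": 2 * end_area + overhead_area,
    }

# The four-foot-wide, 12-cabinet aisle from the text:
print(aisle_mixing_area(aisle_width_ft=4, cabinets_per_row=12))
# -> {'per_end_sqft': 30.0, 'overhead_sqft': 96.0, 'total_sqft': 156.0}
```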

If the rack row has no opposing cabinets or areas with missing cabinets, this square footage space
can easily double or triple. Clearly these spaces represent the bulk of cold and hot air mixing.

Which AFM method contributes the most to energy efficiency?

The key to answering this question is found when we examine the individual energy use of the cooling units.

The two main energy components of the cooling units are the fans that produce airflow volume and the mechanisms (chiller compressors and pumps) that control supply air temperature and relative humidity.

According to the US Dept. of Energy (Energy Star), cooling unit fans account for 5% to 10% of total data center energy consumption. A study by Madhusudan Iyengar and Roger Schmidt of IBM entitled "Energy Consumption of Information Technology Data Centers" concludes that cooling compressors, pumps, and related equipment account for 25% to 30% of total data center energy consumption.

From this we learn that more potential energy savings come from set point adjustment than from air volume adjustment.
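
To make the comparison concrete, here is a hedged sketch that applies the cited percentage ranges to a hypothetical facility; the 1,000 MWh per year consumption and the assumed 30% achievable reduction in each category are illustrative placeholders, not figures from this paper.

```python
# Compare the two savings pools using the percentage ranges cited above.
# The 1,000 MWh/yr facility consumption and the assumed 30% achievable reduction
# in each category are illustrative placeholders, not figures from the paper.

FACILITY_MWH_PER_YEAR = 1000.0

fan_share = (0.05, 0.10)         # fans: 5-10% of total energy
compressor_share = (0.25, 0.30)  # compressors, pumps, etc.: 25-30% of total energy

def savings_pool_mwh(share_range: tuple, assumed_reduction: float) -> tuple:
    """MWh/yr saved if the given share of total energy is cut by assumed_reduction."""
    low, high = share_range
    return (FACILITY_MWH_PER_YEAR * low * assumed_reduction,
            FACILITY_MWH_PER_YEAR * high * assumed_reduction)

print("fan volume pool: %.0f-%.0f MWh/yr" % savings_pool_mwh(fan_share, 0.30))
print("set point pool:  %.0f-%.0f MWh/yr" % savings_pool_mwh(compressor_share, 0.30))
# The set point pool is several times larger, which is the point made above.
```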

And it is here where most of the confusion about AFM energy savings comes from.

Many data center managers were given expectations of huge energy savings when they deployed CCC and BP. Instead, the actual energy saved was a small percentage of the total cooling energy. This is because the savings experienced were limited both by the amount of leakage that was mitigated and by the modest percentage of energy used in creating that volume.

In contrast, much larger energy reductions have been measured when DC managers contained either the hot or the cold aisle. This is for two reasons:
1) The open space, or leakage, is far greater in the containment space.
2) The energy sources being adjusted (cooling unit compressors, pumps, etc.) account for a larger percentage of total energy than the fans that create air volume.

Proof of this can be seen in the way utility companies provide rebates to data centers that reduce energy consumption. Utility companies that offer energy rebates require true before-and-after energy consumption measurements in order to determine just how much energy is being saved. It is very rare for data centers that only deployed CCC and installed BP to receive such a rebate, as the energy reduction was not significant. This changed when data centers contained the hot or cold aisle. Data centers with a full AFM solution, incorporating containment, are commonly receiving thousands of dollars in rebates.

Does that mean that containment is the holy grail of AFM? Yes and no. While containment offers the biggest bang for the buck, the need to plug the holes in the Uniflow is still a major part of the overall airflow chain. The good news is that the energy saved by incorporating all aspects of AFM can more than pay for the cost within 12 to 18 months.
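
A simple payback check along the lines of that 12-to-18-month claim can be sketched as follows; the project cost, energy price, and kW saved are hypothetical placeholders a manager would replace with measured before-and-after values.

```python
# Simple payback sketch for an AFM project. All inputs are hypothetical placeholders
# to be replaced with a site's measured before/after consumption and local energy price.

def payback_months(project_cost_usd: float, kw_saved: float, usd_per_kwh: float) -> float:
    """Months to recover project cost from continuous (24x7) energy savings."""
    kwh_saved_per_month = kw_saved * 24 * 365 / 12
    monthly_savings_usd = kwh_saved_per_month * usd_per_kwh
    return project_cost_usd / monthly_savings_usd

# Hypothetical example: a $60,000 containment project saving a steady 40 kW at $0.12/kWh.
print(f"payback: {payback_months(60_000, 40, 0.12):.1f} months")  # roughly 17 months
```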

That said, those with a limited budget will get more airflow separation with containment doors, walls, and ceilings than with BP and CCC.

Think of it this way… imagine a bathtub with several pencil-sized holes in it. You can imagine the water pouring from these holes. Now imagine the same tub with one of the sides missing. Which of the two ‘leaks’ would you patch up first?

When faced with airflow management solutions, remember that the largest energy cost is in controlling the temperature of the supply air. Next, know that the largest mixing space is at the aisle ends and top of the rack row. This then supports the road map of supplying a consistent and predictable cooling airflow to the cold aisle that can be adjusted according to the manufacturer’s intake specifications in order to save the most energy.

The good news is that most data centers have some level of cable cutout management and blanking panels. This then stresses the value of getting the bulk of the energy savings by completing AFM with full hot or cold aisle containment.

References

1) https://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_vsds
2) Schmidt, R. and Iyengar, M., "Thermodynamics of Information Technology Data Centers"