

Data Center
Success Story

Datacenter Revamps Cut Energy Costs At CenturyLink

by Timothy Prickett Morgan — EnterpriseTech Datacenter Edition

Subzero Engineering’s containment solutions contributed to CenturyLink’s hefty return on investment

It is probably telling that these days datacenter managers think of the infrastructure under their care more in terms of the juice it burns and not by counting the server, storage, and switch boxes that consume that electricity and exhale heat. Ultimately, that power draw is the limiting factor in the scalability of the datacenter and using that power efficiently can boost processing and storage capacity and also drop profits straight to the bottom line.

Three years ago, just as it was buying public cloud computing provider Savvis for $2.5 billion, CenturyLink took a hard look at its annual electric bill, which was running at $80 million a year across its 48 datacenters. At the time, CenturyLink had just finished acquiring Qwest Communications, giving it a strong position in voice and data services for enterprises and making it the third largest telecommunications company in the United States. CenturyLink, which is based in Monroe, Louisiana, also provides Internet service to consumers and operates the PrimeTV and DirecTV services; it has 47,000 employees and generated $18.1 billion in revenues in 2013.

One of the reasons why CenturyLink has been able to now expand to 57 datacenters – it just opened up its Toronto TR3 facility on September 8 – comprising close to 2.6 million square feet of datacenter floor space is that it started tackling the power and cooling issues three years ago.

The facilities come in various shapes and sizes, explains Joel Stone, vice president of global data center operations for the CenturyLink Technology Solutions division. Some are as small as 10,000 square feet, others are more than ten times that size. Two of its largest facilities are located in Dallas, Texas, weighing in at 110,000 and 153,700 square feet and both rated at 12 megawatts. The typical facility consumes on the order of 5 megawatts. CenturyLink uses some of that datacenter capacity to service its own telecommunications and computing needs, but a big chunk of that power goes into its hosting and cloud businesses, which in turn provide homes for the infrastructure of companies from every industry and region. CenturyLink’s biggest customers come from the financial services, healthcare, online games, and cloud businesses, Stone tells EnterpriseTech. Some of these customers have only one or two racks of capacity, while others contract for anywhere from 5 megawatts to 7 megawatts of capacity. Stone’s guess is that, all told, the datacenters have hundreds of thousands of servers, but again, that is not how CenturyLink, or indeed any datacenter facility provider, thinks about it. What goes in the rack is the customers’ business, not CenturyLink’s.

“We are loading up these facilities and trying to drive our capacity utilization upwards,” says Stone. And the industry as a whole does not do a very good job of this. Stone cites statistics from the Uptime Institute, whose survey of colocation facilities, wholesale datacenter suppliers, and enterprises found that they actually use only around 50 percent of the power that comes into their facilities. “We are trying to figure out how we can get datacenters packed more densely. Space is usually the cheapest part of the datacenter, but the power infrastructure and the cooling mechanicals are where the costs reside unless you are situated in Manhattan where space is such a premium. We are trying to drive our watts per square foot higher.”

While server infrastructure is getting more powerful in terms of core counts and throughput, and storage is getting denser and, in the case of flash-based or hybrid flash-disk arrays, faster, workloads are growing faster still, and therefore the overall power consumption of the infrastructure as a whole in the datacenter continues to grow.

“People walk into datacenters and they have this idea that they should be cold – but they really shouldn’t be,” says Stone. “Servers operate optimally in the range of 77 to 79 degrees Fahrenheit. If you get much hotter than that, then the server fans have to kick on or you might have to move more water or chilled air. The idea is to get things optimized. You want to push as little air and flow as little water as possible. But there is no magic bullet that will solve this problem.”

Companies have to do a few things at the same time to get into that optimal temperature zone. CenturyLink was shooting for around 75 degrees at the server inlet, compared to the 68 degrees it measured in an initial test on the server racks at a 65,000 square foot datacenter in Los Angeles. Here’s a rule of thumb: for every degree Fahrenheit that the server inlet temperature is raised in the datacenter, the power bill drops by about 2 percent. You can’t push it too far, of course, or you will start impacting the reliability of the server equipment. (The supplied air temperature in this facility was 55 degrees and the server inlet temperature was 67 degrees before the energy efficiency efforts got under way.)
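To make that rule of thumb concrete, here is a minimal sketch. The baseline temperature, target temperature, and annual power bill are hypothetical illustration values, not CenturyLink figures; only the 2-percent-per-degree ratio comes from the article.

```python
# A minimal sketch of the 2-percent-per-degree rule of thumb quoted above. The
# baseline temperature, target temperature, and annual power bill are hypothetical
# illustration values, not CenturyLink figures.

def estimated_annual_savings(baseline_f: float, target_f: float,
                             annual_power_bill: float,
                             savings_per_degree: float = 0.02) -> float:
    """Estimate annual savings from raising the server inlet temperature."""
    degrees_raised = max(0.0, target_f - baseline_f)
    return annual_power_bill * savings_per_degree * degrees_raised

# Hypothetical facility: $1.5M annual power bill, inlet raised from 68 F to 75 F.
print(f"${estimated_annual_savings(68, 75, 1_500_000):,.0f} per year")  # $210,000 per year
```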

The first thing is to control the airflow in the datacenter better, and the second is to measure the temperature of the air more accurately at the server so cooling can be maintained in a more balanced way across the facility.

CenturyLink started work on hot aisle and cold aisle containment in its facilities three and a half years ago, and the idea is simple enough: keep the hot air coming from the back of the racks from mixing with the cold air coming into the datacenter from chillers. The containment project is a multi-year, multi-million dollar effort, and CenturyLink is working with a company called SubZero Engineering to add containment to its aisles. About 95 percent of its facilities now have some form of air containment, and most of them are doing hot aisle containment.

“If we can isolate the hot aisles, that gives us a little more ride through from the cold aisles if we were to have some sort of event,” Stone explains. But CenturyLink does have some facilities that, just by the nature of their design, do cold aisle containment instead. (That has the funny effect of making the datacenter feel hotter because people walk around the hot aisles instead of the cold ones and sometimes gives the impression that these are more efficient. But both approaches improve efficiency.) The important thing about the SubZero containment add-ons to rows of racks, says Stone, is that they are reusable and reconfigurable, so as customers come and go in the CenturyLink datacenters they can adjust the containment.

Once the air is contained, you can dispense cold air and suck out hot air on a per-row basis and fine-tune the distribution of air around the datacenter. But to do that, you need to get sensors closer to the racks. Several years ago, it was standard to have temperature sensors mounted on the ceilings, walls, or columns of datacenters. More recently, after starting its aisle containment efforts, CenturyLink tapped RF Code to add its wireless sensor tags to the air inlets on IT racks to measure their temperature precisely, rather than using an average of the ambient air temperature from the wall and ceiling sensors. This temperature data is now fed back into its building management system, which comes from Automated Logic Control, a division of the United Technologies conglomerate. (Stone says that Eaton and Schneider Electric also have very good building management systems, by the way.)

The energy efficiency effort doesn’t stop here. CenturyLink is now looking at retrofitting its CRAC and CRAH units – those are short for Computer Room Air Conditioner and Computer Room Air Handler – with variable speed drives. Up until recently, CRAC and CRAH units were basically on or off, but now they can provide different levels of cooling. Stone says that running a larger number of CRAH units at lower speeds provides better static air pressure in the datacenter and uses less energy than having a small number of larger units running faster. (In the latter case, extra cooling capacity is provided through extra units, and in the former it is provided by ramping up the speed of the CRAH units rather than increasing their number.) CenturyLink is also looking at variable speed pumps and replacing cooling tower fans in some facilities.
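As a rough illustration of why spreading the load across more CRAH units at lower speed can draw less total fan power, here is a sketch that leans on the standard fan affinity approximation (fan power scales roughly with the cube of speed). The affinity law, unit counts, and speeds are assumptions for illustration; the article itself only reports Stone’s observation.

```python
# A rough sketch of why more CRAH units at lower speed can draw less total fan
# power for the same airflow. It leans on the standard fan affinity approximation
# (fan power scales roughly with the cube of speed); the unit counts and speeds
# are illustrative assumptions, not figures from the article.

def relative_fan_power(units: int, speed_fraction: float) -> float:
    """Total fan power relative to one unit at full speed, per the cubic affinity law."""
    return units * speed_fraction ** 3

# Two ways to move the same total airflow:
print(relative_fan_power(2, 1.0))   # 2.0  -> two units at full speed
print(relative_fan_power(4, 0.5))   # 0.5  -> four units at half speed, ~1/4 the fan power
```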

“We are taking a pragmatic, planned approach across our entire footprint, and we have gone into the areas where we are paying the most for power or have the highest datacenter loads and tackling those facilities first,” says Stone. The energy efficiency efforts in the CenturyLink datacenters have to have a 24 month ROI for them to proceed.

In its Chicago CH2 datacenter (one of three around that Midwestern metropolis and one of the largest in CenturyLink’s fleet of facilities), CenturyLink did aisle containment, added RF Code sensors, installed variable speed CRAC units and variable speed drives on the pumps, and replaced the cooling tower fans with more aerodynamic units that ran slower and yet pulled more air through the cooling towers. This facility, which is located out near O’Hare International Airport, has 163,747 square feet of datacenter space, a total capacity of 17.6 megawatts, and can deliver 150 watts per square foot.

CenturyLink reduced the load in that CH2 facility by 7.4 million kilowatt-hours per year, and just last month Stone collected a $534,000 rebate check from Commonwealth Edison, the power company in the Chicago area. All of these upgrades in the CH2 facility cost roughly $2.4 million, and with the power savings alone the return on investment was on the order of 21 months – and that is before the rebate was factored in.
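As a sanity check on those numbers, the sketch below back-computes what the quoted figures imply, using only values stated in the article; no electricity rate is assumed, so the per-kWh value is derived rather than supplied.

```python
# A back-of-the-envelope check on the CH2 figures quoted above: given a roughly
# 21-month payback on a $2.4 million spend, what annual dollar savings and
# effective value per kWh do those numbers imply? Everything here is derived
# from figures stated in the article; no electricity rate is assumed.

upgrade_cost = 2_400_000        # dollars
annual_kwh_saved = 7_400_000    # kWh per year
rebate = 534_000                # dollars
payback_months = 21             # before the rebate is factored in

implied_annual_savings = upgrade_cost / (payback_months / 12)
implied_value_per_kwh = implied_annual_savings / annual_kwh_saved
payback_with_rebate = (upgrade_cost - rebate) / implied_annual_savings * 12

print(f"Implied annual savings: ${implied_annual_savings:,.0f}")    # ~$1.37 million
print(f"Implied value per kWh:  ${implied_value_per_kwh:.3f}")      # ~$0.185
print(f"Payback with rebate:    {payback_with_rebate:.0f} months")  # ~16 months
```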

Data Center
Product Insight

Don’t cage your computer!

Subzero Engineering is partnering with colo providers to create cageless solutions for their customers.

Here’s what we are doing: we have combined Subzero’s aisle end doors, which have auto-close and locking features, with airflow management cabinets that each lock securely, to create a safe colo environment that does not require cages.

• Locking Aisle End Doors
• Locking Cabinets
• Auto Close
• Complete Airflow Separation

Advances in containment and cabinets have created a fully secure colo environment without traditional wired cages. Instead, secure aisle end doors, retractable roofs, and biometric locks create an easy-to-deploy, secure space for IT equipment.

A typical deployment includes:

• Locks ranging from simple keyed to biometric
• Auto-closing doors that prevent accidental access
• Locking aisle end doors
• Locking cabinets
• Retractable roof system

Data Center
Educational Article

California Title 24 – It’s a Hole in One!

About Title 24

On July 1, 2014, California’s new energy efficiency standards went into effect. Among other things, Title 24 prohibits reheat in computer rooms and requires containment in large, high-density data centers with air-cooled computers. To prevent hot air from mixing with air that has been mechanically chilled, data centers will need to modify their existing facilities to provide hot and cold aisle containment.

Why is Title 24 a good thing for data centers everywhere?

Data centers worldwide can benefit from the research done by the State of California. For instance, California determined that a 20,000 sq. ft. data center with a load of 100 Watts per sq. ft. could save up to a whopping $10,500,000 per year on energy expenses by implementing four energy efficient strategies. Imagine the potential savings if a nationwide effort were made.

State Requirements Vs. Corporate Initiative

No doubt state requirements are a great way to get companies to comply with new efficiency standards. That said, most states don’t have the requirements that California has. Should this cause corporations to scale back their green initiatives? Of course not! Containment is an easy way to save money and contribute to lowering a company’s carbon footprint. Hundreds of companies have installed containment systems, saved money, and increased the reliability of their cooling solutions. Why not your company?

What’s next?

While many data centers have an ‘area’ of containment, the real energy savings only come when all of the cooling air is separated, from supply to return, site wide. This requires a data center wide containment solution. Check out the ways Subzero Engineering has addressed the many aspects of data center wide containment.

Join California and the dozens of companies that have made a commitment to a site-wide containment solution.

Data Center
Success Story

NYI Rolls Out New Cold Aisle Containment System Within Data Centers

New York, NY

Deployment of Cold Aisle Containment Technology Reduces Energy Usage and Optimizes Equipment Performance

NYI, a New York company specializing in customized technology infrastructure solutions, announces today the deployment of Cold Aisle Containment (CAC) technology at its US data center facilities. As part of an initiative to implement the latest energy efficiency technologies, NYI is working closely with The New York State Energy Research and Development Authority (NYSERDA), sharing the state agency’s mission of exploring innovative energy solutions in ways that improve New York’s economy and environment. NYI’s CAC deployment is made possible through its partner, Subzero Engineering, a designer, manufacturer, and installer of custom, intelligent containment systems and products for data centers worldwide.

Data Center Cold Aisle Containment fully separates the cold supply airflow from the hot equipment exhaust air. This simple separation creates a uniform and predictable supply temperature to the intake of IT equipment and a warmer, drier return air to the AC coil. Hot aisle and cold aisle containment are primary ways today’s leading businesses, like NYI, help reduce the use of energy and optimize equipment performance within their data centers.

“By adopting Cold Aisle Containment, NYI is increasing air efficiencies within its facilities, thereby translating to increased uptimes, longer hardware life and valuable cost and energy savings for NYI customers,” comments Lloyd Mainers, Engineer for Subzero Engineering. “Through efficiency, CAC also allows for the availability of additional capacity and increased load density, paving the way for higher density customer deployments.”

“When it comes to data center capacity, NYI is constantly monitoring our power density levels to ensure that we are spreading the capacity throughout our data centers most efficiently and decreasing our effect on the environment,” adds Mark Ward, Director of Business Development of NYI. “Cold Aisle Containment helps us to attain that level of efficiency, and not to mention, there are several government and cash incentives for incorporating it into our facilities. Above all, our customers benefit in that their equipment is cooled more effectively, reducing strain on the equipment’s own cooling mechanism and extending the lifespan of their servers.”

NYI Cold Aisle Containment Benefits include:

• Predictable, reliable, and consistent temperature to IT equipment at any point inside the aisle
• One (1) degree temperature difference from top to bottom
• Double or triple kW per rack
• Reduced white space requirements through optimized server racks
• Average of 30% energy cost savings
• Consistent, acceptable supply to IT intake
• Leaves more power available for IT equipment; increased equipment uptime
• Longer hardware life
• Increased fault tolerance (i.e. HVAC units that were required to achieve certain temperature goals are now redundant.)
• US Department of Energy recommended

Company
Press Release

Lights, Camera, Action!

Get the popcorn out, it’s time to watch some videos!

Subzero Engineering presents 12 new videos. Learn more about NFPA compliant containment, new cageless containment bundles, and our new auto closer with soft-close features; hear from attendees at Data Center World in Las Vegas; and much more.

Videos are a great way to see a product’s form and function, learn about its features and benefits from the people who created it, and hear industry experts’ thoughts on its value.

Look for Bernard the bear photo bombs!

Data Center
Product Insight

Check out our new fully NFPA compliant retractable roof system!

This game-changing roof system ensures that the containment roof obstruction is fully removed electronically when a smoke detector goes into alarm.

Additional benefits include the ability to wirelessly open the roof for maintenance above the containment, a modular design that allows the system to be sized up or down, ease of deployment, and a sleek look that is easy on the eyes. This is the ultimate containment roof system.

Polar Cap Retractable Roof Available Fall of 2014!

The patented Subzero Engineering Polar Cap is the first fully NFPA compliant containment roof system that attaches to the top of the racks and forms a ceiling that prevents hot and cold air from mixing.

Most data center containment systems rely on the heat generated by a fire-related incident to release the containment system, since the roof can pose an obstacle to the fire suppression agent. The NFPA has determined that it is important to have a faster response time and, more importantly, a testable system.

The updated Subzero Polar Cap retractable roof system is now a fully electric roof system that retracts into a metal housing when the fire suppression system goes into alarm. Having a pre-action system that reacts to a smoke detector ensures that the containment roof is fully retracted long before the fire suppression system is discharged. Additionally, the roof material carries an ASTM E-84 Class A rating, the highest fire resistance standard.

The Polar Cap can also be opened and closed when maintenance is required above the containment space.

The new roof system is fully customizable in both length (up to 30’) and width (up to 5’). The aluminum profile is less than 6” high and thus it presents no problem with obstructions above the cabinets.

Data Center
Product Insight

Mind the Gap – The importance of gap-free data center containment door systems

By Subzero Engineering CEO, Larry Mainers

We need a “Mind the gap” philosophy in airflow management, especially containment doors.

No visit to London, England is complete without seeing the London Underground, or as it is affectionately called, “The Tube”. The Tube connects the greater part of London with 270 stations. It’s an easy and inexpensive way to get around London town.

In 1969, an audible and/or visual warning system was added to prevent passengers’ feet from getting stuck in the gap between the platform and the moving train. The phrase “Mind the gap” was coined and has been associated with the London Underground ever since.

“Mind the gap” has been imitated all over the world. You will find versions of it in France, Hong Kong, Singapore, New Delhi, Greece, Sweden, Seattle, Brazil, Portugal, New York, Sydney, Berlin, Madrid, Amsterdam, Buenos Aires, and Japan.

It is my hope that this phrase can be embraced by the data center industry. Gaps in airflow management, especially where containment is deployed, are an easy way to lose both energy and overall cooling effectiveness. We need a “Mind the gap” philosophy in airflow management, and especially in containment doors. Why doors? Because of a door’s moving parts, it is less expensive for manufacturers to leave a gap than to build a door that fully seals. Data center managers who “mind the gap” should insist on door systems that don’t leak.

Just how important is it to “mind the gap” in your data center containment system? One way to see the importance of eliminating gaps is to take a look around your own house. How many gaps do you consider acceptable around your windows? Would you conclude that, since the window keeps most of the conditioned air inside, a few gaps around it won’t matter much? I doubt it. In fact, utility companies have been known to provide rebates for weather stripping to ensure all gaps are filled. Gaps equal waste, and over time any waste, no matter how small, ends up being substantial.

Gaps become even more important to fill when you consider that most contained aisles are under positive pressure, which can increase leakage fourfold. A cold aisle that is oversupplied should move air through the IT equipment, not through the aisle end doors.
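To put a rough number on what a door gap can leak, here is a sketch using the common HVAC orifice approximation: air velocity in feet per minute is roughly 4005 times the square root of the pressure difference in inches of water, scaled by a discharge coefficient. The gap dimensions, pressures, and coefficient are illustrative assumptions, not figures from this article.

```python
import math

# A minimal sketch of how much air a door gap can leak, using the common HVAC
# orifice approximation: air velocity (feet per minute) ~= 4005 * sqrt(dP), with
# dP in inches of water, scaled by a discharge coefficient. The gap dimensions,
# pressures, and coefficient are illustrative assumptions, not article figures.

def gap_leakage_cfm(gap_area_sqft: float, dp_in_wc: float, cd: float = 0.65) -> float:
    """Approximate leakage airflow (CFM) through a gap under a pressure differential."""
    velocity_fpm = 4005.0 * math.sqrt(dp_in_wc)
    return cd * gap_area_sqft * velocity_fpm

# A 1/4-inch gap along the top and sides of a 7 ft x 4 ft door is ~0.38 sq ft open.
gap_area = (2 * 7 + 4) * (0.25 / 12)
print(f"{gap_leakage_cfm(gap_area, 0.01):.0f} CFM")  # ~98 CFM at 0.01 in. w.c.
print(f"{gap_leakage_cfm(gap_area, 0.04):.0f} CFM")  # ~195 CFM at 0.04 in. w.c.
```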

It’s important that we all “Mind the gap” in our data center containment doors. In this way we both individually and collectively save an enormous amount of energy, just as the world’s mass transit systems, like ‘The Tube’, do for us each and every day.

Company
Events

Data Center World Spring 2014 Conference was a blast!

We had a great time at the Data Center World Spring 2014 Conference.

We met with some amazing people, showed off our new products, worked with a fantastic film crew, learned from Larry Mainers how to apply for and receive utility rebates with environmental monitoring, and enjoyed some of the fun that is Las Vegas.

It was such a great show that we have decided to go to the Orlando Conference – October 19-22. We hope to see you there!

Click here to see some of the pictures from the conference on our Facebook page.

Data Center
Press Release

What’s new at Subzero Engineering for 2014

At Subzero Engineering we are always looking for new ways to improve our products: making them more energy efficient, keeping them NFPA compliant, and adding more standard features.

This year is no exception!

We have been working hard taking our world class products and making them even better. Here are a few of the changes we are making for 2014.

Product Announcements

• New Polar Cap Retractable Roof – The first fully NFPA compliant containment roof system
• New Arctic Enclosure Sizes Available – Two new 48U cabinets available
• Power Management – We now offer a full line of Raritan power products
• New Elite Series Doors – All of our doors have a sleek new design & come with extra features, standard
• New Panel Options – 3MM Acrylic, 4MM Polycarbonate, 3MM FM4910

Data Center
Educational Article

Airflow Management’s Role in Data Center Cooling Capacity

White Paper – Airflow Management’s Role in Data Center Cooling Capacity, by Larry Mainers


Summary

Airflow management (AFM) is changing the way data centers cool the IT thermal load. In the simplest terms AFM is the science of separating the cooling supply from the hot return airflow.

AFM’s impact on cooling capacity is huge. This is because the traditional cooling scenario without the full separation of supply and return airflow requires as much as four times the cooling capacity to satisfy the same thermal load. This creates the unnecessary need for cooling units due to airflow inefficiency.

Data center managers can easily determine the percentage of inefficiency by counting the tons of available cooling capacity and measuring it against the IT thermal load measured in kW.

For example, 40 tons of cooling capacity will cool 140.67 kW (one ton of refrigeration is roughly 3.517 kW). Thus the average data center without AFM might have as much as 160 tons of cooling to mediate 140 kW.
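A small sketch of that tons-versus-kW comparison, using the standard conversion of roughly 3.517 kW per ton of refrigeration and the white paper’s own 160-ton, 140 kW scenario:

```python
# A small sketch of the tons-versus-kW comparison described above. The 3.517 kW
# per ton figure is the standard refrigeration-ton conversion; the inputs mirror
# the white paper's 160-ton / 140 kW scenario.

KW_PER_TON = 3.517  # one ton of refrigeration ~= 12,000 BTU/h ~= 3.517 kW

def cooling_oversupply_ratio(installed_tons: float, it_load_kw: float) -> float:
    """Ratio of installed cooling capacity to the IT thermal load it serves."""
    return (installed_tons * KW_PER_TON) / it_load_kw

print(f"{cooling_oversupply_ratio(40, 140):.2f}x")   # ~1.00x -- matched capacity
print(f"{cooling_oversupply_ratio(160, 140):.2f}x")  # ~4.02x -- the 'without AFM' case
```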

It’s easy to conclude that if AFM can reduce the excessive cooling capacity, that operating cost could likewise be reduced.

There are three typical ways to reduce the energy use of cooling units:

1) Turning off cooling units
2) Reducing fan speeds
3) Increasing temperature and decreasing relative humidity set points.

It is important to note that there is a fundamental difference between what is required to change the volume of airflow and what is required to change the supply temperature and relative humidity. In order to capture this difference, the engineers at Subzero Engineering coined the term UNIFLOW in 2005. ‘Uniflow’ describes air that moves in only one direction.

Data center airflow should flow from the cooling unit directly to the IT thermal load and back to the cooling unit intake. This unidirectional airflow should correspond to the volume of air that is required to carry the thermal load of the IT equipment. Anything in excess of this airflow requires additional cooling energy to create the volume, or CFM. Thus, any time you plug leaks in the UNIFLOW, a reduction in fan speed can be made.

As you can imagine, the energy saved by reducing excessive volume depends on the extent of the airflow leaks, and it is here that some AFM companies have over-estimated the potential energy savings. It is common to hear that plugging your cable cutouts will save 30 percent of your energy. That is only true if you have leakage that amounts to 30 percent of excess volume, and if that volume can then be adjusted downward.

Note: In some cases additional energy is wasted when cold supply air bypasses the thermal load and returns to the cooling unit, because the cooling unit is then required to lower the RH of the return air.

The other part of the cooling efficiency equation is adjusting the cooling units’ set points. This can only be accomplished when the intake temperature and RH across the face of the IT equipment are within the manufacturer’s acceptable range.

This can be likened to the ‘weakest link’ in a chain. Cooling set points are fixed to the warmest IT intake. The warmest IT intake is the weakest link in the chain.

Understanding these fundamentals helps IT managers when they are presented with AFM solutions and the estimated energy savings.

How then can data center managers determine the true energy savings with each AFM best practice solution? Which best practice solution delivers the most return on investment? What would the proper AFM road map look like?

Background

AFM is not new to data centers. Cable Cutout Covers (CCCs), first introduced in the 1990s, were the industry’s first experience with eliminating UNIFLOW leaks.

Another area addressed was the misplacement of perforated tiles. Previously, it was common to see perforated tiles placed between the computer racks and the cooling units. This caused what is called a ‘short cycle’, where cooling air returns to the cooling unit without passing through the thermal load.

Today the industry fully embraces the need to properly place supply perforated tiles in the cold aisle and eliminate leaks with cable cutout covers in the hot aisle.

Another common and longstanding AFM tool is the Blanking Panel (BP). The main purpose of the BP is to prevent hot return air from migrating within the cabinet into the cold supply aisle. Air moving from the cold aisle into the hot aisle without passing through a thermal load is another form of leakage, and that volume must be made up with increased cubic feet per minute (CFM).

Still another AFM tool is the Velocity Adjustor (VA), invented by Subzero Engineering in 2006. The VA is used to balance subfloor air pressure, ensuring a consistent ‘throw rate’ (CFM) at each perforated tile. It also prevents two rivers of airflow from creating a subfloor eddy that generates negative pressure and sucks ambient air into the subfloor void. This tool can be used to lower CFM, or airflow volume, because it allows a consistent volume of air into the cold aisle.

Another AFM tool can be found in some IT cabinets. These panels are placed in the six common areas around the cabinet through which hot exhaust airflow could migrate back into the cool supply aisle. Like blanking panels, AFM within the cabinet plugs air leakage and lowers the volume of air required.

The most recent innovation in AFM is ‘Containment’.

The term ‘Containment’ is used in two ways. First, it can describe the doors, walls, and ceilings that ‘contain’ airflow; second, it can describe all of the AFM tools combined that create UNIFLOW.

Containment, as it relates to doors, walls, and ceiling systems, is the final piece of the AFM puzzle, because consistent IT intake temperatures cannot be achieved just by plugging leaks in the Uniflow. Containment fundamentally changes AFM by managing the largest part of the UNIFLOW.

AFM Tools

  1. Cable Cutout Covers
  2. Perforated Tile Placement
  3. Blanking Panels
  4. Velocity Adjustors
  5. Rack or Cabinet AFM
  6. Containment

AFM Methods and Returns

While all AFM methods contribute to the separation of supply and return airflow, what actual reduction in energy occurs with each method?

For instance, consider CCCs. What cooling unit energy adjustments are made when all of the cable cutouts are covered? The most obvious benefit is the reduction in the volume of air required. Airflow volume can be reduced by turning off cooling unit fans or by slowing fan speed with variable frequency drives (VFDs).

CCCs plug leaks in the Uniflow, but they cannot prevent the mixture of supply and return airflow. Instead, CCCs should be seen as a building block toward full supply and return separation.

What about BP and cabinet airflow management?

These too are necessary components for plugging leaks in the Uniflow. The amount of open space has a direct correlation to the amount of air that leaks without passing through the thermal load, so the energy reduction is limited to the amount of leakage eliminated.

What about containment in the form of doors, walls, and roof systems?

Containment components represent the largest open space in the UNIFLOW. For instance, the open end of a four-foot-wide aisle can account for as much as 30 square feet of space where airflow mixing occurs. Next, add the space above the row of a four-foot aisle with 12 cabinets and you have an additional 96 square feet. If you combine the overhead space and the two ends of the row, it amounts to 156 square feet of open space in one 24-foot aisle alone.
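The arithmetic above is easy to reproduce. In the sketch below, the aisle-end opening height is an assumed value (a 30 square foot end opening over a four-foot-wide aisle implies roughly 7.5 feet from the floor to the top of the racks); the aisle width and length follow the text.

```python
# A quick sketch of the open-area arithmetic above. The aisle-end opening height
# is an assumed value (a 30 sq ft end opening over a 4 ft wide aisle implies
# roughly 7.5 ft from the floor to the top of the racks); the aisle width and
# length follow the text.

def containment_open_area(aisle_width_ft: float, aisle_length_ft: float,
                          opening_height_ft: float) -> float:
    """Open area (sq ft) at the two aisle ends plus the overhead span of one aisle."""
    aisle_ends = 2 * aisle_width_ft * opening_height_ft
    overhead = aisle_width_ft * aisle_length_ft
    return aisle_ends + overhead

# A 4 ft wide, 24 ft long aisle (12 cabinets per side at roughly 2 ft each):
print(containment_open_area(4, 24, 7.5))  # 156.0 square feet
```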

If the rack row has no opposing cabinets, or has areas with missing cabinets, this square footage can easily double or triple. Clearly these spaces represent the bulk of cold and hot air mixing.

Which AFM tool contributes the most to energy efficiency?

The key to answering this question is found when we examine the individual energy use of the cooling units.

The two main energy components of the cooling units are the fans that produce airflow volume and the mechanisms (chiller compressors and pumps) that produce air temperature and relative humidity.

According to the US Department of Energy (Energy Star), cooling unit fans account for 5% to 10% of total data center energy consumption. A study by Madhusudan Iyengar and Roger Schmidt of IBM entitled “Energy Consumption of Information Technology Data Centers” concludes that cooling compressors, pumps, etc. account for 25% to 30% of total data center energy consumption.

From this we learn that more potential energy savings come from set point adjustment than from air volume adjustment.
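A rough comparison of the two levers, using the percentage ranges cited above. The facility consumption and the assumed 20 percent improvement on each component are illustrative assumptions, not figures from the white paper.

```python
# A rough comparison of the two savings levers using the percentage ranges cited
# above. The facility consumption and the assumed 20 percent improvement on each
# component are illustrative assumptions, not figures from the white paper.

total_dc_energy_kwh = 10_000_000   # hypothetical annual data center consumption
fan_share = (0.05, 0.10)           # DOE / Energy Star range for cooling unit fans
compressor_share = (0.25, 0.30)    # IBM study range for compressors, pumps, etc.
assumed_improvement = 0.20         # assume AFM trims 20% off each component

for name, (low, high) in [("fans", fan_share), ("compressors/pumps", compressor_share)]:
    low_kwh = total_dc_energy_kwh * low * assumed_improvement
    high_kwh = total_dc_energy_kwh * high * assumed_improvement
    print(f"{name}: {low_kwh:,.0f} to {high_kwh:,.0f} kWh per year")
# fans:              100,000 to 200,000 kWh per year
# compressors/pumps: 500,000 to 600,000 kWh per year
```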

And it is here where most of the confusion about AFM energy savings comes from.

Many data center managers were given expectations of huge energy savings when they deployed CCCs and BPs. Instead, the actual energy saved was a small percentage of the total cooling energy. This is because the savings experienced were a combination of the amount of leakage that was mitigated and the percentage of energy used in creating that volume.

In contrast, much larger energy reductions have been measured when data center managers contained either the hot or the cold aisle. This is for two reasons:
1) The open space, or leakage, is greater in the containment space.
2) The energy sources affected (cooling unit compressors, etc.) account for a larger percentage of energy than the fans that create air volume.

Proof of this can be seen in the way utility companies provide rebates to data centers that reduce energy consumption. Utility companies that offer energy rebates require true before-and-after energy consumption measurements in order to determine just how much energy is being saved. It is very rare for data centers that used CCCs and installed BPs to get such a rebate, as the energy reduction was not significant. That changed when data centers contained the hot or cold aisle. Data centers with a full AFM solution, incorporating containment, are commonly receiving thousands of dollars in rebates.

Does that mean that containment is the holy grail of AFM? Yes and no. While containment offers the biggest bang for the buck, the need to plug the holes in the Uniflow is still a major part of the overall airflow chain. The good news is that the energy saved by incorporating all aspects of AFM can more than pay for the cost in a matter of 12 to 18 months.

That said, those with a limited budget will get more airflow separation with containment doors, walls, and ceilings than with BP and CCC.

Think of it this way… imagine a bathtub with several pencil-sized holes in it. You can imagine the water pouring from these holes. Now imagine the same tub with one of the sides missing. Which of the two ‘leaks’ would you patch up first?

When faced with airflow management solutions, remember that the largest energy cost is in controlling the temperature of the supply air. Next, know that the largest mixing space is at the aisle ends and the top of the rack row. This supports a road map of supplying a consistent and predictable cooling airflow to the cold aisle, adjusted according to the manufacturer’s intake specifications, in order to save the most energy.

The good news is that most data centers have some level of cable cutout management and blanking panels. This then stresses the value of getting the bulk of the energy savings by completing AFM with full hot or cold aisle containment.

References

1) https://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_vsds
2) Schmidt, R. and Iyengar, M., “Thermodynamics of Information Technology Data Centers.”