
Warming the Data Center

For decades the idea of running a hot or warm data center was unthinkable, driving data center managers to create a "meat locker" like environment: the colder, the better.

Today, the idea of running a warm data center has finally gotten some traction. Major companies like eBay, Facebook, Amazon, Apple, and Microsoft are now operating their data centers at temperatures higher than what was considered possible only a few years ago.

Why? And more importantly… How?

The “why” is easy.
For every degree the set point is raised, the cost of cooling the servers drops by 4% to 8%, depending on the data center's location and cooling design. Additionally, some data centers can take advantage of free cooling cycles when server intake temperatures increase. This, of course, assumes staying within the manufacturers' recommended temperature limits rather than surpassing them.
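To make the arithmetic concrete, here is a minimal sketch of that per-degree savings estimate; the baseline cooling cost, the number of degrees raised, and the choice to compound the percentage are illustrative assumptions, not figures from this post.

```python
# Minimal sketch: estimate annual cooling savings from raising the set point.
# Baseline cost, degrees raised, and the per-degree percentage are assumptions.

def cooling_savings(baseline_cooling_cost, degrees_raised, savings_per_degree=0.04):
    """Compound the per-degree reduction over each degree of set point increase."""
    remaining_fraction = (1 - savings_per_degree) ** degrees_raised
    return baseline_cooling_cost * (1 - remaining_fraction)

if __name__ == "__main__":
    # Example: a $500,000/year cooling bill and a 5 degree F set point increase,
    # using the conservative 4% end of the range quoted above.
    print(f"Estimated savings: ${cooling_savings(500_000, 5):,.0f} per year")
```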

Now on to the “how”. Or we might ask why now? What changed?
The answer has to do with the ability to provide a consistent server intake temperature. Inconsistent intake temperatures result from return and supply airflows mixing. This mixing creates "hot spots," which cause cooling problems. Without a consistent supply temperature, the highest temperature in those hot spots dictates the data center cooling set point, forcing the set point lower.

A few years ago containment was introduced to the data center industry. Containment fully separates supply and return airflow, which eliminates hot spots and creates a consistent intake temperature. With consistent intake temperatures, data center managers can raise cooling set points, creating a warmer data center. A warmer data center means less money spent on cooling.

Tags: Airflow Management, Consistent Intake Temperatures, Data Center Containment, Hot Spots, Raise Set Point, Subzero Engineering, The World's Hottest Data Centers, warmer data center
The Role of CFDs in Containment

Data center airflow management engineers have used Computational Fluid Dynamics (CFD) programs for years to determine the complicated movement of airflow in data centers. CFD models pinpoint areas where airflow can be improved in order to provide a consistent cooling solution and energy savings.

We interviewed Gordon Johnson, a certified data center design professional, Data Center Energy Practitioner (DCEP), and CFD and electrical engineer, about the use of CFDs in containment.


Gordon, what is the principal way CFDs are used with regard to containment?

We use CFDs to determine two basic data sets. The first is the baseline, or current, airflow pattern. This initial CFD model shows the supply intake temperature at each cabinet. It also determines the effectiveness of each AC unit in terms of airflow volume, return air temperature, delta T, and supply air temperature.

The second model is the CFD engineer's proposed design, which uses information from the baseline model to apply airflow management best practices and separate supply from return airflow. Typically, several models are created in order to adjust airflow volume, set point temperatures, and individual aisle supply volume.


Gordon, are there situations in which the CFD engineer does not recommend containment?

Not really, because the entire basis of airflow management is the full separation of supply and return airflow. Anytime these two airflows mix, there is a loss of both energy and consistent supply temperature to the IT thermal load.

We have seen CFDs used by manufacturers to prove product effectiveness. What are some ways CFDs are made to exaggerate product effectiveness?

Exaggerations usually stem from the principle known as GIGO, short for Garbage In, Garbage Out. This refers to the fact that computers operate by logical processes, and thus will unquestioningly process unintended, even nonsensical input data (garbage in) and produce undesired, often nonsensical output (garbage out).

Let me give you an example. Recently I recreated a CFD model that had been used to demonstrate the effectiveness of airflow deflectors. The purpose of the CFD was to show the difference in energy savings between airflow deflectors and full containment. We found that certain key data points inserted into the models did not reflect industry standards. Key settings were adjusted to fully optimize energy savings without regard to potential changes in the environment. Any adverse effects on the cooling system's ability to maintain acceptable thermal parameters under changing conditions were not revealed in the CFD model. Thus, the model was operating on a fine line that could not be adjusted without a significant impact on its ability to cool the IT load.


Can you give us any specifics?

The airflow volume was manually changed from 1 kW at 154 CFM to 1 kW at 120 CFM, whereas the industry standard airflow is 154 CFM per kW. The formula most commonly used is as follows:

CFM = (kW × 3,412) / (1.08 × ΔT), where ΔT is the air temperature rise across the IT equipment in °F

An airflow of 120 CFM per kW leaves the cooling system no margin for potential changes to the environment.
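As a quick sanity check on those two airflow figures, here is a minimal sketch using the sensible-heat formula above; the 1.08 constant assumes near-standard air density, so treat the results as approximations.

```python
# Sketch: temperature rise (delta T, deg F) implied by a given airflow per kW,
# from the sensible-heat approximation CFM = (W * 3.412) / (1.08 * dT).
# The 1.08 constant assumes near-sea-level air density.

def delta_t_f(watts, cfm):
    """Air temperature rise across the IT load for a given airflow."""
    return (watts * 3.412) / (1.08 * cfm)

for cfm in (154, 120):
    print(f"1 kW at {cfm} CFM -> delta T of about {delta_t_f(1000, cfm):.1f} deg F")

# 154 CFM corresponds to roughly a 20 deg F rise; 120 CFM pushes the rise past
# 26 deg F, which is why the lower figure leaves so little margin.
```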

Another key area of unrealistic design is the placement of cabinet thermal load and high volume grates. The base model places high kW loads in specific, isolated areas surrounded by high volume grates. What happens, then, if additional load is placed in areas of low volume airflow? Changes to the rack kW in areas without high volume grates could not be accounted for. At the end of the day, any change to the IT load would require an additional airflow management audit to determine how it would affect the cooling solution. Thus, the proposed model is unrealistic, because no data center would adopt a cooling solution that requires regular modifications.


Are you recommending a CFD study every time you make changes to the data center thermal load?

No. Full separation of supply and return airflow eliminates the guesswork about the effects of air mixing. It also eliminates the need for specific high volume perforated tiles or grates to be placed in front of high kW loads. Instead, a CFD model would incorporate expected increases to the aisle thermal load. This falls in line with a "plus one" approach to cooling. Creating a positive pressure of supply air has many additional benefits, such as lowering IT equipment fan speeds and ensuring a consistent supply temperature across the face of the IT intake.

Data centers should not be operated with little margin for changes or adjustments to the thermal load. That is why I always recommend a full containment solution with as close to 0% leakage as possible. This is the most efficient way to run a data center and yields the best return on investment. A full containment solution, with no openings at the aisle-end doors or above the cabinets, easily allows the contained cold aisles to operate with a slightly greater supply of air than is demanded. This in turn ensures that the cabinets in the fully contained aisle see minimal temperature change from the bottom to the top of the rack, which allows the data center operator to choose predictable and reliable supply temperature set points for the cooling units. The result? Large energy savings, a longer mean time between failures, and a more reliable data center.
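To illustrate the "slightly greater supply than demand" idea, here is a minimal sizing sketch; the per-rack airflow demands and the 5% oversupply margin are assumptions for the example, not recommendations from the interview.

```python
# Sketch: size the supply airflow for a contained cold aisle with a small
# positive-pressure margin. Per-rack demands and the 5% margin are assumptions.

def aisle_supply_cfm(rack_demands_cfm, margin=0.05):
    """Total supply = sum of IT equipment airflow demand plus a small oversupply."""
    return sum(rack_demands_cfm) * (1 + margin)

racks = [650, 800, 720, 540, 900]  # per-rack airflow demand in CFM (assumed)
print(f"Demand: {sum(racks)} CFM, supply target: {aisle_supply_cfm(racks):.0f} CFM")
```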


What do you recommend as to the use of CFD studies and containment?

It’s important to create both an accurate baseline and a sustainable cooling solution design. This model will give data center operators a basis for an accurate representation of how things are being cooled. The proposed cooling solution can be used in numerous ways:

  • Accurate energy savings
  • Safe set point standards
  • Future cabinet population predictions
  • The ability to cool future kW increases
  • Identify and eliminate potential hot spots

Subzero Engineering endorses accurate and realistic CFD modeling that considers real world situations in order to create real world solutions.

Tags: Airflow Management, CFD, Cold Aisle Containment, Computational Fluid Dynamics, Containment, data center, Data Center Containment, data center cooling, Gordon Johnson, Hot Aisle Containment, Larry Mainers, Subzero Engineering
The Truth About Fans in the Data Center

And how this influences data center airflow management.

Sorry sports fans… this is not about your favorite team. Instead we are going to explore the fascinating world of mechanical fans.

How many times have you seen vendor illustrations of fans pushing air in long blue lines from perforated raised floor tiles into the intake of a rack? The truth is that air does not move in such a way. Calculating the airflow induced by one particular fan at any given distance from the fan, about any point on the fan's face, is a very involved set of calculations.

Traditional thermal designs for fans were originally modeled on the jet velocity of water jets. This provided a close estimate, but inaccurate data. A 2012 study helped create much more accurate formulas for fan outlet velocity and its distribution.

Fan Outlet Velocity Distributions and Calculations
Eli Gurevich, Michael Likov  (Intel Corporation, Israel Design Center, Haifa, Israel)
David Greenblatt, Yevgeni Furman, Iliya Romm (Technion Institute of Technology, Haifa, Israel)

Generally, volumetric flow rate and distance traveled decrease when contained air enters ambient room air, which is why mechanical air contractors use ductwork or a contained plenum to direct supply air to the thermal load. Increasing the velocity of the air in order to reach the thermal load, instead of using a duct system, is considered inefficient.

It's important to understand the relationship between mechanical air movement from fans and what actually happens to the airflow. The issue with fans is the manufacturer's stated CFM capacity and the distance the fan is able to carry that air. These values reflect what the fan is able to produce in a given test environment. Manufacturer-stated air displacement (CFM) is based on what are called normal temperature and pressure (NTP) conditions. The actual volume of air that a fan can displace varies due to two factors:

1) Air density (hot, low density or cold, high density)
2) Air pressure (positive or negative)

Thus it is important to determine the manufacturer’s test conditions for the fan, and then compare the data to the actual planned environment in which the fan will operate.
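As a rough illustration of the density effect alone, here is a minimal sketch based on the ideal gas law; the rated CFM and the temperatures are assumed values, and real fan performance also depends on static pressure.

```python
# Sketch: compare the mass of air a fan moves at its rated NTP conditions
# versus in warmer, less dense air. Ideal gas approximation at constant
# pressure; the rated CFM and temperatures are illustrative assumptions.

def air_density(temp_f, pressure_pa=101_325, r_air=287.05):
    """Air density in kg/m^3 from the ideal gas law."""
    temp_k = (temp_f - 32) * 5 / 9 + 273.15
    return pressure_pa / (r_air * temp_k)

rated_cfm = 500                   # manufacturer rating at NTP (assumed)
rho_ntp = air_density(68)         # NTP is commonly taken as 68 F / 20 C
rho_warm = air_density(95)        # warmer room or subfloor air

# The volume moved is similar, but the mass of air (and thus its cooling
# capacity) scales with density.
print(f"Mass flow at 95 F is {rho_warm / rho_ntp:.1%} of the NTP rating")
```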

For example, when considering the installation of a fan in the subfloor to move subfloor air into the cold aisle, the first question that should be addressed is: "What are the air temperature and head pressure that the fan will operate in?"

Why? The temperature of the air will determine its density when confined to a constant volume. In most cases the subfloor air is denser, which is good. The more important question, then, concerns subfloor pressure. It is not unusual to have negative pressure areas in the subfloor due to high velocity air streams. The Bernoulli principle explains the concern: an increase in air speed results in a decrease in static pressure. Additionally, when two high velocity air streams intersect from opposing directions, the result is often a subfloor vortex, which reverses the current.
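For a rough feel of that velocity-pressure trade-off, here is a minimal sketch of the dynamic pressure term; the velocities are assumed, and real subfloor flow is far messier than this one-dimensional view.

```python
# Sketch: dynamic pressure q = 0.5 * rho * v^2. Along a streamline, static
# pressure (what pushes air up through a floor tile) falls as velocity rises.
# The velocities are illustrative assumptions.

RHO = 1.2  # kg/m^3, approximate density of cool subfloor air

def dynamic_pressure_pa(velocity_ms):
    return 0.5 * RHO * velocity_ms ** 2

for v in (2.0, 6.0, 12.0):  # subfloor air speeds in m/s
    print(f"{v:>4.1f} m/s -> {dynamic_pressure_pa(v):6.1f} Pa of static pressure traded for velocity")
```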

So what’s the point? Imagine putting a raised floor fan system over an area with negative pressure. This would negatively affect the fan’s ideal operating conditions.

Consider this: what is the typical reason for using additional fans to move air into the cold aisle? Most likely the unassisted perforated tile or grate is not able to deliver sufficient airflow to the thermal load of the racks. What if that is caused by inadequate subfloor pressure? In that case, adding a fan-assisted raised floor panel requires taking the fan's NTP rating into consideration. It can also drastically and unpredictably affect other areas of the data center, as you "rob Peter to pay Paul," so to speak.

Consider the following subfloor airflow management strategies:

1) Eliminate high velocity air: This will ensure a more balanced delivery of air due to a more uniform subfloor pressure.
2) Cold Aisle Containment: Instead of designing rack cooling by placing an airflow-producing raised floor tile at the feet of each rack, why not create a cold aisle that is not dependent on perforated tile placement?

Cold aisle containment creates a small room of supply air that can be accessed by all IT equipment fans. Instead of managing each supply raised floor tile, the only requirement is ensuring positive air pressure in the aisle. Cold aisle containment provides several benefits: most contained cold aisles have only a one-degree differential from the bottom to the top of the rack, and cold aisle containment does not require high air velocity, which can create other airflow management problems, such as air bypassing the IT equipment intake.

Understanding the NTP conditions of IT equipment cooling fans is an important aspect of data center airflow management. For example, in order to properly adjust CRAC unit set points, it is important to know the temperature at which the supply air's density will drop below each fan's NTP conditions. It is possible to raise the supply temperature to a level at which an increase in fan speed would be required to make up for the less dense airflow, potentially offsetting any energy savings from the higher cooling set point.

Simply adding fans to cool IT equipment is not a quick fix; it is imperative to first understand why sufficient airflow is not available. It is important to understand the fan's NTP rating in the proposed environment, and to see whether you can supply IT equipment with consistent airflow simply by separating supply and return air through data center containment. Containment can prevent the unnecessary electricity required to operate additional fans, saving money and energy in the long term.

Tags: Airflow, Airflow Management, Containment, data center, Data Center Containment, data center cooling, Data Center Fans, Larry Mainers, Subzero Engineering
The Truth Behind Data Center Airflow Management: It’s Complicated

Does hot air rise? The answer of course is “yes”.

Does hot air fall? The answer is yes again.

What about sideways? Yes!

Heat can move up, down, or sideways, depending on the situation. The idea that hot air has an inherent desire to flow up is a misconception that we in the data center airflow management business would like to see dissipate.

Temperature difference is the major factor in the direction and rate of heat transfer. Because heat tends to move toward thermal equilibrium, it is important to maintain physical separation of hot and cold air in data centers; the need for hot and cold air separation is the reason the data center containment industry came into existence. The laws of thermodynamics state that heat moves from areas of higher temperature toward areas of lower temperature. Air is a fluid, so both density and buoyancy come into play. When air is heated, its molecules move faster, causing it to expand, and as it expands its density drops. The warmer, lower density air will rise above the denser, cooler air.
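To put a number on how weak that buoyant tendency is compared with fan-driven flow, here is a minimal sketch using the ideal gas law; the two temperatures are illustrative assumptions.

```python
# Sketch: buoyant force produced by a temperature difference. Density comes
# from the ideal gas law at constant pressure; the temperatures are assumptions.

G = 9.81          # m/s^2
P = 101_325       # Pa
R_AIR = 287.05    # J/(kg*K)

def density(temp_f):
    return P / (R_AIR * ((temp_f - 32) * 5 / 9 + 273.15))

cool, hot = density(65), density(95)
# Net upward force on one cubic meter of hot exhaust air surrounded by cool air:
print(f"Buoyant force: {(cool - hot) * G:.2f} N per cubic meter")
# Well under one newton per cubic meter -- easily overwhelmed by fan-driven
# pressure differences, which is why hot air can also move down or sideways.
```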

Pressure is another determining factor in air movement. Air flows from areas of high pressure to areas of low pressure because of the pressure difference acting on it. Here, too, the drive toward equilibrium governs the movement, so uninhibited air will continue to move from high to low pressure until equilibrium is reached. This movement toward equilibrium is also known as expansion.

Principles of air movement:
1) Heat transfer:
a. Conduction: Heat flows from a higher temperature region to a lower temperature region between media that make physical contact.
b. Convection: Heat transfer due to the movement of a fluid; it can be free (natural) or forced.
2) Air flows from higher pressure to lower pressure.


What does this have to do with data center airflow management?

The data center containment industry has been inundated with graphics depicting airflow, most of which show large, sweeping lines indicating the flow of air. In most cases the airflow depicted is the result of a mechanical device, usually a fan. The data presented by these graphics tends to lead one to believe that mechanically induced airflow will sufficiently separate hot exhaust air from cold intake air. In real-world scenarios, such air curtains are inefficient and ineffective.

Modern mechanical air conditioning systems rely on four-sided duct systems to deliver supply air to the source of the heat load, and the return air is moved by the same means. This is the only way to ensure the separation of supply and return airflow. Systems administrators and building managers should be dubious of airflow management systems that require additional energy to accomplish air separation. Instead, it is best to apply the simplest principles of airflow when designing a system aimed at full separation of supply and return airflow.

If you would like to learn more about the flow of air, please see the following link:

Learn How Air Moves Through This Incredible Optical Device

http://9gag.tv/p/aK3pOe/learn-how-air-moves-through-this-incredible-optical-device

Tags: Airflow, Airflow Management, Containment, data center, Data Center Containment, data center cooling, Larry Mainers, Subzero Engineering
Extending the Capacity of the Data Center Using Hot or Cold Aisle Containment

What does consistent supply air across the face of the rack have to do with increased data center capacity?

Hot or cold aisle containment can significantly increase the capacity of a data center because consistent intake temperatures across the rack face allow all U's to be fully populated.

Additionally, when energy saved on cooling can be redirected to IT equipment, this too can extend the life of a data center that is running out of power.

Problem Statement – Air Stratification
Most data centers without containment have air stratification. Air stratification occurs when supply and return air mix, creating several temperature layers along the intake face of the rack. It is not uncommon for the temperature at the bottom of the rack to be 8 to 10 degrees colder than at the top. As a result, many data centers have implemented policies that do not allow the top 6 to 8 U's to be populated. This can decrease the data center's IT equipment capacity by 16%. Capacity from a space perspective is one thing, but when the unpopulated U's would otherwise hold high density systems, the lost capacity is amplified.
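As a rough check on that 16% figure, here is a minimal sketch; the 42U rack height is an assumption for illustration.

```python
# Sketch: capacity lost when the top U's of each rack are left unpopulated
# because of stratification. The 42U rack height is an assumption.

RACK_HEIGHT_U = 42

for unusable_u in (6, 7, 8):
    print(f"Top {unusable_u}U unusable -> {unusable_u / RACK_HEIGHT_U:.0%} of rack capacity lost")

# Leaving roughly 7U of a 42U rack empty is about 17%, in line with the ~16%
# figure cited above.
```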

Click here to read the latest Subzero Engineering White Paper “Extending the Capacity of a Data Center”.

 

Tags: Airflow Management, Cold Aisle Containment, data center, Data Center Capacity, Data Center Containment, Hot Aisle Containment, Larry Mainers, Subzero Engineering
Datacenter Revamps Cut Energy Costs At CenturyLink

September 3, 2014 by Timothy Prickett Morgan — EnterpriseTech Datacenter Edition— http://www.enterprisetech.com/2014/09/03/datacenter-revamps-cut-energy-costs-centurylink/ 

It is probably telling that these days datacenter managers think of the infrastructure under their care more in terms of the juice it burns and not by counting the server, storage, and switch boxes that consume that electricity and exhale heat. Ultimately, that power draw is the limiting factor in the scalability of the datacenter and using that power efficiently can boost processing and storage capacity and also drop profits straight to the bottom line.

Three years ago, just as it was buying public cloud computing provider Savvis for $2.5 billion, CenturyLink took a hard look at its annual electric bill, which was running at $80 million a year across its 48 datacenters. At the time, CenturyLink had just finished acquiring Qwest Communications, giving it a strong position in the voice and data services for enterprises and making it the third largest telecommunications company in the United States. CenturyLink, which is based in Monroe, Louisiana, also provides Internet service to consumers and operates the PrimeTV and DirectTV services; it has 47,000 employees and generated $18.1 billion in revenues in 2013.

One of the reasons why CenturyLink has been able to expand to 57 datacenters comprising close to 2.6 million square feet of datacenter floor space – it just opened up its Toronto TR3 facility on September 8 – is that it started tackling its power and cooling issues three years ago.

The facilities come in various shapes and sizes, explains Joel Stone, vice president of global data center operations for the CenturyLink Technology Solutions division. Some are as small as 10,000 square feet, others are more than ten times that size. Two of its largest facilities are located in Dallas, Texas, weighing in at 110,000 and 153,700 square feet and both rated at 12 megawatts. The typical facility consumes on the order of 5 megawatts. CenturyLink uses some of that datacenter capacity to service its own telecommunications and computing needs, but a big chunk of that power goes into its hosting and cloud businesses, which in turn provide homes for the infrastructure of companies from every industry and region. CenturyLink's biggest customers come from the financial services, healthcare, online games, and cloud businesses, Stone tells EnterpriseTech. Some of these customers have only one or two racks of capacity, while others contract for anywhere from 5 megawatts to 7 megawatts of capacity. Stone's guess is that, all told, the datacenters have hundreds of thousands of servers, but again, that is not how CenturyLink, or indeed any datacenter facility provider, is thinking about it. What goes in the rack is the customers' business, not CenturyLink's.

“We are loading up these facilities and trying to drive our capacity utilization upwards,” says Stone. And the industry as a whole does not do a very good job of this. Stone cites statistics from the Uptime Institute, which surveyed colocation facilities, wholesale datacenter suppliers, and enterprises and found that they actually use only around 50 percent of the power that comes into their facilities. “We are trying to figure out how we can get datacenters packed more densely. Space is usually the cheapest part of the datacenter, but the power infrastructure and the cooling mechanicals are where the costs reside unless you are situated in Manhattan where space is such a premium. We are trying to drive our watts per square foot higher.”

While server infrastructure is getting more powerful in terms of core counts and throughput, and storage is getting denser and, in the case of flash-based or hybrid flash-disk arrays, faster, workloads are growing faster still, so the overall power consumption of the infrastructure in the datacenter continues to grow.

“People walk into datacenters and they have this idea that they should be cold – but they really shouldn’t be,” says Stone. “Servers operate optimally in the range of 77 to 79 degrees Fahrenheit. If you get much hotter than that, then the server fans have to kick on or you might have to move more water or chilled air. The idea is to get things optimized. You want to push as little air and flow as little water as possible. But there is no magic bullet that will solve this problem.”

Companies have to do a few things at the same time to try to get into that optimal temperature zone, and CenturyLink was shooting for around 75 degrees at the server inlet, compared to 68 degrees in the initial test in the server racks at a 65,000 square foot datacenter in Los Angeles. Here's a rule of thumb: for every degree Fahrenheit that the server inlet temperature is raised in the datacenter, the power bill is cut by about 2 percent. You can't push it too far, of course, or you will start impacting the reliability of the server equipment. (The supplied air temperature in this facility was 55 degrees and the server inlet temperature was 67 degrees before the energy efficiency efforts got under way.)
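Applying that rule of thumb to the inlet change described here is simple arithmetic; the sketch below treats the 2 percent figure linearly and invents a baseline cooling bill purely for illustration.

```python
# Sketch: the "2 percent per degree F" rule of thumb applied to the inlet
# temperature change described above (67 F -> 75 F). The linear treatment and
# the baseline cooling bill are illustrative assumptions.

SAVINGS_PER_DEGREE = 0.02
degrees_raised = 75 - 67
savings_fraction = SAVINGS_PER_DEGREE * degrees_raised

baseline_cooling_bill = 1_000_000  # dollars per year, assumed for illustration
print(f"Raising the inlet {degrees_raised} F saves roughly {savings_fraction:.0%}, "
      f"or about ${baseline_cooling_bill * savings_fraction:,.0f} on a $1M cooling bill")
```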

The first thing is to control the airflow in the datacenter better, and the second is to measure the temperature of the air more accurately at the server so cooling can be maintained in a more balanced way across the facility.

CenturyLink started work on hot aisle and cold aisle containment in its facilities three and a half years ago, and the idea is simple enough: keep the hot air coming from the back of the racks from mixing with the cold air coming into the datacenter from chillers. The containment project is a multi-year, multi-million dollar effort, and CenturyLink is working with a company called SubZero Engineering to add containment to its aisles. About 95 percent of its facilities now have some form of air containment, and most of them are doing hot aisle containment.

“If we can isolate the hot aisles, that gives us a little more ride through from the cold aisles if we were to have some sort of event,” Stone explains. But CenturyLink does have some facilities that, just by the nature of their design, do cold aisle containment instead. (That has the funny effect of making the datacenter feel hotter because people walk around the hot aisles instead of the cold ones and sometimes gives the impression that these are more efficient. But both approaches improve efficiency.) The important thing about the SubZero containment add-ons to rows of racks, says Stone, is that they are reusable and reconfigurable, so as customers come and go in the CenturyLink datacenters they can adjust the containment.

Once the air is contained, you can dispense cold air and suck out hot air on a per-row basis and fine-tune the distribution of air around the datacenter. But to do that, you need to get sensors closer to the racks. Several years ago, it was standard to have temperature sensors mounted on the ceiling, walls, or columns of datacenters, but more recently, after starting its aisle containment efforts, CenturyLink tapped RF Code to add its wireless sensor tags to the air inlets on IT racks to measure their temperature precisely rather than using an average of the ambient air temperature from the wall and ceiling sensors. This temperature data is now fed back into its building management system, which comes from Automated Logic Control, a division of the United Technologies conglomerate. (Stone said that Eaton and Schneider Electric also have very good building management systems, by the way.)

The energy efficiency effort doesn't stop here. CenturyLink is now looking at retrofitting its CRAC and CRAH units – those are short for Computer Room Air Conditioner and Computer Room Air Handler – with variable speed drives. Up until recently, CRAC and CRAH units were basically on or off, but now they can provide different levels of cooling. Stone says that running a larger number of CRAH units at lower speeds provides better static air pressure in the datacenter and uses less energy than having a small number of larger units running faster. (In the former case, extra cooling capacity is provided through additional units, and in the latter it is provided by ramping up the speed of the CRAH units rather than increasing their number.) CenturyLink is also looking at variable speed pumps and replacing cooling tower fans in some facilities.
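The energy argument for running more units at lower speed follows from the fan affinity laws, where power scales with the cube of speed; the unit counts in this sketch are assumptions chosen so that the total airflow is equal in both cases.

```python
# Sketch: fan affinity laws -- airflow scales with speed, power with speed cubed.
# Compare two ways of delivering the same total airflow; the unit counts are
# illustrative assumptions.

def relative_power(units, speed_fraction):
    """Total fan power relative to one unit at full speed."""
    return units * speed_fraction ** 3

many_slow = relative_power(4, 0.5)   # four units at half speed -> 4 * 0.125 = 0.5
few_fast = relative_power(2, 1.0)    # two units at full speed  -> 2 * 1.000 = 2.0

# Both configurations move the same total airflow (4 x 0.5 = 2 x 1.0).
print(f"Four units at half speed draw {many_slow / few_fast:.0%} of the fan power "
      "of two units at full speed.")
```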

“We are taking a pragmatic, planned approach across our entire footprint, and we have gone into the areas where we are paying the most for power or have the highest datacenter loads and tackling those facilities first,” says Stone. The energy efficiency efforts in the CenturyLink datacenters have to have a 24 month ROI for them to proceed.

In its Chicago CH2 datacenter (one of three around that Midwestern metropolis and one of the largest that CenturyLink runs in its fleet of facilities), it did aisle containment, RF Code sensors, variable speed CRAC units, variable speed drives on the pumps, and replaced the cooling tower fans with more aerodynamic units that ran slower and yet pulled more air through the cooling towers. This facility, which is located out near O'Hare International Airport, has 163,747 square feet of datacenter space, has a total capacity of 17.6 megawatts, and can deliver 150 watts per square foot.

CenturyLink reduced the load in that CH2 facility by 7.4 million kilowatt-hours per year, and Stone just last month collected on a $534,000 rebate check from Commonwealth Edison, the power company in the Chicago area. All of these upgrades in the CH2 facility cost roughly $2.4 million, and with the power savings alone the return on investment was on the order of 21 months – and that is before the rebate was factored in.

Tags: Airflow Management, CenturyLink, Cold Aisle Containment, Containment, cut energy, data center, Data Center Containment, datacenter, energy costs, Hot Aisle Containment, Subzero Engineering
Mind the Gap – The importance of gap-free data center containment door systems, by Larry Mainers


No visit to London, England is complete without seeing the London Underground, or as it is affectionately called, "The Tube." The Tube connects the greater part of London with 270 stations. It's an easy and inexpensive way to get around London town.

In 1969 an audible and/or visual warning system was added to prevent passengers’ feet from getting stuck between the platform and the moving train. The phrase “Mind the gap” was coined and has become associated with the London Underground ever since.

“Mind the gap” has been imitated all over the world. You will find versions of it in France, Hong Kong, Singapore, New Delhi, Greece, Sweden, Seattle, Brazil, Portugal, New York, Sydney, Berlin, Madrid, Amsterdam, Buenos Aires, and Japan.

It is my hope that this phrase can be embraced by the data center industry. Gaps in airflow management, especially where containment is deployed, are an easy way to lose both energy and overall cooling effectiveness. We need a "Mind the gap" philosophy in airflow management, especially for containment doors. Why doors? Because of a door's moving parts, it is less expensive for manufacturers to leave a gap than to build a door that fully seals. Data center managers who "mind the gap" should insist on door systems that don't leak.

Just how important is it to "mind the gap" in your data center containment system? One way to see the importance of eliminating gaps is to take a look around your own house. How many acceptable gaps do you have around your windows? Do you conclude that, since the window keeps most of the conditioned air inside, a few gaps around it will not matter much? I doubt it. In fact, utility companies have been known to provide rebates for the use of weather stripping to ensure all gaps are filled. Gaps equal waste. Over time any waste, no matter how small, ends up being substantial.

Gaps become even more important to fill when you consider that most contained aisles are under positive pressure. Positive pressure can increase leakage fourfold. An oversupplied cold aisle should move air through IT equipment, not through aisle-end doors.
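To get a feel for what a "small" gap costs under positive pressure, here is a minimal sketch using a standard orifice-flow approximation; the gap size, discharge coefficient, and pressure differentials are all assumed values for illustration.

```python
# Sketch: leakage through a door gap, using the orifice approximation
# Q = Cd * A * sqrt(2 * dP / rho). The gap size, Cd, and pressure differentials
# are illustrative assumptions, not measurements.
from math import sqrt

CD = 0.6    # typical discharge coefficient for a sharp-edged opening
RHO = 1.2   # kg/m^3, cool supply air

def leakage_cfm(gap_area_m2, delta_p_pa):
    q_m3_per_s = CD * gap_area_m2 * sqrt(2 * delta_p_pa / RHO)
    return q_m3_per_s * 2118.88  # convert m^3/s to CFM

# A 3 mm gap around the perimeter of a 2 m x 1 m door is about 0.018 m^2 of opening.
gap_area = 0.003 * 2 * (2.0 + 1.0)
for dp in (2, 5, 10):  # pascals of positive aisle pressure
    print(f"{dp:>2} Pa -> roughly {leakage_cfm(gap_area, dp):.0f} CFM leaking through one door")
```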

It’s important that we all “Mind the gap” in our data center containment doors. In this way we both individually and collectively save an enormous amount of energy, just as the world’s mass transit systems, like ‘The Tube’, do for us each and every day.

Tags: Airflow Management, aisle end doors, Data Center Containment, Data Center Containment Doors, Mind the Gap, Subzero Engineering
What’s new at Subzero Engineering for 2014

At Subzero Engineering we are always looking for new ways to improve our products: making them more energy efficient, keeping them NFPA compliant, and adding more standard features. This year is no exception!

We have been working hard taking our world class products and making them even better. Here are a few of the changes we are making for 2014.

Product Announcements

• New Polar Cap Retractable Roof – The first fully NFPA compliant containment roof system
• New Arctic Enclosure Sizes Available – Two new 48U cabinets available
• Power Management – We now offer a full line of Raritan power products
• New Elite Series Doors – All of our doors have a sleek new design & come with extra features, standard
• New Panel Options – 3MM Acrylic, 4MM Polycarbonate, 3MM FM4910

Click here to learn more.

Tags: Airflow Management, aisle end doors, Cold Aisle Containment, Enclosures, Hot Aisle Containment, Power Management, Raritan, Retractable Roof, server racks, Subzero Engineering