Data Center
Educational Article

Rack Hygiene – Stop the Madness!

Stop the madness — Rack hygiene is a must for every data center!

We’ve all heard of personal hygiene, and perhaps you’ve heard of mental hygiene, but what about rack hygiene? 

Please don’t look this up in a dictionary; it will only confuse you. The word hygiene is now associated with data center cabinets or racks, which is a good thing. Why? Because the word hygiene makes people think of practices that maintain health and prevent disease. The word cleanliness also comes to mind. We all want clean bodies and minds. What about your data center racks?

Think about it this way… What happens to your mental state when you must make a cable change and the cable management system looks like a bowl of spaghetti? How’s your mental health now? Is it possible that your thoughts are moving to the dark side, unclean? Would you like to meet the guy in purchasing who saved the company $200.00 per cabinet but has no clue about the time lost in network cable mining?

Stop the madness! Rack hygiene is a must for every data center. Saving a few dollars on cabinets without cable management systems is nothing short of crazy.

The time wasted on unmanaged cable during the life of the cabinets will easily outweigh the additional costs for cable management.

What we’ve learned over the years is that asking technicians to mastermind a cable management program without the proper tools is like going into battle with a slingshot instead of a rifle.

My mom used to say in her lilting voice, “A place for everything and everything in its place.” Mom was borrowing an expression from the 1800s that has been attributed to many sources. My favorite is the quote from Masterman Ready, or the Wreck of the Pacific (1842), which uses the expression in a nautical context: “In a well-conducted man-of-war every thing is in its place, and there is a place for every thing.”

Boats don’t have much room, so it’s imperative to stow everything in such a way that it can be easily found and is ready for use.

The same can be said for cabinets: there is no room for clutter. A properly organized cabinet goes a long way in new equipment deployment, as well as in tracking down outages.

The point we want to make is that rack hygiene and cable management begin during the purchasing phase of the racks and cabinets. Not all cable management systems are created equal, nor are they all intended for the same purpose. Here are some important variables to consider:

• Vertical cable managers
• Horizontal cabling systems
• Backbone cable installations
• Copper
• Fiber
• Maintenance holes
• Bonding & grounding
• Support facilities such as raceways, cable trays, core holes, slots, and sleeves
• Fastener types
• Wireless systems

Take the time to design a cabinet that makes cable hygiene easy. Without it, your technicians’ mental health will be anything but clean!

Data Center
Educational Article

Seeing is Believing

Subzero Data Center cold and/or hot aisle containment is the best way to lower intake temperature to IT equipment.

What the Evidence Shows in Real Time

The IBM data center efficiency group in New York wanted the same proof. Gerry Weber, an engineering consultant at IBM, along with other monitoring technicians, recorded a time-lapse video that shows the containment installation alongside the temperature changes.

In the video, you can see the temperature drop nearly 14 degrees over a 5-hour period! What the video does not show is that the temperature across the face of the IT intake did not vary by more than one degree.

Subzero Engineering has similar data from numerous data centers, showing an average 10-degree drop in supply temperature. At the same time, intake relative humidity levels increased by over 20%.

What does this mean for data center operators?

  1. Consistent supply temperatures
  2. Increased use of rack space due to consistent supply temperatures at the top of the rack
  3. Predictable supply temperatures that make it easy to anticipate cooling needs when additional thermal load or kW is introduced to the space
  4. Maximized cooling efficiency by adopting ASHRAE’s expanded temperature guidelines
  5. Cooling energy converted to IT equipment capacity

Data Center
Educational Article

The Correlation of Data Center Best Practices and Disaster Recovery

By Subzero Engineering CEO, Larry Mainers

There is a correlation between a data center’s attention to industry best practices and its ability to rebound from disaster.

A significant part of my career has been spent responding to, and assisting with data center disaster recovery. I have witnessed some rather spectacular disaster recovery situations, from the Bank of New York adjacent to the fallen World Trade Center towers, to the government offices that were affected by the Oklahoma City bombing and the river that flowed through the Estes Trucking data center in northern Virginia.

Through the years, I have noticed an interesting trend in the ability to recover high tech sites from disaster: there is a correlation between a data center’s attention to industry best practices and its ability to rebound from the disaster. A company that had instituted a cable management strategy had a greater ability to recover from a flood than one that hadn’t prioritized cable management. Similarly, the company that employed airflow management prior to an event was more prepared during recovery efforts than those that had no defined cooling strategy.

When disaster strikes a data center, one of the most profound effects on employees is the changed appearance of their site; nothing looks the same. Instead of clean walls, floors and ceilings, the facility is often extremely dirty, with missing tiles and computers covered in disaster residue. I have noticed that such stark changes can initially cause managers to delay key disaster recovery decisions, negatively affecting the recovery effort.

In contrast, those managers who are up to date on industry best practices are better able to react during disaster recovery. Such managers are better attuned to the condition of their data centers, and don’t require extra time to recreate the conditions of the data center prior to the disaster. Adhering to industry best practices reduces the reaction time to a disaster, making the recovery process faster and more efficient.

Data center managers who stay up to date with important best practices are better prepared for the unexpected.

Data Center
Product Insight

Subzero’s Polar PDUs

Power management in the data center is all about being connected.

Whether you need to connect past technology to current standards or prepare current technology for what’s coming in the future — you need a Power Management Partner that provides cutting edge products along with the ability to choose functionality. You also need the ability to correlate cooling airflow with power distribution.

Power & Cooling — You can’t have one without the other.

Partner with Subzero for your Power Management Solution.

Subzero Engineering now combines our cutting edge containment and cabinet solutions with power management. These items combined create the most powerful ‘plug and play’ solution in the industry.

Power + InfraStrut Cabinets + Containment

Over 200 configurations of the Polar PDUs are available and can be custom-configured to our Arctic Enclosures, which are then modularly connected to our containment for a precise fit and function. Designed, manufactured, and shipped together means faster delivery and setup.

Polar Power — Turn IT on.

Data Center
Educational Article

The Role of CFDs in Containment

An interview with Gordon Johnson, a certified data center design professional, Data Center Energy Practitioner (DCEP), CFD engineer, and electrical engineer, regarding the use of CFDs and containment.

Data center airflow management engineers have used Computational Fluid Dynamics (CFD) programs for years to determine the complicated movement of airflow in data centers. CFD models pinpoint areas where airflow can be improved in order to provide a consistent cooling solution and energy savings.

Gordon, what is the principal way CFDs are used with regard to containment?

We use CFDs to determine two basic data sets. The first is the baseline, or the current airflow pattern. This initial CFD model shows supply intake temperatures to each cabinet. This model also determines the effectiveness of each AC unit as it relates to airflow volume, return air temperature, delta T, and supply air temperature.

The second model is the proposed design from the CFD engineer, who uses the information from the base model to apply airflow management best practices that separate supply from return airflow. Typically, several models are created in order to adjust airflow volume, set point temperatures, and individual aisle supply volume.

Gordon, are there situations in which the CFD engineer does not recommend containment?

Not really, because the entire basis of airflow management is the full separation of supply and return airflow. Anytime these two airflows mix, there is a loss of energy and of consistent supply temperature to the IT thermal load.

We have seen CFDs used by manufacturers to prove product effectiveness. What are some ways CFDs are made to exaggerate product effectiveness?

Exaggerations usually stem from the principle known as GIGO, short for Garbage In, Garbage Out. This refers to the fact that computers operate by logical processes, and thus will unquestioningly process unintended, even nonsensical input data (garbage in) and produce undesired, often nonsensical output (garbage out).

Let me give you an example. Recently I recreated a CFD model that was used to explain the effectiveness of airflow deflectors. The purpose of the CFD was to show the energy savings difference between airflow deflectors and full containment. We found that certain key data points were inserted into the models that do not reflect industry standards. Key settings were adjusted to fully optimize energy savings without regard to potential changes to the environment. Any potentially adverse effects to the cooling system’s ability to maintain acceptable thermal parameters, due to environmental changes, are not revealed in the CFD model. Thus, the model was operating on a fine line that could not be adjusted without a significant impact on its ability to cool the IT load.

Can you give us any specifics?

The airflow volume was manually changed from 1 kW at 154 CFM to 1 kW at 120 CFM. The industry-standard airflow is 154 CFM per kW. The formula most commonly used is as follows:
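Presumably this refers to the standard sensible-heat relation between IT load and airflow; with a design ΔT of roughly 20 to 21 °F (our assumption), it reproduces the 154 CFM per kW figure cited above:

$$\mathrm{CFM} \;\approx\; \frac{3{,}412 \times \mathrm{kW}}{1.08 \times \Delta T\ (^{\circ}\mathrm{F})}$$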

At 120 CFM per kW, the cooling system has no margin for potential changes to the environment.

Another key area of unrealistic design is the placement of cabinet thermal load and high-volume grates. The base model places high-kW loads in specific, isolated areas surrounded by high-volume grates. What happens, then, if additional load is placed in areas of low-volume airflow? Any changes to the rack kW in areas without high-volume grates could not be accounted for. At the end of the day, any changes to the IT load would require an additional airflow management audit to determine how the changes would affect the cooling solution. Thus, the proposed model is unrealistic, because no data center would propose a cooling solution that requires regular modifications.

Are you recommending a CFD study every time you make changes to the data center thermal load?

No. A full separation of supply and return airflow eliminates the guesswork regarding the effects of air mixing. It also eliminates the need for specific high-volume perforated tiles or grates to be placed in front of high-kW loads. Instead, a CFD model would incorporate expected increases to the aisle thermal load. This falls in line with the “plus 1” approach to cooling. Creating a positive pressure of supply air has many additional benefits, such as lowering IT equipment fan speed and ensuring a consistent supply temperature across the face of the IT intake.

Data centers should not be operated with little margin for changes or adjustments to the thermal load. That is why I always recommend a full containment solution with as close to 0% leakage as possible.  This is always the most efficient way to run a data center, and always yields the best return on investment. The full containment solution, with no openings at the aisle-end doors or above the cabinets, will easily allow the contained cold aisles to operate with a slightly greater supply of air than is demanded.  This in turn ensures that the cabinets in the fully contained aisle have a minimum temperature change from the bottom to the top of the rack, which allows the data center operator to easily choose predictable and reliable supply temperature set points for the cooling units.  The result?  Large energy savings, lower mean time between failures, and a more reliable data center.
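As a rough sketch of what that kind of positive-pressure sizing can look like in practice (the 10 percent margin and the per-rack loads below are illustrative assumptions, not figures from the interview):

```python
# Illustrative sizing of contained cold-aisle supply air.
# Uses the ~154 CFM-per-kW rule of thumb cited earlier; the 10% positive-
# pressure margin and the rack loads are assumptions for this example.

CFM_PER_KW = 154                 # industry rule of thumb referenced above
POSITIVE_PRESSURE_MARGIN = 0.10  # assumed extra supply to keep the aisle positive


def aisle_supply_cfm(rack_loads_kw):
    """Total supply airflow (CFM) for a contained cold aisle, with margin."""
    demand = sum(rack_loads_kw) * CFM_PER_KW
    return demand * (1 + POSITIVE_PRESSURE_MARGIN)


# Example: a 10-rack aisle averaging 6 kW per rack needs roughly 10,164 CFM.
print(round(aisle_supply_cfm([6] * 10)))
```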

What do you recommend as to the use of CFD studies and containment?

It’s important to create both an accurate baseline and a sustainable cooling solution design. This model will give data center operators a basis for an accurate representation of how things are being cooled. The proposed cooling solution can be used in numerous ways:

  • Accurate energy savings estimates
  • Safe set point standards
  • Future cabinet population predictions
  • The ability to cool future kW increases
  • Identification and elimination of potential hot spots

Subzero Engineering endorses accurate and realistic CFD modeling that considers real world situations in order to create real world solutions.

Data Center
Educational Article

The Truth About Fans in the Data Center

And how this influences data center airflow management.

Sorry sports fans… this is not about your favorite team. Instead we are going to explore the fascinating world of mechanical fans.

How many times have you seen vendor illustrations of fans pushing air in long blue lines from perforated raised-floor tiles into the intake of a rack? The truth is that air does not move in such a way. Calculating the airflow induced by one particular fan at any given distance from the fan, at any point across the fan’s face, is a very involved set of calculations.

Traditional thermal designs for fans were originally measured using the jet velocity of water jets, which produced close estimates but inaccurate data. A 2012 study helped create much more accurate formulas for fan outlet velocity and distribution (Gurevich, Eli, and Michael Likov, “Fan Outlet Velocity Distributions and Calculations”).

Generally, volumetric flow rate and distance traveled decrease once contained air enters ambient room air; this is why mechanical air contractors use ductwork or a contained plenum to direct supply air to the thermal load. Increasing the velocity of air in order to reach the thermal load, instead of using a duct system, is considered inefficient.

It’s important to understand the relationship between mechanical air movement from fans and what actually happens to the airflow. The issue with fans is the manufacturer’s stated CFM (cubic feet per minute) capacity and the distance the fan is claimed to be able to move that air. This value reflects what the fan is able to produce in a given test environment. Manufacturer-stated air displacement (CFM) is based on what are called normal temperature and pressure (NTP) conditions.

 The actual volume of air that a fan can displace varies due to two factors:

1) Air density (hot, low density or cold, high density)
2) Air pressure (positive or negative)

Thus, it is important to determine the manufacturer’s test conditions for the fan, and then compare the data to the actual planned environment in which the fan will operate.
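A minimal sketch of why this comparison matters, treating the fan as a constant-volume device and using the ideal gas law plus the sensible-heat relation (the temperatures, the 154 CFM figure, and the constants are illustrative assumptions):

```python
# How much heat a fixed volumetric airflow can carry depends on air density,
# which changes with temperature (and pressure). Illustrative values only.

def air_density(temp_c, pressure_pa=101_325.0):
    """Dry-air density in kg/m^3 from the ideal gas law."""
    R_DRY_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)
    return pressure_pa / (R_DRY_AIR * (temp_c + 273.15))


def heat_removed_kw(cfm, supply_c, delta_c):
    """Sensible heat (kW) carried away by `cfm` of air warming by `delta_c`."""
    CP_AIR = 1.006          # specific heat of air, kJ/(kg*K)
    M3_PER_FT3 = 0.0283168  # cubic metres per cubic foot
    mass_flow = cfm * M3_PER_FT3 / 60.0 * air_density(supply_c)  # kg/s
    return mass_flow * CP_AIR * delta_c


# The same rated 154 CFM removes less heat as the supply air gets warmer
# (and less dense), for the same ~11 C (~20 F) rise across the equipment.
for supply_c in (18, 24, 30):
    print(supply_c, "C ->", round(heat_removed_kw(154, supply_c, 11), 2), "kW")
```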

For example, when considering the installation of a fan in the subfloor to move subfloor air into the cold aisle, the first question that should be addressed is: “What air temperature and head pressure will the fan operate in?”

Why? The temperature of the air will determine its density when confined to a constant volume. In most cases, the subfloor air is denser, which is good.  Thus the more important question will be about the subfloor pressure. It is not unusual to have negative pressure areas in the subfloor due to high velocity air streams. Bernoulli’s principle explains our concern, in that an increase of air speed will result in a decrease of air pressure. Additionally, when two air streams of high velocity air intersect from opposing directions, the result is often a subfloor vortex, resulting in the reversal of current.

So what’s the point? Imagine putting a raised floor fan system over an area with negative pressure. This would negatively affect the fan’s ideal operating conditions.

Consider this: what is the typical reason for using additional fans to move air into the cold aisle? Most likely the unassisted perforated tile or grate is not able to deliver sufficient airflow to the thermal load of the racks. What if this is due to inadequate subfloor pressure? If that is the case, adding a fan-assisted raised floor panel will require taking the fan’s NTP into consideration. It will also drastically and unpredictably impact other areas of the data center as you “rob Peter to pay Paul,” so to speak.

Consider the following subfloor airflow management strategies:

1) Eliminate high velocity air: This will ensure a more balanced delivery of air due to normalized subfloor pressure.
2) Cold Aisle Containment: Instead of designing rack cooling by placing an airflow-producing raised floor tile at the feet of each rack, why not create a cold aisle that is not dependent on perforated tile placement?

Cold aisle containment creates a small room of supply air that can be accessed by all IT equipment fans. Instead of managing each supply raised floor tile, the only requirement is ensuring positive air pressure in the aisle. Cold aisle containment systems provide several benefits: most contained cold aisles will only have a one-degree differential from the bottom to the top of the rack, and the cold aisle containment does not require high air velocity, which can create other airflow management problems, such as bypassing IT equipment intake.

Understanding the NTP conditions of IT equipment cooling fans is an important aspect of data center airflow management. For example, in order to properly adjust CRAC unit set points, it is important to know the temperature at which the supply air’s density will drop below each fan’s NTP conditions. It is possible to raise the supply temperature to a level at which an increase in fan speed is required to make up for the less dense airflow, potentially offsetting any energy savings from the higher cooling set point.

Simply adding fans to cool IT equipment is not a quick fix; it is imperative to first understand why sufficient airflow is not available. It is important to understand the fan’s NTP in the proposed environment, and to see if you can supply IT equipment with consistent airflow by simply separating supply and return air through data center containment. Containment can prevent the unnecessary use of additional electricity that is required to operate fans, saving money and electricity in the long term.

If you really want to understand airflow, CFD is the next step. There’s only so much you can see with sensors and best guesses. Computational Fluid Dynamics gives you a full picture—how air actually moves, where it stalls, and what’s really driving thermal behavior. It’s a powerful tool for turning theory into data-driven action. Learn more about our CFD capabilities.

Data Center
Educational Article

The Truth Behind Data Center Airflow Management: It’s Complicated

What do principles of air movement have to do with data center airflow management?

Does hot air rise? The answer of course is “yes”.

Does hot air fall? The answer is yes again.

What about sideways? Yes!

Heat can move up, down, or sideways, depending on the situation. The idea that hot air has an inherent desire to flow up is a misconception that we in the data center airflow management business would like to see dissipate.

Temperature difference is the major factor with regard to the direction and rate of heat transfer. Because air tends to move towards thermal equilibrium, it is important to maintain physical separation of hot and cold air in data centers; the need for hot and cold air separation is the reason the data center containment industry came into existence. The laws of thermodynamics state that heat moves from areas of higher temperature towards areas of lower temperature. Air is a fluid, so both density and buoyancy come into play. When air is heated, its molecules move around faster, which causes it to expand, and as it expands its density becomes lower. The warmer, lower-density air will rise above the denser, cooler air.

Pressure is another determining factor when looking at air movement. The flow of air from areas of high pressure to areas of low pressure is driven by the pressure gradient force. Equilibrium is also what drives movement between areas of differing pressure, so uninhibited air will continuously move from high to low pressure until equilibrium is reached. This movement towards equilibrium is also known as expansion.

Principles of air movement:

1) Heat transfer:
a. Conduction: heat flows from a higher-temperature region to a lower-temperature region between media that make physical contact (see the standard relations below).
b. Convection: heat transfer due to the movement of a fluid; can be free/natural, or forced.
2) Pressure: air flows from a higher pressure to a lower pressure.
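For reference, the textbook forms of these two heat-transfer mechanisms are shown below (standard physics, not taken from the article); $k$ is thermal conductivity, $h$ the convective heat transfer coefficient, and $A$ the contact area:

$$q_{\text{conduction}} = -kA\,\frac{dT}{dx} \qquad\qquad q_{\text{convection}} = hA\,(T_{\text{surface}} - T_{\text{air}})$$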


What does this have to do with data center airflow management?

The data center containment industry has been inundated with graphics depicting airflow, most of which show large, sweeping lines indicating the flow of air. In most cases, the airflow depicted is a result of a mechanical device, usually a fan. The data presented by these graphics tends to lead one to believe that mechanically induced airflow will sufficiently separate hot exhaust air from cold intake air. In real-world scenarios, such fan-driven air curtains are inefficient and ineffective.

Modern mechanical air conditioning systems rely on four-sided duct systems to deliver supply air to the source of the heat load, and the return air is moved by the same means. This is the only way to ensure the separation of supply and return airflow. Systems administrators and building managers should be skeptical of airflow management systems that require an increase in energy to accomplish air separation. Instead, it is best to apply the simplest principles of airflow when designing a system aimed at full separation of supply and return airflow.

Data Center
Educational Article

Extending the Capacity of the Data Center Using Hot or Cold Aisle Containment

What correlation does consistent supply air across the face of the rack have to do with increased data center capacity?

Hot or cold aisle containment can significantly increase the capacity of a data center because consistent intake temperatures across the rack face allow every U to be fully populated.

Additionally, when cooling energy can be converted to IT equipment capacity, this too can extend the life of a data center that is running out of power.

Problem Statement – Air Stratification
Most data centers without containment have air stratification. Air stratification occurs when supply and return air mix, creating several temperature layers along the intake face of the rack. It is not uncommon for the temperature at the bottom of the rack to be 8 to 10 degrees colder than at the top. As a result, many data centers have implemented policies that do not allow the top 6 to 8 U’s to be populated. This can decrease the data center’s IT equipment capacity by 16%. Capacity from a space perspective is one thing, but when the unpopulated U’s would otherwise hold high-density systems, the lost capacity is amplified.
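For a standard 42U rack (the rack height is our assumption, not stated above), the arithmetic behind that figure is simply:

$$\frac{7\ \mathrm{U}}{42\ \mathrm{U}} \approx 16.7\%$$

of the rack’s space stranded when the top 6 to 8 U cannot be populated.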

Data Center
Success Story

Datacenter Revamps Cut Energy Costs At CenturyLink

by Timothy Prickett Morgan — EnterpriseTech Datacenter Edition

Subzero Engineering’s Containment Solutions contributed to CenturyLink’s hefty return on investment.

It is probably telling that these days datacenter managers think of the infrastructure under their care more in terms of the juice it burns and not by counting the server, storage, and switch boxes that consume that electricity and exhale heat. Ultimately, that power draw is the limiting factor in the scalability of the datacenter and using that power efficiently can boost processing and storage capacity and also drop profits straight to the bottom line.

Three years ago, just as it was buying public cloud computing provider Savvis for $2.5 billion, CenturyLink took a hard look at its annual electric bill, which was running at $80 million a year across its 48 datacenters. At the time, CenturyLink had just finished acquiring Qwest Communications, giving it a strong position in the voice and data services for enterprises and making it the third largest telecommunications company in the United States. CenturyLink, which is based in Monroe, Louisiana, also provides Internet service to consumers and operates the PrimeTV and DirectTV services; it has 47,000 employees and generated $18.1 billion in revenues in 2013.

One of the reasons why CenturyLink has been able to now expand to 57 datacenters – it just opened up its Toronto TR3 facility on September 8 – comprising close to 2.6 million square feet of datacenter floor space is that it started tackling the power and cooling issues three years ago.

The facilities come in various shapes and sizes, explains Joel Stone, vice president of global data center operations for the CenturyLink Technology Solutions division. Some are as small as 10,000 square feet, others are more than ten times that size. Two of its largest facilities are located in Dallas, Texas, weighing in at 110,000 and 153,700 square feet and both rated at 12 megawatts. The typical facility consumes on the order of 5 megawatts. CenturyLink uses some of that datacenter capacity to service its own telecommunications and computing needs, but a big chunk of that power goes into its hosting and cloud businesses, which in turn provide homes for the infrastructure of companies from every industry and region. CenturyLink’s biggest customers come from the financial services, healthcare, online games, and cloud businesses, Stone tells EnterpriseTech. Some of these customers have only one or two racks of capacity, while others contract for anywhere from 5 megawatts to 7 megawatts of capacity. Stone’s guess is that all told, the datacenters have hundreds of thousands of servers, but again, that is not how CenturyLink, or indeed any datacenter facility provider, is thinking about it. What goes in the rack is the customers’ business, not CenturyLink’s.

“We are loading up these facilities and trying to drive our capacity utilization upwards,” says Stone. And the industry as a whole does not do a very good job of this. Stone cites statistics from the Uptime Institute, whose surveys of colocation facilities, wholesale datacenter suppliers, and enterprises show that they actually use only around 50 percent of the power that comes into their facilities. “We are trying to figure out how we can get datacenters packed more densely. Space is usually the cheapest part of the datacenter, but the power infrastructure and the cooling mechanicals are where the costs reside unless you are situated in Manhattan where space is such a premium. We are trying to drive our watts per square foot higher.”

While server infrastructure is getting more powerful in terms of core counts and throughput, and storage is getting denser and, in the case of flash-based or hybrid flash-disk arrays, faster, the workloads are growing faster still, and therefore the overall power consumption of the infrastructure as a whole in the datacenter continues to grow.

“People walk into datacenters and they have this idea that they should be cold – but they really shouldn’t be,” says Stone. “Servers operate optimally in the range of 77 to 79 degrees Fahrenheit. If you get much hotter than that, then the server fans have to kick on or you might have to move more water or chilled air. The idea is to get things optimized. You want to push as little air and flow as little water as possible. But there is no magic bullet that will solve this problem.”

Companies have to do a few things at the same time to try to get into that optimal temperature zone, and CenturyLink was shooting for around 75 degrees at the server inlet compared to 68 degrees in the initial test in the server racks at a 65,000 square foot datacenter in Los Angeles. Here’s a rule of thumb: for every degree Fahrenheit that the server inlet temperature was raised in the datacenter, it cut the power bill by 2 percent. You can’t push it too far, of course, or you will start impacting the reliability of the server equipment. (The supplied air temperature in this facility was 55 degrees and the server inlet temperature was 67 degrees before the energy efficiency efforts got under way.)
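Applying that rule of thumb to CenturyLink’s own targets gives a rough linear estimate (the arithmetic is illustrative, not a figure from the article):

$$(75 - 68)\ ^{\circ}\mathrm{F} \times 2\%/^{\circ}\mathrm{F} \approx 14\%$$

off the power bill.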

The first thing is to control the airflow in the datacenter better, and the second is to measure the temperature of the air more accurately at the server so cooling can be maintained in a more balanced way across the facility.

CenturyLink started work on hot aisle and cold aisle containment in its facilities three and a half years ago, and the idea is simple enough: keep the hot air coming from the back of the racks from mixing with the cold air coming into the datacenter from chillers. The containment project is a multi-year, multi-million dollar effort, and CenturyLink is working with a company called SubZero Engineering to add containment to its aisles. About 95 percent of its facilities now have some form of air containment, and most of them are doing hot aisle containment.

“If we can isolate the hot aisles, that gives us a little more ride through from the cold aisles if we were to have some sort of event,” Stone explains. But CenturyLink does have some facilities that, just by the nature of their design, do cold aisle containment instead. (That has the funny effect of making the datacenter feel hotter because people walk around the hot aisles instead of the cold ones and sometimes gives the impression that these are more efficient. But both approaches improve efficiency.) The important thing about the SubZero containment add-ons to rows of racks, says Stone, is that they are reusable and reconfigurable, so as customers come and go in the CenturyLink datacenters they can adjust the containment.

Once the air is contained, then you can dispense cold air and suck out hot air on a per-row basis and fine-tune the distribution of air around the datacenter. But to do that, you need to get sensors closer to the racks. Several years ago, it was standard to have temperature sensors mounted on the ceiling, walls, or columns of datacenters, but more recently, after starting its aisle containment efforts, CenturyLink tapped RF Code to add its wireless sensor tags to the air inlets on IT racks to measure their temperature precisely rather than using an average of the ambient air temperature from the wall and ceiling sensors. This temperature data is now fed back into its building management system, which comes from Automated Logic Control, a division of the United Technologies conglomerate. (Stone said that Eaton and Schneider Electric also have very good building management systems, by the way.)

The energy efficiency effort doesn’t stop here. CenturyLink is now looking at retrofitting its CRAC and CRAH units – those are short for Computer Room Air Conditioner and Computer Room Air Handler – with variable speed drives. Up until recently, CRAC and CRAH units were basically on or off, but now they can provide different levels of cooling. Stone says that running a larger number of CRAH units at lower speeds provides better static air pressure in the datacenter and uses less energy than having a small number of larger units running faster. (In the latter case, extra cooling capacity is provided through extra units, and in the former it is provided by ramping up the speed of the CRAH units rather than increasing their number.) CenturyLink is also looking at variable speed pumps and replacing cooling tower fans in some facilities.
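Stone’s point about running more CRAH units at lower speeds follows from the fan affinity laws, under which fan power scales roughly with the cube of fan speed (an idealized illustration; the specific numbers are ours):

$$P \propto N^{3} \quad\Rightarrow\quad 2 \times \left(\tfrac{1}{2}\right)^{3} = 25\%$$

That is, two units at half speed move about the same total air as one unit at full speed while drawing roughly a quarter of the fan power.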

“We are taking a pragmatic, planned approach across our entire footprint, and we have gone into the areas where we are paying the most for power or have the highest datacenter loads and tackling those facilities first,” says Stone. The energy efficiency efforts in the CenturyLink datacenters have to have a 24 month ROI for them to proceed.

In its Chicago CH2 datacenter (one of three around that Midwestern metropolis and one of the largest that CenturyLink runs in its fleet of facilities), it did aisle containment, RF Code sensors, variable speed CRAC units, variable speed drives on the pumps, and replaced the cooling tower fans with more aerodynamic units that ran slower and yet pulled more air through the cooling towers. This facility, which is located out near O’Hare International Airport, has 163,747 square feet of datacenter space, a total capacity of 17.6 megawatts, and can deliver 150 watts per square foot.

CenturyLink reduced the load in that CH2 facility by 7.4 million kilowatt-hours per year, and Stone just last month collected a $534,000 rebate check from Commonwealth Edison, the power company in the Chicago area. All of these upgrades in the CH2 facility cost roughly $2.4 million, and with the power savings the return on investment was on the order of 21 months – and that is before the rebate was factored in.

Data Center
Product Insight

Don’t cage your computer!

Subzero Engineering is partnering with Colo providers in creating cageless solutions for their customers.

Here’s what we are doing: we have combined Subzero’s aisle end doors, with auto-close and locking features, with airflow management cabinets that lock individually, creating a safe colo environment that does not require cages.

• Locking Aisle End Doors
• Locking Cabinets
• Auto Close
• Complete Airflow Separation

Advances in containment and cabinets have created a fully secure colo environment without traditional wire cages. Instead, secure aisle end doors, retractable roofs, and biometric locks create an easy-to-deploy, secure space for IT equipment.

A typical deployment includes:

• Locks ranging from simple keyed to biometric
• Auto-closing doors that prevent accidental access
• Locking aisle end doors
• Locking cabinets
• Retractable roof system