Posts

Defining Your Edge – Re-Thinking the Concept of Micro Data Center Designs

By Sam Prudhomme, Vice President of Sales & Marketing, Subzero Engineering

For many years, the industry has been engaged in a deep discussion about the concept of edge computing. Yet the definition varies from vendor to vendor, creating confusion in the market, especially where end-users are concerned. In fact, within more traditional or conservative sectors, some customers have yet to truly understand how the edge relates to them, meaning the discussion needs to change, and fast.

According to Gartner, “the edge is the physical location where things and people connect with the networked, digital world, and by 2022, more than 50% of enterprise-generated data will be created and processed outside the data center or cloud.” All of this data invariably needs a home, and depending on the type of data that is secured, and whether it’s business or mission-critical, the design and location of that home will vary.

Autonomous vehicles are but one example of an automated, low-latency and data-dependent application. The real-time control data required to operate the vehicle is created, processed and stored via two-way communications at a number of local and roadside levels. On a city-wide basis, the data produced by each autonomous vehicle will be processed, analyzed, stored and transmitted in real-time, in order to safely direct the vehicle and manage the traffic. Yet on a national level, the data produced by millions of AVs could be used to shape transport infrastructure policy and redefine the automotive landscape globally.

Each of these processing, analysis, and storage locations requires a different type of facility to support its demand. Right now, data centers designed to meet the needs of standard or enterprise business applications are plentiful. However, data centers designed for dynamic, real-time data delivery, provisioning, processing, and storage are in short supply.

That’s partly because of the uncertainty over which applications will demand such infrastructure and, importantly, over what sort of timeframe. However, there’s also the question of flexibility. Many of the existing micro data center solutions are unable to meet the demands of edge or, more accurately, localized, low-latency applications, which also require high levels of agility and scalability. This is due to their pre-determined or specified approach to design and infrastructure components.

Traditionally, small-scale edge applications have been deployed in pre-populated, containerized solutions. A customer is often required to conform to a standard shape or size, and there’s no flexibility in terms of modularity, components, or make-up. So how do we change the thinking?

A Flexible Edge

Standardization has, in many respects, been crucial to our industry. It offers a number of key benefits, including the ability to replicate systems predictably across multiple locations. But when it comes to the edge, some standardized systems aren’t built for the customer – they’re a product of vendor collaboration, one that’s also accompanied by high costs and long lead times.

On the one hand, having a box with everything in it can undoubtedly solve some pain points, especially where integration is concerned. But what happens if the customer has its own alliances, or may not need all of the components? What happens if they run out of capacity in one site? Those original promises of scalability or flexibility disappear, leaving the customer with just one option – to buy another container. One might consider that such rigidity, when it comes to ‘standardization’, can often be detrimental to the customer.

There is, however, the possibility that such modular, customizable, and scalable micro data center architectures can meet the end user’s requirements perfectly, allowing end-users to truly define and embrace their edge.

Is There a Simpler Way?

Today, forecasting growth is a key challenge for customers. With demands increasing to support a rapidly developing digital landscape, many will have a reasonable idea of what capacity is required today. But predicting how it will grow over time is far more difficult, and this is where modularity is key.

For example, pre-pandemic, a content delivery network with capacity located near large user groups may have found itself swamped with demand in the days of lockdown. Today, it may be considering how to scale up local data center capacity quickly and incrementally to meet customer expectations, without deploying additional infrastructure across more sites.

There is also the potential of 5G-enabled applications, so how does one define what’s truly needed to optimize and protect the infrastructure in, for example, a manufacturing environment? Should an end-user purchase a containerized micro data center because that’s what’s positioned as the ideal solution? Or should they customize and engineer a solution that can grow incrementally with demands? Or would it be more beneficial to deploy a single room that offers a secure, high-strength, and walk-able roof that can host production equipment?

The point here is that when it comes to micro data centers, a one-size-fits-all approach does not work. End-users need the ability to choose their infrastructure based on their business demands – whether they be in industrial manufacturing, automotive, telco, or colocation environments. But how can users achieve this?

Infrastructure Agnostic Architectures

At Subzero Engineering, we believe that vendor-agnostic, flexible micro data centers are the future for the industry. For years we’ve been adding value to customers, and building containment systems around their needs, without forcing their infrastructure to fit into boxes.

We believe users should have the flexibility to utilize their choice of best-in-class data center components, including the IT stack, the uninterruptible power supply (UPS), cooling architecture, racks, cabling, or fire suppression system. So by taking an infrastructure-agnostic approach, we give customers the ability to define their edge, and use resilient, standardized, and scalable infrastructure in a way that’s truly beneficial to their business.

By taking this approach, we’re also able to meet demands for speed to market, delivering a fully customized solution to site within six weeks. Furthermore, by adopting a modular architecture that includes a stick-built enclosure, the ability to incorporate a cleanroom, and a walk-able, mezzanine roof, users can scale as demands require, without the need to deploy additional containerized systems.

This approach alone offers significant benefits, including a 20-30% cost saving compared with conventional ‘pre-integrated’ micro data center designs.

For too long now, our industry has been shaped by vendors that have forced customers to base decisions on systems which are constrained by the solutions they offer. We believe now is the time to disrupt the market, eliminate this misalignment, and enable customers to define their edge as they go.

By providing customers with the physical data center infrastructure they need, no matter their requirements, we can help them plan for tomorrow. As I said, standardization can offer many benefits, but not when it’s detrimental to the customer.

 

Click here to download a pdf version of this article.

Subzero Engineering Launches ‘Essential Micro Data Center’, Allowing Users to Define Their Own Edge

  • Turnkey solution is building and infrastructure agnostic, providing 20%-30% cost-savings compared to other solutions in the market.
  • Standardized solution meets demanding timescales, shipping within as little as 36 hours, while fully customized micro data centers can be delivered, installed and operational in 4-6 weeks.
  • Provides flexible micro data center system for colocation, 5G, retail, enterprise and industrial applications.

 

October 13, 2021 – Subzero Engineering, a leading provider of data center containment solutions, has today introduced its Essential Micro Data Center, the world’s first modular, vendor-agnostic and truly flexible micro data center architecture. Available for order in the United States of America, the United Kingdom and Europe, the Essential Micro Data Center meets customer demands for a standardized, premium-quality, cost-competitive and quick-to-install edge infrastructure system that provides a reduced total cost of ownership of between 20% and 30%.

Based on its Essential Series and AisleFrame product lines, the Essential Micro Data Center is a small-footprint, on-premises data center, engineered for distributed and remote infrastructure environments. Its modular architecture includes white-glove installation and support, power, cooling, infrastructure conveyance and containment, all of which are housed within a pre-fabricated, factory-assembled, modular room and shipped flat-packed to site.

With increased requirements for real-time data processing, low latency, greater security and automation, the Essential Micro Data Center ensures predictability and performance for distributed applications. Furthermore, its customizable, modular design offers a fast, flexible and easy-to-build micro data center system, perfectly suited for colocation, 5G, retail, enterprise and industrial environments.

 

Strength, security, customization

The Essential Micro Data Center comprises two parts: a physically secure, modular room containing critical power and cooling infrastructure, and Subzero’s high-strength AisleFrame. Using this approach, the Essential Micro Data Center can support a variety of load requirements and includes built-in, customizable containment, integrated with self-supporting ceiling modules and insert panels available in ABS, acrylic, polycarbonate, aluminum or glass.

The pre-fabricated system can accommodate all ladder racking, busway, fiber trays and infrastructure necessary for micro data center applications, and offers support for hot or cold aisle applications, regardless of cooling methodology. For example, the high-strength ceiling can support a range of cooling systems, including overhead Computer Room Air Conditioning (CRAC) units. This feature offers complete customization for users who can deploy their infrastructure in aisle, row or rack configurations.

Further, its flexible, vendor-agnostic design provides users with the ability to custom-specify their own choice of power and cooling infrastructure. This approach helps overcome the challenge of having to use inflexible, pre-specified power and cooling systems in a containerized system, while retaining the ability to standardize, repeat and scale quickly, as business requirements change.

“The Essential Micro Data Center’s flexible design makes it a perfect fit for customers searching for an alternative to the obstinate and expensive pre-integrated solutions currently available,” said Sam Prudhomme, Vice President of Sales and Marketing, Subzero Engineering. “Our vendor-agnostic approach to component specification, combined with rapid speed of installation and lower TCO, ensures customers can truly define and scale the edge on their own terms.”

The Subzero Engineering Essential Micro Data Center joins its recently launched Essentials Series, demonstrating the company’s commitment to delivering customer-focused, efficient and precision-engineered digital infrastructure solutions.

To learn more, click here.

AisleFrame from Subzero – The Scalable Containment System

Build Faster, Build Better

AisleFrame by Subzero takes a simple approach to a typically complex design. The flexible system is designed to provide not only a complete aisle containment solution, but also a sleek, floor-supported platform that serves as the infrastructure carrier for busway, cable tray, and fiber runner.

AisleFrame delivers an endless array of fixing options for cable tray, fiber runner, busway, and more.

Fast design, manufacturing, and installation times are all expertly handled by Subzero.

AisleFrame is a completely customizable freestanding support structure, built to support your critical environment from the ground up.

Subzero will ensure your AisleFrame addresses all your specific deployment needs from Engineering to Implementation.

For more information on AisleFrame click here.

Let Subzero help make your data center more efficient.

Our team of experts can design a custom data center solution that can be installed in just a few weeks.

• Hot & Cold Aisle Containment
• Isolated Equipment Containment
• CFDs (Computational Fluid Dynamics)
• Power
• Cabinets
• Energy Assessments

Cabinets

The Subzero Arctic Enclosure was designed to support the dynamic needs of today’s data centers, with airflow management handled out of the box. This enclosure can support all types of data center demands, from low density to high density, and from data closets to enterprise data centers.

• 81%+ Open Perforation Pattern
• All 5 Airflow Areas Sealed
• Chimney Cabinets Available
• Static Weight Load Capacity: 3,000 lbs.
• Dynamic Weight Load Capacity: 2,400 lbs.
• Available in white, black & color matching

InfraStrut Technology
Four sides of 1-5/8″ strut on the top of each cabinet can connect cable trays, power equipment, and containment systems. Spring nuts are used for drill-free, easy installation.

 

Power

Subzero now combines our cutting-edge containment and cabinet solutions with power management, creating the most powerful ‘plug and play’ solution in the industry. The Polar PDUs can be custom configured in over 200 configurations.

• Remote Monitoring and Alarms
• Easy To Read Central Display
• Secure Array – Connect Up To 32 PDUs
• Quick and Easy Network Setup
• HAC Ready – High Temperature Rating Up To 149°F
• Basic Polar PDU
• Monitored Polar PDU
• Monitored Plus Polar PDU
• Switched Polar PDU
• Switched Plus Polar PDU

 

Containment

Subzero’s cutting-edge containment is custom built to meet our customers’ most daunting challenges. Hot Aisle Containment, Cold Aisle Containment, Isolated Equipment Containment, Doors, Roofs, Retractable Roofs, Floor Panels, Above Rack Panels… We have your data center covered.
Containment Benefits

• Reduced Energy Consumption
• Increased Rack Population
• Increased Equipment Up-time
• Longer Hardware Life
• Increased Cooling Capacity
• Consistent, Acceptable Supply Temperature to IT Intake
• More Power Available for IT Equipment

Load IT Up. Turn IT On. Keep IT Cool.

Visit us at Data Center World
March 14 – 18, 2016 • Las Vegas • Booth #601

Cabinets
The Subzero Arctic Enclosure was designed to support the dynamic needs of today’s data centers, with airflow management handled out of the box. This enclosure can support all types of data center demands, from low density to high density, and from data closets to enterprise data centers.

Power
Subzero now combines our cutting-edge containment and cabinet solutions with power management, creating the most powerful ‘plug and play’ solution in the industry. The Polar PDUs can be custom configured in over 200 configurations.

Containment
Subzero’s cutting-edge containment is custom built to meet our customers’ most daunting challenges. Hot Aisle Containment, Cold Aisle Containment, Isolated Equipment Containment, Doors, Roofs, Retractable Roofs, Floor Panels, Above Rack Panels… We have your data center covered.

While at the show, join us for the following presentations:
PIS 4:
NFPA Compliant Containment
Larry Mainers
Wednesday, March 16 10:45 – 11:45

PCE 2.1:
The Co-relationship of Containment and Computational Fluid Dynamics
Gordon Johnson
Tuesday, March 15 9:30-10:30

Seeing is Believing

Subzero Data Center cold and/or hot aisle containment is the best way to lower intake temperature to IT equipment.

What the Evidence Shows in Real Time

The IBM data center efficiency group in New York wanted the same proof. Gerry Weber, an engineering consultant at IBM, along with other monitoring technicians, recorded a time-lapse video that shows the containment install alongside the temperature changes.

In the video, you can see the temperature dropped nearly 14 degrees in a 5-hour period! What the video does not show is that the temperature across the face of the IT intake did not vary by more than one degree.

Subzero Engineering has similar data from numerous data centers, showing an average 10-degree drop in supply temperature. At the same time, intake relative humidity levels increased by over 20%.

What does this mean for data center operators?

  1. Consistent supply temperatures.
  2. Increased use of rack space, thanks to consistent supply temperatures at the top of the rack.
  3. Predictable supply temperatures, making it easy to anticipate cooling solutions when an increase in thermal load or kW is introduced to the space.
  4. Maximized cooling efficiency by adopting ASHRAE’s recommended increase in supply temperature.
  5. Cooling energy converted into power available for IT equipment.

The Role of CFDs in Containment

Data center airflow management engineers have used Computational Fluid Dynamics (CFD) programs for years to determine the complicated movement of airflow in data centers. CFD models pinpoint areas where airflow can be improved in order to provide a consistent cooling solution and energy savings.

We interviewed Gordon Johnson, a certified data center design professional, Data Center Energy Practitioner (DCEP), and CFD and electrical engineer, regarding the use of CFDs and containment.


Gordon, what is the principal way CFDs are used with regard to containment?

We use CFDs to determine two basic data sets. The first is the baseline, or the current airflow pattern. This initial CFD model shows supply intake temperatures to each cabinet. This model also determines the effectiveness of each AC unit as it relates to airflow volume, return air temperature, delta T, and supply air temperature.

The second model is the proposed design of the CFD engineer who uses the information from the base model to enact airflow management best practices to separate supply from return airflow. Typically several models are created in order to adjust airflow volume, set point temperatures, and adjust individual aisle supply volume.


Gordon, are there situations in which the CFD engineer does not recommend containment?

Not really, because the entire basis of airflow management is the full separation of supply and return airflow. Anytime these two airflows mix there is a loss of energy and consistent supply temperature to the IT thermal load.

We have seen CFDs used by manufacturers to prove product effectiveness. What are some ways CFDs are made to exaggerate product effectiveness?

Exaggerations usually stem from the principle known as GIGO, short for Garbage In, Garbage Out. This refers to the fact that computers operate by logical processes, and thus will unquestioningly process unintended, even nonsensical input data (garbage in) and produce undesired, often nonsensical output (garbage out).

Let me give you an example. Recently I recreated a CFD model that was used to explain the effectiveness of airflow deflectors. The purpose of the CFD was to show the energy savings difference between airflow deflectors and full containment. We found that certain key data points were inserted into the models that do not reflect industry standards. Key settings were adjusted to fully optimize energy savings without regard to potential changes to the environment. Any potentially adverse effects to the cooling system’s ability to maintain acceptable thermal parameters, due to environmental changes, are not revealed in the CFD model. Thus, the model was operating on a fine line that could not be adjusted without a significant impact on its ability to cool the IT load.


Can you give us any specifics?

The airflow volume was manually changed from 1 kW at 154 CFM to 1 kW at 120 CFM. The industry-standard airflow is 154 CFM per kW. The formula most commonly used is as follows:

CFM = (3,412 × kW) ÷ (1.085 × ΔT), with ΔT being the air temperature rise across the IT equipment in °F

120 CFM airflow does not give the cooling system any margin for potential changes to the environment.
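
As a rough illustration of why 120 CFM leaves so little headroom, the sketch below inverts that formula to show the temperature rise each airflow rate implies; the 3,412 BTU/hr-per-kW conversion and the 1.085 sensible-heat factor are standard constants, not values taken from the original model.

# Back-of-envelope check of the airflow figures discussed above.
BTU_PER_KW_HR = 3412          # heat output of 1 kW of IT load, in BTU/hr
SENSIBLE_HEAT_FACTOR = 1.085  # common factor for air near sea-level density

def delta_t_f(load_kw, airflow_cfm):
    """Temperature rise (deg F) across IT equipment for a given load and airflow."""
    return (BTU_PER_KW_HR * load_kw) / (SENSIBLE_HEAT_FACTOR * airflow_cfm)

for cfm in (154, 120):
    print(f"1 kW at {cfm} CFM -> delta-T of roughly {delta_t_f(1, cfm):.1f} deg F")

# 1 kW at 154 CFM -> delta-T of roughly 20.4 deg F
# 1 kW at 120 CFM -> delta-T of roughly 26.2 deg F

In other words, cutting the airflow to 120 CFM per kW forces the air to absorb the same heat over roughly a 26°F rise instead of about 20°F, which is exactly the missing margin described above.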

Another key area of unrealistic design is the placement of cabinet thermal load and high-volume grates. The base model places high-kW loads in specific, isolated areas surrounded by high-volume grates. What happens, then, if additional load is placed in areas of low-volume airflow? Any changes to the rack kW in areas without high-volume grates could not be accounted for. At the end of the day, any changes to the IT load would require an additional airflow management audit to determine what changes would affect the cooling solution. Thus, the proposed model is unrealistic, because no data center would propose a cooling solution that requires regular modifications.


Are you recommending a CFD study every time you make changes to the data center thermal load?

No. A full separation of supply and return airflow eliminates the guesswork with regard to the effect of air mixture. It also eliminates the need for specific high-volume perforated tiles or grates to be placed in front of high-kW loads. Instead, a CFD model would incorporate expected increases to the aisle thermal load. This falls in line with the “plus 1” kind of approach to cooling. Creating a positive pressure of supply air has many additional benefits, such as lowering IT equipment fan speed, and ensuring consistent supply temperature across the face of the IT intake.

Data centers should not be operated with little margin for changes or adjustments to the thermal load. That is why I always recommend a full containment solution with as close to 0% leakage as possible.  This is always the most efficient way to run a data center, and always yields the best return on investment. The full containment solution, with no openings at the aisle-end doors or above the cabinets, will easily allow the contained cold aisles to operate with a slightly greater supply of air than is demanded.  This in turn ensures that the cabinets in the fully contained aisle have a minimum temperature change from the bottom to the top of the rack, which allows the data center operator to easily choose predictable and reliable supply temperature set points for the cooling units.  The result?  Large energy savings, lower mean time between failures, and a more reliable data center.


What do you recommend as to the use of CFD studies and containment?

It’s important to create both an accurate baseline and a sustainable cooling solution design. This model will give data center operators a basis for an accurate representation of how things are being cooled. The proposed cooling solution can be used in numerous ways:

  • Accurate energy savings
  • Safe set point standards
  • Future cabinet population predictions
  • The ability to cool future kW increases
  • Identification and elimination of potential hot spots

Subzero Engineering endorses accurate and realistic CFD modeling that considers real world situations in order to create real world solutions.

Extending the Capacity of the Data Center Using Hot or Cold Aisle Containment

What correlation does consistent supply air across the face of the rack have to do with increased data center capacity?

Hot or cold aisle containment can significantly increase the capacity of a data center, because consistent intake temperatures across the rack face allow all U’s to be fully populated.

Additionally, when cooling energy can be converted into power for IT equipment, this too can extend the life of a data center that is running out of power.

Problem Statement – Air Stratification
Most data centers without containment have air stratification. Air stratification occurs when supply and return air mix. This creates several temperature layers along the intake face of the rack. It is not uncommon for the temperature at the bottom of the rack to be 8 to 10 degrees colder than the top. As a result, many data centers have implemented policies that do not allow the top 6 to 8 U’s to be populated. This can decrease the data center’s IT equipment capacity by 16%. Capacity from a space perspective is one thing, but when the unpopulated U’s are potentially high-density systems, the lost space is amplified.
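
As a quick sanity check on that 16% figure, here is a minimal sketch of the capacity lost when the top U’s are left empty, assuming standard 42U racks (the rack height is an assumption; it is not stated above):

# Fraction of rack capacity lost when the top Us are left unpopulated,
# assuming a standard 42U rack.
RACK_HEIGHT_U = 42

for empty_top_u in (6, 7, 8):
    lost = empty_top_u / RACK_HEIGHT_U
    print(f"Top {empty_top_u}U empty -> {lost:.0%} of rack capacity unused")

# Top 6U empty -> 14% of rack capacity unused
# Top 7U empty -> 17% of rack capacity unused
# Top 8U empty -> 19% of rack capacity unused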

Click here to read the latest Subzero Engineering White Paper “Extending the Capacity of a Data Center”.

 

Datacenter Revamps Cut Energy Costs At CenturyLink

September 3, 2014 by Timothy Prickett Morgan — EnterpriseTech Datacenter Edition— http://www.enterprisetech.com/2014/09/03/datacenter-revamps-cut-energy-costs-centurylink/ 

It is probably telling that these days datacenter managers think of the infrastructure under their care more in terms of the juice it burns and not by counting the server, storage, and switch boxes that consume that electricity and exhale heat. Ultimately, that power draw is the limiting factor in the scalability of the datacenter and using that power efficiently can boost processing and storage capacity and also drop profits straight to the bottom line.

Three years ago, just as it was buying public cloud computing provider Savvis for $2.5 billion, CenturyLink took a hard look at its annual electric bill, which was running at $80 million a year across its 48 datacenters. At the time, CenturyLink had just finished acquiring Qwest Communications, giving it a strong position in the voice and data services for enterprises and making it the third largest telecommunications company in the United States. CenturyLink, which is based in Monroe, Louisiana, also provides Internet service to consumers and operates the PrimeTV and DirectTV services; it has 47,000 employees and generated $18.1 billion in revenues in 2013.

One of the reasons why CenturyLink has been able to now expand to 57 datacenters – it just opened up its Toronto TR3 facility on September 8 – comprising close to 2.6 million square feet of datacenter floor space is that it started tackling the power and cooling issues three years ago.

The facilities come in various shapes and sizes, explains Joel Stone, vice president of global data center operations for the CenturyLink Technology Solutions division. Some are as small as 10,000 square feet, others are more than ten times that size. Two of its largest facilities are located in Dallas, Texas, weighing in at 110,000 and 153,700 square feet and both rated at 12 megawatts. The typical facility consumes on the order of 5 megawatts. CenturyLink uses some of that datacenter capacity to service its own telecommunications and computing needs, but a big chunk of that power goes into its hosting and cloud businesses which in turn provide homes for the infrastructure of companies from every industry and region. CenturyLink’s biggest customers come from the financial services, healthcare, online games, and cloud businesses, Stone tells EnterpriseTech. Some of these customers have only one or two racks of capacity, while others contract for anywhere from 5 megawatts to 7 megawatts of capacity. Stone’s guess is that all told, the datacenters have hundreds of thousands of servers, but again, that is not how CenturyLink, or indeed any datacenter facility provider, is thinking about it. What goes in the rack is the customers’ business, not CenturyLink’s.

“We are loading up these facilities and trying to drive our capacity utilization upwards,” says Stone. And the industry as a whole does not do a very good job of this. Stone cites statistics from the Uptime Institute, whose surveys of colocation facilities, wholesale datacenter suppliers, and enterprises show that they actually use only around 50 percent of the power that comes into their facilities. “We are trying to figure out how we can get datacenters packed more densely. Space is usually the cheapest part of the datacenter, but the power infrastructure and the cooling mechanicals are where the costs reside unless you are situated in Manhattan where space is such a premium. We are trying to drive our watts per square foot higher.”

While server infrastructure is getting more powerful in terms of core counts and throughput, and storage is getting denser and, in the case of flash-based or hybrid flash-disk arrays, faster, the workloads are growing faster still, and therefore the overall power consumption of the infrastructure as a whole in the datacenter continues to grow.

“People walk into datacenters and they have this idea that they should be cold – but they really shouldn’t be,” says Stone. “Servers operate optimally in the range of 77 to 79 degrees Fahrenheit. If you get much hotter than that, then the server fans have to kick on or you might have to move more water or chilled air. The idea is to get things optimized. You want to push as little air and flow as little water as possible. But there is no magic bullet that will solve this problem.”

Companies have to do a few things at the same time to try to get into that optimal temperature zone, and CenturyLink was shooting for around 75 degrees at the server inlet compared to 68 degrees in the initial test in the server racks at a 65,000 square foot datacenter in Los Angeles. Here’s a rule of thumb: For every degree Fahrenheit that the server inlet temperature was raised in the datacenter, it cut the power bill by 2 percent. You can’t push it too far, of course, or you will start impacting the reliability of the server equipment. (The supplied air temperature in this facility was 55 degrees and the server inlet temperature was 67 degrees before the energy efficiency efforts got under way.)
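
As a rough worked example of that rule of thumb, the sketch below applies it to the Los Angeles inlet temperatures quoted above (67 degrees before, roughly 75 degrees as the target); whether the 2 percent savings add linearly or compound per degree is an assumption, so both readings are shown.

# The "2 percent per degree Fahrenheit" rule of thumb, applied to the
# inlet temperatures mentioned for the Los Angeles facility.
SAVINGS_PER_DEG_F = 0.02

before_f, target_f = 67, 75
degrees_raised = target_f - before_f

linear = degrees_raised * SAVINGS_PER_DEG_F
compounded = 1 - (1 - SAVINGS_PER_DEG_F) ** degrees_raised

print(f"Raising the inlet by {degrees_raised} deg F cuts the power bill by "
      f"roughly {linear:.0%} (linear) or {compounded:.0%} (compounded)")
# -> roughly 16% (linear) or 15% (compounded)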

The first thing is to control the airflow in the datacenter better, and the second is to measure the temperature of the air more accurately at the server so cooling can be maintained in a more balanced way across the facility.

CenturyLink started work on hot aisle and cold aisle containment in its facilities three and a half years ago, and the idea is simple enough: keep the hot air coming from the back of the racks from mixing with the cold air coming into the datacenter from chillers. The containment project is a multi-year, multi-million dollar effort, and CenturyLink is working with a company called SubZero Engineering to add containment to its aisles. About 95 percent of its facilities now have some form of air containment, and most of them are doing hot aisle containment.

“If we can isolate the hot aisles, that gives us a little more ride through from the cold aisles if we were to have some sort of event,” Stone explains. But CenturyLink does have some facilities that, just by the nature of their design, do cold aisle containment instead. (That has the funny effect of making the datacenter feel hotter because people walk around the hot aisles instead of the cold ones and sometimes gives the impression that these are more efficient. But both approaches improve efficiency.) The important thing about the SubZero containment add-ons to rows of racks, says Stone, is that they are reusable and reconfigurable, so as customers come and go in the CenturyLink datacenters they can adjust the containment.

Once the air is contained, then you can dispense cold air and suck out hot air on a per-row basis and fine-tune the distribution of air around the datacenter. But to do that, you need to get sensors closer to the racks. Several years ago, it was standard to have temperature sensors mounted on the ceiling, walls, or columns of datacenters, but more recently, after starting its aisle containment efforts, CenturyLink tapped RF Code to add its wireless sensor tags to the air inlets on IT racks to measure their temperature precisely rather than using an average of the ambient air temperature from the wall and ceiling sensors. This temperature data is now fed back into its building management system, which comes from Automated Logic Control, a division of the United Technologies conglomerate. (Stone said that Eaton and Schneider Electric also have very good building management systems, by the way.)

The energy efficiency effort doesn’t stop here. CenturyLink is now looking at retrofitting its CRAC and CRAH units – those are short for Computer Room Air Conditioner and Computer Room Air Handler – with variable speed drives. Up until recently, CRAC and CRAH units were basically on or off, but now they can provide different levels of cooling. Stone says that running a larger number of CRAH units at lower speeds provides better static air pressure in the datacenter and uses less energy than having a small number of larger units running faster. (In the latter case, extra cooling capacity is provided through extra units, and in the former it is provided by ramping up the speed of the CRAH units rather than increasing their number.) CenturyLink is also looking at variable speed pumps and replacing cooling tower fans in some facilities.
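
One way to see why spreading the same airflow across more, slower-running units saves energy is the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with its cube. The sketch below illustrates the idea in the abstract; the cube law is a generic fan characteristic, not a figure from CenturyLink’s facilities.

# Fan affinity laws: flow ~ speed, power ~ speed**3 (approximately).
# Two identical CRAH units at 50% speed move the same total air as one unit
# at full speed, but draw a fraction of the fan power.
def relative_fan_power(units, speed_fraction):
    """Total fan power relative to a single unit running at full speed."""
    return units * speed_fraction ** 3

one_fast = relative_fan_power(units=1, speed_fraction=1.0)
two_slow = relative_fan_power(units=2, speed_fraction=0.5)  # same total airflow

print(f"One unit at 100% speed: {one_fast:.2f} | Two units at 50% speed: {two_slow:.2f}")
# One unit at 100% speed: 1.00 | Two units at 50% speed: 0.25

Real CRAH fans do not follow the cube law perfectly, but the shape of the curve is why variable speed drives pay off.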

“We are taking a pragmatic, planned approach across our entire footprint, and we have gone into the areas where we are paying the most for power or have the highest datacenter loads and tackling those facilities first,” says Stone. The energy efficiency efforts in the CenturyLink datacenters have to have a 24 month ROI for them to proceed.

In its Chicago CH2 datacenter (one of three around that Midwestern metropolis and one of the largest run by CenturyLink in its fleet of facilities), it did aisle containment, RF Code sensors, variable speed CRAC units, variable speed drives on the pumps, and replaced the cooling tower fans with more aerodynamic units that ran slower and yet pulled more air through the cooling towers. This facility, which is located out near O’Hare International Airport, has 163,747 square feet of datacenter space, has a total capacity of 17.6 megawatts, and can deliver 150 watts per square foot.

CenturyLink reduced the load in that CH2 facility by 7.4 million kilowatt-hours per year, and Stone just last month collected on a $534,000 rebate check from Commonwealth Edison, the power company in the Chicago area. All of these upgrades in the CH2 facility cost roughly $2.4 million, and with the power savings the return on investment was on the order of 21 months – and that is before the rebate was factored in.
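
For readers who want to see how those figures fit together, here is a back-of-envelope payback sketch. The blended electricity rate is an assumption the article does not disclose, so two plausible rates are shown; the reported ~21-month figure falls within this range.

# Simple-payback arithmetic for the CH2 retrofit, using the numbers quoted above.
capex_usd = 2_400_000
annual_kwh_saved = 7_400_000
rebate_usd = 534_000

for rate in (0.14, 0.185):  # assumed blended $/kWh, not a disclosed figure
    annual_savings_usd = annual_kwh_saved * rate
    gross_months = capex_usd / annual_savings_usd * 12
    net_months = (capex_usd - rebate_usd) / annual_savings_usd * 12
    print(f"${rate}/kWh: ~{gross_months:.0f} months gross, "
          f"~{net_months:.0f} months net of the rebate")

# $0.14/kWh: ~28 months gross, ~22 months net of the rebate
# $0.185/kWh: ~21 months gross, ~16 months net of the rebate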

What’s new at Subzero Engineering for 2014

At Subzero Engineering we are always looking for new ways to improve our products, by making them more energy efficient, NFPA compliant, and adding more standard features. This year is no exception!

We have been working hard taking our world-class products and making them even better. Here are a few of the changes we are making for 2014.

Product Announcements

• New Polar Cap Retractable Roof – The first fully NFPA compliant containment roof system
• New Arctic Enclosure Sizes Available – Two new 48U cabinets available
• Power Management – We now offer a full line of Raritan power products
• New Elite Series Doors – All of our doors have a sleek new design & come with extra features, standard
• New Panel Options – 3MM Acrylic, 4MM Polycarbonate, 3MM FM4910

Click here to learn more.
