Defining Your Edge – Re-Thinking the Concept of Micro Data Center Designs


By Sam Prudhomme, Vice President of Sales & Marketing, Subzero Engineering

For many years the industry has been engaged in a deep discussion about the concept of edge computing. Yet the definition varies from vendor to vendor, creating confusion in the market, especially where end-users are concerned. In fact, within more traditional or conservative sectors, some customers have yet to truly understand how the edge relates to them, meaning the discussion needs to change, and fast.

According to Gartner, “the edge is the physical location where things and people connect with the networked, digital world, and by 2022, more than 50% of enterprise-generated data will be created and processed outside the data center or cloud.” All of this data invariably needs a home, and depending on the type of data that is secured, whether it’s business or mission-critical, the design and location of its home will vary.

Autonomous vehicles are but one example of an automated, low-latency and data-dependent application. The real-time control data required to operate the vehicle is created, processed and stored via two-way communications at a number of local and roadside levels. On a city-wide basis, the data produced by each autonomous vehicle will be processed, analyzed, stored and transmitted in real-time, in order to safely direct the vehicle and manage the traffic. Yet on a national level, the data produced by millions of AVs could be used to shape transport infrastructure policy and redefine the automotive landscape globally.

Each of these processing, analysis, and storage locations requires a different type of facility to support its demand. Right now, data centers designed to meet the needs of standard or enterprise business applications are plentiful. However, data centers designed for dynamic, real-time data delivery, provisioning, processing, and storage are in short supply.

That’s partly because of the uncertainty over which applications will demand such infrastructure and, importantly, over what sort of timeframe. However, there’s also the question of flexibility. Many of the existing micro data center solutions are unable to meet the demands of edge or, more accurately, localized, low-latency applications, which also require high levels of agility and scalability. This is due to their pre-determined or specified approach to design and infrastructure components.

Traditionally, the market has met small-scale edge applications with pre-populated, containerized solutions. A customer is often required to conform to a standard shape or size, and there’s no flexibility in terms of modularity, components, or make-up. So how do we change the thinking?

A Flexible Edge

Standardization has, in many respects, been crucial to our industry. It offers a number of key benefits, including the ability to replicate systems predictably across multiple locations. But when it comes to the edge, some standardized systems aren’t built for the customer – they’re a product of vendor collaboration, one that’s also accompanied by high costs and long lead times.

On the one hand, having a box with everything in it can undoubtedly solve some pain points, especially where integration is concerned. But what happens if the customer has its own alliances, or doesn’t need all of the components? What happens if they run out of capacity at one site? Those original promises of scalability or flexibility disappear, leaving the customer with just one option – to buy another container. One might argue that such rigidity, when it comes to ‘standardization’, is often detrimental to the customer.

There is, however, the possibility that such modular, customizable, and scalable micro data center architectures can meet the end user’s requirements perfectly, allowing end-users to truly define and embrace their edge.

Is There a Simpler Way?

Today, forecasting growth is a key challenge for customers. With demands increasing to support a rapidly developing digital landscape, many will have a reasonable idea of what capacity is required today. But predicting how it will grow over time is far more difficult, and this is where modularity is key.

For example, pre-pandemic, a content delivery network with capacity located near large user groups may have found itself swamped with demand in the days of lockdown. Today, it may be considering how to scale up local data center capacity quickly and incrementally to meet customer expectations, without deploying additional infrastructure across more sites.

There is also the potential of 5G-enabled applications, so how does one define what’s truly needed to optimize and protect the infrastructure in a manufacturing environment? Should an end-user purchase a containerized micro data center because that’s what’s positioned as the ideal solution? Or should they customize and engineer a solution that can grow incrementally with demands? Or would it be more beneficial to deploy a single room that offers a secure, high-strength, walkable roof that can host production equipment?

The point here is that when it comes to micro data centers, a one-size-fits-all approach does not work. End-users need the ability to choose their infrastructure based on their business demands – whether they be in industrial manufacturing, automotive, telco, or colocation environments. But how can users achieve this?

Infrastructure Agnostic Architectures

At Subzero Engineering, we believe that vendor-agnostic, flexible micro data centers are the future for the industry. For years we’ve been adding value to customers, and building containment systems around their needs, without forcing their infrastructure to fit into boxes.

We believe users should have the flexibility to utilize their choice of best-in-class data center components, including the IT stack, the uninterruptible power supply (UPS), cooling architecture, racks, cabling, or fire suppression system. So by taking an infrastructure-agnostic approach, we give customers the ability to define their edge, and use resilient, standardized, and scalable infrastructure in a way that’s truly beneficial to their business.

By taking this approach, we’re also able to meet demands for speed to market, delivering a fully customized solution to site within six weeks. Furthermore, by adopting a modular architecture that includes a stick-built enclosure, the ability to incorporate a cleanroom, and a walkable mezzanine roof, users can scale as demands require, without the need to deploy additional containerized systems.

This approach alone offers significant benefits, including a 20-30% cost saving compared with conventional ‘pre-integrated’ micro data center designs.

For too long now, our industry has been shaped by vendors that have forced customers to base decisions on systems which are constrained by the solutions they offer. We believe now is the time to disrupt the market, eliminate this misalignment, and enable customers to define their edge as they go.

By providing customers with the physical data center infrastructure they need, no matter their requirements, we can help them plan for tomorrow. As I said, standardization can offer many benefits, but not when it’s detrimental to the customer.

 

Click here to download a pdf version of this article.

Subzero Engineering Launches ‘Essential Micro Data Center’, Allowing Users to Define Their Own Edge


  • Turnkey solution is building and infrastructure agnostic, providing 20%-30% cost-savings compared to other solutions in the market.
  • Standardized solution meets demanding timescales, shipping within as little as 36 hours, while fully customized micro data centers can be delivered, installed and operational in 4-6 weeks.
  • Provides flexible micro data center system for colocation, 5G, retail, enterprise and industrial applications.

 

October 13, 2021 – Subzero Engineering, a leading provider of data center containment solutions, has today introduced its Essential Micro Data Center, the world’s first modular, vendor-agnostic, and truly flexible micro data center architecture. Available for order in the United States of America, United Kingdom and Europe, the Essential Micro Data Center meets customer demands for a standardized, premium-quality, cost-competitive and quick-to-install edge infrastructure system that provides a reduced total cost of ownership of between 20% and 30%.

Based on its Essential Series and AisleFrame product lines, the Essential Micro Data Center is a small-footprint, on-premises data center, engineered for distributed and remote infrastructure environments. Its modular architecture includes white-glove installation and support, power, cooling, infrastructure conveyance and containment. All of which are housed within a pre-fabricated, factory-assembled, modular room, and shipped flat-packed to site.

With increased requirements for real-time data processing, low latency, greater security and automation, the Essential Micro Data Center ensures predictability and performance for distributed applications. Furthermore, its customizable, modular design offers a fast, flexible and easy-to-build micro data center system, perfectly suited for colocation, 5G, retail, enterprise and industrial environments.

 

Strength, security, customization

The Essential Micro Data Center comprises two parts: a physically secure, modular room containing critical power and cooling infrastructure, and Subzero’s high-strength AisleFrame. Using this approach, the Essential Micro Data Center can support a variety of load requirements and includes built-in, customizable containment, integrated with self-supporting ceiling modules and insert panels available in ABS, acrylic, polycarbonate, aluminum or glass.

The pre-fabricated system can accommodate all ladder racking, busway, fiber trays and infrastructure necessary for micro data center applications, and offers support for hot or cold aisle applications, regardless of cooling methodology. For example, the high-strength ceiling can support a range of cooling systems, including overhead Computer Room Air Conditioning (CRAC) units. This feature offers complete customization for users who can deploy their infrastructure in aisle, row or rack configurations.

Further, its flexible, vendor-agnostic design provides users with the ability to custom-specify their own choice of power and cooling infrastructure. This approach helps overcome the challenge of having to use inflexible, pre-specified power and cooling systems in a containerized system, while retaining the ability to standardize, repeat and scale quickly, as business requirements change.

“The Essential Micro Data Center’s flexible design makes it a perfect fit for customers searching for an alternative to the obstinate and expensive, pre-integrated solutions currently available,” said Sam Prudhomme, Vice President of Sales and Marketing, Subzero Engineering. “Our vendor-agnostic approach to component specification, combined with rapid speed of installation and lower TCO, ensures customers can truly define and scale the edge on their own terms.”

The Subzero Engineering Essential Micro Data Center joins its recently launched Essentials Series, demonstrating the company’s commitment to delivering customer-focused, efficient and precision-engineered digital infrastructure solutions.

To learn more, click here.

Containment at the Edge – Making the Edge Efficient, Scalable, and Sustainable


Each day, technology touches nearly every aspect of our lives in one way or another. For example, how many times a day do each of us access one or more apps on our smartphone? This trend of needing, creating, transferring, and accessing data in fractions of a second isn’t going away either. According to Gartner Research, internet-capable devices worldwide reached over 20 billion by 2020, and this number is expected to double by 2025. It is also estimated that approximately 463 exabytes of data (1 exabyte is equivalent to 1 billion gigabytes) will be generated each day by people as of 2025 – that’s the equivalent of 212,765,957 DVDs per day!1 Along with this increase comes the need to access this data as fast as possible, with minimum delay or latency – something most of today’s data centers are not capable of delivering.

The increase in data and the need for high-speed data transfers has inspired the recent trend known as edge computing. What exactly is the edge? What is an edge data center? How are edge data centers evolving and how can facility and data center managers be ready without being left behind? What about the challenge of making a resilient, modular, and scalable edge data center while maintaining high efficiency and reliability? This paper will answer these and many more questions about the edge in the following topics:

  • What is an Edge Data Center
  • The Evolution of Edge Computing
  • How Organizations are Responding to Edge Data Centers
  • Solving the Challenge of Modular and Scalable Edge Infrastructures
  • Reliability and Efficiency Needed at the Edge
  • Containment’s Critical Role in Edge Deployments
  • Bridging the Gap to the Edge, Now and Future

Read the full white paper here.

5 Steps to Improving Data Center Performance & Energy Efficiency


By Sam Prudhomme, Vice President of Sales & Marketing

Recently, I was interviewed by a Computer Weekly journalist, Fleur Doidge, who was writing an article focusing on the quick wins when it comes to improving systems performance in the data center. Inspired by our discussion, I’ve come up with Subzero’s top 5 ways to boost data center utilization.

The data center industry, quite rightly, has an ongoing, major focus on how it can improve both the performance and energy efficiency of its facilities. That’s partly down to the perception that our industry is a major consumer of energy and contributes a high volume of carbon emissions each year.

According to an article published in Science Magazine, data centers account for around 1% of global energy use. It’s clear that we need to improve our environmental performance and to ensure we never forget we’re part of the sustainability solution, but we should also remember that data center performance and energy efficiency improvements make great business sense.

While there are many, many issues to consider as part of a comprehensive, long-term strategy to both improve data center performance as well as to achieve carbon neutral status, this article focuses on the ‘low hanging fruit’ – relatively simple actions, which will have an immediate positive impact on your facility, and with an ROI measured in months rather than years.

Step 1 – As Easy as (free) CFD

Those of you who know Subzero Engineering well will not be surprised that Step 1 involves an Environmental Impact Evaluation (CFD) of your data center. We believe it all starts with the data, and we offer this service for free. It is a simple, efficient, and super-fast way of discovering just how your data center is performing right now – where the power is, where the heat is and isn’t, and hence where the cold air does or doesn’t need to be.

Step 2 – Using the Data

Once the Environmental Impact Evaluation (CFD) has been carried out, you’ll be armed with a large quantity of data about how your data center is performing. It’s highly likely that you’ll be presented with some really quick wins. For example, you’ll discover where the hotspots (points of efficiency leakage) are; and part of the solution may be something as simple as installing any necessary blanking panels.

Then again, the CFD data may highlight that Rack 6 in Row 5 is running 15 degrees hotter than anywhere else in the data center. You’ll be able to decide whether you need to move this stack to a better location where more cooling is available, or maybe you just need to open up the grate to optimize or increase the airflow.

Step 3 – The 3 Ms: Measuring, Monitoring & Modulating

A data center is a live environment. So, although the CFD analysis can identify and help to resolve what we might call any ‘permanent’ power and cooling issues, it’s essential that you monitor and measure the performance of the power and cooling plant in real-time. This is because data center variables such as the IT load and operating temperatures are in constant flux. With the right system you are able to modulate the airflow accordingly – for example, when the cooling needs to react to the load inside each rack and cabinet, or respond to the impact of, say, an extremely hot outside temperature.

Rather than blast a load of cold air into the data center and ‘hope’ that it keeps the IT hardware within operating tolerances, with the right monitoring solution, you can be confident that you can modulate the cold air as required right down to the rack level. This ensures that the cooling usage is as effective and energy efficient as possible.

Step 4 – Contain Your Excitement

How would you like to reduce your PUE by 0.4? Or to achieve a 29% reduction in data center energy consumption? Well, these are the average savings we achieve for our customers when they deploy one of our containment solutions.

The initial Environmental Impact Evaluation (CFD) we carry out also proves how this can be achieved – it compares and contrasts hot vs cold aisle containment and containment vs no containment. Furthermore, a containment solution ensures that Steps 1-3 really do achieve the maximum performance and energy efficiency improvements within the data center.

Without containment, you’ll still have hotspots – separating hot and cold air will be hit and miss and far from being optimized.

With containment, you can bring the ratio of IT power consumption to cooling power consumption close to a 1:1 match in kW consumed – that’s how the energy consumption/utility bill reduction is achieved.

As for the PUE reduction? Well, that’s achieved by smarter, more efficient use of an optimized combination of chilled water and the air conditioning fans. The US Environmental Protection Agency (EPA) estimates that containment can reduce fan energy consumption by up to 25% and deliver 20% savings at the cold water chiller.1
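As a rough illustration of how fan and chiller savings of that kind feed into PUE, the sketch below applies those percentages to a hypothetical facility power breakdown – the kW figures are illustrative assumptions, not measurements from a real site.

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

# Hypothetical facility: 1,000 kW IT, 200 kW fans, 250 kW chiller, 150 kW other
it_kw, fan_kw, chiller_kw, other_kw = 1000.0, 200.0, 250.0, 150.0

before = pue(it_kw + fan_kw + chiller_kw + other_kw, it_kw)
# Apply the EPA estimates quoted above: -25% fan energy, -20% chiller energy
after = pue(it_kw + fan_kw * 0.75 + chiller_kw * 0.80 + other_kw, it_kw)

print(round(before, 2), round(after, 2))  # 1.6 1.5
```

Even with the IT load unchanged, trimming only the fan and chiller energy moves the needle on PUE; the actual reduction at any site depends on how large cooling is as a share of total facility power.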

One final containment benefit – we supplied a containment solution to a colo customer and by separating the hot and cold air in their facility, we helped them to not only eliminate hotspots, but also to increase rack density by an average of 14%.

Step 5 – Turn the Lights out

This brings me to my final data center performance and energy efficiency improvement: turn out the lights. By that, I actually mean remove anything incandescent and go with an LED retrofit kit within your existing tray system. And then automate the lighting system.

It may be a while before a true lights out data center becomes the norm, if ever, but that’s no excuse not to ensure that your lighting system is as energy efficient as possible. By using LEDs and only using the lights when needed you’ll improve your energy efficiency as well as your bank balance.

While Subzero Engineering’s major focus is data center consultancy (using CFD analysis and containment solutions to help drive performance and efficiency improvements), we can also help owners/operators with their critical power infrastructure, DCIM, and other solutions as required.

Download the 5 Steps to Improving Data Center Performance & Energy Efficiency pdf here.

 

References

1Recalibrating global data center energy-use estimates – Science Magazine 2018

Containment Helps Data Centers Go Green


A Subzero White Paper by Gordon Johnson

Data centers are a huge part of today’s economy, with both businesses and people connected 24/7. However, along with this usage comes a huge drain on our energy resources. Recent studies show that energy consumed by data centers in the U.S. alone has doubled over the last five years. With the growth of cloud computing and High Performance Computing (HPC) and the energy required to operate them, this trend is not disappearing anytime soon. Fortunately, many realize that this high level of energy consumption cannot continue indefinitely, and the push for greener and more environmentally friendly data centers is being taken seriously.

What can data center and facility managers do to stop this runaway train? While there are several options to get greener and thus lower the overall cost to operate a data center, this paper specifically focuses on containment. Why? Containment is the fastest, easiest, and most cost-effective strategy for going green while simultaneously lowering operating costs without adding additional CapEx to the data center. In addition, containment makes other options either possible or economically feasible. This paper will show why this is true, while discussing the following topics:

  • Why Being Green Matters
  • Containment is the Smallest Action with the Greatest Outcome
  • Containment = High Efficiency = Green Data Center
  • Containment’s Role in HPC
  • Efficiency: Full Containment Versus Partial Containment
  • Efficiency: Cold Aisle Containment Versus Hot Aisle Containment
  • CFD Predicts Energy Savings & Environmental Footprint

Read the full white paper.

 

About the Author
Gordon Johnson is the Senior CFD Engineer at Subzero Engineering, and is responsible for planning and managing all CFD related jobs in the U.S. and worldwide. He has over 25 years of experience in the data center industry which includes data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified U.S. Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from New Jersey Institute of Technology. Gordon also brings his knowledge and ability to teach the fundamentals of data center energy efficiency to numerous public speaking events annually.

Data Center Containment 101


A Subzero White Paper by Gordon Johnson

Regardless of whether we’re entering a data center for the first time or have been doing so for years, most data centers have something in common. As you walk through rows of racks, you’ll alternate between cold and hot aisles. You’ll hear expressions like “CRACs”, “PUE”, “White Space”, “Cold Aisle Containment”, “Hot Aisle Containment”, and many more. The purpose of this White Paper is to assist those new to the data center, and those assigned with making key decisions, to get the most out of existing “legacy” and newly designed data centers.

Since energy efficiency and data reliability are key goals for anyone managing or associated with data centers, how can we achieve both in the shortest amount of time while getting the quickest ROI (Return on Investment)? When is it more appropriate to use one type of containment instead of another? Which saves more money? This paper will answer these and other questions.

Read the full white paper.

About the Author
Gordon Johnson is the Senior CFD Engineer at Subzero Engineering, and is responsible for planning and managing all CFD related jobs in the U.S. and worldwide. He has over 25 years of experience in the data center industry which includes data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified U.S. Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from New Jersey Institute of Technology. Gordon also brings his knowledge and ability to teach the fundamentals of data center energy efficiency to numerous public speaking events annually.

Containment’s Role in Energy Efficiency and Rapid ROI


A Subzero White Paper by Gordon Johnson

Everyone today is interested in saving money, and that’s especially true in data centers. Between the cost of electricity and the increasing trend for higher power densities per rack (20 kW and above is no longer uncommon), the desire to be energy efficient and to reduce cost on the annual utility bill is a major concern throughout the data center industry.

So what can be done to save energy and thus save money? How can we lower our PUE (Power Usage Effectiveness) while increasing energy efficiency without sacrificing reliability? What technology will deliver a rapid ROI, often between 6 and 18 months? Containment is the answer.

How does containment provide energy savings for data centers? Is there a way to estimate the annual savings and PUE for containment installations? This White Paper will provide an answer to these questions.

Read the full white paper.

 

About the Author
Gordon Johnson is the Senior CFD Engineer at Subzero Engineering, and is responsible for planning and managing all CFD related jobs in the U.S. and worldwide. He has over 25 years of experience in the data center industry which includes data center energy efficiency assessments, CFD modeling, and disaster recovery. He is a certified U.S. Department of Energy Data Center Energy Practitioner (DCEP), a certified Data Centre Design Professional (CDCDP), and holds a Bachelor of Science in Electrical Engineering from New Jersey Institute of Technology. Gordon also brings his knowledge and ability to teach the fundamentals of data center energy efficiency to numerous public speaking events annually.

Take Some of the Guesswork out of Data Center Efficiency


Subzero understands how complicated it can be when trying to figure out how to save energy and money in your data center and we are here to help make it easier. Our Senior CFD Manager, Gordon Johnson, has created an easy-to-use calculator suite that can help you figure out some of those complicated scenarios and point you in the right direction.

The Subzero Calculator Suite consists of 6 calculators and can be downloaded in both US units and SI units.

1. ROI Calculator
Calculate the annual cost of operating your data center and estimate the yearly savings after installing containment and increasing the supply temperature from the CRACs. You can also estimate the new PUE after containment and the ROI payback of a containment project based on its total cost.
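The core arithmetic behind this kind of ROI estimate can be sketched as follows. The electricity rate, load figures, and project cost are illustrative assumptions, not values from the calculator itself.

```python
HOURS_PER_YEAR = 8760

def annual_energy_cost(power_kw: float, rate_usd_per_kwh: float = 0.10) -> float:
    """Annual cost of running a constant electrical load at a flat tariff."""
    return power_kw * HOURS_PER_YEAR * rate_usd_per_kwh

def payback_months(project_cost_usd: float, annual_savings_usd: float) -> float:
    """Simple payback period for a containment project, in months."""
    return 12 * project_cost_usd / annual_savings_usd

# Example: cooling load drops from 200 kW to 150 kW after containment,
# and the containment project costs $50,000
savings = annual_energy_cost(200) - annual_energy_cost(150)  # $43,800/yr
print(round(payback_months(50_000, savings), 1))             # ~13.7 months
```

A real estimate would also fold in the set-point increase and the new PUE, but even this simple version shows why containment paybacks are often quoted in months rather than years.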

2. VFD Calculator
Lowering the fan speeds on CRACs, especially after installing containment, saves energy and money. This calculator allows you to enter before and after fan speeds, CRAC CFM, and fan motor power, and, via the Fan Affinity Law, provides the new fan motor power and the annual savings based on the new fan speed.
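The Fan Affinity Law referenced here says fan power varies with the cube of fan speed, which is why modest speed reductions yield large savings. A minimal sketch, with an illustrative motor size, speeds, and electricity rate:

```python
def new_fan_power_kw(motor_kw: float, old_speed_pct: float, new_speed_pct: float) -> float:
    """Fan Affinity Law: fan power scales with the cube of fan speed."""
    return motor_kw * (new_speed_pct / old_speed_pct) ** 3

def annual_fan_savings_usd(motor_kw: float, old_pct: float, new_pct: float,
                           rate_usd_per_kwh: float = 0.10) -> float:
    """Annual dollar savings from running the fan at the lower speed."""
    saved_kw = motor_kw - new_fan_power_kw(motor_kw, old_pct, new_pct)
    return saved_kw * 8760 * rate_usd_per_kwh

# Example: a 7.5 kW CRAC fan slowed from 100% to 80% speed
print(round(new_fan_power_kw(7.5, 100, 80), 2))     # 3.84 kW (about half)
print(round(annual_fan_savings_usd(7.5, 100, 80)))  # ~$3,206/yr at $0.10/kWh
```

Note the cube relationship: a 20% speed reduction cuts fan power by roughly 49%, not 20%.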

3. CRAC Annual Cost Calculator
Determine the total annual cost of running a CRAC unit by entering a few details.

4. CRAC Cooling Calculator
Determine the true kW of cooling from any CRAC or cooling unit based on airflow (CFM) and the Delta T (return air temperature – supply air temperature) across the CRAC.
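The calculation this describes is the standard air-side sensible heat equation. The sketch below assumes sea-level air (hence the conventional 1.08 constant) and uses illustrative example values; it is not the calculator itself.

```python
def crac_cooling_kw(cfm: float, delta_t_f: float) -> float:
    """True sensible cooling delivered by a CRAC, from airflow and Delta T.
    Uses BTU/hr = 1.08 * CFM * Delta T (F), with 1 kW = 3,412 BTU/hr,
    assuming sea-level air density."""
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / 3412.0

# Example: 12,000 CFM across a 20 F return-to-supply Delta T
print(round(crac_cooling_kw(12_000, 20), 1))  # ~76.0 kW
```

Comparing this "true" figure against a unit's nameplate capacity is a quick way to spot CRACs that are moving air without doing useful cooling.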

5. Airflow CFM Calculator
Determine airflow in CFM needed to cool a rack based on the rack’s kW and the Delta T (temperature rise of air through the servers). Determine if you meet your design cooling capacity of supply airflow from the CRACs versus demand airflow from the IT equipment.
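This calculation is simply the inverse of the same air-side heat equation: solve for CFM given the rack's heat load and the server Delta T. A minimal sketch with illustrative values, including the supply-versus-demand check described:

```python
def rack_demand_cfm(rack_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove rack_kw of heat at a given server Delta T,
    assuming sea-level air (the 1.08 constant)."""
    return rack_kw * 3412.0 / (1.08 * delta_t_f)

def supply_meets_demand(supply_cfm: float, rack_loads_kw: list, delta_t_f: float) -> bool:
    """Compare total CRAC supply airflow against total IT demand airflow."""
    demand = sum(rack_demand_cfm(kw, delta_t_f) for kw in rack_loads_kw)
    return supply_cfm >= demand

# Example: a 10 kW rack with a 20 F temperature rise through the servers
print(round(rack_demand_cfm(10, 20)))  # ~1,580 CFM
```

If total demand exceeds supply, servers will recirculate hot exhaust air to make up the shortfall, which is exactly the hot-spot mechanism containment is designed to prevent.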

6. UPS kW Calculator
Determine the UPS Heat Load (kW) based on user inputs of UPS power rating, UPS % load, and UPS % efficiency.
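The heat a UPS rejects into the room is the difference between its input and output power, which follows directly from the three inputs listed. A sketch with illustrative figures:

```python
def ups_heat_load_kw(rating_kw: float, load_pct: float, efficiency_pct: float) -> float:
    """UPS heat load (kW): input power minus output power."""
    output_kw = rating_kw * load_pct / 100
    input_kw = output_kw / (efficiency_pct / 100)
    return input_kw - output_kw

# Example: a 100 kW UPS at 80% load and 94% efficiency
print(round(ups_heat_load_kw(100, 80, 94), 2))  # ~5.11 kW of heat
```

That heat has to be removed by the cooling plant, so it belongs in any whole-room airflow or capacity calculation alongside the IT load.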

Download the Calculator Suite

 

Need more help?
Let one of our DCEP, CDCDP Certified Engineers help your data center reach its full potential.

SERVICES WE OFFER
Computational Fluid Dynamics – Subzero Engineering offers Computational Fluid Dynamics (CFD) services from accredited CDCDP and DCEP professionals, providing a comprehensive approach to modeling the airflow and temperature, and creating an accurate energy profile, of a data center.

Through state-of-the-art software, we construct a 3D layout of your data center. This layout models the hot and cold airflow within your facility, as well as the impact of load distribution. This allows Subzero’s engineering team to develop a baseline from which improvements can be noted and potential savings calculated. The engineers that perform the CFD services for Subzero have CDCDP (Certified Data Center Design Professional) and DCEP (Data Center Energy Practitioner) accreditation.

Energy Assessments – With the ever-increasing rise in data center energy consumption, everyone in the industry, as well as the United States Department of Energy (DOE), has been looking at ways, strategies, and programs to reduce this consumption.

Subzero Engineering has sent a team through the DCEP training and has been helping customers realize their potential savings. Utilizing the DOE toolset DC Profiler, Subzero’s DCEP Certified Engineers can help you assess your facility and identify where savings can be made.

Warming the Data Center


For decades the idea of running a hot or warm data center was unthinkable, driving data center managers to create a “meat locker”-like environment – the colder, the better.

Today, the idea of running a warm data center has finally gotten some traction. Major companies like eBay, Facebook, Amazon, Apple, and Microsoft are now operating their data centers at temperatures higher than what was considered possible only a few years ago.

Why? And more importantly… How?

The “why” is easy.
For every degree the set point is raised, the cost of cooling the servers goes down 4%-8%, depending on the data center location and cooling design. Additionally, some data centers can take advantage of free cooling cycles when the server intake temperatures increase. This of course assumes the manufacturers’ recommended temperature settings are taken into account and not surpassed.
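A back-of-envelope sketch of that arithmetic follows. It assumes the per-degree saving compounds with each degree raised (a simplification), and the annual cooling bill and 4% rate are illustrative assumptions from the low end of the range quoted above.

```python
def cooling_cost_after_raise(annual_cost: float, degrees_raised: float,
                             saving_per_degree: float = 0.04) -> float:
    """Estimated annual cooling cost after raising the set point,
    assuming the per-degree saving compounds."""
    return annual_cost * (1 - saving_per_degree) ** degrees_raised

# Example: $100,000/yr cooling bill, set point raised 5 degrees at 4%/degree
print(round(cooling_cost_after_raise(100_000, 5)))  # ~$81,537 (about 18% saved)
```

Even at the conservative end of the range, a few degrees of headroom translates into a meaningful cut in the cooling bill, which is why consistent intake temperatures matter so much.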

Now on to the “how”. Or we might ask why now? What changed?
The answer has to do with the ability to provide a consistent server intake temperature. Inconsistent intake temperatures are a result of return and supply airflows mixing. When this happens it creates “hot spots”, which causes cooling problems. Without a consistent supply temperature the highest temperature in those “hot spots” would determine the data center cooling set point temperature, resulting in a lower set point.

A few years ago containment was introduced to the data center industry. Containment fully separates supply and return airflow, which eliminates “hot spots” and creates a consistent intake temperature. Containment is the key to accomplishing consistent intake temperatures. With consistent intake temperatures data center managers can increase cooling set points, creating a warmer data center. A warmer data center means less money spent on cooling costs.

The Role of CFDs in Containment

Data center airflow management engineers have used Computational Fluid Dynamics (CFD) programs for years to determine the complicated movement of airflow in data centers. CFD models pinpoint areas where airflow can be improved in order to provide a consistent cooling solution and energy savings.

We interviewed Gordon Johnson, a certified data center design professional, Data Center Energy Practitioner (DCEP), CFD engineer, and electrical engineer, about the use of CFDs in containment.


Gordon, what is the principal way CFDs are used with regard to containment?

We use CFDs to generate two basic data sets. The first is the baseline, or current airflow pattern. This initial CFD model shows the supply intake temperature at each cabinet. It also measures the effectiveness of each AC unit in terms of airflow volume, return air temperature, delta T, and supply air temperature.
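The per-unit metrics Gordon lists are tied together by the standard sensible heat relation, Q(BTU/hr) ≈ 1.085 × CFM × ΔT(°F). A hypothetical sketch of how a baseline might summarize one AC unit (readings invented for illustration):

```python
# Hypothetical baseline readings for one CRAC unit
airflow_cfm = 12000
return_air_f = 85.0
supply_air_f = 65.0

delta_t = return_air_f - supply_air_f          # 20.0°F across the unit
# Sensible cooling actually delivered, converted to kW (3412 BTU/hr per kW)
cooling_kw = 1.085 * airflow_cfm * delta_t / 3412
print(round(cooling_kw, 1))  # → 76.3
```

Comparing this delivered-kW figure against the unit’s rated capacity is one simple measure of its effectiveness in the baseline model.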

The second model is the CFD engineer’s proposed design, which uses the information from the baseline model to apply airflow management best practices and separate supply from return airflow. Typically, several models are created to adjust airflow volume, set point temperatures, and individual aisle supply volumes.


Gordon, are there situations in which the CFD engineer does not recommend containment?

Not really, because the entire basis of airflow management is the full separation of supply and return airflow. Anytime these two airflows mix there is a loss of energy and consistent supply temperature to the IT thermal load.

We have seen CFDs used by manufacturers to prove product effectiveness. In what ways can CFDs be made to exaggerate product effectiveness?

Exaggerations usually stem from the principle known as GIGO, short for Garbage In, Garbage Out. This refers to the fact that computers operate by logical processes, and thus will unquestioningly process unintended, even nonsensical input data (garbage in) and produce undesired, often nonsensical output (garbage out).

Let me give you an example. Recently I recreated a CFD model that had been used to demonstrate the effectiveness of airflow deflectors. The purpose of the CFD was to show the difference in energy savings between airflow deflectors and full containment. We found that certain key data points inserted into the models did not reflect industry standards. Key settings had been adjusted to fully optimize energy savings without regard to potential changes in the environment. Any adverse effects on the cooling system’s ability to maintain acceptable thermal parameters, should the environment change, were not revealed in the CFD model. Thus, the model was operating on a fine line that could not be adjusted without a significant impact on its ability to cool the IT load.


Can you give us any specifics?

The airflow volume was manually changed from 154 CFM per kW to 120 CFM per kW; the industry-standard figure is approximately 154 CFM per kW. The formula most commonly used is the sensible heat equation:

CFM = (kW × 3,412) / (1.085 × ΔT°F)

At a typical server delta T of around 20°F, this works out to roughly 154-157 CFM per kW. An airflow of 120 CFM does not give the cooling system any margin for potential changes to the environment.
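The 154-versus-120 CFM claim can be checked with the sensible heat equation (a sketch; the 1.085 constant assumes sea-level air density):

```python
def required_cfm(load_kw, delta_t_f, factor=1.085):
    """Airflow (CFM) needed to carry a sensible IT load.

    CFM = (kW * 3412 BTU/hr) / (factor * delta_T °F)
    """
    return load_kw * 3412 / (factor * delta_t_f)

# 1 kW at a ~20°F delta T needs on the order of 154-157 CFM;
# the disputed 120 CFM figure implies a delta T of roughly 26°F.
print(round(required_cfm(1, 20), 1))    # → 157.2
print(round(required_cfm(1, 26.2), 1))  # → 120.0
```

In other words, the 120 CFM model only balances if the servers run a much wider delta T than the design assumed, leaving no headroom.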

Another key area of unrealistic design is the placement of cabinet thermal load and high-volume grates. The base model places high-kW loads in specific, isolated areas surrounded by high-volume grates. What happens, then, if additional load is placed in areas of low-volume airflow? Changes to rack kW in areas without high-volume grates could not be accounted for. At the end of the day, any change to the IT load would require another airflow management audit to determine how it would affect the cooling solution. Thus, the proposed model is unrealistic, because no data center would propose a cooling solution that requires regular modification.


Are you recommending a CFD study every time you make changes to the data center thermal load?

No. Full separation of supply and return airflow eliminates the guesswork regarding the effects of air mixing. It also eliminates the need to place specific high-volume perforated tiles or grates in front of high-kW loads. Instead, a CFD model would incorporate expected increases to the aisle thermal load, in line with a “plus one” approach to cooling. Creating a positive pressure of supply air has additional benefits, such as lowering IT equipment fan speeds and ensuring a consistent supply temperature across the face of the IT intake.
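The positive-pressure idea can be expressed as a simple supply-versus-demand check (a hypothetical sketch, assuming the 154 CFM-per-kW figure discussed earlier):

```python
def supply_margin_cfm(supply_cfm, rack_loads_kw, cfm_per_kw=154):
    """Surplus airflow delivered to a contained aisle.

    Positive = slight over-supply keeps the aisle pressurized;
    negative = IT demand exceeds the delivered airflow.
    """
    demand = sum(rack_loads_kw) * cfm_per_kw
    return supply_cfm - demand

# Five hypothetical racks totaling 53 kW, fed 9,000 CFM
print(supply_margin_cfm(9000, [10, 12, 8, 14, 9]))  # → 838
```

A small positive margin like this lets the aisle absorb modest load changes without a fresh CFD study.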

Data centers should not be operated with little margin for changes or adjustments to the thermal load. That is why I always recommend a full containment solution with as close to 0% leakage as possible. This is the most efficient way to run a data center, and it always yields the best return on investment. A full containment solution, with no openings at the aisle-end doors or above the cabinets, easily allows the contained cold aisles to operate with a slightly greater supply of air than is demanded. This in turn ensures that the cabinets in the fully contained aisle see minimal temperature change from the bottom to the top of the rack, which allows the data center operator to choose predictable, reliable supply temperature set points for the cooling units. The result? Large energy savings, a longer mean time between failures, and a more reliable data center.


What do you recommend as to the use of CFD studies and containment?

It’s important to create both an accurate baseline and a sustainable cooling solution design. The baseline gives data center operators an accurate representation of how the facility is currently being cooled. The proposed cooling solution can then be used in numerous ways:

  • Accurate energy savings estimates
  • Safe set point standards
  • Future cabinet population predictions
  • The ability to cool future kW increases
  • Identifying and eliminating potential hot spots

Subzero Engineering endorses accurate and realistic CFD modeling that considers real world situations in order to create real world solutions.
