With consistent intake temperatures, data centers can increase cooling set points, creating a warmer data center, which lowers cooling costs.
For decades the idea of running a hot or warm data center was unthinkable, driving data center managers to create a "meat locker" environment: the colder, the better.
Today, the idea of running a warm data center has finally gotten some traction. Major companies like eBay, Facebook, Amazon, Apple, and Microsoft are now operating their data centers at temperatures higher than what was considered possible only a few years ago.
Why? And more importantly… How?
The “why” is easy.
For every degree the set point is raised, the cost of cooling the servers drops by roughly 4% to 8%, depending on the data center's location and cooling design. Additionally, some data centers can take advantage of free cooling cycles when server intake temperatures increase. This, of course, assumes the manufacturers' recommended temperature limits are observed and never exceeded.
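As a rough back-of-the-envelope sketch, here is what that per-degree savings figure could look like in practice. The baseline cooling spend, the per-degree rates, and the assumption that savings compound with each degree raised are all illustrative, not figures from any specific facility.

```python
# Back-of-the-envelope estimate of cooling savings from raising the set point.
# All numbers below are illustrative assumptions, not measured data.

def cooling_cost_after_raise(baseline_cost, degrees_raised, savings_per_degree=0.04):
    """Apply a per-degree savings rate (4%-8%), compounding for each degree raised."""
    return baseline_cost * (1 - savings_per_degree) ** degrees_raised

baseline = 100_000  # hypothetical annual cooling spend, in dollars
for degrees in (1, 3, 5):
    low = cooling_cost_after_raise(baseline, degrees, 0.04)   # conservative 4% per degree
    high = cooling_cost_after_raise(baseline, degrees, 0.08)  # optimistic 8% per degree
    print(f"Raise set point {degrees} degrees: cost falls to ${high:,.0f}-${low:,.0f}")
```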
Now, on to the "how." Or we might ask: why now? What changed?
The answer has to do with the ability to provide a consistent server intake temperature. Inconsistent intake temperatures are the result of return and supply airflows mixing. When this happens, "hot spots" form, causing cooling problems. Without a consistent supply temperature, the hottest of those "hot spots" dictates the data center's cooling set point, forcing it lower.
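To make that concrete, here is a minimal sketch of why the worst hot spot, not the average, ends up dictating the supply set point. The temperatures and the allowed intake limit are hypothetical values chosen for illustration.

```python
# Minimal sketch: the hottest rack intake, not the average, dictates the supply set point.
# All temperatures below are hypothetical.

supply_setpoint_f = 65.0          # current supply air set point
rack_intake_f = [68, 70, 71, 83]  # measured rack intake temps; 83 F is a hot spot
max_allowed_intake_f = 80.5       # e.g., upper end of a recommended intake range

# How far the worst hot spot runs above the supply air temperature:
worst_delta = max(rack_intake_f) - supply_setpoint_f

# The set point must be low enough that even the worst hot spot stays within limits:
allowable_setpoint = max_allowed_intake_f - worst_delta

print(f"Worst hot spot runs {worst_delta:.1f} F above supply air.")
print(f"Set point can be no higher than {allowable_setpoint:.1f} F while the hot spot persists.")
# With containment eliminating the mixing (intakes tracking supply temperature),
# the set point could instead be raised toward the intake limit itself.
```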
A few years ago, containment was introduced to the data center industry. Containment fully separates supply and return airflow, which eliminates "hot spots" and delivers a consistent intake temperature. Containment is the key to achieving that consistency. With consistent intake temperatures, data center managers can increase cooling set points, creating a warmer data center, and a warmer data center means less money spent on cooling.