Stop Cooling the Data Center
Cooling is not going to be a major contributor to data center costs in five years. In fact, in five years, the temperature in a typical data center will be at least five degrees warmer than it is today.
Computer servers are being built to handle the heat. Adaptive cooling approaches keep the hot spots in check. Some data centers are simply opening the windows to let fresh air do the cooling. It’s now much easier to keep equipment at precisely the right temperature, allowing the data center manager to get back to focusing on IT—the reason they are there in the first place.
Increasing power density makes cooling placement critical
Many data centers in use today were built very conservatively in the 1980s and 1990s around unreliable equipment. Most mainframes and servers wouldn’t even run if the inlet temperature rose above 84F. So operators cooled the entire data center, even though 99 percent of the equipment inside didn’t need that level of cooling. The point is, data centers can’t operate the way they did in the past. They need to be run according to their actual cooling requirements.
Power density in the data center is increasing due to blade servers, denser and faster processors, the proliferation of stacked servers, and so on. These advances make cooling placement critical. Instead of blasting A/C through the entire data center to satisfy the least heat-tolerant equipment, adaptive cooling isolates high-density areas and cools them only when needed. The idea is to cool specific hot spots as required, rather than the whole room to the same level.
The rising temperature in the data center
As power density increases and servers become more heat-tolerant, it makes sense to raise temperature conditions in the data center. But what is the proper temperature for cooling?
In 2008, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommended a thermal envelope for server environments with a low of 65F and a high of 81F. Very few data centers run at 81F; most run at 68F.
What if we push the limits? ASHRAE’s allowable temperature for the data center is 91F. Why is the allowable limit higher than the recommended one? ASHRAE contends that while it is possible to keep the data center that warm, server mortality rates will be higher at 91F. Many companies question whether today’s servers are still that susceptible to heat: HP and Dell warranty their servers to 95F, and SGI goes up to 104F. Other industry experts agree that data center temperatures are too low.
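The ranges above stack into a rough decision ladder: recommended, allowable, vendor-warranted, out of spec. A minimal sketch, assuming the 2008 ASHRAE figures and the vendor warranty limits cited above (the function name and categories are illustrative, not any standard API):

```python
# Illustrative only: classify a server inlet temperature (degrees F)
# against the 2008 ASHRAE envelope and vendor warranty limits cited above.

ASHRAE_RECOMMENDED = (65, 81)  # 2008 recommended range
ASHRAE_ALLOWABLE_MAX = 91      # ASHRAE allowable upper limit
VENDOR_WARRANTY_MAX = 95       # e.g. HP/Dell warranty limit

def classify_inlet_temp(temp_f):
    """Return a rough operating category for a given inlet temperature."""
    low, high = ASHRAE_RECOMMENDED
    if temp_f < low:
        return "overcooled"        # below recommended: wasting cooling energy
    if temp_f <= high:
        return "recommended"
    if temp_f <= ASHRAE_ALLOWABLE_MAX:
        return "allowable"         # permitted, but possible mortality impact
    if temp_f <= VENDOR_WARRANTY_MAX:
        return "vendor-warranted"  # above ASHRAE allowable, within warranty
    return "out of spec"

for t in (68, 81, 90, 95, 104):
    print(t, classify_inlet_temp(t))
```

A typical data center at 68F lands squarely in "recommended", while James Hamilton's hypothetical 90F still falls within ASHRAE's allowable range.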
“Running at 90F would save the world a lot of money,” James Hamilton said at the Google Data Center Summit in April of this year. “But no one is doing it because they are worried about server mortality.”
It’s also interesting to note that the telecom industry operates at a higher range (up to 104F). In fact, telecom actually imposes thermal requirements, whereas there are none for data center equipment. IT and telecom need to get together on this.
Due to rapidly changing business demands, equipment is deployed much faster than before. As a result, data center managers don’t have time to adjust the cooling process appropriately. Servers are replaced, on average, every three to five years, yet the same cooling strategy stays in place. Here’s another trend: thermal tolerances for data center equipment rise by roughly one degree every year. So in five years, a data center should be five degrees warmer.
Opening the window to new ways of cooling the data center
Airflow is often ad hoc in the data center. Conventional layouts let hot exhaust from one server drift into the inlet of another. Most data center operators have rectified this with hot and cold aisles to keep the air streams isolated. Newer “green” data centers have energy-saving, low-carbon “free cooling” technology. This is simply a fancy way of saying they have open windows.
“Outside air is an absolute must now,” said Bob Seese, chief architect at Advanced Data Centers of San Francisco. “It never made any sense not to open the window.” (Chris Bowman, Sacramento Bee – “‘Free cooling’ with fresh air makes McClellan Park data center greener”)
The Green Grid says that in most cases, a fully developed air management strategy can produce significant and measurable economic benefits and should be the starting point when implementing a data center energy savings program.
Technological advancements such as adaptive cooling and fresh air give IT staff the freedom to focus on IT matters, instead of spending time on facilities and cooling. Pretty soon, servers will be running at 120F and the data center could feasibly be over 100F, giving new meaning to the term sweatshop.
Joe Polastre is co-founder and chief technology officer at Sentilla, which provides demand-side energy management solutions for data centers and commercial facilities.