Containers Could Simplify Data Centre Cooling

Next week’s Data Centre World in London will have plenty of people offering very complex ideas for saving energy and costs. But the best advice you will hear there on data centre power consumption is actually quite simple.

Stop over-cooling your servers.

Data centres used to throw a watt away in cooling systems for every watt that reached the servers – a Power Usage Effectiveness (PUE) of 2.0. Quite simple ideas have since made great strides in efficiency, driving PUE down. And when we need to go beyond that, it turns out that we have the right technology to do it.
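To put a number on that: PUE is just total facility power divided by the power that reaches the IT equipment. A quick sketch, with made-up figures, shows how a watt-for-watt cooling overhead works out to a PUE of 2.0:

```python
# Rough illustration of Power Usage Effectiveness (PUE), with made-up figures.
# PUE = total facility power / IT equipment power.

it_power_kw = 500.0          # power reaching the servers (hypothetical)
cooling_power_kw = 500.0     # a watt of cooling for every watt of IT load
other_overhead_kw = 0.0      # lighting, UPS losses etc., ignored here

total_power_kw = it_power_kw + cooling_power_kw + other_overhead_kw
pue = total_power_kw / it_power_kw

print(f"PUE = {pue:.2f}")    # 2.00 - half the energy never reaches a server
```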

Just cool what needs to be cooled

At the moment, there are big savings to be made, simply because companies are applying more cooling than they need, driving their servers down to the lowest temperature they can reach, instead of taking the more efficient route and applying only the cooling the equipment actually requires.

This is gradually changing as companies are using “economisers” – systems which simply use outside air and evaporation – more widely. In the UK, there is no real reason why all data centres should not use economisers all the time. Google and others have criticised the over-prescriptive use of economisers, but in most cases, they are the best way to go.

But there is more to be done. ASHRAE, the American Society of Heating, Refrigerating and Air-Conditioning Engineers, created guidance on cooling data centres some while ago, and performed a very valuable service.

Before 2004, people operating data centres had to check the thermal (and humidity) tolerance of all their IT equipment – potentially dozens of specifications – and then run the data centre to meet the most sensitive requirements of any kit they had.

ASHRAE got the major IT vendors together in its TC 9.9 technical committee, where they agreed common specifications for the thermal and humidity ranges their equipment could accept. Suddenly, data centre owners had one set of operating temperatures to work within.
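To see why a common specification matters, here is a minimal sketch of what operators effectively had to do before: find the most restrictive inlet-temperature window across everything on the floor and run the whole room to it. The equipment figures below are invented for illustration, not real vendor data:

```python
# Hypothetical inlet-temperature tolerances for a mixed estate (°C).
# These numbers are invented for illustration, not vendor specifications.
equipment_specs = {
    "web servers":   (10, 35),
    "storage array": (15, 32),
    "tape library":  (16, 25),   # one sensitive device drags the whole room down
}

# The room must satisfy every device, so take the intersection of the ranges.
floor_min = max(low for low, high in equipment_specs.values())
floor_max = min(high for low, high in equipment_specs.values())

print(f"Whole floor must run between {floor_min}°C and {floor_max}°C")
```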

Switch off your chillers

The advantage of running data centres at a higher temperature is that it enables the chillers to be turned off, saving energy. Indeed, if you work out that you don’t need chillers at all, you can save the cost of buying and maintaining them.

However, there are complications. Firstly, if you turn off your chillers, you may find the fans on the servers in your racks are working overtime. If you need to reduce the temperature of your servers, a few degrees of cooling from server fans can cost a lot more energy than the same few degrees from an air conditioning unit.
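The reason the numbers work out so badly for server fans is the fan affinity law: the power a fan draws rises roughly with the cube of its speed. A rough sketch, with illustrative figures rather than measurements from any particular server:

```python
# Fan affinity law: airflow scales with fan speed, power with roughly speed cubed.
# Figures below are illustrative, not measurements from any particular server.

base_power_w = 10.0           # per-fan power at normal speed (hypothetical)

for speed_factor in (1.0, 1.2, 1.5, 2.0):
    power = base_power_w * speed_factor ** 3
    print(f"fans at {speed_factor:.0%} speed -> about {power:.0f} W each")

# Doubling fan speed costs roughly eight times the fan power, which is why
# a few degrees of relief from tiny server fans can be dearer than the same
# few degrees from the facility's cooling plant.
```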

Secondly, your servers may need to be designed differently. The processor chips can run up to about 60°C, and have a heatsink attached that gives off that heat to the air. If you are using hotter air, you have to pump more of it past the heatsink, and the heatsink itself has to be bigger.
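The heatsink trade-off follows from the standard thermal-resistance relation: the heatsink and the airflow over it must present a thermal resistance no greater than the gap between the chip’s maximum temperature and the air temperature, divided by the power being dissipated. A sketch with illustrative numbers (the chip power is assumed, only the 60°C limit comes from the article):

```python
# Required heatsink thermal resistance (°C per watt) for a chip dissipating
# a fixed power, as the inlet air warms up. Numbers are illustrative only.

chip_max_c = 60.0       # maximum chip temperature, as quoted in the article
chip_power_w = 100.0    # hypothetical processor power dissipation

for air_temp_c in (20, 27, 35, 45):
    required_resistance = (chip_max_c - air_temp_c) / chip_power_w
    print(f"{air_temp_c}°C air -> heatsink must beat {required_resistance:.2f} °C/W")

# As the air gets hotter, the allowed thermal resistance shrinks,
# so you need a bigger heatsink, more airflow past it, or both.
```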

What if you could run your servers with no external cooling, but they all needed heatsinks so big they virtually doubled the size of each machine? You’d only fit half as many in your racks, and each one would have much more energy embedded in it – from materials, manufacturing and transport.

We are reaching a stage in data centre cooling where the trade-offs become quite severe. “We are starting to reach the limits of what you can do,” says Liam Newcombe, CTO of Romonet and a member of the BCS Data Centre Specialist Group. “Chiller energy is no longer the dominant energy in the system.”

Cooling where it is needed

Newcombe thinks that in future, data centres will be divided up according to what is actually required. A new set of ASHRAE guidelines is refining the temperature ranges according to different types of equipment, and data centre owners could make use of that, says Newcombe.

“Most operators are running the data centre floor too cold, because they have one or two devices that can’t handle heat,” says Newcombe. Already, some data centres are separating that out, running a main floor with no chillers, and having a separate room for more sensitive equipment – whether that is tape libraries or supercomputers.
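A sketch of the sort of split Newcombe describes, again with invented device figures: pick a chiller-free target temperature for the main floor, and move anything that cannot take it into a separately cooled room.

```python
# Hypothetical maximum inlet temperatures (°C) for devices on the floor.
# Invented figures, for illustration only.
max_inlet_c = {
    "rack of web servers": 35,
    "storage array":       32,
    "tape library":        26,
    "legacy appliance":    25,
}

main_floor_target_c = 30   # what a chiller-free, economiser-cooled floor might hold

main_floor = [d for d, t in max_inlet_c.items() if t >= main_floor_target_c]
chilled_room = [d for d, t in max_inlet_c.items() if t < main_floor_target_c]

print("Chiller-free main floor:", main_floor)
print("Separate chilled room:  ", chilled_room)
```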

This sounds like a step back to more complexity, but actually fits well with today’s move to modular systems. If data centres are built inside a big shed from a collection of containers, then why not separate out the equipment, so that only some containers need chillers?

Thanks to containers, we could tailor our cooling, getting it closer to where it is actually needed, while still keeping it simple.

Peter Judge

Peter Judge has been involved with tech B2B publishing in the UK for many years, working at Ziff-Davis, ZDNet, IDG and Reed. His main interests are networking security, mobility and cloud.
