Data Centre Cooling – It’s Not Rocket Science

All it takes to get an efficient data centre is to apply well-known technologies better, says Mike West – the man behind Europe’s most efficient data centre

Keysource does better by raising the input temperature: “According to our climate models, we would only need chillers on for 87 hours in a typical year,” said West. That’s partly helped by location: “Our climate in this country is ideal for free cooling in data centres.”
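
That 87-hour figure comes out of exactly this kind of count. As a minimal sketch, assuming a year of hourly dry-bulb temperatures and a single free-cooling threshold (both stand-ins here; Keysource’s actual climate model is not published):

```python
import random

# Stand-in for a year of hourly outdoor dry-bulb temperatures (C).
# A real study would use measured weather data for the site.
random.seed(1)
hourly_temps = [random.gauss(10.5, 6.0) for _ in range(8760)]  # 365 days * 24 hours

# Assumed threshold: above this outdoor temperature, the heat exchangers
# alone cannot hold the supply air at its set point, so the chillers run.
FREE_COOLING_LIMIT_C = 19.0

chiller_hours = sum(1 for t in hourly_temps if t > FREE_COOLING_LIMIT_C)
print(f"Chiller hours in a typical year: {chiller_hours} of {len(hourly_temps)}")
```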


Why not fresh air?

In theory, “fresh air cooling” might take the PUE down further, by simply flushing the building with outside air and doing away with heat exchangers altogether. “You still have to have fans to move the air,” he said, and there would also be a heavy expense in continually filtering that air, with high-quality filters that would need changing regularly. On top of that comes the cost of warming and humidifying the incoming air, to maintain the constant conditions that servers like.

Instead, Keysource prefers the standard closed-loop system with heat exchangers. “We were satisfied that our cooling system is as efficient as a fresh air system – and has no contamination to deal with.”

In the PGS data centre, air is supplied at 22C, which is well within the servers’ specifications, he said. That is two degrees higher than the traditional 20C, but not a big change. Every degree the input temperature is raised increases the number of hours in which the centre needs no chilling at all.
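
The per-degree effect is easy to see with the same kind of count: sweep a hypothetical free-cooling threshold upwards and watch the chiller hours fall (the temperatures are synthetic stand-ins, as before, and the one-for-one link between supply set point and threshold is an assumption):

```python
import random

random.seed(1)
hourly_temps = [random.gauss(10.5, 6.0) for _ in range(8760)]  # synthetic year

# Assume the free-cooling limit tracks the supply set point one-for-one,
# so raising supply air from 20C to 22C raises the threshold by two degrees.
for limit_c in range(18, 24):
    hours = sum(1 for t in hourly_temps if t > limit_c)
    print(f"threshold {limit_c}C -> {hours} chiller hours per year")
```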

The main issue is keeping the “hot aisle” between the backs of the servers strictly separate from the “cold aisle” outside the server racks, he said. “The only way cold air can get to the back of the servers is by getting heated up. It is then ducted to the ceiling and back to the heat exchanger.”

Meanwhile, the cold air is supplied under a raised floor and blown in through the wall across the full height of the room, he said – so the cold aisle is effectively the whole server room, apart from the hot aisles between the server backs. While the temperature at the processors can reach 100C or 150C, the hot aisle only gets up to around 32C, he said.
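
Those two temperatures fix how much air each rack needs. A rough sensible-heat balance (standard physics, not a Keysource figure) shows the airflow implied by the ten-degree rise from the 22C supply to a 32C hot aisle:

```python
# Sensible heat balance: P = rho * V * cp * dT, solved for the volume flow V.
RHO_AIR = 1.2    # kg/m^3, air density at roughly room conditions
CP_AIR = 1005.0  # J/(kg*K), specific heat capacity of air

def airflow_m3_per_s(power_w: float, delta_t_k: float) -> float:
    """Volume flow needed to carry power_w away at a delta_t_k temperature rise."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

# 22C supply air returning from the hot aisle at around 32C: a 10 K rise.
for rack_kw in (5, 10, 20):
    v = airflow_m3_per_s(rack_kw * 1000, 10.0)
    print(f"{rack_kw} kW rack: {v:.2f} m^3/s ({v * 3600:.0f} m^3/h)")
```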

No compromises?

All that is basic engineering: “It is not rocket science,” said West. “I’d like to say there’s a lot of intellectual property around in this field, but we’re just applying well understood engineering practice, and getting design people to work together to optimise the data centre for efficiency.”

Data centre operators will never trade resilience or performance for energy efficiency, though: “IT people will never compromise on resilience – and nor should they have to,” said West. “They need to have lots of flexibility, and want to fit high-density servers like blade servers in any position in the room.”

Keysource’s main advantage came from looking outside the IT industry – which has been delivering merely average data centres for a long time – and borrowing the best process engineering from other fields, such as pharmaceuticals.

West thinks the company can go beyond a PUE of 1.2, but past that point there is a lot of discussion about which factors to include in the calculation and which to leave out. “We’re not doing much energy recovery – and that could count as a credit,” he said. “That opens up a big discussion around renewable energy.” If renewable or recovered energy counted against the PUE score, then some data centres might eventually have PUE scores of less than one, he said.
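
A small worked example of the bookkeeping under discussion, using the standard PUE ratio alongside the Green Grid’s reuse-adjusted variant, ERE (the kWh figures are invented for illustration):

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

def ere(total_facility_kwh: float, reused_kwh: float, it_kwh: float) -> float:
    """Energy Reuse Effectiveness: energy reused elsewhere is credited against the total."""
    return (total_facility_kwh - reused_kwh) / it_kwh

it_energy, total_energy = 1000.0, 1200.0  # hypothetical kWh over some period
print(f"PUE: {pue(total_energy, it_energy):.2f}")                             # 1.20
print(f"ERE with 300 kWh reused: {ere(total_energy, 300.0, it_energy):.2f}")  # 0.90
```

PUE itself can never fall below one; it is only by crediting reused or renewable energy against the total, as ERE does, that a sub-one score becomes arithmetically possible.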

For data centre owners, efficient data centres may mean buying chillers that are only used for 100 hours a year – so they might look at other ways to cool the air, such as ice storage, West suggested.

Water cooling?

IBM has suggested that servers may all be water-cooled in future. West is agnostic on this, but can see it might become necessary: “We’ve done some modelling, and we can get to 30kW or 40kW per rack position. By that time we may need another way of removing heat from the servers.”
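
The same heat balance as before suggests why: at those densities the air volumes become awkward to move through a rack, while water’s far higher volumetric heat capacity keeps the flows small (a rough comparison, assuming the same ten-degree rise for both):

```python
RHO_CP_AIR = 1.2 * 1005.0       # J/(m^3*K), volumetric heat capacity of air
RHO_CP_WATER = 1000.0 * 4186.0  # J/(m^3*K), volumetric heat capacity of water

for rack_kw in (20, 30, 40):
    watts = rack_kw * 1000
    air_flow = watts / (RHO_CP_AIR * 10.0)      # m^3/s of air at a 10 K rise
    water_flow = watts / (RHO_CP_WATER * 10.0)  # m^3/s of water at the same rise
    print(f"{rack_kw} kW rack: air {air_flow:.2f} m^3/s vs water {water_flow * 1000:.2f} L/s")
```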

Will that kind of power density ever happen, though? “Some people are living in denial about this: a couple of years ago we had two or three kilowatts per rack position – 15-20kW is a massive increase.”