Data Centre Cooling – It’s Not Rocket Science

There’s no rocket science in cooling data centres, but very efficient ones are still a rarity. So when a record-breaking centre was set up in Surrey, we were keen to speak to the man behind it.

The Petroleum Geo-Services (PGS) data centre in Weybridge, Surrey, appears to be the first in Europe to have an annual efficiency score (PUE) of 1.2.

PUE is the amount of energy put into the system divided by the amount that reaches the servers – and most of today’s data centres have a score greater than two, which means less than half the power input reaches the IT kit. By contrast, 1.2 means only one fifth of a watt goes on overheads for every watt delivered to the servers – and Google drew gasps of surprise when it announced last year that some of its centres had achieved that figure.
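To make the arithmetic concrete, here is a minimal sketch in Python. The load figures are invented purely for illustration, not PGS’s or Google’s actual numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total power drawn by the facility
    divided by the power that actually reaches the IT equipment."""
    return total_facility_kw / it_load_kw

# Hypothetical example: 1,000 kW of IT load.
it_load = 1000.0

# A facility with PUE above 2 draws more than 2,000 kW in total,
# so less than half of the input power reaches the servers.
typical = pue(total_facility_kw=2100.0, it_load_kw=it_load)   # ~2.1

# At PUE 1.2 the overhead is only 0.2 W for every watt at the servers.
pgs_like = pue(total_facility_kw=1200.0, it_load_kw=it_load)  # 1.2
overhead_per_watt = pgs_like - 1.0                            # 0.2
```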

That’s a feather in the cap for Keysource, the company that built the PGS centre, but it’s important not to oversimplify the discussion, said Mike West, Keysource’s managing director. A centre’s PUE depends a lot on the outside temperature, and should be quoted as an annual figure based on a full year’s temperature fluctuations.

“The important factor is the annualised PUE in kWh,” said West. “Air conditioning is where the biggest gains can be made. Losses from UPS inefficiencies and standby power are all linear and predictable, but cooling is the area of biggest opportunity.” Because cooling depends on outside temperatures and other factors, it is the area where extra work yields the biggest gains. “Nuances around the specialist mechanical and electrical plant can have a dramatic effect on the outcome of the facility from an efficiency and performance point of view,” he said.
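West’s point about annualised figures can be sketched the same way: an energy-based PUE is computed from a full year of metered kWh rather than a single spot power reading, so months with heavy chiller use count in proportion to what they consume. The monthly readings below are invented for illustration only:

```python
# Hypothetical monthly meter readings in kWh (invented figures).
facility_kwh = [880_000, 850_000, 860_000, 840_000, 870_000, 900_000,
                930_000, 940_000, 890_000, 860_000, 850_000, 870_000]
it_kwh =       [720_000, 720_000, 730_000, 725_000, 730_000, 735_000,
                740_000, 745_000, 735_000, 730_000, 725_000, 730_000]

# Annualised PUE: total energy drawn over the year divided by the
# energy that reached the IT kit over the same year.
annual_pue = sum(facility_kwh) / sum(it_kwh)
print(f"Annualised PUE: {annual_pue:.2f}")  # ~1.20 with these figures
```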

No secret sauce

Despite all this, the big surprise is that Keysource’s data centre design – for all its fancy name, Ecofris – has no “secret sauce”. Rival data centre builder Imtech ascribed the success of its Common Rail Cooling design to multi-storey architecture, but West says Keysource’s Ecofris involves no major break with earlier technologies – it just pushes them further than they have normally been pushed before.

“The biggest issue is the high density hardware,” he said. Blade servers can pack more processing power into a smaller space, but that raises the amount of heat that needs to be dissipated. The PGS data centre has around 16kW per rack position.

The only way to get a low PUE is to cut down the amount of active cooling that needs to be done and use “free cooling”, instead of turning on mechanical chillers that burn power and push the PUE up. PGS only needs to turn on chillers when the ambient temperature is above 24°C. Most free cooling systems so far have needed a temperature of 5°C or below – so they can only be used for perhaps 1,000 hours a year.
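As a rough illustration of why that 24°C threshold matters, the sketch below counts how many hours in a year fall at or below a given free-cooling threshold. The temperature series is randomly generated as a stand-in; a real assessment would use measured weather data for the site:

```python
import random

# Invented stand-in for a year of hourly ambient temperatures (°C);
# real analysis would use measured weather data for Weybridge.
random.seed(0)
hourly_temps = [random.gauss(11.0, 6.0) for _ in range(8760)]

def free_cooling_hours(temps, threshold_c):
    """Hours in the year when ambient is at or below the threshold,
    i.e. when mechanical chillers can stay switched off."""
    return sum(1 for t in temps if t <= threshold_c)

# A 5°C threshold leaves the chillers running for most of the year;
# a 24°C threshold means they are only needed on the hottest hours.
print(free_cooling_hours(hourly_temps, 5.0))
print(free_cooling_hours(hourly_temps, 24.0))
```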


Peter Judge

Peter Judge has been involved with tech B2B publishing in the UK for many years, working at Ziff-Davis, ZDNet, IDG and Reed. His main interests are networking security, mobility and cloud.
