There’s no rocket science in cooling data centres, but very efficient ones are still a rarity. So when a record-breaking centre was set up in Surrey, we were keen to speak to the man behind it.
The Petroleum Geo-Services (PGS) data centre in Weybridge, Surrey, appears to be the first in Europe to have an annual efficiency score (PUE) of 1.2.
PUE is the total amount of energy put into the facility divided by the amount that reaches the servers – and most of today’s data centres have a score greater than two, which means less than half the power input reaches the IT kit. By contrast, 1.2 means only one fifth of a watt goes on overheads for every watt at the servers – and Google drew gasps of surprise when it announced last year that some of its centres had achieved that figure.
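The arithmetic is simple enough to sketch in a few lines of Python. The energy figures below are illustrative, not PGS’s metered numbers – only the two PUE scores come from the article:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

# Illustrative figures: 1000 kWh delivered to the servers in each case.
for total in (2000.0, 1200.0):
    it = 1000.0
    score = pue(total, it)
    # A score of 2.0 means one watt of overhead per watt of IT load;
    # 1.2 means just a fifth of a watt of overhead per IT watt.
    print(f"PUE {score:.1f}: {score - 1:.1f} W of overhead per W at the servers")
```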
That’s a feather in the cap for Keysource, the company that built the PGS centre, but it’s important not to oversimplify the discussion, said Mike West, Keysource’s managing director. A centre’s PUE depends a lot on the outside temperature, and should be quoted as an annual figure based on a full year of temperature fluctuations.
“The important factor is the annualised PUE in kWh,” said West. “Air conditioning is where the biggest gains can be made. Losses from UPS inefficiencies and standby power are all linear and predictable, but cooling is the area of biggest opportunity.” Because cooling depends on outside temperatures and other factors, it is the one area where extra work can yield the biggest gains. “Nuances around the specialist mechanical and electrical plant can have a dramatic effect on the outcome of the facility from an efficiency and performance point of view,” he said.
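West’s point about annualising is that an energy-weighted figure counts a chiller-heavy August for more than a mild April snapshot would suggest. A minimal sketch of that calculation, using twelve invented monthly readings rather than any real centre’s data:

```python
# Invented monthly (total facility kWh, IT kWh) pairs: winter months lean
# on free cooling, summer months need more mechanical chilling.
monthly = [
    (110_000, 100_000), (109_000, 100_000), (112_000, 100_000),
    (115_000, 100_000), (120_000, 100_000), (126_000, 100_000),
    (130_000, 100_000), (128_000, 100_000), (122_000, 100_000),
    (117_000, 100_000), (112_000, 100_000), (110_000, 100_000),
]

total_kwh = sum(t for t, _ in monthly)
it_kwh = sum(i for _, i in monthly)

# Summing energy over the year weights each month by consumption,
# rather than averaging twelve instantaneous PUE snapshots.
print(f"Annualised PUE: {total_kwh / it_kwh:.2f}")
```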
No secret sauce
The big surprise is that, despite its fancy name – Ecofris – Keysource’s data centre design has no “secret sauce”. Rival data centre builder Imtech ascribed the success of its Common Rail Cooling design to multi-storey architecture, but West says Keysource’s Ecofris involves no major break with earlier technologies – it just pushes them further than they have normally been pushed before.
“The biggest issue is the high density hardware,” he said. Blade centres can pack more processing power into a smaller space, but that raises the amount of heat that needs to be dissipated. The PGS data centre has around 16kW per rack position.
The only way to get a low PUE is to cut down the amount of active cooling that needs to be done and use “free cooling” instead of turning on mechanical chillers that burn power and push the PUE up. PGS only needs to turn on its chillers when the ambient temperature is above 24°C. Most free cooling systems so far have needed an outside temperature below about five degrees, so they can only be used for perhaps 1,000 hours a year.
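To see why the cut-in threshold matters so much, here is a rough sketch that counts the hours per year the chillers could stay off at a given threshold. The sinusoidal temperature model is a crude, invented stand-in for real hourly weather data, not a UK climate model:

```python
import math

def ambient_temp_c(hour_of_year: int) -> float:
    """Toy hourly temperature: a seasonal swing plus a daily swing."""
    day = hour_of_year / 24
    seasonal = 10 + 8 * math.sin(2 * math.pi * (day - 105) / 365)
    diurnal = 4 * math.sin(2 * math.pi * (hour_of_year % 24 - 9) / 24)
    return seasonal + diurnal

def free_cooling_hours(threshold_c: float) -> int:
    """Hours per year the chillers can stay off (ambient below threshold)."""
    return sum(1 for h in range(8760) if ambient_temp_c(h) < threshold_c)

# A 5C threshold confines free cooling to the depths of winter;
# a 24C threshold covers nearly the whole year in a temperate climate.
for threshold in (5.0, 24.0):
    print(f"Threshold {threshold:>4.1f}C: ~{free_cooling_hours(threshold)} hours/year")
```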