People have been saying for some time that liquid cooling will play an increasing role in data centres. It removes heat more efficiently, and in a form where it can be re-used. In theory, quietly pumped liquid could take the place of all the large, noisy aircon systems, which make cooling a data centre the equivalent of using a giant hair dryer to keep an electric heater cool.
Despite this, the actual flow of liquid into our sites is more of a trickle than a flood. What’s going on?
Liquid cooling is slow to take off because it’s new. I know supercomputers have been liquid-cooled for many years, but ordinary rack-mounted servers have not, and most people, quite reasonably, won’t adopt something new until they have to.
Close-coupled liquid cooling, which gets the coolant close to the electronics, can remove heat at a rate of up to 200kW per rack. This is great but, as analyst Andrew Donoghue of 451 Research points out: “Power density is down at the 3-7kW per rack level. This means that less expensive room-level cooling, when combined with good airflow management, easily and quite efficiently handles the level of cooling required.”
Projections of the need for liquid cooling have often been based on the idea that power densities are rising rapidly, but Donoghue doesn’t see this: “our view, now widely shared, is that average power density will not rise dramatically in the near future, and many enterprises may actually level off well below 15-20kW per rack.” That’s a level where some liquid cooling (say, in cabinet doors) is a good thing, but not essential for survival.
That plateau is down to improvements in silicon and in the way servers are built. If low-power servers, such as those based on ARM processors, take off, there will be even less need for extreme cooling.
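To make those density figures concrete, here is a minimal Python sketch that maps a rack’s load to a plausible cooling approach. The thresholds simply echo the numbers quoted above (roughly 7kW for room-level air, 20kW with supplemental liquid such as cooled cabinet doors, 200kW for close-coupled liquid); the function and its categories are illustrative, not a vendor’s sizing tool.

```python
# Illustrative only: chooses a cooling approach for a rack using the
# density figures quoted in this article. These thresholds are not
# vendor specifications.

def cooling_for_rack(power_kw: float) -> str:
    """Return a plausible cooling approach for a rack at the given load."""
    if power_kw <= 7:
        return "room-level air cooling with good airflow management"
    if power_kw <= 20:
        return "supplemental liquid, e.g. cooled cabinet doors"
    if power_kw <= 200:
        return "close-coupled liquid cooling"
    return "beyond the densities discussed here"

for load in (5, 12, 45, 150):
    print(f"{load:>4} kW rack -> {cooling_for_rack(load)}")
```

On these numbers, the typical 3-7kW enterprise rack never reaches the first liquid tier, which is exactly why the flood has so far been a trickle.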
Despite this, 451 Research’s report, Liquid Cooled IT, sees a lot of potential for the technology, and a bunch of startups ready to satisfy future needs – including Britain’s Iceotope, Denmark’s Asetek, Canada’s CoolIT and California’s Chilldyne – as well as initiatives from big players such as IBM, HP and Fujitsu.
These firms won’t see a tsunami of liquid cooling, but the technology will seep in as some data centres adopt mixed cooling, with some racks on liquid while the rest stay on air. Donoghue says: “It may make economic sense to build, not to maximum or likely peak density or power demand, but to average density. This lowers overall construction cost significantly, but does mean that operators may need to build some specialist high-density areas. These are particularly well suited to using liquid cooling.”
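A back-of-envelope calculation shows why building to average density can pay off. All the cost figures below are hypothetical placeholders invented for illustration; only the per-rack power levels echo the ranges quoted earlier, and none of the numbers come from 451 Research.

```python
# Sketch of Donoghue's point: cooling a whole hall to peak density vs.
# cooling to average density plus a small liquid-cooled high-density zone.
# All cost figures are hypothetical placeholders, purely illustrative.

RACKS = 200
AVG_KW, PEAK_KW = 5, 20          # per-rack loads, echoing the article's ranges
HOT_RACKS = 20                   # racks that genuinely need high density
AIR_COST_PER_KW = 2000           # assumed build cost of air cooling, $/kW
LIQUID_COST_PER_KW = 2800        # assumed build cost of liquid cooling, $/kW

# Option A: provision air cooling for the entire hall at peak density.
build_to_peak = RACKS * PEAK_KW * AIR_COST_PER_KW

# Option B: air-cool most racks at average density, liquid-cool a hot zone.
build_to_avg = ((RACKS - HOT_RACKS) * AVG_KW * AIR_COST_PER_KW
                + HOT_RACKS * PEAK_KW * LIQUID_COST_PER_KW)

print(f"Build to peak everywhere: ${build_to_peak:,}")   # $8,000,000
print(f"Average + liquid zone:    ${build_to_avg:,}")    # $2,920,000
```

Even with liquid cooling priced at a premium per kilowatt, confining it to the racks that need it comes out far cheaper than over-provisioning air cooling everywhere.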
Data centres are increasingly being built without chillers, using outside air for cooling. This may be fine for regular servers, but what happens when there’s a need to add something with more oomph?
High performance computing (HPC) is a big user of liquid cooling, and HPC is spreading beyond the scientific computing niche, so in future, your data centre might need an HPC enclave.
When that happens, it’s time to dabble your toes in liquid.
A version of this article appeared on Green Data Center News.