Liquid Cooling Servers Can Be Easy

Cooling is a major energy drain in server rooms, and one approach to reducing the energy wasted is to use liquid cooling. However, liquid cooling has been complex to install and administer, as it means pumping fluid across the hot components of a system.

Green Revolution Cooling (GRC), launched in 2010, promises to make that much simpler by laying server racks on their backs in a bath of non-conductive cooling oil. The waste heat from liquid-cooled systems can be easier to reuse, as liquids hold heat in a concentrated form, so GRC also proposes that its system can heat surrounding offices.

We asked some questions of the company’s founder and CEO Christiaan Best.

Just drowning servers in a non-conductive fluid seems pretty trivial at first glance. But is it really as easy as it looks? Is every server suitable for submersion cooling?
Yes – Green Revolution Cooling produces a server-agnostic submersion cooling system, which means that virtually any OEM server is suitable for the system.

In fact, servers are available submersion-ready from Supermicro, and GRC is working with other OEMs to provide submersion-ready platforms.

Otherwise, there are a few quick modifications that make standard servers submersible:

  • Remove all server fans: reduces server power by 10-20 percent (a rough savings sketch follows this list)
  • Replace soluble thermal interface material (i.e. thermal paste) with an insoluble indium alternative
  • Encapsulate hard drives (performed by GRC), or use solid state drives
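As a rough illustration of the first point, the Python sketch below estimates the power recovered per rack when the internal fans are removed, using the 10-20 percent range quoted above. The server count and per-server wattage are hypothetical assumptions, not GRC figures.

# Rough estimate of the power saved by removing server fans before submersion,
# based on the 10-20 percent range quoted above. All figures are illustrative only.

def fan_savings_watts(server_power_w: float, fan_fraction: float = 0.15) -> float:
    """Return the estimated wattage recovered by removing internal fans.

    fan_fraction is a hypothetical value within the 10-20 percent range cited.
    """
    return server_power_w * fan_fraction

if __name__ == "__main__":
    rack_servers = 40                    # hypothetical number of 1U servers per rack
    per_server_w = 350.0                 # hypothetical average draw per server (W)
    saved = sum(fan_savings_watts(per_server_w) for _ in range(rack_servers))
    print(f"Estimated fan power saved per rack: {saved / 1000:.1f} kW")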

Are servers that have been exposed to this kind of cooling reusable in a traditional rack?
Yes. Once removed from the coolant, servers need only drain for a few minutes before most of the coolant is removed. At this point, the server may be used in a traditional rack provided the fans are reinstalled, although residual coolant will remain.

To fully remove all traces of coolant, servers must be treated using ultrasonic cleaners by GRC technicians.

IBM is investing massively in a technology called hot water cooling. A new supercomputer in Germany is the first commercial implementation, and the technology was used three years ago in a Zurich supercomputer. Why is IBM not just submerging these servers in a dielectric fluid?
Green Revolution Cooling developed fluid-submersion cooling to provide the highest-performance and lowest cost-per-watt solution in existence.

While hot water cooling can certainly be impressive, it generally adds tremendous infrastructure and cost to the data centre overall. The CarnotJet system is more efficient and a small fraction of the price of more complicated systems.

Submersion cooling also enables higher output water temperatures (up to 50°C) than hot water cooling does, enabling heat recapture and reuse.
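As a back-of-the-envelope illustration of heat recapture, the sketch below applies the standard relation Q = ṁ · c_p · ΔT to a warm-water loop leaving the heat exchanger at the 50°C figure mentioned above. The flow rate and return temperature are assumptions for illustration, not GRC specifications.

# Back-of-the-envelope estimate of recoverable heat from a warm-water loop,
# using Q = m_dot * c_p * delta_T. Operating figures below are hypothetical.

WATER_SPECIFIC_HEAT_J_PER_KG_K = 4186.0  # specific heat capacity of water

def recoverable_heat_kw(flow_kg_per_s: float, supply_c: float, return_c: float) -> float:
    """Thermal power carried by the water loop, in kilowatts."""
    delta_t = supply_c - return_c
    return flow_kg_per_s * WATER_SPECIFIC_HEAT_J_PER_KG_K * delta_t / 1000.0

if __name__ == "__main__":
    # Hypothetical loop: 1.2 kg/s of water supplied at 50°C (the upper figure
    # mentioned above) and returning from the heat-reuse load at 40°C.
    print(f"Recoverable heat: {recoverable_heat_kw(1.2, 50.0, 40.0):.1f} kW")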

Additionally, submersion cooling technology does not force the customer to use a specific server – any OEM server is supported in the system.

How many data centres are using Green Revolution Cooling at the moment? Are there any specific use cases?
Green Revolution Cooling’s technology is currently installed at twelve data centres worldwide with more than a megawatt of active capacity. Case studies exist from installations at TACC and Midas Networks, with more coming in the near future.

When first implementing this technology, how much higher are the costs for submersion cooling compared to traditional data centres? Does it require specific buildings or special data centres?
Actually the costs are lower – much lower – upfront and long-term. From our perspective, it is the traditional data centre that requires specialised building infrastructure, not ours. For example, submersion cooling does not require CRACs, chillers, raised floors, hot aisles, cold aisles, or conditioning of any kind! Submersion cooling requires only access to water and power and protection from the elements.

Submersion cooling integrates with all existing data centre infrastructure, but it is also capable of standing alone, providing the lowest-cost solution available today.

The CarnotJet system is priced per watt of capacity, and data centres featuring the technology frequently cost less than half what an equivalent, best-practices data centre would cost.
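The sketch below shows how such a per-watt capital comparison might be framed. The dollar-per-watt figures are hypothetical placeholders, not GRC or industry pricing; only the "less than half" relationship comes from the interview.

# Illustrative per-watt capital cost comparison. The cost-per-watt values are
# hypothetical placeholders chosen only to demonstrate the arithmetic.

def build_cost(capacity_w: float, cost_per_watt: float) -> float:
    """Total build cost for a given IT capacity at a given cost per watt."""
    return capacity_w * cost_per_watt

if __name__ == "__main__":
    capacity_w = 1_000_000            # 1 MW of IT capacity
    traditional_per_w = 10.0          # hypothetical best-practices build cost ($/W)
    submersion_per_w = 4.5            # hypothetical submersion-cooled build cost ($/W)
    trad = build_cost(capacity_w, traditional_per_w)
    subm = build_cost(capacity_w, submersion_per_w)
    print(f"Traditional build: ${trad:,.0f}")
    print(f"Submersion build:  ${subm:,.0f} ({subm / trad:.0%} of traditional)")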

What is the everyday operation of a data centre like with fluid submersion?
There are certainly differences between operation of a submersion cooling data centre and a traditional air-cooled data centre.

Drawing from several years of installation history, we find that the learning curve for the technology is very quick and every possible issue that arises already has an established solution.

The first thing that is noticeable in a submersion cooling data centre is the lack of noise. Unlike traditional data centres, one may whisper in a GRC data centre and be heard.

Otherwise, there are a few operational tips and tricks that need to be adopted:

  • Server maintenance: Servers are removed and placed on rack rails above the tank. Components may be readily hot-swapped and serviced. It requires less than a minute to swap a DIMM module, roughly the same time as with an air-cooled server
  • Coolant drips: Secondary containment is installed beneath submersion racks, providing a space for accidental coolant drips.

What do the OEMs (Dell, Intel, etc.) think about this technology?
Supermicro is the first server OEM to provide submersion-ready servers to our customers. Testing is underway at two of the three largest server manufacturers and two of the three largest chip-makers. These OEMs are involved with the technology and believe it will solve many of their customers’ problems.

Can traditional UPS systems be used? What happens during a system failure?
Yes, the system will function with traditional UPS systems. The technology is designed to integrate with all traditional data centre infrastructure, including battery backup, generators, etc. Air-handling equipment of any kind, however, is not required.

How far can you scale this system and what are the limitations?
The basic building block of the technology is four 42U or 55U racks, each with a capacity of 10-100kW depending on customer need. There are no limitations on the size of the system as each “Quad” is independently controlled and managed. For example, if a customer requires 100 racks, the customer can simply purchase 25 Quad systems.
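As a quick sizing illustration of the Quad arithmetic above, the sketch below computes how many Quads a given rack count requires and the corresponding capacity range, using the four-racks-per-Quad building block and the 10-100kW per-rack range described in the answer. The example rack count is taken from the interview; everything else is assumption.

# Sizing sketch based on the "Quad" building block: four racks per Quad, each rack
# rated 10-100 kW depending on configuration. Example inputs are hypothetical.
import math

RACKS_PER_QUAD = 4

def quads_needed(total_racks: int) -> int:
    """Number of independently controlled Quad systems for a given rack count."""
    return math.ceil(total_racks / RACKS_PER_QUAD)

def capacity_range_kw(total_racks: int, min_kw: float = 10.0, max_kw: float = 100.0) -> tuple:
    """Lower and upper bound on deployable capacity for the given rack count."""
    return total_racks * min_kw, total_racks * max_kw

if __name__ == "__main__":
    racks = 100  # example from the interview: 100 racks -> 25 Quads
    low, high = capacity_range_kw(racks)
    print(f"{racks} racks -> {quads_needed(racks)} Quads, "
          f"{low / 1000:.1f}-{high / 1000:.1f} MW of capacity")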

Interview by Martin Schindler, silicon.de
