Sun Cools Down the Data Centre
The IT industry has a long record of wasting energy in data centres, says server maker Sun Microsystems. But the company wants to make amends: it has attacked the source of the problem and rewritten the rules for cooling and power.
“Most data centres have a PUE [power usage effectiveness] of around 2 – so for every watt that goes to the servers, another watt has to go to the chillers,” says Dean Nelson, who leads data centre design at Sun. In a short time at Sun, Nelson has got PUEs down to 1.28, much closer to the elusive ideal figure of 1.0, at which every watt would go to the servers.
With a PUE of 1.28, a data centre with 798kW of IT load needs about 1MW of power in total, instead of the more usual 1.5MW, he says: “You can shed half a megawatt of power, to run the exact same equipment.” In the US, that works out at savings of $400,000 a year (just over 287,200 GBP at today’s exchange rate).
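The arithmetic is easy to check. The short Python sketch below works back from Nelson’s figures; the electricity price of $0.08 per kWh is an assumption chosen as a plausible US rate, not a number from the article, and the results are rounded.

```python
HOURS_PER_YEAR = 8760

def facility_power_kw(it_load_kw, pue):
    """Total facility draw is the IT load multiplied by the PUE."""
    return it_load_kw * pue

it_load_kw = 798
typical_kw = facility_power_kw(it_load_kw, 2.0)   # roughly 1.6 MW at a "usual" PUE of 2
sun_kw = facility_power_kw(it_load_kw, 1.28)      # roughly 1 MW at Sun's PUE of 1.28
saved_kw = typical_kw - sun_kw                    # a bit over half a megawatt

price_per_kwh = 0.08  # assumed US electricity price in $/kWh, not from the article
annual_saving = saved_kw * HOURS_PER_YEAR * price_per_kwh
print(f"Avoided load: {saved_kw:.0f} kW, roughly ${annual_saving:,.0f} a year")
```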
The secret is in getting the cooling where it is needed. Most data centres are just a big room, cooled by air conditioning. If one server heats up, the whole room has to be cooled to compensate. Nelson instead puts server cabinets in “pods”, so cooling units can go where they are needed, and uses racks with integrated cooling, so he doesn’t have to cool the whole room. “It’s a room within a room,” he explains. “We have closely-coupled cooling, right next to the heat source – we grow the cooling where we need it.”
Pods take the place of underfloor wiring – something the room designers only gradually caught up with: “We built a pod-based data centre in Blackwater, Camberley, but I didn’t have complete control, and it ended up with raised floors. That cost an extra $50 [35 GBP] per square foot. In Colorado [a 76,000 square foot centre], we saved $4 million [2.87 million GBP] by leaving out the raised floor.”
A pod might have 20 racks and start with a demand of 100kW, but that could triple as servers are added, and the cooling has to grow to cope. “The secret sauce is flexibility,” he says – but it’s not really a secret. Flexible cooling needs flexible power, and Sun uses a top-of-the-range electrical power “bus” from Universal Electric, called Starline.
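To see why that flexibility matters, here is a hypothetical pod-sizing sketch, again in Python. It is purely illustrative, not Sun’s method: the 30kW capacity per closely-coupled cooling unit is an assumed figure, as is the idea that units are simply added as the load grows.

```python
import math

RACKS_PER_POD = 20
COOLER_CAPACITY_KW = 30  # assumed capacity of one closely-coupled cooling unit

def coolers_needed(pod_load_kw):
    """Closely-coupled cooling units needed for a given pod load."""
    return math.ceil(pod_load_kw / COOLER_CAPACITY_KW)

# A pod's demand might triple as servers are added, and the cooling grows with it.
for load_kw in (100, 200, 300):
    per_rack_kw = load_kw / RACKS_PER_POD
    print(f"{load_kw} kW pod: {per_rack_kw:.0f} kW per rack, {coolers_needed(load_kw)} cooling units")
```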
Starline works like domestic track-lighting, so Nelson can put power outlets anywhere. “A new power outlet is just a can that snaps in,” says Nelson. “It uses less copper, and we can put it where we need it. And the cans can be re-used anywhere.”
That doesn’t just save power, he points out. It lets a company quickly reconfigure a data centre to match a corporate restructure. Sun took the concept further with its Modular Datacenter product, built inside a shipping container – an approach later followed by other vendors. Deploying it just means delivering the container and hooking up the power and network.
What will come after pods? It could be water-cooling, he says: “There will be a place where air will no longer solve the problem.”
Whatever does come next will emerge from Data Center Pulse, a user body Nelson co-founded, which works on future challenges through a series of “Chill-Offs” at international conferences. “They’re pushing the envelope on fanless cooling and passive cooling,” he says.
The next Chill-Off, on 17th-19th February in Santa Clara, will be exciting, he says. But a few steps back from the leading edge of cooling, the EU code of practice could prove just as important to rank-and-file IT managers, in raising awareness of what power efficiency can do to save money and make IT sustainable.