High performance computing (HPC) has been concerned about energy efficiency for a while. But does it have a different emphasis from the rest of the data centre world?
It’s not so big on the cloud – or at least the public cloud – for one thing. General Motors last week announced a big consolidation – into its own data centres, not into cloud providers. GM is reversing the last 20 years of its IT history, during which it has accumulated 23 data centres, all of them run by partners in outsourcing deals, according to Ars Technica. These are not cloud services, but more old-fashioned data centres, and GM is consolidating onto an internal cloud, to save 70 percent of its IT power, in a scheme which should pay for itself in three years.
The efficiency comes from eliminating wide-area network (WAN) traffic between the data centres, putting it onto on-campus 10Gig fibre, and building a cloud that operates to GM’s specifications, using the best cooling and power infrastructure (fairly standard hot aisle containment and flywheel-based UPS).
GM’s data includes a lot of classic HPC jobs, such as car design files, as well as newer big data applications such as a Hadoop-based data warehouse.
Part of the GM story is a suggestion that co-located servers wouldn’t fully meet the firm’s needs – something I’ve heard before, and which is borne out in this report from the Data Center Efficiency Summit. A service with multiple customers can’t tailor what it offers so everyone gets the data centre they want.
But there are other ways to do all this, and it’s clear that co-location people are doing the best they can to sell customers something like their own custom green data centre.
HPC efficiency is coming under scrutiny because the hardware has become cheaper and can be sold more widely – as long as the energy cost is not a barrier – and there are plenty of business models waiting to exploit that.
But is HPC really so special? German researchers from the Leibniz Supercomputing Centre see a need for special tools to improve HPC efficiency, according to an HPC Wire report. HPC systems have normally been developed without much concern about power – they’re dealing with such important calculations – and the Germans have developed a tool called Power Data Aggregation (PowerDAM), with some support from European government research money, which makes the HPC infrastructure aware of the power it uses.
The tool looks at the power needed to achieve a given solution, working directly from sensors in the HPC system.
But maybe HPC isn’t so special – this tool is not exclusively useful for HPC. The creators say it “is capable of monitoring not only HPC systems but any other infrastructure that can be represented as a hierarchical tree”.
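To make that hierarchical-tree idea concrete, here is a minimal sketch – not PowerDAM’s actual interface; the node layout, sensor callbacks and sampling loop are assumptions for illustration – of how power readings taken at leaf sensors can be rolled up through a tree of racks and servers, and integrated over a run’s duration into a rough energy-to-solution figure:

```python
# Minimal sketch only -- not PowerDAM's real interface. The node names, sensor
# callbacks and sampling loop are assumptions made for illustration; the idea
# it demonstrates is the one described by the LRZ team: represent the
# infrastructure as a hierarchical tree, read power at the leaves, aggregate
# upwards, and integrate power over a job's runtime ("energy to solution").

import time
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    """One element of the infrastructure tree (site, rack, server, sensor)."""
    name: str
    read_watts: Optional[Callable[[], float]] = None  # only leaf sensors report power
    children: List["Node"] = field(default_factory=list)

    def power(self) -> float:
        """Current draw: the sensor reading at a leaf, the sum of the children otherwise."""
        if self.read_watts is not None:
            return self.read_watts()
        return sum(child.power() for child in self.children)


def energy_to_solution(node: Node, samples: int, interval_s: float) -> float:
    """Rough energy estimate in joules: sample power and integrate over the run."""
    total_joules = 0.0
    for _ in range(samples):
        total_joules += node.power() * interval_s
        time.sleep(interval_s)
    return total_joules


# Hypothetical layout: one rack with two servers reporting fixed readings.
rack = Node("rack-01", children=[
    Node("server-01", read_watts=lambda: 310.0),
    Node("server-02", read_watts=lambda: 295.0),
])

print(f"rack power now: {rack.power():.0f} W")
print(f"energy over 3 s: {energy_to_solution(rack, samples=3, interval_s=1.0):.0f} J")
```

Because the aggregation only assumes a tree shape, the same approach works just as well for an ordinary data centre hall as for a supercomputer – which is the researchers’ point.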
This article appeared on Green Data Center News