High performance computing (HPC) has been concerned about energy efficiency for a while. But does it have a different emphasis from the rest of the data centre world?
It’s not so big on the cloud – or at least the public cloud – for one thing. General Motors last week announced a big consolidation – into its own data centres, not into cloud providers. GM is reversing the last 20 years of its IT history, during which it has accumulated 23 data centres, all of them run by partners in outsourcing deals, according to Ars Technica. These are not cloud services, but more old-fashioned data centres, and GM is consolidating onto an internal cloud, to save 70 percent of its IT power, in a scheme which should pay for itself in three years.
The efficiency comes from eliminating wide-area network (WAN) traffic between the data centres, putting it onto on-campus 10Gig fibre, and building a cloud that operates to GM’s specifications, using the best cooling and power infrastructure (fairly standard hot aisle containment and flywheel-based UPS).
GM’s workload includes plenty of classic HPC data, such as car design files, as well as newer big data applications such as a Hadoop-based data warehouse.
Part of the GM story is a suggestion that co-located servers wouldn’t fully meet the firm’s needs – something I’ve heard before, and which is borne out in this report from the Data Center Efficiency Summit. A service with multiple customers can’t tailor what it offers so that everyone gets the data centre they want.
But there are other ways to do all this, and it’s clear that co-location people are doing the best they can to sell customers something like their own custom green data centre.
HPC efficiency is coming under scrutiny because the hardware has become cheaper and can be sold more widely – as long as the energy cost is not a barrier – and there are plenty of business models for that.
But is HPC really so special? German researchers from the Leibniz Supercomputing Centre see a need for special tools to improve HPC efficiency, according to an HPC Wire report. HPC systems have normally been developed without much concern about power – they’re dealing with such important calculations – and the Germans have developed a tool called Power Data Aggregation (PowerDAM), with some support from European government research money, that makes the HPC infrastructure aware of the power it uses.
The tool looks at the power needed to achieve a given solution, working directly from sensors in the HPC system.
But maybe HPC isn’t so special – this tool is not exclusively useful for HPC. The creators say it “is capable of monitoring not only HPC systems but any other infrastructure that can be represented as a hierarchical tree”.
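To make the idea concrete, here is a minimal sketch (not PowerDAM’s actual API – all names and structures are hypothetical) of how power readings attached to a hierarchical tree of infrastructure components can be rolled up into an energy-to-solution figure for a job:

```python
# Hypothetical sketch of a PowerDAM-style approach: model the infrastructure
# as a hierarchical tree (site -> rack -> node), attach power samples from
# sensors to each component, and integrate over a job's runtime to get
# energy-to-solution. Names are illustrative only.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Component:
    """One node in the infrastructure tree, e.g. a site, rack or compute node."""
    name: str
    # (timestamp_seconds, power_watts) samples from this component's own sensor
    samples: List[Tuple[float, float]] = field(default_factory=list)
    children: List["Component"] = field(default_factory=list)

    def energy_joules(self, start: float, end: float) -> float:
        """Energy used between start and end, summed over this whole subtree."""
        own = _integrate(self.samples, start, end)
        return own + sum(c.energy_joules(start, end) for c in self.children)


def _integrate(samples: List[Tuple[float, float]], start: float, end: float) -> float:
    """Trapezoidal integration of power samples (W) over time (s) -> joules."""
    pts = [(t, p) for t, p in sorted(samples) if start <= t <= end]
    return sum(
        (t2 - t1) * (p1 + p2) / 2.0
        for (t1, p1), (t2, p2) in zip(pts, pts[1:])
    )


if __name__ == "__main__":
    node = Component("node-001", samples=[(0, 300.0), (60, 420.0), (120, 310.0)])
    rack = Component("rack-A", samples=[(0, 50.0), (120, 55.0)], children=[node])
    # Energy-to-solution for a job that ran from t=0 to t=120 seconds on this subtree
    print(f"{rack.energy_joules(0, 120) / 3600:.2f} Wh")
```

The point of the tree shape is that the same roll-up works whether the root is a single cluster, a machine room or an entire site – which is exactly why the approach isn’t limited to HPC.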
This article appeared on Green Data Center News