Intel Workstation Clusters For Low-Cost HPC Power


Intel has been running an internal pilot programme to show that workstation clusters can give enterprises access to high-performance computing (HPC) power they would not otherwise have.

The idea is to give what is becoming known as the “missing middle”, the businesses and researchers who lack the money for, or access to, the HPC environments their workloads demand, the capabilities they need.

HPC Power Without A Data Centre

The pilot is being run by silicon design teams inside Intel, said Shesha Krishnapura, senior principal engineer for Intel’s IT Engineering Group. Intel runs more than 100,000 servers, 60,000 of which are multicore systems used for silicon processor design by about 20,000 engineers.

The design teams are located at multiple sites around the world, and not every site has access to a local data centre, he said. The problems that arise are ones of latency and space. As the workloads grow, more pressure is put on the data centres. In addition, for the kind of work the designers do, latency of 10 milliseconds or more can have a significant negative impact.

For the past six months, Intel has been working with the design team on a concept officials are calling CCC, or Cubicle Clustered Computing.

Traditionally, engineers use high-end laptops to access back-end blade servers. In the CCC pilot, workstations are configured to exactly the same specifications as the servers. Those workstations are placed in each cubicle and secured so there can be no physical access.

The engineers then tap the compute power of the locally housed workstations, rather than that of servers in data centres further away.

“The network latency [issue] is gone, because [the workstation] is local,” Krishnapura said.

Data storage is still provided by the data centres, for security reasons, but the compute power is in the workstations. The systems support the IPMI (Intelligent Platform Management Interface) specification and can be managed remotely.
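As a rough sketch of what such remote management can look like (not something Intel has described for the pilot), the snippet below polls the chassis power state of a few hypothetical cubicle workstations using the standard ipmitool utility; the host names and credentials are placeholders.

    import subprocess

    # Hypothetical cubicle workstations reachable via IPMI over the LAN.
    WORKSTATIONS = ["ws-cubicle-01.example.com", "ws-cubicle-02.example.com"]

    def chassis_power_status(host: str, user: str, password: str) -> str:
        """Ask one workstation's management controller for its chassis power state via ipmitool."""
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host,
             "-U", user, "-P", password,
             "chassis", "power", "status"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        for ws in WORKSTATIONS:
            print(ws, "->", chassis_power_status(ws, "admin", "changeme"))

In practice an administrator would pull credentials from a vault rather than hard-coding them, but the same IPMI-over-LAN calls would let the secured, cubicle-housed machines be powered, monitored and reset without physical access.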

Intel said such an environment could be a boon for businesses in a host of areas, including financial services, oil and gas, and fluid dynamics. These areas are seeing a growing need for access to large amounts of compute power, but may not have the means to get it.

Having that local compute power will also be important as the industry continues its move to exascale computing, systems that can handle a quintillion (a million trillion) calculations per second. Few organisations will have access to the first exascale systems, but as deployments grow to 40 or 50 machines, businesses and researchers will need local compute power to crunch the data those systems produce.


Jeffrey Burt

Jeffrey Burt is a senior editor for eWEEK and a contributor to TechWeekEurope
