
Nvidia Brings HGX-1 Hyperscale GPU To Microsoft Project Olympus Open Source Server Hardware

Microsoft’s Project Olympus open source server hardware initiative is set to receive a boost from Nvidia’s graphics card technology, harnessing parallel processing to drive artificial intelligence in cloud computing.

Nvidia has unveiled blueprints for its HGX-1 hyperscale graphics processing unit, which will form part of the hardware platform being created by Project Olympus in collaboration with the Open Compute Project, effectively creating a standardised foundation of modular server architecture around which cloud providers and enterprises can configure their data centres.

For Microsoft, Project Olympus could have the knock-on effect of introducing standardisation of modular hardware into data centres and help reduce the cost of Redmond’s Azure cloud expansion.

Powering cloud AI

Nvidia’s HGX-1 GPU accelerator will bring the capability to handle machine learning and AI workloads in cloud computing environments.

Given the need for high data throughput in the training and development of AI, particularly with deep learning algorithms, using the parallel processing capabilities of GPUs rather than the more sequential processing of central processing units (CPUs) effectively provides a wider pipeline of data to feed into AI systems.
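To make that distinction concrete, the sketch below shows a single deep learning training step written so it runs on either a CPU or a GPU. It is only an illustration under assumptions not made in the article: it assumes the PyTorch library and a CUDA-capable Nvidia GPU, and the model and batch sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Prefer the GPU when one is available; deep learning training is dominated
# by large matrix multiplications, which map well onto a GPU's many parallel cores.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small, arbitrary network purely for illustration.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch: identical code runs on CPU or GPU,
# but the GPU works through the batch in parallel rather than sequentially.
inputs = torch.randn(256, 1024, device=device)
labels = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"Training step complete on {device}, loss = {loss.item():.4f}")
```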

“The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing data centers around the world,” said Kushagra Vaid, general manager and distinguished engineer at Microsoft’s Azure Hardware Infrastructure division.

On the specifications side, the HGX-1 sports eight of Nvidia’s Tesla P100 GPUs and has a switching design based on the PCIe standard and Nvidia’s NVLink interconnect technology, which Nvidia said enables a CPU to dynamically connect to any number of GPUs.
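As a rough illustration of what that looks like from the host side, the hypothetical snippet below (again assuming PyTorch and CUDA drivers, which the article does not specify) simply enumerates whichever GPUs the switch fabric currently exposes to the CPU.

```python
import torch

# List the GPUs this host can currently see; on an HGX-1-style chassis the
# set of devices exposed to a given CPU depends on how the PCIe/NVLink
# switch fabric has been configured.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA-capable GPUs visible to this host")
```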

This should provide cloud vendors with a standardised infrastructure on which to offer users a range of GPU and CPU configurations for their cloud instances.

In practical use, the HGX-1 is being targeted as a means to power AI development in areas such as autonomous driving, improved voice recognition, and data analytics.

The addition of the HGX-1 to the Project Olympus architecture has the overall goal of providing a modular and flexible architecture that both enterprises and startups working on AI systems and services can configure to fit the specific machine learning workloads they are running.

Nvidia is not just providing the HGX-1 to Microsoft; it is also joining Project Olympus and will contribute to its long-term goal of using open source hardware designs to drive innovation in server and data centre architecture, much as open source software enables flexibility and new development within cloud environments.


Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
