
Nvidia Brings HGX-1 Hyperscale GPU To Microsoft Project Olympus Open Source Server Hardware

Microsoft’s Project Olympus open source server hardware initiative is set to receive a boost from Nvidia’s GPU technology, harnessing parallel processing to drive artificial intelligence in cloud computing.

Nvidia has unveiled blueprints for its HGX-1 hyperscale graphics processing unit, which will form part of the hardware platform being created by Project Olympus in collaboration with the Open Compute Project. The aim is to create a standardised foundation of modular server architecture around which cloud providers and enterprises can configure their data centres.

For Microsoft, Project Olympus could have the knock-on effect of introducing standardised modular hardware into data centres and helping to reduce the cost of Redmond’s Azure cloud expansion.

Powering cloud AI

Nvidia’s HGX-1 GPU accelerator will bring the capability to handle machine learning and AI workloads in cloud computing environments.

Training and developing AI, particularly with deep learning algorithms, demands high levels of data throughput. Using the parallel processing capabilities of GPUs, rather than the more sequential processing of central processing units (CPUs), effectively provides a wider pipeline for the data fed into AI systems.
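To illustrate that parallel-versus-sequential distinction, the sketch below is a minimal, hypothetical CUDA example (not drawn from Nvidia’s or Microsoft’s materials): the same element-wise addition is written once as a sequential CPU loop and once as a GPU kernel that spreads the work across thousands of threads at once.

    #include <cstdio>
    #include <cuda_runtime.h>

    // GPU kernel: each of the launched threads handles one element in parallel.
    __global__ void addVectors(const float *a, const float *b, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            out[i] = a[i] + b[i];
        }
    }

    // CPU reference: the same work done sequentially, one element at a time.
    void addVectorsCpu(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; ++i) {
            out[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                  // around one million elements
        const size_t bytes = n * sizeof(float);

        float *a, *b, *out;
        cudaMallocManaged(&a, bytes);           // unified memory, visible to CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&out, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements at once.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        addVectors<<<blocks, threads>>>(a, b, out, n);
        cudaDeviceSynchronize();

        printf("out[0] = %f\n", out[0]);        // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }

The function and variable names are illustrative only; the point is that the GPU version processes every element in parallel rather than stepping through them one by one.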

“The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing data centers around the world,” said Kushagra Vaid, general manager and distinguished engineer at Microsoft’s Azure Hardware Infrastructure division.

On the specifications side, the HGX-1 sports eight of Nvidia’s Tesla P100 GPUs and has a switching design based on the PCIe standard and Nvidia’s NVLink interconnect technology, which Nvidia said enables a CPU to dynamically connect to any number of GPUs.

This should provide cloud vendors with a standardised infrastructure on which to offer users a range of GPU and CPU configurations for their cloud instances.
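As a rough illustration of what that configurability looks like from software, the hypothetical sketch below uses the standard CUDA runtime API to enumerate whatever GPUs an instance has been given and to check which pairs can reach each other directly, for example over NVLink or a PCIe switch fabric. It is not Project Olympus or HGX-1 specific tooling.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // Enumerate every GPU visible to this CPU, whatever configuration
        // the cloud provider has exposed to the instance.
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);
        printf("Visible GPUs: %d\n", deviceCount);

        for (int i = 0; i < deviceCount; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %zu MB memory\n",
                   i, prop.name, prop.totalGlobalMem / (1024 * 1024));

            // Check which other GPUs this one can access peer-to-peer,
            // i.e. without staging transfers through host memory.
            for (int j = 0; j < deviceCount; ++j) {
                if (i == j) continue;
                int canAccess = 0;
                cudaDeviceCanAccessPeer(&canAccess, i, j);
                if (canAccess) {
                    printf("  GPU %d can access GPU %d directly\n", i, j);
                }
            }
        }
        return 0;
    }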

In practical use, the HGX-1 is being targeted as a way to power AI development in areas such as autonomous driving, voice recognition, and data analytics.

The addition of the HGX-1 to the Project Olympus architecture has the overall goal of providing a modular, flexible architecture that both enterprises and startups working on AI systems and services can configure to fit the specific machine learning workloads they are running.

Nvidia is not just providing the HGX-1 to Microsoft; it is also joining Project Olympus and will contribute to its long-term goal of using open source hardware designs to spur innovation in server and data centre architecture, much as open source software enables flexibility and new development within cloud environments.


Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
