Nvidia Introduces Next-Generation ‘Blackwell’ AI Chips

Nvidia on Monday announced a next-generation platform for its market-leading graphics processing units (GPUs) and new software building on its success as a provider of artificial intelligence (AI) hardware platforms.

The announcements were made at Nvidia’s GPU Technology Conference (GTC) developer show in San Jose, the first to be held in person since 2019.

The new platform, “Blackwell”, powers the B200 GPU, the successor to the current-generation “Hopper”-based H100.

“Hopper is fantastic, but we need bigger GPUs,” chief executive Jensen Huang told attendees at a keynote speech.

Nvidia’s GB200 Grace Blackwell Superchip. Image credit: Nvidia

Power boost

The update is Nvidia’s first since Hopper was announced in 2022, and the first since the boom in AI triggered by OpenAI’s public launch of ChatGPT in late 2022.

The B200 offers up to 20 petaflops of power, compared to 4 petaflops for the H100, allowing companies to develop larger and more complex AI models, Nvidia said.

The new chip includes a transformer engine specifically built to run transformer-based AIs such as ChatGPT.

Nvidia said cloud providers including Amazon, Google, Microsoft, and Oracle planned to deploy the GB200 Grace Blackwell Superchip, which includes two B200 GPUs along with an ARM-based Grace CPU.

Nvidia’s GB200 NVLink 2 server. Image credit: Nvidia

Cloud deployment

Amazon Web Services plans to build a server cluster of 20,000 GB200 chips, the company said.

The firm said such a system could power a 27 trillion-parameter model, much bigger than the biggest models of today, such as OpenAI’s GPT-4, which reportedly has 1.7 trillion parameters.

The new GPUs will also power the GB200 NVLink 2, a server system combining 72 Blackwell GPUs with other Nvidia components designed for AI models.

Along with the new hardware, Nvidia is expanding its software business to capitalise on its leading place with AI developers, who rely on Nvidia’s CUDA software platform.

Software

A new offering called NIM (Nvidia inference microservices) provides cloud-native microservices for more than two dozen popular foundation models, including Nvidia-built models, the company said.

Inference refers to the process of running a trained AI model, as opposed to developing it. NIM is designed to let users operate AI models on a range of Nvidia hardware they might own, either locally or in the cloud, rather than being restricted to renting capacity from a third-party cloud provider.
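In practice, such a microservice exposes a model behind a standard HTTP API that applications can call wherever the container is deployed. The sketch below is illustrative only, assuming an OpenAI-style chat completions endpoint of the kind commonly used for inference services; the hostname, port and model name are hypothetical placeholders rather than documented NIM values.

```python
# Minimal sketch: querying a locally deployed inference microservice over HTTP.
# Assumes an OpenAI-style /v1/chat/completions endpoint; the host, port and
# model name below are hypothetical placeholders, not documented NIM values.
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local deployment

payload = {
    "model": "example-foundation-model",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarise Nvidia's Blackwell announcement in one sentence."}
    ],
    "max_tokens": 128,
}

# Send the request to the locally running service and check for HTTP errors.
response = requests.post(ENDPOINT, json=payload, timeout=60)
response.raise_for_status()

# The generated text sits in the first choice of the response body.
print(response.json()["choices"][0]["message"]["content"])
```

The point of this pattern is that the same request works whether the microservice runs on a workstation GPU, an on-premises server or a rented cloud instance, since only the endpoint URL changes.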

Matthew Broersma

Matt Broersma is a long-standing tech freelance journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
