Amazon used its annual AWS re:Invent conference in Las Vegas to confirm two new families of AI-focused chips.

Amazon Web Services announced the next generation of two AWS-designed chip families – AWS Graviton4 and AWS Trainium2 – which AWS said would deliver “advancements in price performance and energy efficiency for a broad range of customer workloads, including machine learning (ML) training and generative artificial intelligence (AI) applications.”

It comes after AWS and Nvidia announced “an expansion of their strategic collaboration to deliver the most advanced infrastructure, software, and services to power customers’ generative artificial intelligence (AI) innovations.”

AWS Graviton4 and AWS Trainium2 (prototype). Image credit: Business Wire

New chips

AWS said that the new Graviton4, which is based on an Arm architecture, provides up to 30 percent better compute performance, 50 percent more cores, and 75 percent more memory bandwidth than current-generation Graviton3 processors, which it said will deliver improved price performance and energy efficiency for workloads running on Amazon EC2.

Meanwhile, Amazon’s own Trainium2 is designed to deliver up to 4x faster training than first-generation Trainium chips and can be deployed in EC2 UltraClusters of up to 100,000 chips. AWS said this will make it possible to train foundation models (FMs) and large language models (LLMs) in a fraction of the time, while improving energy efficiency by up to 2x.

“Silicon underpins every customer workload, making it a critical area of innovation for AWS,” said David Brown, vice president of Compute and Networking at AWS. “By focusing our chip designs on real workloads that matter to customers, we’re able to deliver the most advanced cloud infrastructure to them.”

“Graviton4 marks the fourth generation we’ve delivered in just five years, and is the most powerful and energy efficient chip we have ever built for a broad range of workloads,” said Brown. “And with the surge of interest in generative AI, Trainium2 will help customers train their ML models faster, at a lower cost, and with better energy efficiency.”

Thousands of customers

At the moment, AWS offers more than 150 different Graviton-powered Amazon EC2 instance types, has built more than 2 million Graviton processors, and counts more than 50,000 customers using them.

EC2 UltraClusters of Trainium2 are designed to deliver the highest-performance, most energy-efficient AI model training infrastructure in the cloud.

All of this is bolstered by AWS’s confirmation that it will offer access to Nvidia’s latest H200 AI graphics processing units.

And as part of its deepening relationship with Nvidia, AWS said it will operate more than 16,000 Nvidia GH200 Grace Hopper Superchips, which combine Nvidia GPUs with Nvidia’s Arm-based general-purpose processors.

Tom Jowitt

