Jensen Huang, the chief executive of artificial intelligence (AI) chipmaker Nvidia, asked SK Hynix to advance the delivery of next-generation high-bandwidth memory (HBM) chips by six months, the memory chip producer said, emphasising the explosive demand for the building blocks of AI infrastructure.
“The last time I met with Huang, he asked me if we could supply HBM4 six months earlier than the date we have agreed upon,” said SK Group chairman Chey Tae-won at an event in Seoul.
“I asked the SK Hynix CEO whether it’s possible, and he said he will try, so we are working to move the date up by six months.”
He joked that he is “a bit nervous” to meet Huang again because “we’re worried he might ask us to speed it up even further”.
Chey’s remarks at SK AI Summit 2024 show how producers are racing to ramp up production of key AI infrastructure, including GPU accelerator chips and HBM memory.
Nvidia is the leading producer of GPUs for the AI industry, with more than 80 percent market share, while SK Hynix has become a key producer of HBM memory chips as Samsung has lagged in their production.
SK Hynix began manufacturing HBM3E, the current cutting edge of the technology, in September, and is aiming to produce 12-layer HBM4 chips next year, with 16-layer HBM4 chips to follow in 2026.
In a pre-recorded video clip, Huang said HBM had enabled “super Moore’s law” with AI accelerators.
Moore’s law is the observation that the number of transistors in an integrated circuit doubles about every two years, often used as shorthand for the rapidly increasing power of computer chips.
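As a rough illustration (not from Huang’s remarks), that doubling rate can be written as $N(t) \approx N_0 \cdot 2^{t/2}$, where $N_0$ is the transistor count today and $t$ is the number of years elapsed; over six years that implies roughly an eightfold increase.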
“When we moved from coding to machine learning, it changed the computer architecture fairly profoundly. And the work that we did with HBM memories has really made it possible for us to achieve what appears to be super Moore’s law,” Huang said.
“We wish that we got more bandwidth with lower energy. So the roadmap that SK Hynix is on is super aggressive and is super necessary.”
Chey also identified challenges faced by the AI industry, including the lack of “killer use cases” and revenue models to recoup heavy infrastructure investments, limited chip manufacturing capacity and the necessity of constantly supplying AI systems with high-quality human-generated data.
He also spoke about the heavy energy requirements of AI data centres, saying SK Hynix has invested in gas turbines with carbon-capturing technology and small, modular nuclear reactors.