OpenAI Creates AI Safety Committee After Criticism

OpenAI has formed a safety committee to address AI safety concerns as it begins training a next-generation “frontier” model to succeed GPT-4, the model that currently powers ChatGPT.

The new Safety and Security Committee is to be led by board members including chief executive Sam Altman, chairman Bret Taylor, Adam D’Angelo, the chief executive of Quora, and former Sony general counsel Nicole Seligman, along with four OpenAI technical and policy experts.

It is to consult with outside experts, including former US National Security Agency cybersecurity director Rob Joyce and former Justice Department official John Carlin, and to advise the board on “critical safety and security decisions” for projects and operations, OpenAI said.

The committee is initially to evaluate and further develop OpenAI’s existing processes and safeguards, making recommendations to the board within 90 days.

OpenAI chief technology officer Mira Murati introduces GPT-4o in May 2024. Image credit: OpenAI

Rogue superintelligence

The company said it would publicly release the recommendations it is adopting “in a manner that is consistent with safety and security”.

OpenAI also said it has “recently begun training its next frontier model”, adding, “We welcome a robust debate at this important moment.”

AI safety includes issues such as ensuring AI benefits humanity – a key goal of the non-profit organisation that remains the sole shareholder of the for-profit concern OpenAI Global. The subject has had a thorny history during the company’s transition to a profit-making enterprise since 2018.

Safety concerns also include issues around artificial superintelligence, or AI systems far more intelligent than humans, something OpenAI has previously said “could arrive this decade”.

“How do we ensure AI systems much smarter than humans follow human intent?” OpenAI asked when forming its “superalignment” team in July of last year, the group that previously oversaw AI safety.

Commercial transition

“Humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence,” the company said at the time. “We need new scientific and technical breakthroughs.”

The danger, the company said at the time, was that “the vast power of superintelligence could… be very dangerous, and could lead to the disempowerment of humanity or even human extinction”.

OpenAI disbanded the superalignment team earlier this month after the researchers who led it, Jan Leike and Ilya Sutskever, left the company, with Leike criticising OpenAI for allowing safety to “take a backseat to shiny products”.

Leike said on Tuesday he was joining OpenAI rival Anthropic to “continue the superalignment mission”.

Gil Luria, managing director at investment bank DA Davidson, said the new committee “signifies OpenAI completing a move to becoming a commercial entity, from a more undefined non-profit-like entity” and “should help streamline product development while maintaining accountability”.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
