OpenAI has suffered another high-level departure, with a safety executive leaving the AI pioneer just days after the launch of its latest AI model, GPT-4o.

Jan Leike (alongside Ilya Sutskever) had been co-leader of OpenAI's "superalignment" project, which was launched in 2023 to steer and control AI systems. At the time, OpenAI said it would commit 20 percent of its computing power to the initiative over four years.

But last week Leike announced he was departing OpenAI, and he publicly criticised senior management after a disagreement over key aims reached "breaking point".


Safety claims

Last week also saw the departure of OpenAI’s co-founder and chief scientist Ilya Sutskever, months after he had played a role in the shock firing and rehiring of CEO Sam Altman last November.

Jan Leike tweeted last Friday on X (formerly Twitter) that “yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.”

“Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us,” he wrote in a series of tweets.

“I joined because I thought OpenAI would be the best place in the world to do this research,” he wrote. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

He claimed that “over the past years, safety culture and processes have taken a backseat to shiny products” at OpenAI.

He also raised concerns that the company was not devoting enough resources to preparing for a possible future “artificial general intelligence” (AGI) that could be smarter than humans.

OpenAI response

OpenAI at the weekend defended its safety practices, and Sam Altman tweeted that he appreciated Leike's commitment to "safety culture" and was sad to see him leave.

“I’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman tweeted. “He’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”

On Saturday, OpenAI co-founder Greg Brockman tweeted a statement attributed to both himself and Altman on X, in which they asserted that OpenAI has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

But CNBC has reported, citing a person familiar with the situation, that OpenAI has disbanded the team focused on the long-term risks of artificial intelligence just one year after the company announced the group.

The source said some of the team members are being reassigned to multiple other teams within the company.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
