OpenAI has suffered another high-level departure after a safety executive left the AI pioneer – just days after the launch of its latest AI model, GPT-4o.

Jan Leike, alongside Ilya Sutskever, had co-led OpenAI's "superalignment" project, which was launched in 2023 to steer and control AI systems. At the time, OpenAI said it would commit 20 percent of its computing power to the initiative over four years.

But last week Leike announced his departure from OpenAI and publicly criticised senior management, saying a disagreement over the company's core priorities had reached "breaking point".

Image credit: Sam Altman

Safety claims

Last week also saw the departure of OpenAI’s co-founder and chief scientist Ilya Sutskever, months after he had played a role in the shock firing and rehiring of CEO Sam Altman last November.

Jan Leike tweeted last Friday on X (formerly Twitter) that “yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.”

“Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us,” he wrote in a series of tweets.

“I joined because I thought OpenAI would be the best place in the world to do this research,” he wrote. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

He claimed that “over the past years, safety culture and processes have taken a backseat to shiny products” at OpenAI.

He also raised concerns that the company was not devoting enough resources to preparing for a possible future “artificial general intelligence” (AGI) that could be smarter than humans.

OpenAI response

OpenAI at the weekend defended its safety practices, and Sam Altman tweeted that he appreciated Leike's commitment to "safety culture" and said he was sad to see him leave.

“I’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman tweeted. “He’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”

On Saturday, OpenAI co-founder Greg Brockman tweeted a statement attributed to both himself and Altman on X, in which they asserted that OpenAI has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

But CNBC has reported, citing a person familiar with the situation, that OpenAI has disbanded the team focused on the long-term risks of artificial intelligence just one year after the company announced the group.

The source said some of the team members are being reassigned to multiple other teams within the company.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
