Hacker Stole OpenAI Internal Documents – Report
A security breach reportedly occurred at OpenAI last year, with a hacker stealing internal documents but no source code
Media outlets have reported that OpenAI’s internal messaging systems were compromised last year by a hacker.
The New York Times, citing two sources, reported that the security breach at the ChatGPT maker last year had exposed internal discussions among researchers and other employees, but not the code behind OpenAI’s systems.
Notably, OpenAI in June appointed a retired senior US military figure to its board of directors to help protect ChatGPT from cyberthreats and improve AI safety.
Security breach
OpenAI appointed retired US Army General Paul M. Nakasone, who played a pivotal role in the creation of US Cyber Command and served as its longest-serving commander.
General Nakasone was also the director of the National Security Agency (NSA) before he retired in February 2024.
A few weeks later, the New York Times reported that early last year a hacker had gained access to OpenAI’s internal messaging systems and stolen details about the design of the company’s AI technologies.
The hacker allegedly lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.
According to the NYT, OpenAI executives revealed the incident to staff during an all-hands meeting at the company’s San Francisco offices in April 2023, and informed its board of directors.
But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two sources told the NYT. The executives reportedly did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government.
OpenAI also did not inform the FBI or any other law enforcement agencies.
OpenAI security
After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets, the NYT reported.
Aschenbrenner then alleged that OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated.
“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” an OpenAI spokeswoman, Liz Bourgeois, told the New York Times. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, “While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterisations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”
It comes after OpenAI earlier this year said it had disrupted five covert influence operations that sought to use its AI models for “deceptive activity” across the internet.
In May 2024 OpenAI formed a safety committee to oversee the safety of ‘superintelligent’ AI systems, after disbanding its ‘superalignment’ team amid criticism over its priorities.