Microsoft has said nation-state hackers are already utilising large language models such as OpenAI’s ChatGPT to refine and improve their cyberattacks.
Microsoft Threat Intelligence and OpenAI made the claim in their respective blog posts, which said that “malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations.”
“In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities,” said OpenAI. “We also outline our approach to detect and disrupt such actors in order to promote information sharing and transparency regarding their activities.”
OpenAI then went on to identify the particular hacking groups, saying that both it and Redmond had disrupted five state-affiliated malicious actors.
These included two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard.
The identified OpenAI accounts associated with these actors were terminated.
OpenAI said that “although the capabilities of our current models for malicious cybersecurity tasks are limited, we believe it’s important to stay ahead of significant and evolving threats.”
To respond to the threat, it said, OpenAI has taken a multi-pronged approach to combating the malicious use of its platform by state-affiliated actors. This includes monitoring and disrupting malicious state-affiliated actors; working with industry partners across the AI ecosystem; iterating on safety mitigations; and being publicly transparent about potential misuses of AI.
“The vast majority of people use our systems to help improve their daily lives, from virtual tutors for students to apps that can transcribe the world for people who are visually impaired,” said OpenAI. “As is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits.”
“Although we work to minimise potential misuse by such actors, we will not be able to stop every instance,” it added. “But by continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else.”
A spokesperson for China’s US embassy, Liu Pengyu, told Reuters that China opposed “groundless smears and accusations against China” and advocated for the “safe, reliable and controllable” deployment of AI technology to “enhance the common well-being of all mankind.”