Microsoft has said nation-state hackers are already utilising large language models such as OpenAI’s ChatGPT to refine and improve their cyberattacks.
Microsoft Threat Intelligence and OpenAI made the claim in their respective blog posts, which said that “malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations.”
“In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities,” said OpenAI. “We also outline our approach to detect and disrupt such actors in order to promote information sharing and transparency regarding their activities.”
OpenAI and Microsoft went on to identify the particular hacking groups involved, confirming that the two companies had disrupted five state-affiliated malicious actors.
These included two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard.
The identified OpenAI accounts associated with these actors were terminated.
OpenAI said that “although the capabilities of our current models for malicious cybersecurity tasks are limited, we believe it’s important to stay ahead of significant and evolving threats.”
It said that, to respond to the threat, OpenAI has taken a multi-pronged approach to combating malicious state-affiliated actors’ use of its platform. This includes monitoring and disrupting malicious state-affiliated actors; working with industry partners across the AI ecosystem; iterating on safety mitigations; and being publicly transparent about potential misuses of AI.
“The vast majority of people use our systems to help improve their daily lives, from virtual tutors for students to apps that can transcribe the world for people who are visually impaired,” said OpenAI. “As is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits.”
“Although we work to minimise potential misuse by such actors, we will not be able to stop every instance,” it added. “But by continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else.”
China’s US embassy spokesperson Liu Pengyu told Reuters it opposed “groundless smears and accusations against China” and advocated for the “safe, reliable and controllable” deployment of AI technology to “enhance the common well-being of all mankind.”