The United States, United Kingdom and more than a dozen other countries have signed an agreement to bolster cybersecurity for artificial intelligence (AI).

The agreement, signed on Sunday, was described by a senior US official to Reuters as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.”

To this end, the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC) published new guidelines for secure AI system development, which “will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process.”

The NCSC’s headquarters in Victoria. NCSC

AI Cybersecurity

The guidelines have been signed by agencies from 18 countries, “to raise the cyber security levels of artificial intelligence and help ensure that it is designed, developed, and deployed securely.”

In addition to the United States and Britain, the signatories of the new guidelines include Australia, Canada, Germany, France, Italy, Japan, Norway, South Korea, the Czech Republic, Estonia, Poland, Chile, Israel, Nigeria and Singapore.

The Guidelines for Secure AI System Development were developed by the NCSC, a part of GCHQ, and CISA in co-operation with industry experts and 21 other international agencies and ministries from across the world – including those from all members of the G7 group of nations and from the Global South.

“The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment – by governments across the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design,” said CISA Director Jen Easterly.


“As nations and organisations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices,” said Easterly.

“The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution,” said Easterly. “This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future.”

Raising AI security

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” added NCSC CEO Lindy Cameron.

“These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout,” said Cameron.

“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realize this technology’s wonderful opportunities,” said Cameron.

The agreement is non-binding and carries mostly general recommendations, such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.

However, it does not address the appropriate uses of AI, or how the underlying data is gathered.

Instead, it looks to tackle questions about how to keep AI technology from being hijacked by hackers, and includes recommendations such as only releasing models after appropriate security testing.

Global AI regulation

The agreement comes not long after the world’s first AI safety summit was held at Bletchley Park in the UK.

The AI safety summit was attended by 28 nations and leading politicians and public figures, including UK Prime Minister Rishi Sunak, US Vice-President Kamala Harris, European Commission President Ursula von der Leyen, Italian Prime Minister Giorgia Meloni, China’s tech vice minister Wu Zhaohui, and United Nations Secretary-General Antonio Guterres.

The summit saw the 28 nations, including the US, UK, EU, and China, sign the ‘Bletchley Declaration’, which addresses the potential risks posed by artificial intelligence.

The Bletchley Declaration is notable as the first international declaration on AI, with the signatory countries agreeing that artificial intelligence poses a potentially catastrophic risk to humanity.

The European Union agreed its own draft AI laws in the summer, while the Biden Administration has been pressing US lawmakers for AI regulation and has signed an executive order for AI safeguards.


The executive order sought to reduce AI risks to consumers, workers, and minority groups while bolstering national security.

But a divided US Congress, so far, has made little headway in passing effective AI regulation.

Tom Jowitt
