US To Host International Network of AI Safety Institutes In November


The US will host the first meeting of the International Network of AI Safety Institutes, shortly after the US presidential election

The United States will host a meeting of the International Network of AI Safety Institutes shortly after the US Presidential election in November.

The US Commerce Department and US State Department jointly announced that US Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the summit on 20-21 November, shortly after the US Presidential election on 5 November.

The first AI Safety Summit took place in November 2023 at Bletchley Park in the United Kingdom, attended by then UK Prime Minister Rishi Sunak and US Vice President Kamala Harris (who is now running for US President), as well as other high-profile figures, national delegations and AI organisations.


AI Safety Summits

That 2023 summit in the UK resulted in the first international declaration on AI, the so-called 'Bletchley Declaration', in which attendees agreed that artificial intelligence poses a potentially catastrophic risk to humanity and proposed a series of steps to mitigate that risk.

The second AI Safety Summit took place in South Korea in May 2024, where 16 companies at the forefront of developing AI pledged to develop the technology safely.

At the South Korea summit in May, US Secretary of Commerce Gina Raimondo announced the launch of the International Network of AI Safety Institutes.

The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States.

China is notable by its absence, despite having attended the Bletchley Park summit in November 2023.

International Network of AI Safety Institutes

Now the US Commerce Department and US State Department have announced they will co-host the inaugural convening of the International Network of AI Safety Institutes on 20-21 November, in San Francisco, California.

They said the convening will “bring together technical experts on artificial intelligence from each member’s AI safety institute, or equivalent government-backed scientific office, in order to align on priority work areas for the Network and begin advancing global collaboration and knowledge sharing on AI safety.”

“AI is the defining technology of our generation,” said US Secretary of Commerce Gina Raimondo. “With AI evolving at a rapid pace, we at the Department of Commerce, and across the Biden-Harris Administration, are pulling every lever. That includes close, thoughtful coordination with our allies and like-minded partners.”

US Secretary of Commerce Gina M. Raimondo. Image credit: US Government

“We want the rules of the road on AI to be underpinned by safety, security, and trust, which is why this convening is so important,” said Raimondo. “I look forward to welcoming government scientists and technical experts from the International Network of AI Safety Institutes to the centre of American digital innovation, as we run toward the next phase of global cooperation in advancing the science of AI safety.”

Paris, February 2025

“Strengthening international collaboration on AI safety is critical to harnessing AI technology to solve the world’s greatest challenges,” added US Secretary of State Antony J. Blinken. “The AI Safety Network stands as a cornerstone of this effort.”

The goal of this convening is to kickstart the Network’s technical collaboration ahead of the AI Action Summit in Paris in February 2025.

The US Departments said they will also invite experts from international civil society, academia, and industry to join portions of the event to help inform the work of the Network and ensure a robust view of the latest developments in the field of AI.

Last October President Joe Biden signed an executive order on AI that requires developers of the most powerful AI systems to share safety test results and other information with the government.

US President Joe Biden and Vice President Kamala Harris. Image credit: US Government

Last week the US Commerce Department said it was proposing detailed reporting requirements for advanced AI developers and cloud computing providers, to ensure the technologies are safe and can withstand cyberattacks.