UK AI Safety Institute To Open Office In US


Seeking collaboration on AI regulation, the UK’s AI Safety Institute is to cross the Atlantic and open an office in San Francisco

The United Kingdom continues to push for collaboration with international partners to regulate the fast-moving field of artificial intelligence (AI).

The British government announced that it will “expand across the Atlantic to broaden technical expertise and cement its position as global authority on AI Safety.”

This will entail opening an office of the UK’s AI Safety Institute in San Francisco in the summer, so as to “engage with the world’s largest AI labs headquartered in both London and San Francisco.”

Bletchley Declaration

Last month the United Kingdom and United States signed a landmark agreement to work together on testing advanced artificial intelligence (AI).

That agreement saw the UK and US AI Safety Institutes pledge to work seamlessly with each other, partnering on research, safety evaluations, and guidance for AI safety.

It comes after last year’s AI Safety Summit in the UK, where big-name companies including Amazon, Google, Facebook parent Meta Platforms, Microsoft and ChatGPT developer OpenAI all agreed to voluntary safety testing for AI systems, resulting in the so-called ‘Bletchley Declaration.’

That agreement was backed by the EU and 10 countries including China, Germany, France, Japan, the UK and the US.

A new international AI Safety Summit is due to take place this week in Seoul.

British Prime Minister Rishi Sunak said last year that he wanted the UK to be the “geographical home” of coordinated international efforts to regulate AI.

Now the British government has said that, in an effort to cement its position as a global authority on AI safety, the UK AI Safety Institute is crossing the Atlantic.

The UK AI Safety Institute has also just published its first-ever AI safety testing results on publicly available models.

US office

Technology Secretary Michelle Donelan announced on Monday that the AI Safety Institute will open its first overseas office in San Francisco this summer.

The British government said the US office “marks a pivotal step that will allow the UK to tap into the wealth of tech talent available in the Bay Area, engage with the world’s largest AI labs headquartered in both London and San Francisco, and cement relationships with the United States to advance AI safety for the public interest.”

The office will recruit its first team of technical staff, headed up by a Research Director.

It will be a complementary branch of the Institute’s London HQ, which will continue to scale and acquire the necessary expertise to assess the risks of frontier AI systems.

By expanding its foothold in the US, the Institute will establish close collaboration with its American counterpart, furthering the two countries’ strategic partnership and shared approach to AI safety, while also sharing research and conducting joint evaluations of AI models that can inform AI safety policy across the globe, the government stated.

“This expansion represents British leadership in AI in action,” said Secretary of State for Science and Technology Michelle Donelan. “It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”

“Since the Prime Minister and I founded the AI Safety Institute, it has grown from strength to strength and in just over a year, here in London, we have built the world’s leading government AI research team, attracting top talent from the UK and beyond,” said Donelan.

“Opening our doors overseas and building on our alliance with the US is central to my plan to set new, international standards on AI safety which we will discuss at the Seoul Summit this week,” said Donelan.

Meanwhile the UK has also committed to collaborating with Canada, including through their respective AI Safety Institutes, to advance their shared ambition of creating a growing network of state-backed organisations focused on AI safety and governance.

As part of this agreement, the countries will aim to share their expertise to bolster existing testing and evaluation work. The partnership will also enable secondment routes between the two countries, and work to jointly identify areas for research collaboration.