OpenAI, Anthropic To Share AI Models With US Government

Both OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models.

The US AI Safety Institute announced “agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI.”

Essentially, the agreements will let the US government access major new AI models before their general release, in order to help improve their safety. This is a core goal of both the British and American AI Safety Institutes.

In April 2024 the United Kingdom and the United States signed a landmark agreement to work together on testing advanced artificial intelligence (AI).

That agreement saw the UK and US AI Safety Institutes pledge to work seamlessly with each other, partnering on research, safety evaluations, and guidance for AI safety.

It comes after last year’s AI Safety Summit in the UK, where big-name companies including Amazon, Google, Facebook parent Meta Platforms, Microsoft and ChatGPT developer OpenAI all agreed to voluntary safety testing for AI systems, resulting in the so-called ‘Bletchley Declaration’.

That agreement was backed by the EU and 10 countries including China, Germany, France, Japan, the UK and the US.

OpenAI, Anthropic agreement

Now, according to the US AI Safety Institute, each company’s Memorandum of Understanding establishes the framework for the institute “to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.”

“Safety is essential to fueling breakthrough technological innovation,” said Elizabeth Kelly, director of the US AI Safety Institute. “With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Kelly.

Additionally, the US AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK.
