OpenAI, Anthropic To Share AI Models With US Government

Both OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models.

The US AI Safety Institute announced “agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI.”

Essentially, the agreements will give the US government access to major new AI models before their general release, in order to help improve their safety. This is a core goal of both the British and American AI Safety Institutes.

AI safety

In April 2024 both the United Kingdom and United States signed a landmark agreement to work together on testing advanced artificial intelligence (AI).

That agreement saw the UK and US AI Safety Institutes pledge to work seamlessly with each other, partnering on research, safety evaluations, and guidance for AI safety.

It comes after last year’s AI Safety Summit in the UK, where big-name companies including Amazon, Google, Facebook parent Meta Platforms, Microsoft and ChatGPT developer OpenAI all agreed to voluntary safety testing for AI systems, resulting in the so-called ‘Bletchley Declaration’.

That agreement was backed by the EU and 10 countries including China, Germany, France, Japan, the UK and the US.

OpenAI, Anthropic agreement

Now according to the US AI Safety Institute, each company’s Memorandum of Understanding establishes the framework for the institute “to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.”

“Safety is essential to fueling breakthrough technological innovation,” said Elizabeth Kelly, director of the US AI Safety Institute. “With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Kelly.

Additionally, the US AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
