The administration of US President Joe Biden has released a wide-ranging executive order on artificial intelligence (AI) that, amongst other measures, obliges companies developing the most powerful models to submit regular security reports to the federal government.
The 111-page document builds on an AI “Bill of Rights” issued late last year that similarly sought to address some of the technology’s main potential drawbacks while pushing to explore its benefits.
Aside from its national security provisions, the order – which has been anticipated for some time – seeks to promote competition in the artificial intelligence market while mitigating potential harms such as discrimination in areas including housing, healthcare and justice.
It obliges private companies to submit reports on how they train and test “dual-use foundation models”, a category the order defines to include the most powerful next-generation AI systems.
The government is using the Cold War-era Defense Production Act to compel businesses to notify it when training systems that could pose serious risks to national security and to provide the results of safety tests.
An unnamed senior official told the Financial Times that the provision was “primarily” aimed at the most powerful next-generation systems and was not seen as applying to “any system currently on the market”.
The order indicates that the White House considers the rapid development of advanced cyberweapons to be one of AI’s most serious risks – a theme perhaps partly inspired by the apocalyptic depiction of such an artificially intelligent weapon in last summer’s film Mission: Impossible – Dead Reckoning Part One.
Large cloud services providers such as Amazon, Microsoft and Google are to be required to notify the government whenever foreign organisations rent servers to train large AI models, extending the administration’s efforts to prevent countries such as China from accessing high-end AI training chips such as Nvidia’s H100 and A100 GPUs.
The order instructs the Commerce Department to draft guidance on adding watermarks to AI-generated content as a means of addressing “fraud and deception”, while the Federal Trade Commission is encouraged to “exercise its authorities” to promote AI industry competition.
Congress is urged to pass data privacy legislation, and the order seeks an assessment of how US federal agencies collect and use commercially available personal data.
“President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust,” said White House deputy chief of staff Bruce Reed.
“It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks”.
This week vice president Kamala Harris is to give a speech in London on US AI policy before attending the UK’s AI Safety Summit at Bletchley Park, which is expected to discuss guardrails for future AI development.
To date the EU has taken the most aggressive stance on AI regulation, with its incoming AI Act, while the US has said it is continuing to assess which aspects of AI require new legislation.
The order primarily applies to federal agencies and is intended to provide guidelines to the public sector, but the administration has made it clear that legislation will be required to fully implement its ideas.