The Joe Biden administration has released guidelines for the use of AI in critical infrastructure, amid doubts over whether its policies will continue after a new administration takes over in January.
The Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, released by the US Department of Homeland Security (DHS), was formulated by representatives of government, industry and non-government organisations. It addresses concerns that the use of AI in critical sectors could open up new means of attack.
The DHS said it sought to “understand, anticipate, and address risks that could negatively affect (critical) systems and the consumers they serve”.
The framework identifies 16 critical infrastructure sectors and notes that their use of AI “may expose critical systems to failures or manipulation”.
OpenAI, Anthropic, AWS, IBM, Microsoft, Alphabet, Northrop Grumman and others participated in formulating the guidelines since May, as part of the government’s Artificial Intelligence Safety and Security Board.
Other participants included the Center for Democracy and Technology, the Leadership Conference on Civil and Human Rights, the Stanford Human-Centered Artificial Intelligence Institute and the Brookings Institution, as well as state and local leaders.
The framework lays out best practices for cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civil society groups such as universities, research institutions and consumer advocates, and public sector entities.
For instance, the document recommends that AI developers evaluate potential security risks in their products and keep them in line with “human-centric values” while protecting users’ privacy.
Homeland Security secretary Alejandro Mayorkas said the project aimed to produce a “living document” that would change as the industry changes.
The recommendations are the result of a Biden executive order from a year ago.
Mayorkas said the framework was the first of its kind, formulated with “extensive collaboration with such a broad, diverse set of stakeholders involved in the development and deployment of AI in our nation’s critical infrastructure”.
He told a press event it was “exceedingly rare” to have leading AI developers engaging directly with civil society on such issues.
Asked about the incoming administration’s approach to AI, Mayorkas said he believed the framework would “endure” even if the board itself were disbanded.
In November of last year the US, the UK and more than a dozen other countries signed what was described as the first detailed international agreement on keeping AI safe from rogue actors, pushing for companies to create AI systems that are “secure by design”.