The Biden administration has released guidelines for the use of AI in critical infrastructure, amid doubts that its AI policies will continue after a new administration takes over in January.
The Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, released by the US Department of Homeland Security, was formulated by representatives of government, industry and non-government organisations. It addresses concerns that the use of AI in critical sectors could open up new avenues of attack.
The DHS said it sought to “understand, anticipate, and address risks that could negatively affect (critical) systems and the consumers they serve”.
The framework identifies 16 critical infrastructure sectors and notes that their use of AI “may expose critical systems to failures or manipulation”.
OpenAI, Anthropic, AWS, IBM, Microsoft, Alphabet, Northrop Grumman and others have worked on the guidelines since May as part of the government’s Artificial Intelligence Safety and Security Board.
Other participants included the Center for Democracy and Technology, the Leadership Conference on Civil and Human Rights, the Stanford Human-Centered Artificial Intelligence Institute and the Brookings Institution, as well as state and local leaders.
The framework lays out best practices for cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civil society groups such as universities, research institutions and consumer advocates, and public sector entities.
For instance, the document recommends that AI developers evaluate potential security risks in their products and keep them aligned with “human-centric values” while protecting users’ privacy.
Homeland Security secretary Alejandro Mayorkas said the project aimed to produce a “living document” that would change as the industry changes.
The recommendations are the result of a Biden executive order from a year ago.
Mayorkas said the framework was the first of its kind, formulated with “extensive collaboration with such a board, a broad, diverse set of stakeholders involved in the development and deployment of AI in our nation’s critical infrastructure”.
He told a press event it was “exceedingly rare” to have leading AI developers engaging directly with civil society on such issues.
Asked about the incoming administration’s approach to AI, Mayorkas said he believed the framework would “endure” even if the board itself were disbanded.
In November of last year the US, the UK and more than a dozen other countries signed what was described as the first detailed international agreement on keeping AI safe from rogue actors, pushing companies to create AI systems that are “secure by design”.