The United States government has begun seeking public input on potential rules to govern the use of artificial intelligence (AI) in the years ahead.
The US Department of Commerce’s National Telecommunications and Information Administration (NTIA) announced on Tuesday that it had “launched a request for comment (RFC) to advance its efforts to ensure artificial intelligence (AI) systems work as claimed – and without causing harm.”
It comes after the US Chamber of Commerce last month called for the regulation of artificial intelligence technology – a surprising move considering its traditional anti-regulatory stance.
The US lobby group is concerned that AI technology could become a national security risk, and that its arrival could hurt business growth in the years ahead.
This week the NTIA said that the insights gathered through its RFC will inform the Biden Administration’s ongoing work to ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities.
It should be remembered that in October last year, the Biden administration introduced a blueprint for an AI Bill of Rights.
The Bill of Rights entailed five principles that companies should consider for their products, including data privacy, protections against algorithmic discrimination, and transparency around when and how an automated system is being used.
Now the US Department of Commerce’s NTIA says that while people are already realising the benefits of AI, there are a growing number of incidents where AI and algorithmic systems have led to harmful outcomes.
The NTIA also noted growing concern about potential risks to individuals and society that may not yet have manifested, but which could result from increasingly powerful systems.
It said companies have a responsibility to make sure their AI products are safe before making them available. Businesses and consumers using AI technologies and individuals whose lives and livelihoods are affected by these systems have a right to know that they have been adequately vetted and risks have been appropriately mitigated.
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA Administrator.
“For these systems to reach their full potential, companies and consumers need to be able to trust them,” Davidson added. “Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”
The NTIA’s “AI Accountability Policy Request for Comment” seeks feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms that create earned trust that AI systems work as claimed.
“As generative AI tools such as OpenAI’s ChatGPT and Google’s Bard continue to make waves, it’s natural to ask tough questions,” said Andreas Rindler, MD at BCG (Boston Consulting Group) Platinion.
“We’ve seen the havoc that misinformation and bias can wreak in AI technology, so we can’t just assume these tools are inherently ethical and safe,” said Rindler. “But it would be hasty to call for a blanket suspension; what’s really needed is a responsible and ethical approach that permeates every aspect of our society.”
“There are indeed many critical risks when dealing with AI,” said Rindler. “From unexpected capabilities upon deployment to its potential use as a powerful tool for phishing and fraud, such as deepfakes, we cannot afford to take our eyes off the ball.”
“To mitigate this, it’s critical to raise awareness among public institutions, businesses and educational settings, to understand and guide the continued development of AI without being overwhelmed by it,” Rindler concluded.
There is no shortage of suggested guidelines for AI systems at present.
In 2019, for example, the House of Lords published a comprehensive report into artificial intelligence (AI) and called for an AI code of ethics.
Then in early 2020, the Trump administration said it would propose regulatory principles to govern the development and use of AI. Those regulatory principles were designed to prevent “overreach” by authorities, and the White House at the time also wanted European officials to likewise avoid aggressive approaches.
Then in July 2021 the US government announced the creation of the National Artificial Intelligence Research Resource Task Force.
The UK and European Union meanwhile have already issued their own AI proposals.
Last month a UK government white paper suggested an “adaptable” set of rules to govern AI, and outlined five principles, including safety, transparency and fairness, to guide the use of artificial intelligence in the UK, as part of a national blueprint for the technology.