The sudden ouster of OpenAI chief executive Sam Altman has lent renewed urgency to the EU’s negotiations over its draft artificial intelligence (AI) regulations, which have stalled over how powerful foundation models such as those developed by OpenAI should be handled.
Negotiations around the upcoming AI Act have been stalled since mid-November, when EU Parliament negotiators walked out of a meeting with government representatives from the EU Council and European Commission in protest at the position of France, Germany and Italy that foundation models should be left to self-regulate.
The three countries formalised that position on Monday in a joint paper, shared with fellow EU governments, that supports “mandatory self-regulation through codes of conduct” rather than formal regulation for foundation models, according to reports from multiple media outlets.
“Together we underline that the AI Act regulates the application of AI and not the technology as such,” the paper says.
“The inherent risks lie in the application of AI systems rather than in the technology itself.”
The reliance on self-regulation is intended to ensure that Europe’s own technology in this hyper-competitive field is not unduly hampered by red tape, the paper said.
The three countries said in the paper that Europe needs a “regulatory framework which fosters innovation and competition, so that European players can emerge and carry our voice and values in the global race of AI”.
For its part, the European Parliament wants regulation applied to foundation models as well as less powerful forms of AI, with an unnamed member of the parliament’s negotiating team telling Politico the paper was “a declaration of war”.
Altman’s abrupt sacking by the OpenAI board led to renewed calls from lawmakers for formal regulation of foundation models.
The situation at OpenAI shows that “we cannot rely on voluntary agreements brokered by visionary leaders”, Brando Benifei, one of the two European Parliament negotiators, told Reuters.
“Regulation, especially when dealing with the most powerful AI models, needs to be sound, transparent and enforceable to protect our society,” he said.
Alexandra van Huffelen, Dutch minister for digitalisation, said the events at OpenAI underscored the “lack of transparency” in the industry and its “dependence on a few influential companies”, which “clearly underlines the necessity of regulation”.
Gary Marcus, an AI expert at New York University, wrote on X, formerly Twitter, “We can’t really trust the companies to self-regulate AI where even their own internal governance can be deeply conflicted.
“Please don’t gut the EU AI Act; we need it now more than ever.”