Artificial Intelligence Act: Industry Reaction
Industry experts react to the European Union Parliament approving landmark legislation for artificial intelligence
The European Parliament has this week officially approved the Artificial Intelligence Act, after three years of development and negotiation.
The passing of the Artificial Intelligence Act on Wednesday makes the EU one of the world’s pioneers in regulating the technology, ahead of the US, the UK and China, which are all still developing their own sets of AI rules.
The passing of the new law also prompted a large number of reactions from the industry and AI experts, some of which Silicon UK has collated below.
Industry reaction
Forrester principal analyst Enza Iannopollo commented on the significance of the new EU AI regulations, why every other region is now playing catch-up, the particular impact this will have on UK firms, and what organisations need to do to prepare.
“The adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated,” said Iannopollo on the significance of the new law. “The EU AI Act is the world’s first and only set of binding requirements to mitigate AI risks.”
“The goal is to enable institutions to exploit AI fully, in a safer, more trustworthy, and inclusive manner,” said Forrester’s Iannopollo. “Like it or not, with this regulation, the EU establishes the ‘de facto’ standard for trustworthy AI, AI risk mitigation, and responsible AI. Every other region can only play catch-up.”
“The fact that the EU brought this vote forward by a month also demonstrates that they recognise that the technology and its adoption are moving so fast that there is no time to waste, especially when there isn’t an alternative framework available,” said Iannopollo.
Iannopollo also noted the impact of this legislation on the UK, pointing out that British firms wishing to conduct business internationally must comply with the EU AI Act, just like their counterparts in the US and Asia.
“Despite the aspiration of becoming the ‘centre of AI regulation’, the UK has produced little so far when it comes to mitigating AI risks effectively,” said Iannopollo. “Hence, companies in the UK will have to face two very different regulatory environments to start with.”
“Over time, at least some of the work UK firms undertake to be compliant with the EU AI Act will become part of their overall AI governance strategy, regardless of UK-specific requirements – or lack thereof,” said Iannopollo.
Forrester’s Iannopollo explained that organisations must prepare for the new EU AI law, due to its extraterritorial effect, its hefty fines, and the pervasiveness of its requirements across the ‘AI value chain’.
All of this means that most global organisations using AI must – and will – comply with the Act. Some of its requirements will be ready for enforcement later this year, she pointed out.
“There is a lot to do and little time to do it,” Forrester’s Iannopollo concluded. “Organisations must assemble their ‘AI compliance team’ to get started. Meeting the requirements effectively will require strong collaboration among teams, from IT and data science to legal and risk management, and close support from the C-suite.”
Shadow AI implications
Mark Jow, EMEA technical evangelist at network monitoring and security firm Gigamon, said the EU AI Act makes it clear that enterprise customers are responsible for ensuring awareness of when and how AI capabilities are in use, and for putting in place appropriate guardrails to secure their networks and data.
This makes the prospect of Shadow AI – the illicit or unreported use of AI technologies within a business – all the more alarming for security leaders, Jow warned.
“Shadow AI is the latest addition to the overall Shadow IT stable, which poses risk to organisations by introducing new back doors, footholds and data storage sites that security leaders are unaware of, and therefore unable to factor into their overall security posture,” said Jow.
“This means these applications are used without the appropriate security tools and authentication levels on devices,” said Jow. “Shadow technologies are often purchased without being formally reported to the organisation, because users are unaware of their responsibility, feel it isn’t important, or sometimes buy them under personal expense claims.”
“While unbudgeted costs can be a challenge with any Shadow IT, some of the biggest risks occur when employees and departments are leveraging less official, unpaid technologies,” said Jow. “The current AI landscape offers users free programs that generally carry a higher level of security risk and are currently (largely) unregulated.”
“In addition to unauthorised access risk, Shadow AI poses wider data protection risks,” Jow cautioned. “For example, how does the business know what potential proprietary, confidential, or private information is being provided to the AI solution in order for it to formulate decisions? Is the AI solution provided by a ‘trusted’ and reputable provider from a trusted nation state, or a corporation with a good history of data protection?”
Shadow AI also presents a reputational risk to organisations, Jow warned, before he addressed how organisations can get a handle on AI use.
“As ever, you can’t manage what you can’t see,” said Jow. “Having the levels of visibility to identify and target the use of AI platforms through inspection of network telemetry is crucial.”
“While humans have a whole host of reasons to fail to report AI use, network data is the source of truth,” Jow concluded. “Solutions that provide deep observability of network traffic, wherever it flows – private/public cloud and on-prem, encrypted or unencrypted, north-south and east-west – are an essential tool in the armoury of any modern organisation wishing to identify and eliminate the risks of Shadow AI.”
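To make Jow’s point concrete, the sketch below shows one simple way network telemetry could surface unreported AI use: matching DNS query logs against a watchlist of generative-AI service domains. The log format and the domain list are illustrative assumptions, not a description of Gigamon’s products.

```python
# Minimal sketch: flag hosts contacting known generative-AI services
# based on DNS query logs. Log format and watchlist are assumptions
# for illustration, not any vendor's tooling.

# Hypothetical watchlist of generative-AI service domains.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_lines):
    """Yield (client_host, ai_domain) pairs for queries on the watchlist.
    Each log line is assumed to be 'client_host queried_domain'."""
    for line in log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        client, domain = parts
        if domain in AI_DOMAINS:
            yield client, domain

# Example usage with a toy log.
logs = ["laptop-42 api.openai.com", "db-01 internal.example.com"]
for client, domain in flag_shadow_ai(logs):
    print(f"Possible unreported AI use: {client} -> {domain}")
```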
Prepare now, or risk fines
Meanwhile Keith Fenner, SVP and GM EMEA at risk management firm Diligent, highlighted some of the steps businesses will need to take to ensure they remain compliant.
“With the EU AI Act now endorsed by all 27 EU Member States and approved by the EU lawmakers today, the onus is now on British and Irish businesses to prepare for compliance,” said Fenner.
“The potential for hefty fines – up to €35 million or 7 percent of global turnover for breaches – has meant becoming and remaining compliant is increasingly important,” Fenner warned.
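For the most serious breaches, the penalty Fenner cites is capped at whichever of the two figures is higher. A minimal sketch of the arithmetic, assuming a worldwide annual turnover figure in euros:

```python
# Minimal sketch of the Act's headline fine cap for the most serious
# breaches: the higher of EUR 35 million or 7% of worldwide annual turnover.

def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a given turnover."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# A company turning over EUR 1 billion faces a cap of EUR 70 million,
# since 7% of turnover exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```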
“To best prepare, GRC (governance, risk and compliance) professionals should build and implement an AI governance strategy. This will involve mapping, classifying and categorising the AI systems they use or have under development, based on the risk levels in the framework.”
“Next, business leaders and GRC professionals will need to perform gap assessments to evaluate if current policies and regulations on privacy, security, and risk can be applied to AI,” said Fenner. “The aim is to establish a strong governance framework, encompassing both in-house and third-party AI solutions.”
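The mapping exercise Fenner describes can be pictured as an inventory keyed to the Act’s four risk tiers (unacceptable, high, limited and minimal risk). In the minimal sketch below, the example systems and their tier assignments are hypothetical illustrations, not legal determinations:

```python
# Minimal sketch of an AI-system inventory classified against the EU AI
# Act's four risk tiers. Example systems and assignments are hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited under the Act"
    HIGH = "strict obligations: conformity assessment, logging, oversight"
    LIMITED = "transparency obligations, e.g. disclosing AI interaction"
    MINIMAL = "no additional obligations"

# Hypothetical inventory: system name -> assigned tier.
inventory = {
    "cv-screening-model": RiskTier.HIGH,   # recruitment is a high-risk use
    "customer-chatbot": RiskTier.LIMITED,  # must disclose it is an AI
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} ({tier.value})")
```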
“But compliance is just the tip of the iceberg,” said Fenner. “To truly thrive in this new era, UK/Irish business leaders need to reimagine their approach to AI. This means finding the right balance between innovation and regulation.”
Prepare ahead of enforcement
Alois Reitbauer, chief technology strategist at observability and security specialist Dynatrace, also explained how organisations can prepare ahead of the Act being enforced.
Reitbauer believes organisations need a clear understanding of what constitutes an AI model in order to comply with the law. He also warns that the EU must be careful to encourage investment in AI for positive societal impact if it is to avoid falling behind the rest of the world, pointing to the UK as an example.
“There never seemed to be any doubt that the European Parliament would vote in favour of the EU AI Act, but the majority outcome this morning has finally confirmed it’s coming, and we now need to prepare,” said Reitbauer.
“One of the biggest considerations that must be addressed quickly is how the regulation will be enforced,” said Reitbauer. “It’s impossible to see how organisations will be able to comply if they aren’t first clear on what constitutes an AI model, so the EU will first need to ensure that has been clearly defined. For example, will the machine learning used in our mobile phones or connected thermostats be classed as an AI system?”
“There is also a danger of the EU falling behind the rest of the world if it only considers AI as a negative force to be contained,” said Reitbauer. “It needs to balance new regulatory controls with investments that encourage research into positive use cases for AI that can help solve some of the world’s most pressing challenges.”
“This is where the UK stands ahead, as its government has stated an intent to adopt a flexible approach to AI regulation, avoiding unnecessary blanket rules and promoting opportunities,” said Reitbauer. “This makes the UK an attractive destination for firms looking to establish a base to invest in AI-based research in Europe.”
Global implications
Meanwhile Greg Hanson, GVP and Head of Sales EMEA North at data management specialist Informatica, commented on how the EU’s AI Act will impact global regulation of AI, including the challenge for local regulators to strike a balance between regulation and innovation.
“Final approval of the EU’s AI Act will resonate far beyond the region’s borders,” said Hanson. “We will likely see divergence in how countries across the globe regulate AI. This could look like different variants of the same policy that are loosely based on the EU’s AI Act. The challenge for local regulators will be striking a balance between regulation and innovation.”
“What’s clear is that large, multi-national organisations will not be able to afford to do AI regulation on a siloed, project-by-project, country-by-country basis,” said Hanson. “It is too complex.”
“Instead, organisations will need to consider how AI regulation translates into policy and put solid foundations in place that can be easily adapted for individual regions,” said Hanson. “For example, regulators across countries are showing an appetite for transparency. To demonstrate banks are using AI safely and responsibly, they must be able to show why their AI model has made a certain decision.”
“The onus is on banks to understand what the heritage and lineage of data inputs were, ensure that the data was high quality and illustrate how AI acted to generate a particular outcome,” Hanson concluded.
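One way to picture the lineage requirement Hanson describes is a per-decision audit record that stores where the input data came from alongside the outcome the model produced. A minimal sketch, in which the field names and structure are assumptions for illustration rather than any regulator’s required schema:

```python
# Minimal sketch of an auditable AI decision record carrying data
# lineage alongside the outcome. Field names and structure are
# illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_id: str             # which model produced the decision
    input_sources: list[str]  # lineage: where the input data came from
    outcome: str              # the decision the model reached
    rationale: str            # human-readable explanation of the outcome
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a hypothetical loan decision recorded with its data heritage.
record = DecisionRecord(
    model_id="credit-scoring-v3",
    input_sources=["core-banking-ledger", "credit-bureau-feed"],
    outcome="declined",
    rationale="debt-to-income ratio above policy threshold",
)
print(record)
```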