But there was no mention of such a bill. Now, it is key that the UK plots its own sovereign course, but the ‘Brussels effect’ of EU AI regulation is something to reckon with – or at least to take inspiration from. Citizen and consumer interests aside, there is one thing that even the most regulation-averse businesses dislike more than legislation: uncertainty. So what shape could such regulation take to stimulate innovation in trustworthy AI while balancing the needs of businesses, public organizations, citizens, and consumers?

The sole mention of AI in the King’s Speech was that the government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” These are the so-called ‘frontier’ generative AI models, such as OpenAI’s GPT, Google Gemini, and Meta’s herd of Llama models. The EU AI Act also covers these models, but its scope extends to all forms of AI, including smaller models and non-generative AI.

This is important as most of the AI in use today falls into that category. Risk is not just caused by the largest and smartest models but also by the not-so-smart use of more limited AI models and AI systems. The real threat is not coming from AI superintelligence but from basic artificial unintelligence.

Take, for example, a commercial risk model used in US healthcare to guide prevention and care decisions. According to a research article published in Science, this model was originally trained to predict (future) healthcare spending per patient, with reasonably high accuracy. The issue, however, was that spending is a biased measure of illness: less money is spent on black patients than on non-black patients with similar health conditions. When the model was made to predict chronic illness instead, it selected 2.5 times more black patients for additional care.
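The mechanism behind this failure is worth making concrete: when one group receives less spending for the same level of illness, ranking patients by predicted cost systematically under-selects that group, while ranking by illness itself does not. The sketch below simulates this with entirely made-up groups and numbers (it is not the model or data from the Science study), purely to illustrate how the choice of prediction target drives the bias.

```python
# Illustrative simulation of proxy-label bias (hypothetical data, not
# the actual healthcare model or dataset from the Science article).
import random

random.seed(0)

def make_patients(n=10000):
    """Generate patients where group B incurs lower spending than
    group A at the same true level of illness."""
    patients = []
    for _ in range(n):
        group = random.choice(["A", "B"])   # hypothetical groups
        illness = random.gauss(50, 10)      # true health need (higher = sicker)
        # Biased spending: group B gets less care for equal illness.
        spend_factor = 1.0 if group == "A" else 0.6
        cost = illness * spend_factor + random.gauss(0, 2)
        patients.append((group, illness, cost))
    return patients

def top_decile_share(patients, key_index, group):
    """Share of `group` among the top 10% of patients ranked by the
    signal at `key_index` (1 = illness, 2 = cost)."""
    ranked = sorted(patients, key=lambda p: p[key_index], reverse=True)
    top = ranked[: len(ranked) // 10]
    return sum(1 for p in top if p[0] == group) / len(top)

patients = make_patients()
share_by_cost = top_decile_share(patients, 2, "B")     # rank by cost proxy
share_by_illness = top_decile_share(patients, 1, "B")  # rank by true illness
print(f"Group B share of top decile, ranking by cost:    {share_by_cost:.2f}")
print(f"Group B share of top decile, ranking by illness: {share_by_illness:.2f}")
```

In this toy setup the cost-based ranking all but excludes group B from the top decile, while the illness-based ranking selects both groups roughly in proportion: the model is accurate at its stated task, yet the task itself encodes the bias.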

The lesson here is not that we shouldn’t innovate healthcare with AI, as it can be a very useful tool for prevention, operations, and clinical diagnosis. The lesson is that for purposes like these, where there is a lot at stake or a substantial risk of harm, higher levels of scrutiny apply, such as requiring checks for ethical bias, proper transparency, documentation, etc.

This is exactly the route the EU AI Act has taken, and one that future UK regulation could take inspiration from: ensure that all forms of AI are covered by legislation, not just generative AI or frontier models, while taking a pragmatic, risk-based approach in which high-risk uses of AI are held to higher standards.

This is good for citizens and consumers, but also for business and the economy: consumer trust will grow as these systems and their uses become worthy of it. It would foster, not hinder, the innovation of sector-based, responsible use of AI. It’s time for the UK to take matters into its own hands and set out its own course to lead in both the innovation and the regulation of trustworthy AI.

Peter van der Putten, Director AI Lab, Pegasystems.

