UK Needs to Chart Its Own Course Towards AI Legislation

Peter van der Putten, Director AI Lab, Pegasystems

There has been much speculation in the AI community recently: Will the new government announce plans for AI regulation in the King’s Speech? The pressure is on, given that after more than six years of deliberation and negotiation, the EU AI Act comes into force on August 1st. Just days before, it was reported that an AI bill would be among the 35 bills included in the speech.

But there was no mention of such a bill. Now, it is key that the UK plots its own sovereign course, but the ‘Brussels effect’ of EU AI regulation is something to reckon with – or at least to take inspiration from. Citizen and consumer interests aside, there is one thing that even the most regulation-averse businesses dislike more than legislation: uncertainty. So what shape could such regulation take if it is to stimulate innovation in trustworthy AI while balancing the needs of businesses, public organizations, citizens, and consumers?

The sole mention of AI in the King’s Speech was that the government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” These are the so-called ‘frontier’ generative AI models, such as OpenAI’s GPT, Google’s Gemini, and Meta’s herd of Llama models. The EU AI Act also covers these models, but its scope extends to all forms of AI, including smaller models and non-generative AI.

This is important, as most of the AI in use today falls into the latter category. Risk comes not just from the largest and smartest models but also from the not-so-smart use of more limited AI models and systems. The real threat is not AI superintelligence but basic artificial unintelligence.

Take, for example, a commercial risk model used in US healthcare to guide prevention and care decisions. According to a research article published in Science, the model was trained to predict a patient’s (future) healthcare spending, which it did with reasonably high accuracy. The issue is that spending is a biased proxy for level of illness: less money is spent on black patients than on non-black patients with similar health conditions. When the algorithm predicts chronic illness instead, it selects 2.5 times more black patients.
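
For intuition only, here is a minimal, hypothetical sketch in Python (made-up numbers, not the study’s actual data or model) of how the choice of training label determines who gets flagged for extra care:

```python
# Purely illustrative, hypothetical sketch: not the model or data from the
# Science study. Each patient is (number of chronic conditions, annual spending).
# Assume group B receives less spending than group A at the same level of illness.
group_a = [(3, 9000), (5, 15000), (7, 21000)]
group_b = [(3, 6000), (5, 10000), (7, 14000)]

def flag_for_extra_care(patients, label_index, cutoff):
    """Flag patients whose chosen label (spending or illness) exceeds a cutoff."""
    return [p for p in patients if p[label_index] > cutoff]

# Label = spending: group B is under-selected despite equal levels of illness.
print(len(flag_for_extra_care(group_a, 1, 12000)),
      len(flag_for_extra_care(group_b, 1, 12000)))   # 2 vs 1

# Label = chronic conditions, a direct measure of illness: selection evens out.
print(len(flag_for_extra_care(group_a, 0, 4)),
      len(flag_for_extra_care(group_b, 0, 4)))       # 2 vs 2
```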

The lesson here is not that we should stop innovating in healthcare with AI; it can be a very useful tool for prevention, operations, and clinical diagnosis. The lesson is that for purposes like these, where a lot is at stake or there is a substantial risk of harm, higher levels of scrutiny should apply, such as checks for ethical bias, proper transparency, and documentation.

This is exactly the route the EU AI Act has taken, and something future UK regulation could take inspiration from: ensure that all forms of AI are covered by legislation, not just generative AI or frontier models, while taking a pragmatic, risk-based approach in which high-risk uses of AI are held to higher standards.

This is good for citizens and consumers, but also for business and the economy: it would increase consumer trust as these systems and uses become worthy of that trust. It would foster, not hinder, responsible, sector-based innovation in AI. It is time for the UK to take matters into its own hands and chart its own course, taking the lead in both the innovation and the regulation of trustworthy AI.
