As AI – notably Machine Learning – continues to expand, government departments will increasingly use the technology to deliver advanced services to the public.

In its recent report, the Committee on Standards in Public Life concluded: “Our message to government is that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress and deficiencies are notable.

“The work of the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI), and the Information Commissioner’s Office (ICO) are all commendable. But on the issues of transparency and data bias, in particular, there is an urgent need for practical guidance and enforceable regulation.”

In addition: “This review found that the Nolan Principles are strong, relevant, and do not need reformulating for AI. The Committee heard that they are principles of good governance that have stood, and continue to stand, the test of time. All seven principles will remain relevant and valid as AI is increasingly used for public service delivery.”

Commenting on the release of the report, Asheesh Mehra, CEO and Co-Founder of AntWorks, said: “For ethical AI to work, the government must work with the private sector to ensure that AI solutions allow for auditability and traceability. These traceability capabilities exist on your laptop today.

“However, AI algorithms and engines don’t always work that way. Such technologies can immediately sweep up these identifying footprints. In the process, they erase the record of what occurred and the ability to assess what happened and when.

“That makes it difficult to learn from mistakes, police for possible infractions and identify those who don’t follow organizational or regulatory rules. Governments and organizations using AI should consider the importance of auditability and traceability to their enforcement and compliance efforts, and only then will AI be able to deliver transparent services to the public.”
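To make Mehra’s point about traceability concrete, here is a minimal sketch – an illustrative example, not AntWorks’ technology – of a hash-chained decision audit log. Each automated decision is recorded with its inputs, outcome, and model version, and chaining the hashes means any later tampering with the record is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    """Append-only log of automated decisions, hash-chained so that
    tampering with any earlier entry is detectable."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis hash

    def record(self, case_id, inputs, decision, model_version):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "case_id": case_id,
            "inputs": inputs,
            "decision": decision,
            "model_version": model_version,
            "prev_hash": self._last_hash,
        }
        # Hash the entry together with the previous hash to chain records.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self):
        """Re-derive the hash chain; returns False if any record was altered."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hypothetical usage: log a benefits decision, then verify the chain.
log = DecisionAuditLog()
log.record("APP-1042", {"income": 28000, "dependants": 2}, "approved", "v1.3")
assert log.verify()
```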

The scale of AI development is staggering: The Office for AI estimates that AI could add £232 billion to the UK’s economy by 2030, boosting productivity in some industries by 30%. AI is also an international issue. Over 25 countries have published an AI strategy, and the European Union, United Nations, and OECD have all taken a close interest in AI governance and ethics. The question of how AI can be used effectively and ethically is of global concern, and there would be a benefit to the UK working with its international partners in a shared approach.

Ensuring that when AI is used to deliver public services, this is done transparently and with full explainability is paramount. How these standards will be policed remains to be seen, but policed they must be. AI can be a black-box technology, and this kind of opaque deployment has no place in the public sector.

AI in public life

Silicon UK spoke with Ben Taylor, CTO of Rainbird, who contributed to the report, to gain his insight.


In your view, do you agree that the Nolan Principles are strong, relevant, and do not need reformulating for AI in the context of public services?

The Nolan principles still provide a good benchmark for public sector use of AI. As the Committee notes, Machine Learning systems pose a challenge for three of the Nolan Principles, specifically openness, accountability and objectivity. As late adopters of Machine Learning, some government bodies have poor ‘data hygiene’, which increases the potential for public-sector neural networks to develop biases and thus fail to live up to the Nolan principle of objectivity.

Work by organizations such as the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI), and the Information Commissioner’s Office (ICO) is making strides towards addressing these concerns. For example, the European Commission last year set out seven principles for ethical AI (one of which was transparency), representing practical guidelines for organizations to follow in the pursuit of responsible AI implementation.

How important will explainability become as AI expands across public services?

‘Black box’ AI also poses a challenge to the essential democratic principle of accountability and renders it crucial that government departments look at more human-centric, rules-based AIs that can explain decisions in human terms.

In order to open up their decisions to public scrutiny, public sector bodies need to prioritize public standards when designing and building AI systems. They must move away from the kind of neural networks that use arcane mathematical models and instead adopt forms of AI that are explainable to non-specialists. It is not only public sector professionals that must be accountable, open and objective but also the AI systems they are using to make or guide their decisions.

Since AI is increasingly being used to help public service workers, from the police to immigration officials, it is vital for their human ‘colleagues’ to be involved in customizing and auditing these systems. Many Machine Learning systems operate on complex mathematical models understood only by data scientists, which makes it impossible for ordinary public-sector professionals to monitor and manage them.

The difficulty in surfacing biases in complex Machine Learning platforms also prevents public sector bodies from uncovering institutional biases in themselves; since machines inherit prejudices from humans, surfacing prejudice in machines can also surface previously unseen prejudice in humans.
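As a concrete illustration of how such biases can be surfaced, the sketch below (with hypothetical field names and made-up data) applies the common ‘four-fifths’ disparate-impact heuristic to a batch of logged decisions, flagging any group whose favourable-outcome rate falls well below that of the best-served group.

```python
from collections import defaultdict

def selection_rates(decisions, group_key="group", outcome_key="approved"):
    """Compute the favourable-outcome rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d[group_key]] += 1
        approved[d[group_key]] += bool(d[outcome_key])
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths rule' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Made-up decision records for illustration only.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
print(disparate_impact_flags(decisions))  # {'B': 0.5} -> group B under-selected
```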

The only way to ensure AI systems are rolled out safely in public services is a return to ‘rules-based’ AI that reflects human thinking and can, therefore, be configured and audited by relevant subject matter experts – this is especially true in government given the historically opaque nature of its inner workings and the lack of clarity around how decisions within its departments are made.
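A minimal sketch of the rules-based style Taylor describes might look like the following (the rules and thresholds are hypothetical): each rule is expressed in terms a subject-matter expert can read and amend, and every decision returns the list of rules that fired as its explanation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str          # human-readable description an expert can audit
    applies: Callable  # predicate over the case facts
    conclusion: str    # what the rule concludes if it applies

# Illustrative rules only; a real system would be authored by domain experts.
RULES = [
    Rule("Applicant is under 18", lambda c: c["age"] < 18, "refer to guardian process"),
    Rule("Income below support threshold", lambda c: c["income"] < 16000, "eligible for support"),
    Rule("Resident outside service area", lambda c: not c["resident"], "ineligible"),
]

def decide(case):
    """Apply every rule and return the decision alongside the rules that
    fired, so the outcome can be explained in human terms."""
    fired = [r for r in RULES if r.applies(case)]
    return {
        "conclusions": [r.conclusion for r in fired],
        "explanation": [r.name for r in fired],  # the audit trail
    }

print(decide({"age": 34, "income": 14500, "resident": True}))
# {'conclusions': ['eligible for support'],
#  'explanation': ['Income below support threshold']}
```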

Where within public services do you think AI will first make a significant and positive impact?

Many recent AI projects have found particular success in making customer interactions smoother. Indeed, a large proportion of AI adoption across the board (including in public services) has so far been front-end focused.

There have been many significant benefits from the adoption of this technology in terms of scaling workflows and freeing resources by automating manual, administrative tasks. Still, the media landscape has rightly focused on the various problems that have come along with it.

The public sector’s accelerating adoption of AI risks causing discrimination unless action is taken and transparent forms of AI, which can be monitored to mitigate the risks, are implemented. This is why verifying that decisions are taken according to the Nolan principles is so important.

Is the ‘coherent regulatory framework’ that the Committee on Standards in Public Life called for in existence? If not, what needs to change?

With the regulatory framework currently in place, businesses and government urgently need to look at combining Machine Learning with rules-based forms of AI that apply the same human logic to all decisions, eliminating the human inconsistencies and biases that creep into machine-learning models and potentially lead to discrimination.

Currently, rules-based AI systems learn how to make significant decisions from human experts, but in future, they could learn from data. Symbolic AI could also be used to audit machine-learning systems by comparing their contribution to decisions with that of other factors. This would bridge the gap between neural networks and human accountability.
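One plausible shape for the auditing Taylor suggests – a sketch assuming stand-in scoring functions rather than any particular product – is to run a transparent rules engine alongside the machine-learning model on the same cases and queue any significant disagreements for human review.

```python
def audit_against_rules(cases, ml_predict, rules_decide, tolerance=0.1):
    """Compare an ML model's outputs against a transparent rules engine on
    the same cases; disagreements above `tolerance` are queued for review.

    ml_predict(case)   -> probability in [0, 1] from the opaque model
    rules_decide(case) -> probability-like score from auditable rules
    """
    flagged = []
    for case in cases:
        ml_score = ml_predict(case)
        rule_score = rules_decide(case)
        if abs(ml_score - rule_score) > tolerance:
            flagged.append({
                "case": case,
                "ml_score": ml_score,
                "rule_score": rule_score,
            })
    return flagged

# Stand-ins for a trained model and a rules engine (both hypothetical).
ml_predict = lambda c: 0.9 if c["income"] < 16000 else 0.2
rules_decide = lambda c: 1.0 if c["income"] < 16000 else 0.0

cases = [{"income": 14000}, {"income": 30000}]
for f in audit_against_rules(cases, ml_predict, rules_decide):
    print(f)  # each disagreement is surfaced for a human expert to examine
```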

Governments are now starting to understand that we need stronger and better regulation around AI. Public services must start taking action to develop new frameworks so organizations can innovate in a way that is impactful and progressive, but also responsible. Stronger regulation allows organizations and individuals alike to understand the way data should be used and shared responsibly.

Do you see any issues with the government adopting the recommendations made by the Committee?

The Committee on Standards in Public Life report found that AI poses a challenge for openness because it is extremely difficult to find out where and how new technology is being used in the public sector. Yet what is even more dangerous is that these organizations themselves do not understand how the technology is being used; it is the equivalent of the left hand not knowing what the right is doing.

The inner workings of neural networks are so obscure and remote from everyday human reasoning that they could make decisions, and the operation of public services, increasingly obscure and remote from everyday people too.

How will AI in the public sector be used within the next five years?

Organizations currently sit on vast amounts of data that can benefit ‘behind the scenes’ decision-making when combined with human-centric machine intelligence. New benefits can in future be derived from automating high-value, predominantly transactional decisions, such as risk analysis or governance decisions, by scaling an organization’s domain expertise. More companies and public bodies need to realize that the real value of AI is in these middle or back-end processes.

Being open and transparent about how public bodies are using AI is clearly a foundation onto which these services must be built. In conclusion, Oleg Rogynskyy, CEO and Founder of People.ai, said: “For AI to take off in the public sector, the public need to be aware and in control of what happens with their personal data. Public sector organizations need to be transparent about the data they collect and how this is being used. Equally important is making sure the AI models that are used are fully explainable, so the decisions they make can be questioned and reviewed if needed. This would work to dispel both concerns of accountability and bias.”

The future of public service delivery will see more Machine Learning being used – this is inevitable, as the technology can deliver massive benefits. These advantages, though, must not come at the cost of unethical practices. Open, explainable AI is the way forward in the public sector.

David Howell

Dave Howell is a freelance journalist and writer. His work has appeared across the national press and in industry-leading magazines and websites. He specialises in technology and business. Read more about Dave on his website: Nexus Publishing. https://www.nexuspublishing.co.uk.
