Government Needs ‘Careful Consideration’ To Avoid The Pitfalls Of AI
Government report on the future of artificial intelligence (AI) says Whitehall must consider the impact of the technology on privacy, jobs and other areas
Accountability for AIs
The second is ensuring the “concepts and mechanisms of accountability” for decisions made by AIs are considered. The report highlights that human oversight is needed for AI decision making, say in the civil service, but that there must also be a means to ensure humans do not blindly go along with every decision or suggestion an AI serves up.
“It is likely that many types of government decisions will be deemed unsuitable to be handed over entirely to artificial intelligence systems. There will always be a ‘human in the loop’,” the report said, yet explained there are challenges with that approach as well.
“This person’s role, however, is not straightforward. If they never question the advice of the machine, the decision has de facto become automatic and they offer no oversight. If they question the advice they receive, however, they may be thought reckless, more so if events show their decision to be poor.”
Added into the mix is the need to figure out who is accountable for any bad decision an AI comes up with, something that has been a prevalent concern in the development of driverless cars.
“There will be calls for redress and compensation in the event that the use of AI causes some harm. The challenge is to establish a system that can provide this. Current approaches to liability and negligence are largely untested in this area,” the report highlighted.
This approach has been welcomed by the Digital Catapult, which is positive about the benefits AIs can bring but champions transparency and control over the data they have access to.
“The size of the prize is huge, but transparency and control over our personal data are paramount. The good news is that the UK has world-leading capability to tackle this dilemma. Significant investment is warranted in techniques for working with large data sets while preserving privacy and techniques for clearer consent and trust when sharing personal data, that would help address the barriers to AI-powered health for the UK,” said Marko Balabanovic, chief technology officer at the Digital Catapult.
Careful consideration
Many of these problems remain hypothetical, as current AIs do not pose much of a risk to privacy or decision making because they are limited by the permissions people give them. However, as AI moves from machine learning to more advanced techniques, these concerns will become more pressing.
Preemptively, Amazon, Facebook, Google, Microsoft and IBM have joined forces to create an AI organisation to explore the ethics and applications of the technology and its potential to transform industries and society.
And MPs are also calling for the government to consider the impact of AI, with the Science and Technology Committee saying action needs to be taken now to ensure the UK is ahead of the game in both benefiting from AI and mitigating its potential negative impacts.
“It is too soon to set down sector-wide regulations for this nascent field, but it is vital that careful scrutiny of the ethical, legal and societal ramifications of artificially intelligent systems begins now,” the Committee said.
With AI beginning to crop up in technology ranging from enterprise security software to smartphones, it is no wonder that there is a rising clamour to scrutinise such smart systems.