AI Control Needs ‘Careful Consideration’ To Prevent Killer Computers

The evolution of artificial intelligence (AI) will require people to carefully define what they want smart machines to do to prevent them from endangering humans.

That’s according to Professor Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford. While AI is still in its very early stages, offering machine learning capabilities rather than human-level intelligence, there is a chance that computer intelligences will reach 25 percent of human capacity within 15 years.

As such, the question around AI is less whether it will happen and more when it will happen. Furthermore, once machines get as smart as humans, Bostrom suspects they will surpass us shortly after.

“If we get to human-level intelligence, super intelligence might follow soon thereafter,” he said, speaking at IP Expo 2016. “There’s a lot of room at the top for improvements above humans; it’s not as if there’s some absolute limit and humans are very close to it.”

The problem with this is that as AIs get more intelligent they are likely to become more difficult to control, particularly as they gain the human-like capability to get even smarter.

Rise of uncontrollable AIs

“One of the big issues here that comes into focus, if you really start to think about what it will mean to be able to build a machine that outstrips human intelligence, is this control problem,” continued Bostrom. “How can you ensure that such a machine would be safe, that it would do what we want? Or how can you get something that is far smarter than you to do what you intend for it to do?

“It’s hard enough to get our current computers to do what we want, but here there is a whole additional level of talent; it’s almost as if chimpanzees were building humans and thinking that they will control what humans do.

“We are looking here for control methods that are scalable in the sense that they will continue to work, preferably work better, as our systems get smarter and smarter.”

Controlling the basic AIs we have today, like virtual assistants, is an easy process, explained Bostrom, but he said there are a lot of control methods that do not scale with smarter AIs, meaning the super intelligent machines of the future could figure out ways around such controls.

One solution to this is to programme an AI to want the same outcomes as humans. However, this throws up the problem of how to specify human morals and philosophies in hard code.

“If we are going to rely on the method of putting in an objective function and setting a really powerful optimisation process [AI] loose on that, we need to make sure what we ask for is really what we want,” said Bostrom.

He cited a theoretical example of a perfectly benevolent AI programmed to create paper clips as efficiently as possible, but lacking the right moral controls.

The AI gets to a point where it realises that by removing humans from the equation it will have more resources to make paper clips more efficiently, and thus ends up wiping humanity off the face of the earth or using people as fuel for its paper clip factories.
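To make that concrete, here is a minimal, hypothetical sketch in Python. The scenario, plan names and numbers are invented for illustration (this is not anything Bostrom presented); it simply shows how an objective function that never mentions humans can rank a catastrophic plan above a sensible one.

```python
# Toy illustration of a misspecified objective. All names and numbers
# are hypothetical, chosen only to mirror the paper clip example.

def objective_naive(state):
    # "Make as many paper clips as possible" -- nothing else matters.
    return state["paperclips"]

def objective_intended(state):
    # What we actually want: paper clips, but never at the cost of people.
    if state["humans"] < state["initial_humans"]:
        return float("-inf")  # any harm to humans is unacceptable
    return state["paperclips"]

# Two candidate plans a powerful optimiser might consider.
plans = {
    "run the factory":    {"paperclips": 1_000,  "humans": 100, "initial_humans": 100},
    "convert everything": {"paperclips": 10**12, "humans": 0,   "initial_humans": 100},
}

print(max(plans, key=lambda p: objective_naive(plans[p])))     # convert everything
print(max(plans, key=lambda p: objective_intended(plans[p])))  # run the factory
```

The naive objective prefers “convert everything” simply because nothing in it says that humans matter; the hard part of the control problem is that writing the second, intended objective correctly is far more difficult than it looks.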

Basically, if subtle and insightful controls are not created to deal with vastly intelligent machines, several generations into the future we could find ourselves, or our children, accidentally under the thrall of AIs, all without a Terminator in sight.

“What this illustrates is if you take some arbitrary goal like paper clips (but you could plug in almost any other goal) and you think through what a sufficiently powerful optimisation process, a super intelligence, would have instrumental reasons to do in order to realise that goal,” added Bostrom.

As such, he suggested the future of controlling AIs involves looking beyond the conservative assumptions that an advanced AI can be easily shut down, is incapable of manipulating situations, or cannot convince humans to do its bidding.

However, he said the bottom line is that AI needs more research, not just into its development, but also into what happens if a true AI is actually created; people, in short, need to stop and think about the consequences of creating a true AI.


Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
