The evolution of artificial intelligence (AI) will require people to carefully define what they want smart machines to do to prevent them from endangering humans.
That’s according to Professor Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford. While AI is still in its very early stages, offering machine learning capabilities rather than human-level intelligence, there is a chance that computer intelligences will reach 25 percent of human capacity within 15 years.
As such, the question around AI is less whether it will happen and more when it will happen. Furthermore, once machines get as smart as humans, Bostrom suspects they will surpass us shortly after.
“If we get to human-level intelligence, super intelligence might follow soon thereafter,” he said, speaking at IP Expo 2016. “There’s a lot of room at the top for improvements above humans; it’s not as if there’s some absolute limit and humans are very close to it.”
The problem with this is that as AIs get more intelligent they are likely to become more difficult to control, particularly as they gain the human-like capability to get even smarter.
“It’s hard enough to get our current computers to do what we want, but here there is a whole additional level of talent; it’s almost as if chimpanzees were building humans and thinking that they would control what humans do.
“We are looking here for control methods that are scalable in the sense that they will continue to work, preferably work better, as our systems get smarter and smarter.”
Controlling the basic AIs we have today, like virtual assistants, is an easy process, explained Bostrom, but he said many control methods do not scale with smarter AIs, meaning the super-intelligent machines of the future could figure out ways around such controls.
One solution to this is to programme an AI to want the same outcomes as humans. However, this throws up problems around specifying human morals and philosophies in hard code.
“If we are going to rely on the method of putting in an objective function and putting a really powerful optimisation process [AI] loose on that, we need to make sure what we ask for is really what we want,” said Bostrom.
He cited a theoretical example of a perfectly benevolent AI programmed to create paper clips as efficiently as possible, but lacking the right moral controls.
The AI gets to a point where it realises that by removing humans from the equation it will have more resources to make paper clips more efficiently, and thus ends up wiping humanity off the face of the earth or using people as fuel for its paper clip factories.
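The failure mode Bostrom describes can be sketched in a few lines of code. The toy Python below is not from his talk; the function names and figures are hypothetical, but they show how an objective that only counts paper clips rewards an optimiser for consuming every resource it can reach, because nothing in the objective asks it to leave anything for humans.

```python
# A minimal, hypothetical sketch of a mis-specified objective function.
# Nothing here is from Bostrom's talk; the names and numbers are illustrative.

def paperclips_made(resources_used: float) -> float:
    """Objective function: the only thing it measures is paper clip output."""
    return 10.0 * resources_used  # no term for anything else humans value

def optimise(total_resources: float, step: float = 1.0) -> float:
    """Greedy optimiser: keeps consuming resources while the objective improves."""
    used = 0.0
    while used + step <= total_resources and paperclips_made(used + step) > paperclips_made(used):
        used += step
    return used

if __name__ == "__main__":
    world_resources = 100.0  # stand-in for everything, including what people need
    consumed = optimise(world_resources)
    print(f"Resources consumed: {consumed} of {world_resources}")
    # The optimiser consumes all 100 units, because the objective never said not to.
```

The point of the sketch is Bostrom’s: the optimiser is not malicious, it simply maximises exactly what it was asked to maximise, which is why the objective has to capture what we really want.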
Basically, if subtle and insightful controls are not created to deal with vastly intelligent machines, several generations into the future we could find ourselves or our children accidentally in the thrall of AIs, all without a Terminator in sight.
As such, he suggested the future of controlling AIs involves looking beyond conservative assumptions that an advanced AI can be easily shut down, is incapable of manipulating situations, or cannot convince humans to do its bidding.
However, he said the bottom line is that AI needs more research, not just into its development, but also into what happens if a true AI is actually created; people need to stop and think about the consequences before they build one.