Google DeepMind AI No Longer Needs Humans To Learn
Skynet anyone? DeepMind’s Go-playing artificial intelligence system can now learn without human assistance
Google’s DeepMind division has announced a significant breakthrough: the latest version of its Go-playing AI system taught itself the ancient Chinese game of Go from scratch, with no human input at all, and went on to defeat its previous champion-beating system.
Indeed, AlphaGo Zero, the latest evolution of AlphaGo (the first computer program to defeat a world champion at Go), is now even more powerful, and it achieved this entirely by itself.
The development is sure to trigger the concerns of Elon Musk, Bill Gates and Dr Stephen Hawking, all of whom are worried that AI could signal the end of humanity.
Learning By Itself
The breakthrough was announced on a DeepMind blog post, after a paper on the subject was published in the journal Nature.
“Artificial intelligence research has made rapid progress in a wide variety of domains from speech recognition and image classification to genomics and drug discovery,” the blog says. “In many cases, these are specialist systems that leverage enormous amounts of human expertise and data.”
“However, for some problems this human knowledge may be too expensive, too unreliable or simply unavailable,” it added. “As a result, a long-standing ambition of AI research is to bypass this step, creating algorithms that achieve superhuman performance in the most challenging domains with no human input.”
The AlphaGo Zero system is geared towards playing Go. It improved itself and its ability to play the game by simply playing games against itself, without any human input.
“In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0,” said DeepMind.
AlphaGo Zero took just 21 days of learning from scratch to reach the level of the ‘Master’ version of AlphaGo.
DeepMind said it was able to do this thanks to a “novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher.”
“The system starts off with a neural network that knows nothing about the game of Go,” said DeepMind. “It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.”
“This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again,” said the firm. “In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.”
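The loop DeepMind describes – play against yourself, reinforce what the winner did, repeat – can be illustrated with a toy. The sketch below is a deliberately simplified, tabular analogue of that self-play cycle, applied to the trivial game of Nim (take 1 or 2 stones from a pile; whoever takes the last stone wins). A lookup table of move preferences stands in for the deep neural network, and weighted sampling stands in for the search; the game, function names and update rule are all illustrative assumptions, not DeepMind's actual code.

```python
import random

# Toy analogue of AlphaGo Zero's self-play loop, on Nim (pile of 10;
# take 1 or 2 stones per turn; the player who takes the last stone wins).
# A tabular "policy" stands in for the neural network, and sampling
# proportional to its preferences stands in for the search algorithm.

ACTIONS = (1, 2)

def new_policy():
    # Uniform preferences for every pile size: the system "knows nothing".
    return {n: {a: 1.0 for a in ACTIONS if a <= n} for n in range(1, 11)}

def choose(policy, pile):
    # Sample a move in proportion to the policy's current preferences.
    prefs = policy[pile]
    r = random.random() * sum(prefs.values())
    for a, v in prefs.items():
        r -= v
        if r <= 0:
            return a
    return max(prefs)

def self_play(policy):
    # Play one full game against itself, recording (pile, action, player).
    pile, player, history = 10, 0, []
    while pile > 0:
        a = choose(policy, pile)
        history.append((pile, a, player))
        pile -= a
        player = 1 - player
    winner = 1 - player  # whoever took the last stone
    return history, winner

def update(policy, history, winner, lr=0.5):
    # Reinforce the winner's moves and discourage the loser's: a crude
    # stand-in for "the neural network is tuned and updated".
    for pile, a, player in history:
        sign = 1.0 if player == winner else -1.0
        policy[pile][a] = max(0.01, policy[pile][a] + lr * sign)

random.seed(0)
policy = new_policy()
for _ in range(5000):          # each iteration, "the process begins again"
    history, winner = self_play(policy)
    update(policy, history, winner)
```

After enough iterations the policy tends to favour the winning Nim moves (leaving the opponent a pile that is a multiple of three), having never been shown a single human game – the same shape of idea, at toy scale, as AlphaGo Zero's training.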
“After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0,” said the firm.
“After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as ‘Master’, which has defeated the world’s best players and world number one Ke Jie.”
Although AlphaGo Zero is geared towards the Chinese game of Go, its creators point out that it could be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, and any “resulting breakthroughs have the potential to positively impact society”.
Divisive Subject
Despite DeepMind’s optimism, AI remains a divisive subject, with some worrying that it is more likely to take jobs from humans than to destroy humanity outright.
DeepMind also ran into trouble over the summer, after the Information Commissioner’s Office ruled that its data-sharing arrangement with the Royal Free NHS Trust breached UK data protection law.
Regardless of these concerns, it would appear that AI development is in no way slowing down.
Put your knowledge of artificial intelligence to the test. Try our quiz!