Artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity” according to Professor Stephen Hawking.
The renowned theoretical physicist has previously warned that the development of intelligent machines could put humanity at risk of being the architect of its own destruction. However, Professor Hawking seems to have mellowed his stance and sees potential in the creation of AI as well as the pitfalls.
Speaking at the opening of the £10 million Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking praised the advent of an academic institution dedicated to AI research and the impact it could have.
According to The Guardian, the centre won Hawking’s approval to the extent that he hailed it as “crucial to the future of our civilisation and our species”.
“We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”
Hawking continued: “I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”
“The potential benefits of creating intelligence are huge,” he explained. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely we will aim to finally eradicate disease and poverty.
“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.”
The Professor is not alone in his views. They are shared in part by Professor Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, who has highlighted the need for AIs to be designed so that they continue to do what humanity wants them to, even as they grow ever more intelligent.
In the short term, AI is unlikely to pose much of a risk to humanity, as it is mostly being used to develop driverless cars and to power virtual assistants such as the Google Assistant. But with tech companies allying on the development of AI, and Huawei earmarking $1 million to fund AI research at UC Berkeley, there is no doubt that intelligent machines will be part of our future whether we like it or not.