Stephen Hawking: AI Is ‘Either The Best Or The Worst Thing’ For Humanity

Artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity” according to Professor Stephen Hawking.

The renowned theoretical physicist has previously warned that the development of intelligent machines could put humanity at risk of becoming the architect of its own destruction. However, Professor Hawking appears to have softened his stance, and now sees the potential of AI as well as its pitfalls.

Speaking at the opening of the £10 million Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking praised the advent of an academic institution dedicated to AI research and the impact it could have.

According to The Guardian, the centre won Hawking’s approval to the extent that he hailed it as “crucial to the future of our civilisation and our species”.

“We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”


AI academic exploration

The Leverhulme Centre noted it will study the impact of AI as a “potentially epoch-making technological development, both short and long term”.

This sentiment was echoed by Hawking: “I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”

“The potential benefits of creating intelligence are huge,” he explained. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely we will aim to finally eradicate disease and poverty.

“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.”

The Professor is not alone in his views. They are shared in part by Professor Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, who has highlighted the need for AI to be designed carefully so that it continues to do what humanity wants even as it grows ever more intelligent.

In the short term, AI is less likely to pose much of a risk to humanity, as it is mostly being used to develop driverless cars and to power virtual assistants like the Google Assistant. But with tech companies joining forces on the creation of AI and Huawei earmarking $1 million to fund AI research at UC Berkeley, there is no doubt that intelligent machines will be part of our future, whether we like it or not.


Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
