Stephen Hawking: AI Is ‘Either The Best Or The Worst Thing’ For Humanity

Artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity” according to Professor Stephen Hawking.

The renowned theoretical physicist has previously warned that the development of intelligent machines could put humanity at risk of engineering its own destruction. However, Professor Hawking appears to have softened his stance, and now sees the potential of AI as well as its pitfalls.

Speaking at the opening of the £10 million Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking praised the advent of an academic institution dedicated to AI research and the impact it could have.

According to The Guardian, the centre won Hawking’s approval to the extent that he hailed it as “crucial to the future of our civilisation and our species”.

“We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”


AI academic exploration

The Leverhulme Centre noted it will study the impact of AI as a “potentially epoch-making technological development, both short and long term”.

This sentiment was echoed by Hawking: “I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”

“The potential benefits of creating intelligence are huge,” he explained. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely we will aim to finally eradicate disease and poverty.

“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.”

The Professor is not alone in his views. They are shared in part by Professor Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, who has highlighted the need for AI to be designed so that it continues to do what humanity wants even as it grows ever more intelligent.

In the short term, AI is less likely to pose much of a risk to humanity, as it is mostly being used to develop driverless cars and to power virtual assistants such as the Google Assistant. But with tech companies forming alliances around AI development and Huawei earmarking $1 million to fund AI research at UC Berkeley, there is no doubt that intelligent machines will be part of our future whether we like it or not.


