A ban on offensive autonomous weapons beyond meaningful human control, such as AI drones, should be implemented to save humanity from a military AI arms race.
This is according to an open letter signed by some of the world's greatest thinkers, including Stephen Hawking, PayPal and Tesla founder Elon Musk and Apple co-founder Steve Wozniak, among hundreds of other top AI and robotics researchers.
The letter, published by the Future of Life Institute, calls on governments to consider the detrimental effects of artificial intelligence-controlled drones and robots.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” reads the letter, to be announced on July 28 at the International Joint Conference on Artificial Intelligence (IJCAI) 2015 event in Argentina.
In June, the Future of Life Institute announced its intention to spend a $10 million (£6.5m) donation from Elon Musk to help fund 37 projects dedicated to using artificial intelligence for the benefit of humankind.
Other signatories of the letter, which is titled ‘Autonomous Weapons: an Open Letter from AI & Robotics Researchers’, include political philosopher Noam Chomsky and Jaan Tallinn, the co-founder of Skype.
“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.
“In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
There are a couple of things still in humanity's favour:
If Microsoft powers the AI engine, at least we know it won't work.
If Apple builds the hardware, it won't risk anything that might damage its looks, and won't work in the rain.
If Samsung designs it, it will be similar to Apple's.
If Google is involved, it will be all open source and report everything back to Google.
If Amazon is involved, it will deliver its payload the following day (possibly!)
If Oracle is involved, they will take all of the above to court for using their API.
So maybe humanity will be safe from Skynet after all!