A ban on offensive autonomous weapons beyond meaningful human control, such as AI drones, should be implemented to save humanity from a military AI arms race.
This is according to an open letter signed by some of the world's greatest thinkers, including Stephen Hawking, PayPal and Tesla founder Elon Musk and Apple co-founder Steve Wozniak, along with hundreds of other top AI and robotics researchers.
The letter, published on behalf of the Future of Life Institute, calls on governments to consider the detrimental effects of artificial intelligence-controlled drones and robots.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” reads the letter, to be announced on July 28 at the International Joint Conference on Artificial Intelligence (IJCAI) 2015 in Argentina.
In June, the Future of Life Institute announced its intention to spend a $10 million (£6.5m) donation from Elon Musk on 37 projects dedicated to using artificial intelligence for the benefit of humankind.
Other signatories of the letter, titled ‘Autonomous Weapons: an Open Letter from AI & Robotics Researchers’, include political philosopher Noam Chomsky and Skype co-founder Jaan Tallinn.
“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.
“In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
There are a couple of things still in humanity's favour:
If Microsoft powers the AI engine, at least we know it won't work.
If Apple builds the hardware, it won't risk anything that might damage its looks, and won't work in the rain.
If Samsung designs it, it will be similar to Apple's.
If Google is involved, it will be all open source and report everything back to Google.
If Amazon is involved, it will deliver its payload the following day (possibly!)
If Oracle is involved, it will take all of the above to court for using its APIs.
So maybe humanity will be safe from Skynet after all!