Artificial Intelligence promises many positive developments, but experts have warned that the technology could be exploited for malicious purposes.

The warning came in a new report from the Future of Humanity Institute, with the authors drawn from leading universities such as Cambridge, Oxford and Yale, along with privacy advocates and military experts.

The report builds on the findings of a two-day workshop held in Oxford in February 2017. Among the risks the report highlights is that AI could be misused by rogue states, criminals and lone-wolf attackers.

Malicious AI Use

The researchers said that artificial intelligence and machine learning capabilities are growing at an unprecedented rate, and whilst much attention has focused on their beneficial applications, “less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.”

The report said that the malicious use of AI poses imminent threats to digital, physical and political security, by enabling larger-scale and much more efficient attacks within the next five years.

“The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labour, intelligence and expertise,” said the report. “A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.”

New types of attack may also arise, as AI systems will be able to complete tasks that would otherwise be impractical for humans.

The report urged policymakers to collaborate closely with researchers to investigate, prevent, and mitigate potential malicious uses of AI.

It also recommended that AI researchers and engineers take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms.

The report also said that best practices for AI should be identified as soon as possible.

“In the cyber domain, even at current capability levels, AI can be used to augment attacks on and defences of cyberinfrastructure, and its introduction into society changes the attack surface that hackers can target, as demonstrated by the examples of automated spear phishing and malware detection tools,” said the report.
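The report’s spear phishing and malware detection examples both boil down to applying a trained classifier to data that a human would otherwise have to sift manually. As a loose illustration of the defensive half of that example, below is a minimal sketch of an ML-based malware detector; it is not from the report, and the synthetic features (file size, entropy, import count, suspicious API calls) and the choice of scikit-learn’s RandomForestClassifier are assumptions made for the example.

```python
# Minimal sketch of an ML-based malware classifier (illustrative only;
# the features and data here are invented, not taken from the report).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical per-file features: size in KB, byte entropy, number of
# imported functions, count of suspicious API calls.
n = 1000
benign = rng.normal(loc=[200, 5.0, 40, 1], scale=[80, 0.5, 10, 1], size=(n, 4))
malware = rng.normal(loc=[150, 7.2, 15, 8], scale=[60, 0.4, 8, 3], size=(n, 4))

X = np.vstack([benign, malware])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A stock classifier stands in for the far richer models real tools use.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["benign", "malicious"]))
```

Real detection tools rely on far richer features and labelled corpora of real samples; the point of the sketch is only that, once the task is framed as classification, it scales in exactly the way the report describes.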

“As AI systems increase in capability, they will first reach and then exceed human capabilities in many narrow domains, as we have already seen with games like backgammon, chess, Jeopardy!, Dota 2, and Go and are now seeing with important human tasks like investing in the stock market or driving cars,” the report concluded. “Preparing for the potential malicious uses of AI associated with this transition is an urgent task.”

For years now a number of high-profile tech figures have warned about the dangers posed by AI. These include Tesla CEO Elon Musk, Bill Gates, and Professor Stephen Hawking.

Professor Hawking told the BBC in December 2014 that a thinking machine could “redesign itself at an ever-increasing rate” and “supersede” humans, while Musk, speaking at the AeroAstro Centennial Symposium at MIT in October 2014, called AI “our biggest existential threat”.

AI Defence

The report’s findings were backed by a number of experts.

“This report rightly warns that AI will be used by cyber-attackers,” commented Dave Palmer, director of technology at Darktrace. “However, defenders have the home turf advantage. The value of AI for cyber defence lies in its ability to gather lots of subtle pieces of information and draw intelligent conclusions from them. It learns the normal ‘pattern of life’ for every user and device on the network, and uses this evolving understanding to detect the earliest indicators of emerging cyber-threats.”

“And critically, self-learning technology only gets better with time – the more normal activity it sees, the more refined and nuanced its understanding becomes,” added Palmer. “This report should be seen as a wake-up call for organisations to adopt AI defence now, so that they can be confident that they will be able to detect and fight back against even the most unpredictable attacks.”
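Palmer’s “pattern of life” description corresponds to a standard unsupervised anomaly-detection setup: fit a model only on routine activity, then flag observations the model considers improbable. The sketch below is a loose illustration of that idea using scikit-learn’s IsolationForest on invented per-device traffic features; it is not Darktrace’s actual technology.

```python
# Loose illustration of "pattern of life" anomaly detection (not
# Darktrace's actual system; features and data are invented).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-device features gathered during normal operation:
# megabytes sent per hour, distinct destination hosts, logins per hour.
normal_activity = rng.normal(loc=[50, 12, 3], scale=[10, 3, 1], size=(5000, 3))

# Fit only on routine behaviour, so the model learns what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal_activity)

# Two new observations: one ordinary, one resembling bulk data exfiltration.
new_events = np.array([
    [52, 11, 3],     # ordinary traffic
    [900, 200, 40],  # sudden spike in volume, destinations and logins
])
print(detector.predict(new_events))  # 1 = normal, -1 = anomaly
```

Because the detector is fitted to observed behaviour rather than to signatures of known attacks, refitting it as new activity accumulates is what gives this approach the “gets better with time” property Palmer describes.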

Another expert pointed out that many organisations are still not doing the security basics, leaving them at risk.

“For a while, cybercriminals have already been progressively using simple ML algorithms to increase the efficiency of their attacks, for example to better profile and target the victims and increase speed of breaches,” said Ilia Kolochenko, CEO of web security specialist High-Tech Bridge.

“However, modern cyberattacks are so tremendously successful mainly because of fundamental cybersecurity problems and omissions in organisations; ML is just an auxiliary accelerator,” Kolochenko added.

“One should also bear in mind that AI/ML technologies are being used by the good guys to fight cybercrime more efficiently too,” he said. “Moreover, development of AI technologies usually requires expensive long term investments that Black Hats typically cannot afford. Therefore, I don’t see substantial risks or revolutions that may happen in the digital space because of AI in the next five years at least.”

Government Backing

The British government, like many other governments around the world, is cautiously backing the technology.

Prime Minister Theresa May indicated in January that she wants the United Kingdom to lead the world in the safe and ethical deployment of artificial intelligence (AI), and called for a new advisory body to co-ordinate AI efforts with other countries.

The government has previously said it wants to become an AI leader, and also announced that it would strengthen links with France in the tech and artificial intelligence sectors.
