Spies Urged To Adopt AI To Counter Augmented Threats

The UK’s foes are likely to use artificial intelligence to augment future threats, a study has warned, arguing that Britain’s intelligence forces must adopt the technology to keep pace.

The study, commissioned by GCHQ and conducted by the Royal United Services Institute, found that AI is likely to be used to bolster threats including cyber-attacks on national infrastructure and convincing “deepfakes” used to spread disinformation.

For their part, the UK’s spies can use the technology to improve cyber defence and to analyse data that can help detect militant activity, the study argues.

But it is more circumspect about AI’s ability to predict militant attacks before they happen.

The independent study is based on broad access to the UK’s intelligence community.

RUSI argues in the report that both nation states and criminals “will undoubtedly seek to use AI to attack the UK”.

Alexander Babuta, a RUSI fellow and an author of the report, argued the infrastructure needs to be put into place for national security forces to innovate and adapt if they are to keep up with changing technology.

AI could be used to create convincing faked media to manipulate public opinion and elections, and to alter malware to make it more difficult to detect.

In both cases, it is necessary to use AI-based defence measures to counter AI, Babuta argued.

But AI is of only “limited value” in making predictions in fields such as counter-terrorism, the report says.

Human element

Militant attacks occur too rarely and are too different from one another for a machine learning system to be able to detect a pattern.

In such cases the technology could, however, augment humans’ ability to sift through data.

This secondary role means humans would remain accountable for critical decision-making, RUSI said.

The think tank said increased use of AI could raise human rights concerns, with profiling techniques creating the potential for discrimination.

New guidance may need to be put into place to govern the way such technologies are used in the future, RUSI said.

Such issues have become more visible since 2013, when Edward Snowden revealed the extent of data collection on US citizens via a series of leaks of classified information.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
