Spies Urged To Adopt AI To Counter Augmented Threats

The UK’s foes are likely to use artificial intelligence to augment future threats, a study has warned, arguing that Britain’s intelligence forces must adopt the technology to keep pace.

The study, commissioned by GCHQ and conducted by the Royal United Services Institute, found that AI is likely to be used to bolster threats including cyber-attacks on national infrastructure and convincing “deepfakes” used to spread disinformation.

For their part, the UK’s spies can use the technology to improve cyber defence and to analyse data that can help detect militant activity, the study argues.

But it is more circumspect about AI’s ability to predict militant attacks before they happen.

Cyber-attacks

The independent study is based on broad access to the UK’s intelligence community.

RUSI argues in the report that both nation states and criminals “will undoubtedly seek to use AI to attack the UK”.

Alexander Babuta, a RUSI fellow and an author of the report, argued that the necessary infrastructure must be put in place for national security forces to innovate and adapt if they are to keep pace with changing technology.

AI could be used to create convincing faked media to manipulate public opinion and elections, and to alter malware to make it more difficult to detect.

In both cases, AI-based defence measures will be needed to counter AI-driven attacks, Babuta argued.

But AI is of only “limited value” in making predictions in fields such as counter-terrorism, the report says.

Human element

Militant attacks occur too rarely and are too different from one another for a machine learning system to be able to detect a pattern.

In such areas, however, the technology could augment the ability of human analysts to sift through data.

This secondary role means humans would remain accountable for critical decision-making, RUSI said.

The think tank said increased use of AI could raise human rights concerns, with profiling techniques creating the potential for discrimination.

New guidance may need to be put into place to govern the way such technologies are used in the future, RUSI said.

Such issues have become more visible since 2013, when Edward Snowden revealed the extent of data collection on US citizens via a series of leaks of classified information.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
