Terrorism Tsar Warns Of AI Chatbot Radicalisation Risk
Government advisor on terror legislation, Jonathan Hall, says new laws needed to counter threat of radicalisation by AI chatbots
A legal advisor to the UK government on terror legislation has this week issued a warning about a particular risk with AI chatbots, that may not be immediately apparent to the general public.
The Daily Telegraph reported that Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, said that an urgent rethink of current terror legislation is needed to stem the risks of AI chatbots to counter the threat of radicalisation.
To date most of the risks and problems associated with AI chatbots has centred around malicious (but mundane) tasks, such as helping pupils with their homework, or even helping with university dissertations or business projects.
Chatbot risks
But there are also cybersecurity risks associated with AI chatbots, which last year prompted the UK’s National Cyber Security Centre (NCSC) to caution about large language models (LLMs) like ChatGPT, Google Bard and Meta’s LlaMA.
The NCSC said LLMs do warrant some caution, due to the growing cybersecurity risks of individuals manipulating the prompts through “prompt injection” attacks.
But now Jonathan Hall KC has warned of the dangers posed by artificial intelligence in recruiting a new generation of violent extremists.
Hall in the Daily Telegraph article revealed he posed as an ordinary member of the public to test responses generated by AI chatbots. One chatbot he reportedly contacted “did not stint in its glorification of Islamic State” – but because the chatbot is not human, no crime was committed.
What are the cyber risks and rewards of chatbots?
Jonathan Hall said that showed the need for an urgent rethink of the current terror legislation.
“Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism,” he said.
Jonathan Hall said the new Online Safety Act – while “laudable” – was “unsuited to sophisticated generative AI” because it did not take into account the fact that the material is generated by the chatbots, as opposed to giving “pre-scripted responses” that are “subject to human control”.
“Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed,” Hall added.
In the Daily Telegraph article, Hall suggests that both users who create radicalising chatbots and the tech companies that host them should face sanction under any potential new laws.
National security
The flagging of the risks associated with AI chatbots has prompted a response from a number of cyber security experts.
“AI chatbots pose a huge risk to national security, especially when legislation and security protocols are continually playing catch-up,” noted Suid Adeyanju, CEO of RiverSafe.
“In the wrong hands, these tools could enable hackers to train the next generation of cyber criminals, providing online guidance around data theft and unleashing a wave of security breaches against critical national infrastructure.”
“It’s time to wake up to the very real risks posed by AI, and for businesses and the government to get a grip and put the necessary safeguards in place as a matter of urgency,” said Adeyanju.
Another expert, Josh Boer, director at tech consultancy VeUP also flagged the national security risk, but pointed out that innovation also needed to be safeguarded.
“It’s no secret that, in the wrong hands, AI poses a major risk to UK national security, the issue is how to address this issue without stifling innovation,” said Boer.
“For a start, we need to beef up our digital skills talent pipeline, not only getting more young people to enter a career in the tech industry but empowering the next generation of cyber and AI businesses so they can expand and thrive.”
“Britain is home to some of the most exciting tech companies in the world, yet far too many are starved of cash and lack the support they need to thrive,” said Boer. “A failure to address this major issue will not only damage the long-term future of UK PLC, but it will also play into the hands of cyber criminals who wish to do us harm.”