Police Seek ‘Balance’ In Use Of AI To Predict Crime

Police have said they are seeking “balance” in the use of artificial intelligence to predict crimes, after freedom of information requests found that at least 14 UK police forces were deploying, testing or investigating predictive AI techniques.

The report by Liberty, “Policing by Machine”, warned that the tools risk entrenching existing biases and delivering inaccurate predictions.

The civil liberties group urged police to end the use of predictive AI, saying mapping techniques rely on “problematic” historical arrest data, while individual risk assessment programmes “encourage discriminatory profiling”.

The forces using or trialling predictive mapping programmes are Avon and Somerset Constabulary, Cheshire Constabulary, Dyfed-Powys Police, Greater Manchester Police, Kent Police, Lancashire Police, Merseyside Police, the Metropolitan Police Service, Norfolk Constabulary, Northamptonshire Police, Warwickshire Police, West Mercia Police, West Midlands Police and West Yorkshire Police. Three forces – Avon and Somerset, Durham and West Midlands – are using or trialling individual risk-assessment programmes.

Risk assessment

Norfolk Constabulary, for instance, is trialling a system that assesses whether burglaries should be investigated. Durham Constabulary’s Harm Assessment Risk Tool (Hart) advises custody officers on individuals’ risk of re-offending, while West Midlands Police uses hotspot mapping and a data-driven analysis project.

Liberty said that at a minimum, police should be more transparent about their use of algorithms in policing.

A West Midlands Police spokesperson said the force was determined to ensure that its data science work had “ethics at its heart”.

A Durham Constabulary spokesperson said the force was “proud” of Hart, which it said advised officers as part of an intervention programme to help offenders “turn their lives around”.

“All decisions are ultimately made by an experienced custody officer, but the Hart advisory tool gives them a clear indication as to who might be more at risk of re-offending – not so they are stereotyped, but so we can give them more support to turn away from crime,” the spokesperson said.

Innovation

The National Police Chiefs’ Council said such programmes were part of ongoing “innovative” use of technology and were deployed with a background of “strong” ethical standards and legislation.

“Policing is underpinned in the UK by a strong set of values and ethical standards, as well as a significant amount of legislation,” said assistant chief constable Jon Drake, the group’s lead for intelligence.

“At all times we seek to balance keeping people safe with people’s rights. This includes the way in which we police hot-spots for crime.

“For many years police forces have looked to be innovative in their use of technology to protect the public and prevent harm and we continue to develop new approaches to achieve these aims.”

But Hannah Couchman, author of the Liberty report, said predictive policing adds to the notion of “pre-criminality” and puts a “glossy sheen” of technology on existing biases.

“It fails us because it focuses on technology and big data as the solution to policing problems which are deeper, systemic issues requiring a much more considered, radical and compassionate response,” Couchman said on Twitter.

Liberty also called on the London Metropolitan Police to carry out a review of its Gangs Matrix database, which the Information Commissioner’s Office recently called “unjustifiably excessive”.

Budget cuts

The use of AI comes amid ongoing budget cuts that have significantly reduced police staffing.

Durham Constabulary’s chief constable, Michael Barton, said he had had to cut staff every year since taking up the post.

“I have now got a third less money than I had in 2010,” he told the Financial Times. “We have had the fifth biggest cut, pro rata, of any police force in the UK.”

Barton said the force was testing the use of AI because it had the potential to be free of the “inherent biases” that affect the judgements of human custody sergeants.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
