Durham Police Gets Helping Hand From A Suspect-Assessing AI

Artificial Intelligence (AI) is being prepped for use by police in Durham with the aim of aiding in the decision to keep a suspect in custody or not.

Durham Constabulary plans to use the system, dubbed the Harm Assessment Risk Tool (HART), to classify whether suspects have a low, medium or high risk of offending.

HART was trained on five years of historical police records held by the Constabulary, spanning 2008 to 2012, and was found to classify low-risk suspects correctly 98 percent of the time and high-risk suspects correctly 88 percent of the time.
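To make that classification task concrete, the sketch below trains a generic three-class risk model on synthetic records and reports per-class results. The choice of a random forest, the feature names and the data are illustrative assumptions, not details of how HART itself is built.

```python
# Hypothetical sketch of the kind of risk-classification workflow the article
# describes: a model trained on historical custody records that labels suspects
# as low, medium or high risk. The synthetic data, the stand-in features and
# the random forest are illustrative assumptions, not details of HART.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000

# Stand-in features loosely mirroring those mentioned in the article
# (offending history plus fields such as postcode area and gender).
X = np.column_stack([
    rng.integers(0, 20, n),   # prior offences
    rng.integers(0, 10, n),   # years since last offence
    rng.integers(0, 100, n),  # encoded postcode area (illustrative)
    rng.integers(0, 2, n),    # encoded gender (illustrative)
])
y = rng.choice(["low", "medium", "high"], size=n, p=[0.5, 0.3, 0.2])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Per-class results, analogous to the per-class accuracy figures quoted
# for HART (98 percent for low risk, 88 percent for high risk).
print(classification_report(y_test, model.predict(X_test)))
```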

With this in mind, Sheena Urwin, head of criminal justice at Durham Constabulary, told the BBC that she expects the system to go live relatively soon.

“I imagine in the next two to three months we’ll probably make it a live tool to support officers’ decision making,” she said.

Policing bias

While HART has been found to produce solid results, and often errs on the safe side by classifying many suspects as medium risk, there are concerns that the AI could show bias in the decisions it makes.

Custody sergeants have the final say, but there is an argument that people can be easily influenced by the decisions computers come up with; for example, people often invest a lot of credence in the answers a Google search throws up.

And given many AI systems are trained with human oversight and involvement, there is a risk that they could take on the biases, unconscious or otherwise, of their programmers.

Given HART has access to a suspect's postcode and gender, among other information beyond their offending history, there is a chance the AI could infer biased decisions from a suite of seemingly unrelated information.

However, this is only likely to be determined through thorough testing and real-world trials, and with human oversight HART should be prevented from making any inherently biased decisions.

And HART’s creators have faith in the system, having told a parliamentary inquiry that they are confident in having mitigated the risks of bias.

“Simply residing in a given post code has no direct impact on the result, but must instead be combined with all of the other predictors in thousands of different ways before a final forecasted conclusion is reached,” the creators said.

The rise of AI has yet to yield truly intelligent machines, but it is garnering increasing amounts of attention: PwC has appointed its first AI leader, and AI is set to be used in more and more public settings, with Nvidia recently launching its Metropolis platform, designed to add AI smarts to city video feeds.

Put your knowledge of artificial intelligence to the test. Try our quiz!

Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
