Durham Police Gets Helping Hand From A Suspect-Assessing AI

Artificial Intelligence (AI) is being prepped for use by police in Durham, with the aim of aiding the decision over whether or not to keep a suspect in custody.

Durham Constabulary plans to use the system, dubbed the Harm Assessment Risk Tool (HART), to classify suspects as having a low, medium or high risk of reoffending.

HART was trained on five years of historical police records held by the Constabulary, spanning 2008 to 2012, and was found to classify low-risk suspects correctly 98 percent of the time and high-risk suspects correctly 88 percent of the time.
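The kind of evaluation described above can be illustrated with a small sketch. This is not HART's actual model or data (neither is public in this article); it simply shows, on synthetic records, how a three-class risk classifier might be trained on historical cases and then scored per risk class, analogous to the 98 percent (low) and 88 percent (high) figures quoted:

```python
# Illustrative sketch only: a three-class "risk" classifier on synthetic
# data, in the spirit of tools like HART. All features, labels and
# thresholds here are invented for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake suspect records: six numeric features standing in for things like
# age, number of prior offences, time since last offence, etc.
X = rng.normal(size=(5000, 6))

# Fake labels derived from two of the features:
# 0 = low, 1 = medium, 2 = high risk of reoffending.
score = X[:, 0] + X[:, 1]
y = (score > 1).astype(int) + (score > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Report accuracy separately for each risk class, since a headline
# accuracy figure can hide poor performance on the rarer high-risk class.
pred = model.predict(X_test)
for label, name in enumerate(["low", "medium", "high"]):
    mask = y_test == label
    if mask.any():
        print(f"{name}-risk accuracy: {(pred[mask] == label).mean():.2f}")
```

Reporting per-class accuracy, rather than a single overall number, matters for exactly the reason the article's figures hint at: misclassifying a high-risk suspect as low risk carries a very different cost from the reverse error.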

With this in mind, Sheena Urwin, head of criminal justice at Durham Constabulary, told the BBC that she predicts the system will go live relatively soon.

“I imagine in the next two to three months we’ll probably make it a live tool to support officers’ decision making,” she said.

Policing bias

While HART has been found to produce solid results, and often errs on the safe side by classifying many suspects as medium risk, there are concerns that the AI could show bias in the decisions it makes.

Custody sergeants retain the final decision, but there is an argument that people can be easily swayed by the conclusions computers come up with; for example, people often invest a lot of credence in the answers a Google search throws up.

And given that many AI systems are trained with human oversight and involvement, there is a risk that they could take on the biases, unconscious or otherwise, of their programmers.

Given that HART has access to a suspect's postcode and gender, among other information beyond their offending history, there is a chance the AI could infer biased decisions from a suite of seemingly unrelated information.

However, this is only likely to be determined through thorough testing and real-world trials, and with human oversight HART should be prevented from making any inherently biased decisions.

And HART’s creators have faith in the system, having told a parliamentary inquiry that they are confident in having mitigated the risks of bias.

“Simply residing in a given post code has no direct impact on the result, but must instead be combined with all of the other predictors in thousands of different ways before a final forecasted conclusion is reached,” the creators said.

The rise of AI has yet to yield truly intelligent machines, but it is garnering increasing amounts of attention: PwC has appointed its first AI leader, and AI is set to be used in more and more public settings, with Nvidia recently launching its Metropolis platform, designed to add AI smarts to city video feeds.


Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
