Durham Police Gets Helping Hand From A Suspect Assessing AI

Artificial Intelligence (AI) is being prepped for use by police in Durham, with the aim of aiding decisions on whether or not to keep a suspect in custody.

Durham Constabulary plans to use the system, dubbed the Harm Assessment Risk Tool (HART), to classify suspects as having a low, medium or high risk of offending.

HART was trained on five years of historical police records held by the Constabulary, spanning 2008 to 2012, and was found to classify low risk suspects correctly 98 percent of the time and to get the verdict right for high risk suspects 88 percent of the time.
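Those figures are per-class hit rates: of the suspects who genuinely turned out to be low risk, what fraction did the model label low risk, and likewise for high risk. A minimal sketch of how such per-class accuracy might be computed is below; the labels and data here are purely illustrative, not Durham's records.

```python
def per_class_accuracy(actual, predicted):
    """For each true class, the fraction of cases the model labelled correctly."""
    totals, hits = {}, {}
    for a, p in zip(actual, predicted):
        totals[a] = totals.get(a, 0) + 1
        if a == p:
            hits[a] = hits.get(a, 0) + 1
    return {c: hits.get(c, 0) / totals[c] for c in totals}

# Hypothetical validation labels for eight suspects (illustrative only):
actual    = ["low", "low", "low", "low", "high", "high", "medium", "high"]
predicted = ["low", "low", "low", "medium", "high", "high", "medium", "low"]
print(per_class_accuracy(actual, predicted))
```

On this toy data the model scores 3 of 4 low risk cases and 2 of 3 high risk cases, the same style of measurement behind the 98 and 88 percent figures quoted.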

With this in mind, Sheena Urwin, head of criminal justice at Durham Constabulary, told the BBC that she predicts the system will go live relatively soon.

“I imagine in the next two to three months we’ll probably make it a live tool to support officers’ decision making,” she said.

Policing bias

While HART has been found to produce solid results, and often errs on the safe side by classifying many suspects as medium risk, there are concerns that the AI could show bias in the decisions it makes.

Custody sergeants have the ultimate decision, but there is an argument that people can be easily influenced by the answers computers come up with; for example, people often place a lot of credence in the results a Google search throws up.

And given many AI systems are trained with human oversight and involvement, there is a risk that they could take on the biases, unconscious or otherwise, of their programmers.

Given HART has access to a suspect's postcode and gender, among other information beyond their offending history, there is a chance the AI could infer biased decisions from a suite of seemingly unrelated information.

However, this is only likely to be determined through thorough testing and real-world trials, and with human oversight HART should be prevented from making any inherently biased decisions.

And HART’s creators have faith in the system, having told a parliamentary inquiry that they are confident in having mitigated the risks of bias.

“Simply residing in a given post code has no direct impact on the result, but must instead be combined with all of the other predictors in thousands of different ways before a final forecasted conclusion is reached,” the creators said.
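The creators' point is that no single feature maps directly to a verdict; a prediction emerges only from many predictors combined. A toy illustration of that idea is below, using a tiny ensemble of hand-written rules and a majority vote. Every feature name, rule and suspect here is hypothetical; this is not HART's actual model, just a sketch of how a combined-predictor forecast can work.

```python
# Each "tree" is a rule combining several features, so no one feature
# (such as postcode) determines the result on its own. Illustrative only.
def tree_a(s):
    return "high" if s["prior_offences"] > 3 and s["age"] < 25 else "low"

def tree_b(s):
    return "medium" if s["postcode_band"] == 2 and s["prior_offences"] > 0 else "low"

def tree_c(s):
    if s["prior_offences"] > 5:
        return "high"
    return "medium" if s["age"] < 21 else "low"

def forecast(suspect):
    """Majority vote across the ensemble's individual verdicts."""
    votes = [tree(suspect) for tree in (tree_a, tree_b, tree_c)]
    return max(set(votes), key=votes.count)

print(forecast({"prior_offences": 6, "age": 22, "postcode_band": 1}))  # "high"
```

Here the postcode feature influences only one rule out of three, and even then only jointly with offending history, which is the flavour of interaction the creators describe, albeit across thousands of combinations rather than three.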

The rise of AI has yet to yield truly intelligent machines, but the field is garnering increasing attention, with PwC having appointed its first AI leader. AI is also set to be used in more and more public situations; Nvidia recently launched its Metropolis platform, designed to add AI smarts to city video feeds.

Put your knowledge of artificial intelligence to the test. Try our quiz!

Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
