Research has cast a great deal of doubt on the use of facial recognition by police forces in the United Kingdom.
It found that 81 percent of ‘suspects’ flagged by the Met Police’s facial recognition technology are innocent, and that the overwhelming majority of people identified are not on police wanted lists.
The research, commissioned by Scotland Yard and conducted by academics from the University of Essex, examined data from the Met, which has been monitoring crowds with live facial recognition (LFR) since August 2016, when it first used the technology at Notting Hill Carnival.
According to Sky News, since that time the Met has conducted 10 trials at locations including Leicester Square, Westfield Stratford, and Whitehall during the 2017 Remembrance Sunday commemorations.
Academics from the University of Essex were granted access to six live trials.
Worryingly, the independent report found that four out of five people identified by the Metropolitan Police’s facial recognition technology as possible suspects are actually innocent.
The research also revealed that the facial recognition system regularly misidentified people who were then wrongly stopped.
The academics warned the tech was unlikely to be justifiable under human rights law, and that courts would likely rule that the use of the technology was unlawful.
David Davis MP, a former shadow home secretary, was quoted by the Guardian newspaper as saying that the research by Prof Peter Fussey and Dr Daragh Murray at the University of Essex’s Human Rights Centre showed the technology “could lead to miscarriages of justice and wrongful arrests” and poses “massive issues for democracy”.
“All experiments like this should now be suspended until we have a proper chance to debate this and establish some laws and regulations,” he reportedly said. “Remember what these rights are: freedom of association and freedom to protest; rights which we have assumed for centuries which shouldn’t be intruded upon without a good reason.”
Experts, meanwhile, were not surprised at the findings.
“Recognition technologies will always have some degree of false positives and false negatives,” explained Javvad Malik, security awareness advocate at KnowBe4.
“Facial recognition, especially outside of controlled environments, is still very much a developing area of research, and therefore it’s not surprising to hear of potentially low accuracy rates,” said Malik.
“Such technologies will need a lot of training and tweaking before they even get close to an acceptable level where they can automatically be trusted,” he added. “Until such a time, such technologies should always be used to augment and never to replace the human analyst. It is, therefore, important that organisations such as the police force don’t rely entirely on facial recognition to apprehend criminals.”
Another expert explained the difference between how the police calculated their results and how the academic study calculated its findings.
“The Met’s 0.1% error rate figure is calculated by dividing the number of incorrect matches by the total number of people whose faces were scanned,” said Paul Bischoff, privacy advocate at Comparitech.com.
“The University of Essex study’s 81% error rate divides the number of incorrect matches by the total number of reported matches,” said Bischoff. “Let’s say the face recognition cameras are set up at the entrance to the event. 1,000 people attending the event go through that entrance, and their faces are scanned. Of those 1,000 people, 42 were ‘matched’, or identified, as police suspects. Eight of those matches were verified as correct, and the remaining 34 matches were false or inconclusive.”
“The University’s report is much more in line with how most people would judge accuracy,” he said. “The Met’s figure is hugely inflated and deceptive.”
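To make the distinction concrete, here is a minimal Python sketch of the two calculations, using the illustrative numbers from Bischoff’s example (1,000 faces scanned, 42 reported matches, 8 verified correct); the variable names are our own, chosen for illustration.

```python
# Illustrative numbers from Bischoff's hypothetical example above.
faces_scanned = 1000      # total people whose faces were scanned
reported_matches = 42     # people flagged as possible suspects
correct_matches = 8       # matches verified as correct
incorrect_matches = reported_matches - correct_matches  # 34 false or inconclusive

# Met-style figure: incorrect matches as a share of everyone scanned.
met_error_rate = incorrect_matches / faces_scanned
print(f"Met-style error rate: {met_error_rate:.1%}")      # 3.4%

# Essex-style figure: incorrect matches as a share of reported matches only.
essex_error_rate = incorrect_matches / reported_matches
print(f"Essex-style error rate: {essex_error_rate:.1%}")  # 81.0%
```

Note that the Met-style figure here comes out at 3.4% rather than the 0.1% the Met actually reported, because the 1,000 scanned faces in the example is purely illustrative; the real trials scanned far larger crowds, which shrinks that denominator-heavy percentage further.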
The study will likely increase pressure on the government about the use of facial recognition by the police.
In 2017 South Wales Police used facial recognition software at the Champions League Final in Cardiff to scan the face of every fan attending the game.
But the use of facial recognition systems by South Wales police is currently under judicial review, and the information commissioner, Elizabeth Denham, has criticised “a lack of transparency about its use”.
There are also doubts as to the effectiveness of such facial recognition systems. These systems have previously been criticised in the US, after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.
Microsoft, for example, recently refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias.
And Redmond reportedly deleted a large facial recognition database that was said to have contained 10 million images used to train facial recognition systems.
San Francisco, meanwhile, has banned the use of facial recognition technology, meaning that local agencies, such as the police force and other city departments such as transportation, are not able to utilise the technology in any of their systems.
And facial recognition can also be fooled. In 2017 a Vietnamese cybersecurity firm said it had tricked the facial recognition feature on the iPhone X using a 3D-printed mask.