Uber Sued In UK Over Alleged Facial Recognition Bias
Drivers and couriers of colour allege Uber’s facial recognition system fails to effectively identify people of colour, or those with darker skin
Uber is once again facing legal trouble in the UK, after two trade unions helped three former workers file a racial discrimination lawsuit against it.
The lawsuit alleges that Uber’s facial recognition software discriminates against people of colour, and also accuses the company of unfair dismissal, after its facial recognition system failed to identify the former workers.
The lawsuit was filed this week with help from two unions, the Independent Workers Union of Great Britain (IWGB) and the App Drivers & Couriers Union (ADCU).
Facial recognition lawsuit
The trade unions allege that Uber’s facial recognition technology, provided by Microsoft and used since March 2020 to verify the identity of drivers and couriers, cannot effectively identify people with darker skin.
Some drivers allege that their Uber accounts were terminated because the technology failed to recognise them.
Uber workers must provide a real-time selfie, and face dismissal if the system fails to match it against a stored reference photo.
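Neither Uber nor Microsoft has published the internals of the selfie check, but identity verification of this kind typically compares a numeric “embedding” of the live photo against one computed from the stored reference photo. The Python sketch below is purely illustrative; the embedding size, similarity measure and 0.8 threshold are assumptions, not details of Uber’s system. If a model produces less reliable embeddings for darker skin tones, genuine matches score lower and can fall below the threshold, which is the failure mode the lawsuit describes.

```python
import numpy as np

# Illustrative sketch only: the function names, the 128-dimension embedding
# and the 0.8 threshold are assumptions, not details from Uber or Microsoft.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_selfie(selfie_emb: np.ndarray, reference_emb: np.ndarray,
                  threshold: float = 0.8) -> bool:
    """Accept the worker only if the live selfie's embedding is close
    enough to the embedding of the stored reference photo."""
    return cosine_similarity(selfie_emb, reference_emb) >= threshold

# Synthetic demo: two noisy embeddings of the "same" face.
rng = np.random.default_rng(0)
reference = rng.normal(size=128)
selfie = reference + rng.normal(scale=0.3, size=128)  # same face, some noise
print(verify_selfie(selfie, reference))  # True: similarity clears the threshold
```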
Lawyers for the unions argue that facial recognition systems, including those operated by Uber, are inherently faulty and generate particularly poor accuracy results when used with people of colour.
“Last year Uber made a big claim that it was an anti-racist company and challenged all who tolerate racism to delete the app,” said Yaseen Aslam, President of ADCU. “But rather than root out racism Uber has bedded it into its systems and workers face discrimination daily as a result.”
“To secure renewal of their license in London, Uber introduced a flawed facial recognition technology which they knew would generate unacceptable failure rates when used against a workforce mainly composed of people of colour,” added James Farrar, General Secretary of ADCU.
“Uber then doubled down on the problem by not implementing appropriate safeguards to ensure appropriate human review of algorithmic decision making,” said Farrar.
It is alleged that hundreds of drivers and couriers who served through the pandemic have lost their jobs without any due process or evidence of wrongdoing.
Ongoing concerns
Racial discrimination by facial recognition systems has been a concern for many years now.
Twitter, for example, dropped a picture-cropping tool in May after an experiment showed its algorithm sometimes preferred white faces over black ones.
Amazon in May also extended its one-year moratorium on police use of its facial recognition software, making it an indefinite ban.
IBM has also cancelled all its facial recognition programs in light of ongoing concerns about the use of the technology.
But Microsoft was the first tech giant to take action, when it refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias.
And Redmond also deleted a large facial recognition database said to have contained 10 million images used to train such systems.
These decisions came after research by the US Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, and were more likely to misidentify black people.
In August 2019, the ACLU civil rights campaign group in the US ran a demonstration to show how inaccurate facial recognition systems can be.
It ran a picture of every California state legislator through a facial-recognition program that matches facial pictures to a database of 25,000 criminal mugshots.
That test saw the facial recognition program falsely flag 26 legislators as criminals.
Amazon, for its part, offered a rebuttal to the ACLU test, arguing that the result could be skewed when an inappropriate facial database is used, and that the ACLU’s default confidence threshold was too low.
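Amazon reportedly recommended a 99 percent confidence threshold for law enforcement use, while the ACLU test used the 80 percent default. The Python sketch below, using entirely made-up scores rather than anything from Rekognition or the ACLU test, illustrates why the threshold matters: the same matcher flags far more false “criminal” matches at the lower setting.

```python
import numpy as np

# Synthetic illustration of the threshold argument. All numbers are invented;
# none come from Rekognition, the mugshot database, or the ACLU experiment.

rng = np.random.default_rng(42)

# Simulated confidence scores for 120 legislators, each compared against
# their best-scoring (but incorrect) entry in a mugshot database.
false_match_scores = rng.uniform(0.5, 0.95, size=120)

for threshold in (0.80, 0.99):
    flagged = int(np.sum(false_match_scores >= threshold))
    print(f"threshold={threshold:.2f}: {flagged} legislators falsely flagged")
```

Run as written, the low threshold flags dozens of false matches while the high threshold flags none, which is the crux of the disagreement between Amazon and the ACLU over what the test demonstrated.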