Twitter Drops Picture Cropping Tool Over Bias Concern
Twitter confirms it has abandoned a picture cropping algorithm, after an in-depth investigation confirmed racial and gender bias
Twitter has dropped a picture cropping tool, following an investigation begun last year after an experiment showed the algorithm sometimes preferred white faces to black ones.
Last September Twitter opened up the source code for the algorithm it had used since 2018 for picture cropping on the Twitter mobile app.
Its job was to automatically crop pictures that were too big to fit on the screen, selecting which parts of an image to keep and which to cut off.
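Twitter has described the cropping as driven by a saliency model, which scores how likely each region of an image is to draw the eye, and crops around the highest-scoring area. As a rough illustration only, and not Twitter's published implementation, such a crop might look like this in Python (the saliency map here is an assumed input, not output from Twitter's model):

```python
import numpy as np

def saliency_crop(image: np.ndarray, saliency: np.ndarray,
                  target_h: int, target_w: int) -> np.ndarray:
    """Crop `image` to (target_h, target_w), centred on the most salient
    point. `saliency` is a per-pixel score map from some saliency model
    (assumed here for illustration)."""
    h, w = image.shape[:2]
    # Find the coordinates of the highest-scoring pixel.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Centre the crop window on that point, clamping to the image bounds.
    top = min(max(y - target_h // 2, 0), h - target_h)
    left = min(max(x - target_w // 2, 0), w - target_w)
    return image[top:top + target_h, left:left + target_w]

# Hypothetical demo: a 100x200 image whose saliency peaks at (20, 150).
img = np.zeros((100, 200, 3), dtype=np.uint8)
sal = np.zeros((100, 200))
sal[20, 150] = 1.0
print(saliency_crop(img, sal, 50, 50).shape)  # -> (50, 50, 3)
```

Arcieri's experiment, described below, effectively probed which of two faces a model like this scores higher.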
But concerns arose when graduate programmer Tony Arcieri posted a long image featuring headshots of then Senate Republican leader Mitch McConnell at the top and former US president Barack Obama at the bottom, separated by white space.
Racial, gender bias
In a second image, Obama’s headshot was placed at the top, with McConnell’s at the bottom.
Both times, former president Obama was cropped out altogether.
Twitter responded quickly, saying at the time that it had tested for racial and gender bias during the algorithm’s development, but it also promised to open up the source code so others could check it for bias.
Now, almost eight months later, Twitter has announced its decision in a blog post on Wednesday by Rumman Chowdhury, a software engineering director on Twitter’s machine learning ethics, transparency and accountability team.
“In October 2020, we heard feedback from people on Twitter that our image cropping algorithm didn’t serve all people equitably,” she wrote. “As part of our commitment to address this issue, we also shared that we’d analyse our model again for bias.”
Chowdhury wrote that Twitter’s teams had, over the past few months, accelerated improvements in how the company assesses algorithms for potential bias.
“Today, we’re sharing the outcomes of our bias assessment and a link for those interested in reading and reproducing our analysis in more technical detail,” she wrote.
Twitter has published the research paper, and concluded the algorithm was biased after testing it for gender- and race-based biases.
It found the algorithm essentially favoured white people over black people, and women over men (a sketch of the parity calculation follows the list). Specifically:
- In comparisons of men and women, there was an 8 percent difference from demographic parity in favour of women.
- In comparisons of black and white individuals, there was a 4 percent difference from demographic parity in favour of white individuals.
- In comparisons of black and white women, there was a 7 percent difference from demographic parity in favour of white women.
- In comparisons of black and white men, there was a 2 percent difference from demographic parity in favour of white men.
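Demographic parity here means that, across many paired trials, the crop should keep faces from each group at equal rates, so the percentages above are gaps from a 50/50 ideal. Below is a minimal sketch of one plausible way to express such a gap; the formula and trial counts are illustrative assumptions, not Twitter's exact methodology:

```python
def parity_gap(wins_a: int, wins_b: int) -> float:
    """Percentage-point difference from demographic parity in pairwise
    crop trials. Each trial pairs one face from group A with one from
    group B; a 'win' means the crop kept that group's face.
    Perfect parity is a 50/50 split. (Illustrative definition only.)"""
    rate_a = wins_a / (wins_a + wins_b)
    # Equivalent to rate_a - rate_b, expressed in percentage points.
    return (rate_a - 0.5) * 2 * 100

# Hypothetical example: women's faces kept in 540 of 1,000 mixed pairs.
print(parity_gap(540, 460))  # -> 8.0, an 8 percent gap favouring women
```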
Manual cropping
Chowdhury then confirmed that, with these findings in mind, Twitter had decided to drop the picture cropping algorithm and instead let people crop images themselves.
“We considered the tradeoffs between the speed and consistency of automated cropping with the potential risks we saw in this research,” she wrote. “One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.”
Chowdhury said that since March, Twitter had been testing a new way of displaying standard aspect ratio photos in full on iOS and Android.
“After getting positive feedback on this experience, we launched this feature to everyone,” she wrote. “This update also includes a true preview of the image in the Tweet composer field, so Tweet authors know how their Tweets will look before they publish.”
“We want to thank you for sharing your open feedback and criticism of this algorithm with us,” she concluded.
Facial recognition
There are ongoing concerns about racial bias in facial-recognition technology as well.
Amazon this week extended its one-year moratorium on police use of its facial recognition software, making it an indefinite ban.
IBM has also cancelled all of its facial recognition programs, in light of ongoing concern about the use of the technology.
But Microsoft was the first tech giant to take action, when it refused to install facial recognition technology for a US police force due to concerns about artificial intelligence (AI) bias.
Redmond also deleted a large facial recognition database, said to have contained 10 million images used to train facial recognition systems.
Previous tests
These decisions came after research by the US Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, and were more likely to misidentify black people.
In August 2019, the ACLU civil rights campaign group in the US ran a demonstration to show how inaccurate facial recognition systems can be.
It ran a picture of every California state legislator through a facial-recognition program that matched facial images against a database of 25,000 criminal mugshots.
That test saw the facial recognition program falsely flag 26 legislators as criminals.
Amazon offered a rebuttal to the ACLU test, arguing that the results could be skewed when an inappropriate facial database is used, and that the ACLU’s default confidence threshold was too low.
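The threshold argument is straightforward to picture: a match is only flagged when the system’s confidence score clears a cut-off, so a lower cut-off inflates the number of “hits”. Amazon has said it recommends a 99 percent threshold for law enforcement use, while the ACLU reportedly used the 80 percent default. A hedged sketch, with invented candidate scores rather than real Rekognition output:

```python
def flag_matches(candidates: list[tuple[str, float]],
                 threshold: float) -> list[tuple[str, float]]:
    """Return only the (name, confidence) pairs at or above `threshold`.
    Candidate names and scores are invented for illustration."""
    return [(name, conf) for name, conf in candidates if conf >= threshold]

candidates = [("legislator_a", 0.81), ("legislator_b", 0.87),
              ("legislator_c", 0.995)]
print(flag_matches(candidates, 0.80))  # default-style threshold: 3 hits
print(flag_matches(candidates, 0.99))  # recommended threshold: 1 hit
```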
But police in the UK have defended the use of facial recognition.
In February 2020 the UK’s most senior police officer, Metropolitan Police Commissioner Cressida Dick, said criticism of the tech was “highly inaccurate or highly ill informed.”
She also said facial recognition was less concerning to many than a knife in the chest.