Twitter has promised to open up the source code of its picture-cropping algorithm, after an experiment apparently showed the algorithm sometimes prefers white faces to black ones.
The tool in question is reportedly an automatic image-cropping feature in Twitter's mobile app. Its job is to crop pictures that are too big to fit on the screen, selecting which parts of an image are kept and which are cut off.
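Twitter has not published the model's internals here, but automatic crops of this kind are typically driven by a saliency map that scores how likely each pixel is to draw the eye, with the crop window placed over the highest-scoring region. Below is a minimal sketch of that idea; the saliency model, function name and parameters are illustrative assumptions, not Twitter's actual code.

```python
import numpy as np

def crop_to_fit(image: np.ndarray, saliency: np.ndarray, crop_h: int) -> np.ndarray:
    """Keep the crop_h-row window of `image` with the highest total
    saliency. `saliency` is a per-pixel score map with the same
    height and width as `image` (illustrative assumption)."""
    h = image.shape[0]
    if h <= crop_h:
        return image
    # Total saliency per row, then a sliding-window sum over crop_h rows.
    row_scores = saliency.sum(axis=1)
    window_scores = np.convolve(row_scores, np.ones(crop_h), mode="valid")
    top = int(window_scores.argmax())   # start row of the best window
    return image[top:top + crop_h]
```

Under this scheme, a tall image whose only salient regions are two faces will be cropped to whichever face the model scores higher, regardless of where it sits in the frame, which is exactly what the experiment described below probes.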
According to Sky News, programmer Tony Arcieri posted a tall image featuring headshots of Senate Republican leader Mitch McConnell at the top and former US president Barack Obama at the bottom, separated by white space.
In a second image, Mr Obama’s headshot was placed at the top, with Mr McConnell’s at the bottom.
Both times, former president Obama was cropped out altogether.
“Twitter is just one example of racism manifesting in machine learning algorithms,” Arcieri tweeted.
Twitter responded quickly and said that it had tested for racial and gender bias during the algorithm’s development.
It also promised to open up the source code so others could check it for bias.
“We tested for bias before shipping the model & didn’t find evidence of racial or gender bias in our testing,” it tweeted. “But it’s clear that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, & will open source it so others can review and replicate.”
Twitter’s chief technology officer, Parag Agrawal, also commented on the issue.
“We did analysis on our model when we shipped it – but [it] needs continuous improvement,” he tweeted. “Love this public, open, and rigorous test – and eager to learn from this.”
There are ongoing concerns about racial bias in facial-recognition technology as well.
In June, Amazon became the latest tech giant to act on these concerns when it placed a one-year moratorium on police use of its facial recognition software.
IBM also cancelled all of its facial recognition programs in light of ongoing concern over the use of the technology.
But Microsoft was the first, having previously refused to install facial recognition technology for a US police force due to concerns about artificial intelligence (AI) bias.
Redmond also deleted a large facial recognition database said to have contained 10 million images used to train facial recognition systems.
These decisions came after research by the US Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time and were more likely to misidentify black people.
In August 2019, the ACLU civil rights campaign group in the US ran a demonstration to show how inaccurate facial recognition systems can be.
It ran a picture of every California state legislator through a facial recognition program that matches faces against a database of 25,000 criminal mugshots.
The program falsely flagged 26 of those legislators as criminals.
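The mechanics behind such false positives are straightforward: a one-to-many search compares a probe photo against every image in the gallery and declares a "match" whenever a similarity score clears a threshold, so with 25,000 comparisons per probe even a small per-comparison error rate produces false flags. Here is a minimal sketch, assuming cosine similarity over face embeddings; the embedding representation, threshold value and function name are illustrative, not the ACLU's or any vendor's actual pipeline.

```python
import numpy as np

def one_to_many_match(probe: np.ndarray, gallery: np.ndarray,
                      threshold: float = 0.6):
    """Compare one face embedding against a gallery of N embeddings
    (e.g. 25,000 mugshots) and report the best match if it clears
    `threshold`. Returns (gallery_index, score) or None."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gallery @ probe              # cosine similarity, one per mugshot
    best = int(sims.argmax())
    return (best, float(sims[best])) if sims[best] >= threshold else None
```

Raising the threshold cuts false positives but also misses genuine matches, and that trade-off sits at the heart of the accuracy debate around these systems.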