The fallout continues from Google’s employment decision last week concerning a prominent AI researcher, who claimed she was fired over a disagreement about a research paper.
Dr Timnit Gebru is best known for her work on algorithmic bias, particularly with facial recognition technology. In 2018, she co-authored a paper that highlighted the higher error rates for identifying darker-skinned people, partly because facial recognition AI algorithms were mostly trained with datasets that were overwhelmingly white.
Last week Dr Gebru claimed that Google had fired her over an email she sent to colleagues, but Google disputed this, insisting she had not followed procedure.
The issue began when Gebru alleged that Google was trying to get her to retract a research paper she had co-authored with four other Googlers and several external collaborators, a paper that needed to go through Google's internal review process.
Google, however, alleged that this particular paper was shared with only a day's notice before its submission deadline, whereas a review normally requires two weeks.
According to the company, the paper was then approved for submission and submitted without waiting for reviewer feedback.
Gebru then sent an email to an internal Google mailing list (the Brain Women and Allies listserv), in which she voiced her frustration that managers were trying to get her to retract the research paper.
After the email went out, Gebru reportedly told managers that certain conditions had to be met in order for her to stay at the company.
Otherwise, she would have to work on a transition plan.
But Google called her bluff: a senior official at Google Research said she could not meet Gebru's conditions, and accepted her resignation as a result.
Gebru’s position was initially supported by the Google Walkout Twitter account, which is run by current and former employees.
But support for the AI ethics expert now appears to be growing, after an open letter demanding transparency was signed by more than 4,500 people, including DeepMind researchers and UK academics.
Signatories included staff at Google, Microsoft, Apple, Facebook, Amazon and Netflix, as well as several scientists from Google-owned DeepMind.
And to make matters worse, on Monday, members of Dr Gebru’s own team at Google published a second open letter challenging the company’s account of her dismissal.
“Dr. Gebru did not resign, despite what Jeff Dean (Senior Vice President and head of Google Research), has publicly stated,” said the letter. “Dr. Gebru has stated this plainly, and others have meticulously documented it.”
“Dr. Gebru detailed conditions she hoped could be met,” the letter added. “Those conditions were for 1) transparency around who was involved in calling for the retraction of the paper, 2) having a series of meetings with the Ethical AI team, and 3) understanding the parameters of what would be acceptable research at Google. She then requested a longer conversation regarding the details to occur post-vacation.”
“In response, she was met with immediate dismissal, as she details in this tweet,” said the letter. “Dr. Gebru’s dismissal has been framed as a resignation, but in Dr. Gebru’s own words, she did not resign. All reports under her management received a letter from Megan Kacholia (Vice President of Engineering for the Google Brain organization), stating that Megan had accepted Timnit’s resignation. Megan went around Dr. Gebru’s own manager, Samy Bengio (lead of Google Brain) in sending these emails, which he has stated publicly.”
And it is not just Google staffers, but also other experts in the field who have added their support for Dr. Gebru.
“I stand with Dr Timnit Gebru,” Tabitha Goldstaub, who chairs the UK government’s AI council, was quoted by the BBC as saying. “She’s brave, brilliant and we’ve all benefited from her work.
“This is another example of why independent, publicly funded research into AI is so important,” said Goldstaub.
University College London honorary associate professor Julien Cornebise also told the BBC that “selective publication” had occurred historically with the tobacco industry and cancer, as well as energy industries and climate change.
“AI researchers need to realise where they write their research is important because they might not have control over how it is used and published,” he reportedly said.