Google To Change Review Process For Scientists’ Work
Executives at Google’s troubled AI research unit say they are working to restore trust, after the departures of two notable women
Google executives at its troubled AI research unit have pledged to make changes, as they seek to restore trust among staff after a turbulent few months.
Reuters said it had listened to a recording of a town hall meeting held last Friday, in which executives sought to reassure staff and committed to changing procedures for reviewing scientists’ work before July.
In a sign of how toxic the relationship between staff and management has become, at least from an outside perspective, Reuters said it was able to listen to the hour-long recording, which would usually remain private, and that its content was confirmed by two sources.
Town hall meeting
Maggie Johnson, chief operating officer of the research unit, reportedly said during the meeting that teams are already trialling a questionnaire that will assess projects for risk and help scientists navigate reviews.
This initial change will roll out by the end of the second quarter, and the majority of papers will not require extra vetting, she was quoted by Reuters as saying.
This is not the only change Google is implementing.
In December Google introduced a “sensitive topics” review for studies involving dozens of issues, such as China or bias in its services.
Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported at the time.
Jeff Dean, Google’s senior vice president overseeing the division, was also quoted as saying on the Friday town hall recording that the “sensitive topics” review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules.
Google reportedly declined to comment on the Friday meeting.
Review process
However, an internal email, seen by Reuters, offered fresh detail on Google researchers’ concerns, showing exactly how Google’s legal department had modified one of the three AI papers, called “Extracting Training Data from Large Language Models.”
The email, dated 8 February, from a co-author of the paper, Nicholas Carlini, went to hundreds of colleagues, seeking to draw their attention to what he called “deeply insidious” edits by company lawyers.
“Let’s be clear here,” the roughly 1,200-word email reportedly said. “When we as academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer requires that we change it to sound nicer, this is very much Big Brother stepping in.”
Required edits, according to his email, included “negative-to-neutral” swaps such as changing the word “concerns” to “considerations,” and “dangers” to “risks.”
Lawyers also allegedly required deleting references to Google technology; the authors’ finding that AI leaked copyrighted content; and the words “breach” and “sensitive,” the email reportedly said.
Carlini did not respond to requests for comment.
‘Fired’ Gebru
Problems within Google’s AI research teams first surfaced in early December 2020, when prominent female AI ethics researcher Timnit Gebru alleged that Google had ‘fired’ her over an email she sent to colleagues. Google disputed this, insisting she had not followed procedure.
In the aftermath of Gebru’s departure, staff on Google’s ethical AI research team demanded the firm sideline vice president Megan Kacholia, as well as commit to greater academic freedom.
That call came amid an outpouring of support for Gebru, after an open letter demanding transparency was signed by more than 4,500 people, including DeepMind researchers and UK academics, as well as staff from Microsoft, Apple, Facebook, Amazon and Netflix.
And to make matters worse, members of Dr Gebru’s own team at Google also published a second open letter challenging the company’s account of her dismissal.
The furore even saw Alphabet and Google CEO Sundar Pichai telling Google staff in December that the circumstances surrounding Gebru’s departure would be examined.
Earlier this month two staff resigned from Google, citing Gebru’s dismissal as the reason for leaving.
Then last Thursday (one day before the town hall meeting) Google appointed one of its few black executives, Marian Croak, to oversee research on responsible AI.
Croak, a vice president of engineering, will report to Google AI chief Jeff Dean, and will manage 10 teams studying issues such as racial bias in algorithms and technology for disabled individuals.
But this appointment did not go down well in some quarters.
Google employee Dr Alex Hanna on Twitter called the news about Croak “a betrayal,” saying it occurred behind the Ethical AI team’s back and did not address demands the team made after Gebru’s firing.
And Gebru herself made her disappointment at the appointment clear on Twitter.
At Friday’s meeting, Croak was quoted by Reuters as saying that it would take time to address the concerns of AI ethics researchers and to mitigate the damage to Google’s brand.
“Please hold me fully responsible for trying to turn around that situation,” she apparently said on the recording.
Another firing
But almost immediately after her appointment, Google fired researcher Margaret Mitchell, following an investigation into misuse of corporate email.
In January, Google had revoked Mitchell’s corporate email access for reportedly using automated scripts to find examples of mistreatment of Gebru.
The company said its review found that Mitchell had violated its code of conduct and security policies, including moving electronic files outside the company.
Mitchell had co-led Google’s ethics in artificial intelligence team for about two years with Gebru.
Mitchell had also collaborated on the paper that prompted Gebru’s departure.