In recent days, the artificial intelligence research community has been in turmoil after Timnit Gebru, technical co-lead of Google’s Ethical AI team, said on her Twitter page that she had just been fired by Jeff Dean, head of Google’s AI unit. The situation arose from an email the Black researcher sent to Google Brain Women and Allies, a mailing list within the company’s AI unit, expressing her frustration over a research paper she had hoped to publish but that had been rejected by her superiors.
Gebru is one of the best-known and most respected Black women scientists working in AI. Before joining Google in 2018, she worked with MIT researcher Joy Buolamwini on a project called Gender Shades, which found that IBM’s and Microsoft’s face analysis technology was highly accurate for white men but highly inaccurate for Black women. That work helped push U.S. lawmakers and technology companies to question and test the accuracy of facial recognition across different demographics, and it prompted Microsoft, IBM and Amazon to announce this year that they would suspend the mainstream release of the technology. Gebru also co-founded the non-profit organization Black in AI, which aims to increase the representation of people of color in the field.
Given how well known and respected she is in this community, it is not surprising that many voices have been raised in Gebru’s defense. Online, 1,604 Googlers (Google employees) and 2,512 people from academia, the technology industry and civil society have signed a letter supporting the AI researcher. In this letter, which openly defends Gebru, the authors make the following demands:
- We demand that Jeff Dean (Google Senior Fellow and Senior Vice-President of Research), Megan Kacholia (Vice-President of Engineering for the Google Brain organization) and those who were involved in the decision to censor Dr Gebru’s paper meet with the Ethical AI team to explain the process by which the paper was unilaterally rejected by leadership.
- We demand transparency toward the general public, including Google users and our colleagues in the academic community, regarding Google management’s decision to order Dr Gebru and her colleagues to withdraw their research on large-scale language models. This has become a matter of public concern, and public accountability is necessary to ensure any trust in Google Research in the future.
- We call on Google Research to make an unequivocal commitment to research integrity and academic freedom, to significantly strengthen the commitments made in Google’s research philosophy, and to commit to supporting research that advances the goals of Google’s AI Principles, by providing clear guidelines on how research will be reviewed and how research integrity will be upheld.
Faced with this flood of criticism against Google, Jeff Dean reportedly sent the email below to his team to clear up any confusion.
“Because there has been a lot of speculation and misunderstanding on social media, I wanted to share more about what happened and make sure we are here to support you as you continue the research you are all committed to.
Timnit co-authored a paper with four colleagues at Google as well as with external collaborators, which had to go through our review process (as is the case with all externally submitted papers). We have approved dozens of papers that Timnit and/or the other Googlers have written and published.
But as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this paper was only shared with a day’s notice before its deadline – we require two weeks for this type of review – and instead of awaiting reviewer feedback, it was approved for submission and submitted. A cross-functional team then reviewed the paper as part of our regular process, and the authors were informed that it did not meet our bar for publication and were given feedback as to why. The paper ignored too much relevant research – for example, it discussed the environmental impact of large models but disregarded subsequent research showing much greater efficiency gains. Likewise, it raised concerns about bias in language models but failed to consider recent research on mitigating these issues. We recognize that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially since they had already submitted the paper.
Timnit responded with an email demanding that a number of conditions be met for her to continue working at Google, including revealing the identity of every person Megan and I had spoken with and consulted as part of the paper’s review and feedback. Timnit wrote that if we did not meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
Given Timnit’s role as a respected researcher and a manager of our Ethical AI team, I am sorry that she has reached a point where she feels this way about the work we are doing. I am also sorry that hundreds of you received an email from Timnit this week asking you to stop working on critical DEI programs. Please do not do that. I understand the frustration with the pace of progress, but we have important work ahead and we need to keep it moving.
I know we all genuinely share Timnit’s passion for making AI more equitable and inclusive. Without a doubt, wherever she goes after Google, she will do great work, and I look forward to reading her papers and seeing what she accomplishes.
Thanks for reading, and thanks for all the important work you continue to do.”