Google Fired AI Ethics Researchers Timnit Gebru and Margaret Mitchell Over Research Paper Controversy
High
Google fired AI ethics co-leads Timnit Gebru and Margaret Mitchell in 2020–2021 after disputes over the 'Stochastic Parrots' paper, which highlighted risks of large language models. The terminations sparked widespread criticism and raised concerns about corporate control over AI safety research.
Category
Other
Industry
Technology
Status
Resolved
Date Occurred
Dec 2, 2020
Date Reported
Dec 3, 2020
Jurisdiction
US
AI Provider
Google
Application Type
Other
Harm Type
reputational
People Affected
2
Human Review in Place
Yes
Litigation Filed
No
ai_ethics, corporate_censorship, research_freedom, google, timnit_gebru, margaret_mitchell, stochastic_parrots, algorithmic_bias, tech_worker_rights
Full Description
In December 2020, Google abruptly terminated Dr. Timnit Gebru, co-lead of its Ethical AI team, following a dispute over a research paper she co-authored examining the risks and limitations of large language models. The paper, later published as 'On the Dangers of Stochastic Parrots,' highlighted the environmental costs, bias amplification, and potential for misinformation in large language models like those Google was developing. Google management demanded that the paper be retracted or that Gebru's name be removed from it, citing concerns about the research methodology and the potential negative impact on Google's AI products.
Gebru, a prominent AI researcher known for her work on algorithmic bias, refused to comply with the demands and sent an internal email criticizing Google's approach to diversity and inclusion, as well as the company's treatment of underrepresented researchers. In response to her email, Google's AI division head Jeff Dean announced that Gebru's employment had been terminated immediately, claiming she had resigned. Gebru disputed this characterization, stating she had been fired for refusing to retract legitimate research.
Two months later, in February 2021, Google fired Margaret Mitchell, the other co-lead of the Ethical AI team and Gebru's research partner. Mitchell had been investigating Gebru's dismissal and had been vocal about the need for transparency in the incident. Google cited violations of its code of conduct and data security policies as reasons for Mitchell's termination, specifically pointing to her use of automated scripts to collect evidence related to Gebru's firing.
The terminations sent shockwaves through the AI research community and sparked widespread criticism of Google's handling of AI ethics research. Prominent researchers, civil rights organizations, and Google employees condemned the firings as an attack on academic freedom and responsible AI research. The controversy highlighted tensions between corporate interests and independent research within big tech companies, particularly around studies that might question the safety or societal impact of AI products generating significant revenue.
The incident led to a significant exodus from Google's AI ethics team, with several researchers leaving in solidarity. It also prompted broader discussions about the independence of AI ethics research within technology companies and the need for external oversight of AI development. The controversy ultimately contributed to increased scrutiny of big tech companies' AI development practices and to calls for stronger governance frameworks for AI research and deployment.
Root Cause
Corporate leadership rejected internal research paper highlighting risks of large language models, leading to conflict over academic freedom and corporate control of AI ethics research.
Mitigation Analysis
Establishing clear academic freedom policies, independent research review processes, and transparent publication guidelines could have prevented this conflict. Creating a firewall between commercial interests and ethics research teams, along with external advisory boards for sensitive research topics, would better protect critical AI safety research.
Lessons Learned
The incident demonstrates the fundamental tension between corporate commercial interests and independent AI safety research, highlighting the need for structural protections for ethics researchers within technology companies and the importance of external oversight in AI development.
Sources
We read the paper that forced Timnit Gebru out of Google. Here's what it says.
MIT Technology Review · Dec 4, 2020 · news
Google fires second AI ethics leader as dispute over research continues
Reuters · Feb 19, 2021 · news