Google Photos AI Labeled Black People as 'Gorillas'
Critical
Google Photos' AI image recognition system labeled photos of Black people as 'gorillas' in 2015. Google's response was to remove the 'gorilla' category entirely rather than fix the underlying algorithmic bias, which reportedly remained unresolved as of 2023.
Category
Bias
Industry
Technology
Status
Ongoing
Date Occurred
Jun 28, 2015
Date Reported
Jul 1, 2015
Jurisdiction
US
AI Provider
Google
Application Type
Embedded
Harm Type
Reputational
Human Review in Place
No
Litigation Filed
No
Tags
facial_recognition, racial_bias, image_classification, training_data, algorithmic_discrimination, computer_vision
Full Description
In June 2015, Google Photos user Jacky Alciné discovered that the platform's automatic image recognition feature had labeled photos of him and a Black female friend as 'gorillas.' Alciné posted screenshots of the offensive labeling on Twitter on June 28, 2015, tagging Google and expressing his outrage at the racist categorization. The incident quickly gained widespread attention on social media and in technology news outlets, highlighting serious issues with algorithmic bias in computer vision systems. Google became aware of the issue on June 29, 2015, when Alciné's tweet began circulating widely across social platforms.
The failure occurred in Google Photos' automated image classification system, which used deep neural networks to identify and tag objects, people, and animals in uploaded photos. The recognition model had been trained on datasets with insufficient representation of Black people, so the features it learned did not reliably distinguish darker-skinned human faces from the visual characteristics of primates. The result was the systematic misclassification of Black individuals into a non-human category, with the training-data bias reproducing a long-standing racist trope through automated tagging.
The incident caused significant reputational damage to Google and sparked widespread criticism about racial bias in artificial intelligence systems developed by major technology companies. Black users of Google Photos experienced direct harm through the dehumanizing categorization, while the broader Black community faced the perpetuation of historical racist tropes through modern technology. The incident became a prominent example cited in academic research, policy discussions, and media coverage about algorithmic discrimination. Technology industry leaders and civil rights organizations used the case to highlight the urgent need for more diverse and inclusive AI development practices.
Google responded within hours through its chief social architect, Yonatan Zunger, who apologized publicly on Twitter and promised immediate action. The company's initial fix, completed by July 1, 2015, was to remove the 'gorilla' label from Google Photos' automatic tagging system. Google also stated that it was working on longer-term improvements to the accuracy of its image recognition algorithms, particularly for people of color, and added content filtering to prevent similar offensive categorizations from appearing in the future.
Despite Google's promises of comprehensive fixes, investigative reporting by Wired in 2018 revealed that the company had not resolved the underlying algorithmic bias three years after the incident. Instead of retraining the model on more representative data, Google had simply blocked certain animal labels, including 'gorilla,' 'chimpanzee,' and 'monkey,' from appearing in Google Photos at all. This workaround prevented the specific offensive labeling but left the model's biased behavior unchanged, suppressing tags on legitimate animal photos while avoiding the harder work of addressing the training-data bias.
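The reporting describes this workaround only at a high level; a minimal sketch of that kind of post-hoc label suppression, using entirely hypothetical names and structure rather than Google's actual implementation, might look like this:

```python
# Hypothetical sketch of post-hoc label suppression: the classifier is
# untouched, and blocked labels are stripped from its output after the fact.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

def filter_predictions(predictions: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Drop blocked labels from a model's (label, confidence) output."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

# The raw model output still contains the blocked class; it is only hidden.
raw = [("gorilla", 0.91), ("primate", 0.74), ("outdoor", 0.40)]
print(filter_predictions(raw))  # [('primate', 0.74), ('outdoor', 0.40)]
```

The sketch makes the trade-off visible: the filter is cheap and requires no retraining, but it hides every photo of the blocked animals, which matches the censoring behavior Wired observed.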
Follow-up investigations through 2023 confirmed that Google's image recognition systems continued to struggle with accurately identifying Black individuals, and the animal label restrictions remained in place nearly eight years after the original incident. The case became a landmark example in AI ethics research and policy development, frequently cited in discussions about the need for algorithmic auditing, diverse training datasets, and inclusive AI development practices. The incident influenced subsequent regulatory proposals and industry standards for bias testing in machine learning systems, though comprehensive solutions for addressing algorithmic discrimination in computer vision remained elusive across the technology sector.
Root Cause
The image recognition model was trained on datasets with insufficient representation of Black people, and the neural network learned discriminatory features that associated darker skin tones with non-human categories. The algorithm's feature detection system failed to properly distinguish between human faces of different races and animal features.
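One way to surface this kind of skew before training is a simple representation audit over dataset metadata. The sketch below assumes a hypothetical `group` annotation on each example; most real image datasets carry no such field, which is itself part of the problem:

```python
# Hypothetical representation audit: report each annotated group's share
# of the training set so severe imbalances are visible before training.
from collections import Counter

def representation_report(examples: list[dict]) -> dict[str, float]:
    counts = Counter(ex["group"] for ex in examples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Illustrative 9:1 skew between two annotated groups.
dataset = [{"group": "lighter_skin"}] * 900 + [{"group": "darker_skin"}] * 100
print(representation_report(dataset))  # {'lighter_skin': 0.9, 'darker_skin': 0.1}
```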
Mitigation Analysis
This incident could have been prevented through diverse training datasets with balanced racial representation, algorithmic bias testing across demographic groups, and human review of sensitive classification categories. Post-deployment monitoring should have included bias detection systems that flag potentially discriminatory outputs before they reach users. The company's response of simply removing the 'gorilla' label rather than addressing the underlying bias demonstrates inadequate remediation.
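A minimal sketch of the disaggregated bias testing called for above, assuming an evaluation set annotated by demographic group (the data layout and the five-point tolerance are illustrative assumptions, not an established standard):

```python
# Hypothetical disaggregated evaluation: compute error rates per group and
# flag any group whose rate exceeds the best group's by more than a tolerance.
def error_rates_by_group(results: list[tuple[str, bool]]) -> dict[str, float]:
    """results: (group, was_misclassified) pairs from an evaluation set."""
    totals: dict[str, int] = {}
    errors: dict[str, int] = {}
    for group, wrong in results:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + int(wrong)
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates: dict[str, float], tolerance: float = 0.05) -> list[str]:
    """Groups whose error rate exceeds the best-performing group's by > tolerance."""
    best = min(rates.values())
    return [g for g, rate in rates.items() if rate - best > tolerance]

# Illustrative gap: 5% error for one group, 20% for another.
rates = error_rates_by_group(
    [("group_a", False)] * 95 + [("group_a", True)] * 5
    + [("group_b", False)] * 80 + [("group_b", True)] * 20
)
print(flag_disparities(rates))  # ['group_b']
```

Gating releases on a per-group check like this, rather than on aggregate accuracy alone, is the kind of pre-deployment test this analysis argues was missing.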
Lessons Learned
This incident demonstrates how algorithmic bias can perpetuate and amplify social discrimination when AI systems are developed without adequate consideration for diverse populations. It also highlights the inadequacy of quick fixes that suppress symptoms rather than addressing root causes of bias in machine learning systems.
Sources
When It Comes to Gorillas, Google Photos Remains Blind
Wired · Jan 11, 2018 · news
Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech
The Verge · Jan 12, 2018 · news