
Twitter's Image Cropping Algorithm Demonstrated Racial Bias in Face Selection

High

Twitter's automatic image cropping algorithm systematically favored white faces over Black faces in preview images. Users demonstrated the bias through controlled experiments, leading Twitter to acknowledge the issue and eventually remove automatic cropping entirely.

Category
Bias
Industry
Technology
Status
Resolved
Date Occurred
Jan 1, 2020
Date Reported
Sep 19, 2020
Jurisdiction
US
AI Provider
Other/Unknown
Model
Twitter Image Cropping Algorithm
Application Type
Embedded
Harm Type
Reputational
People Affected
192,000,000
Human Review in Place
No
Litigation Filed
No
Tags
facial_recognition, image_processing, social_media, algorithmic_bias, racial_discrimination, saliency_detection

Full Description

In September 2020, Twitter users began noticing that the platform's automatic image cropping algorithm appeared to favor white faces over Black faces when generating preview thumbnails. The algorithm used a neural saliency model to select the most visually "interesting" region of an image for the preview crop, and in side-by-side comparisons it consistently cropped out Black faces while keeping white faces visible.

The issue gained widespread attention when users ran controlled experiments, posting tall images containing both a white face and a Black face with the positions swapped between posts. Regardless of which face appeared at the top or bottom, the algorithm consistently selected the white face as the focal point of the cropped preview. The bias was particularly striking in images where both faces were similarly framed and lit, ruling out image quality or composition as explanations.

Twitter initially responded that the model had been tested for racial and gender bias before shipping, but as more examples surfaced the company acknowledged that the concerns were warranted. Chief Technology Officer Parag Agrawal publicly welcomed the open testing and committed to investigating the issue further.

In May 2021, Twitter's machine learning ethics team published its own analysis of the cropping system, confirming a statistically significant preference for white faces over Black faces (and for women over men) across large sets of paired test images. The research indicated that the saliency model had learned to assign higher scores to certain facial features and skin tones, leading to the systematic exclusion of Black faces from preview images. Later that year, Twitter's algorithmic bias bounty program independently corroborated the finding, with the winning entry showing that the model also favored lighter, slimmer, and younger faces.

Twitter's response evolved from initial defensiveness to acknowledgment and ultimately to action. Beginning in March 2021, the company rolled out full-size image previews, removing saliency-based automatic cropping rather than attempting to patch it. This represented a significant shift in the platform's approach to automated content curation and reflected the company's conclusion that the bias could not be corrected without fundamental changes to the underlying system.

The incident highlighted broader problems with algorithmic bias on social media platforms and sparked industry-wide discussion about the need for more rigorous bias testing in AI systems. It also demonstrated how user communities can act as effective watchdogs for algorithmic fairness, using crowd-sourced testing to expose biases that internal testing had missed.
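The mechanics are easy to see in code. The sketch below is a simplified illustration, not Twitter's actual pipeline: it assumes a per-pixel saliency map produced by some upstream model (not reproduced here) and shows how the crop decision reduces to a single argmax over that map. In the paired-face experiments, whichever face the model scored higher captured the crop window, wherever it sat in the image.

```python
import numpy as np

def crop_around_saliency_peak(image: np.ndarray, saliency: np.ndarray,
                              crop_h: int, crop_w: int) -> np.ndarray:
    """Return a crop_h x crop_w window centered on the most salient pixel.

    `saliency` is a per-pixel score map with the same height and width
    as `image`; the crop is assumed to fit inside the image.
    """
    h, w = saliency.shape
    # The whole crop decision hinges on one argmax over the score map,
    # so any systematic skew in the scores directly controls the output.
    y, x = np.unravel_index(np.argmax(saliency), (h, w))
    # Clamp the window so it stays inside the image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Because one peak score decides the entire crop, even a small but consistent difference in the scores assigned to lighter versus darker faces is enough to exclude one face from virtually every preview, which is exactly the pattern the paired-image experiments surfaced.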

Root Cause

Twitter's cropper relied on a neural saliency model trained on human eye-tracking data to predict which regions of an image viewers look at first. Biases in that training data and in the learned model produced systematically higher saliency scores for lighter-skinned faces, so the crop window was consistently centered on white faces when faces of different skin tones appeared together.

Mitigation Analysis

Comprehensive algorithmic bias testing during development could have identified the issue before deployment. Regular bias audits using demographically diverse test datasets, essential for any system that detects or prioritizes faces, would have caught the systematic preference; a sketch of such an audit follows below. Adding fairness constraints to the training process and continuously monitoring crop outcomes across demographic groups could have prevented the harm.
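One concrete form such an audit could take is sketched below, assuming paired test images (one face from each group, with positions randomized) and a log of which face each crop kept. The function name and the sample data are illustrative, not Twitter's published methodology; a two-sided binomial test stands in for the demographic-parity analysis Twitter later described.

```python
from collections import Counter
from scipy.stats import binomtest

def audit_crop_selection(winners: list[str], group_a: str, group_b: str):
    """winners: per-trial labels naming which group's face the crop kept.

    Returns each group's selection rate and the p-value of a two-sided
    binomial test against the 50/50 split an unbiased cropper implies.
    """
    counts = Counter(winners)
    n = counts[group_a] + counts[group_b]
    rates = {g: counts[g] / n for g in (group_a, group_b)}
    # Under an unbiased cropper, each group should win about half of the
    # randomized-position trials.
    p_value = binomtest(counts[group_a], n, p=0.5).pvalue
    return rates, p_value

# Synthetic trial outcomes, for illustration only:
winners = ["group_a"] * 88 + ["group_b"] * 12
print(audit_crop_selection(winners, "group_a", "group_b"))
```

Run routinely over a demographically diverse test set, a check like this flags a systematic preference long before users discover it in production.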

Lessons Learned

The incident demonstrates that algorithmic bias can persist even in systems that undergo internal testing, highlighting the need for diverse perspectives in AI development and testing. It also shows the power of transparent, user-driven bias detection and the importance of companies being willing to remove biased systems entirely when fixes prove inadequate.

Sources

Sharing learnings about our image cropping algorithm
Twitter Engineering · May 12, 2021 · company statement