
AWS Rekognition Facial Recognition System Shows Racial Bias in Congressional Test

Severity
High

ACLU testing revealed that Amazon Rekognition falsely matched 28 members of the US Congress with criminal mugshots; 39% of the false matches affected people of color, who make up only about 20% of Congress.

Category
Bias
Industry
Technology
Status
Resolved
Date Occurred
Jul 1, 2018
Date Reported
Jul 26, 2018
Jurisdiction
US
AI Provider
Amazon Web Services (AWS)
Model
Amazon Rekognition
Application Type
API integration
Harm Type
reputational
People Affected
28
Human Review in Place
No
Litigation Filed
No
facial_recognition, algorithmic_bias, law_enforcement, racial_discrimination, amazon, aclu, congress, criminal_justice

Full Description

In July 2018, the American Civil Liberties Union (ACLU) conducted a study testing Amazon's Rekognition facial recognition service. The researchers compared publicly available photos of all 535 members of Congress against a database of 25,000 publicly available criminal mugshots using Rekognition's default confidence threshold settings.

The results revealed significant bias in the system's performance. Of the 535 Congressional photos tested, Rekognition incorrectly matched 28 members of Congress with individuals in the criminal mugshot database. Most concerning was the disproportionate impact on people of color: while members of color comprised only about 20% of Congress, they accounted for 39% of the false matches (11 of the 28 incorrect identifications). The false identifications included several prominent members of Congress, including Representatives John Lewis, Bobby Scott, and Joaquin Castro.

Amazon disputed the ACLU's methodology, arguing that the researchers used an inappropriately low confidence threshold of 80% rather than the 99% threshold Amazon recommended for law enforcement applications, and claimed that using the recommended setting would have eliminated most false positives. The company also criticized the test design, stating that comparing Congressional photos against mugshots was not representative of real-world law enforcement use cases.

The incident sparked broader debate about the use of facial recognition technology in law enforcement and criminal justice. Several civil rights organizations and technology experts raised concerns that such systems could perpetuate and amplify existing biases in policing, particularly given documented disparities in arrest rates across racial groups. Following publication of the results, pressure mounted on Amazon to restrict law enforcement access to Rekognition. The incident became a catalyst for broader discussions of algorithmic bias in facial recognition and contributed to eventual moratoriums on facial recognition use by several major technology companies and government entities.
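
The dispute centered on Rekognition's similarity threshold parameter. As a minimal sketch of how that parameter gates matches, the boto3 call below compares two face images at the two thresholds in question; the file names are hypothetical placeholders, and this is an illustration of the API parameter, not a reconstruction of the ACLU's exact pipeline.

import boto3

# Minimal sketch: the same image pair can "match" at the 80% default
# threshold yet return no match at Amazon's recommended 99%.
rekognition = boto3.client("rekognition", region_name="us-east-1")

def matches_at_threshold(source_bytes, target_bytes, threshold):
    """Return face matches at or above the given similarity threshold."""
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,
    )
    return response["FaceMatches"]

with open("member_photo.jpg", "rb") as f:   # hypothetical input file
    member = f.read()
with open("mugshot.jpg", "rb") as f:        # hypothetical input file
    mugshot = f.read()

print(len(matches_at_threshold(member, mugshot, 80.0)))  # default used in the ACLU test
print(len(matches_at_threshold(member, mugshot, 99.0)))  # Amazon's law enforcement recommendation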

Root Cause

Amazon Rekognition's facial recognition algorithm exhibited systematic bias against darker-skinned individuals, likely stemming from training data that underrepresented darker-skinned faces and from bias in the underlying machine learning models.

Mitigation Analysis

This incident could have been prevented through comprehensive bias testing across demographic groups before deployment, diverse training datasets, algorithmic fairness audits, and confidence threshold tuning. Ongoing monitoring for disparate impact across racial groups and mandatory human review for law enforcement applications would reduce harmful false positives.
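
To make the monitoring for disparate impact concrete, the sketch below computes per-group false match rates and a disparity ratio from labeled test results. The group sizes are approximations derived from the reported figures (20% of 535 members is roughly 107, and 11 of the 28 false matches affected members of color); the record format is an illustrative assumption, not part of any Amazon tooling.

from collections import defaultdict

def false_match_rates(records):
    """Compute the false match rate per demographic group.

    Each record is a (group, was_false_match) pair."""
    totals = defaultdict(int)
    false_matches = defaultdict(int)
    for group, is_false_match in records:
        totals[group] += 1
        if is_false_match:
            false_matches[group] += 1
    return {g: false_matches[g] / totals[g] for g in totals}

# Approximate reconstruction of the ACLU test outcome: 11 of ~107
# members of color falsely matched, versus 17 of ~428 other members.
records = (
    [("members_of_color", True)] * 11 + [("members_of_color", False)] * 96
    + [("other_members", True)] * 17 + [("other_members", False)] * 411
)

rates = false_match_rates(records)
for group, rate in rates.items():
    print(f"{group}: {rate:.1%} false match rate")

# A ratio well above 1.0 flags disproportionate error rates; here it is ~2.6x.
print(f"disparity ratio: {rates['members_of_color'] / rates['other_members']:.1f}x")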

Lessons Learned

This incident demonstrated the critical importance of comprehensive bias testing for AI systems before deployment, particularly those used in sensitive applications like law enforcement. It also highlighted that default system configurations may not be appropriate for high-stakes use cases, and that vendors need to provide clear guidance on proper implementation for such deployments.