FTC Bans Rite Aid from Facial Recognition After False Shoplifting Identifications

Critical

The FTC banned Rite Aid from using facial recognition for five years after the technology falsely identified customers as shoplifters, with disproportionate impact on Black, Latino, and Asian customers.

Category
Bias
Industry
Retail
Status
Resolved
Date Occurred
Jan 1, 2020
Date Reported
Dec 19, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Other
Harm Type
reputational
Human Review in Place
No
Litigation Filed
No
Regulatory Body
Federal Trade Commission
Tags
facial_recognition, algorithmic_bias, retail, FTC, discrimination, false_positives, regulatory_enforcement

Full Description

In December 2023, the Federal Trade Commission issued a comprehensive ban prohibiting Rite Aid Corporation from using facial recognition technology for five years, following an extensive investigation into discriminatory practices. The FTC found that Rite Aid's facial recognition systems, deployed across hundreds of stores nationwide from approximately 2012 to 2020, systematically produced false positive identifications that disproportionately affected Black, Latino, and Asian customers.

The investigation revealed that Rite Aid used facial recognition to identify suspected shoplifters by matching customer faces against a database of individuals previously flagged for theft or suspicious behavior. The system's algorithmic bias, however, produced significantly higher error rates for people of color than for white customers. When the system generated matches, store employees would often take immediate security actions, including confronting customers, conducting searches, and in some cases calling law enforcement, without adequate human verification of the technology's accuracy.

The FTC's complaint documented numerous instances in which innocent customers were wrongfully accused of shoplifting, subjected to humiliating public confrontations, and in some cases wrongfully detained or banned from stores. The false identifications caused significant emotional distress and reputational harm to affected individuals while exposing Rite Aid to substantial legal and regulatory risk. The company failed to implement adequate safeguards to verify the accuracy of facial recognition matches before taking adverse actions against customers.

Under the settlement agreement, Rite Aid is prohibited from using any facial recognition technology until 2028 and must implement comprehensive bias testing and human oversight protocols if it chooses to deploy such systems in the future. The FTC also required Rite Aid to delete existing facial recognition databases and to implement employee training on algorithmic bias and discrimination. This enforcement action represents one of the most significant regulatory interventions against biased AI systems in retail settings and establishes an important precedent for corporate accountability in algorithmic decision-making.
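
The central procedural failure described above is that employees acted on raw matches. Below is a minimal Python sketch of the kind of mandatory human-review gate that was absent; the names (MatchResult, handle_match), the confidence threshold, and the action labels are all hypothetical illustrations, not details of Rite Aid's actual system.

    from dataclasses import dataclass

    # Hypothetical types and names; details of Rite Aid's actual system are not public.
    @dataclass
    class MatchResult:
        person_id: str     # identifier of the watchlist entry that matched
        confidence: float  # similarity score from the face matcher, 0.0 to 1.0

    REVIEW_THRESHOLD = 0.90  # illustrative only; a real threshold needs validation

    def handle_match(match: MatchResult, human_confirms: bool) -> str:
        """Gate every security action behind an explicit human verification step.

        The FTC found that employees acted on raw matches; this sketch inverts
        that default: no match, however confident, triggers action on its own.
        """
        if match.confidence < REVIEW_THRESHOLD:
            return "discard"          # low-confidence matches are never surfaced
        if not human_confirms:
            return "no_action"        # a human must verify before any intervention
        return "escalate_to_manager"  # even verified matches avoid direct confrontation

    print(handle_match(MatchResult("watchlist_0412", 0.95), human_confirms=False))
    # -> "no_action": a confident match still produces no action without review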

Root Cause

Facial recognition systems exhibited algorithmic bias with higher error rates for people of color, leading to disproportionate false positive identifications of Black, Latino, and Asian customers as shoplifters.
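
To make the root cause measurable: a per-group false positive rate audit is the basic calculation a bias test would run. A minimal sketch in Python, assuming labeled evaluation records of (group, predicted_match, is_true_match); the function name and sample data are illustrative, not figures from the FTC complaint.

    from collections import defaultdict

    def false_positive_rates(records):
        """Compute false positive rates per demographic group.

        records: iterable of (group, predicted_match, is_true_match) tuples.
        A false positive is a predicted match for a person who is not
        actually on the watchlist (is_true_match is False).
        """
        negatives = defaultdict(int)  # people not on the watchlist, per group
        false_pos = defaultdict(int)  # of those, how many the system flagged
        for group, predicted, actual in records:
            if not actual:
                negatives[group] += 1
                if predicted:
                    false_pos[group] += 1
        return {g: false_pos[g] / n for g, n in negatives.items() if n}

    # Illustrative records only; not figures from the FTC complaint.
    sample = [
        ("group_a", True, False), ("group_a", False, False),
        ("group_b", False, False), ("group_b", False, False),
    ]
    print(false_positive_rates(sample))  # {'group_a': 0.5, 'group_b': 0.0}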

Mitigation Analysis

Implementing human verification protocols before any security action could have prevented wrongful accusations. Regular bias testing across demographic groups would have identified the discriminatory performance early, and real-time monitoring of false positive rates by ethnicity could have triggered system adjustments or suspension, as sketched below.
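
As a sketch of the real-time monitoring idea: the check below flags the system for suspension when the worst group's false positive rate exceeds the best group's by more than a tolerance ratio. The 1.25 ratio is a hypothetical policy choice (loosely echoing the four-fifths rule from employment contexts), not a standard the FTC mandated.

    def disparity_check(fpr_by_group: dict[str, float],
                        max_ratio: float = 1.25,
                        rate_floor: float = 0.001) -> bool:
        """Return True if per-group false positive rates are acceptably close.

        fpr_by_group: e.g. the output of the false_positive_rates sketch above.
        max_ratio: hypothetical policy threshold; a worst-to-best ratio above
                   this value should trigger suspension of the system.
        rate_floor: keeps the ratio stable when all observed rates are tiny.
        """
        rates = list(fpr_by_group.values())
        if len(rates) < 2:
            return True  # nothing to compare
        worst, best = max(rates), max(min(rates), rate_floor)
        return worst / best <= max_ratio

    # Illustrative rates; a 3x disparity trips the check.
    fpr = {"group_a": 0.012, "group_b": 0.004}
    if not disparity_check(fpr):
        print("Suspend facial recognition pending bias review")

The rate_floor parameter is a design choice to avoid spurious suspensions when all groups have near-zero error; in practice a threshold like max_ratio would be set with legal review and validated on held-out data.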

Lessons Learned

This case demonstrates the critical importance of bias testing and human oversight in AI systems that affect individual rights and dignity, particularly in customer-facing applications where false positives can cause immediate harm.