AI-Powered Retail Surveillance Systems Disproportionately Tracked Black Shoppers at Major Retailers
Severity
High
AI surveillance systems deployed by major retailers including Walmart, Target, and CVS were documented by the ACLU to disproportionately flag and track Black shoppers, embedding racial profiling into automated retail security.
Category
Bias
Industry
Other
Status
Ongoing
Date Occurred
Jan 1, 2019
Date Reported
Jul 13, 2021
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Embedded
Harm Type
Reputational
People Affected
100,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
Tags
retail · surveillance · facial_recognition · racial_bias · civil_rights · ACLU · algorithmic_discrimination · computer_vision
Full Description
Between 2019 and 2021, major U.S. retailers, including Walmart, Target, and CVS, deployed AI-powered surveillance systems from vendors such as Vaak and DeepCam that were designed to identify suspicious behavior and potential shoplifting. The American Civil Liberties Union (ACLU) conducted extensive research documenting how these systems exhibited systematic racial bias against Black customers.
The ACLU's investigation found that the surveillance algorithms were significantly more likely to flag Black shoppers for additional monitoring, security intervention, and staff attention than white customers exhibiting similar behavior. The systems used computer vision and machine learning to analyze customer movements, facial expressions, and shopping patterns, but the underlying training data and algorithmic design perpetuated existing racial biases in retail security practices.
Vaak marketed its system as capable of reducing shoplifting by as much as a factor of ten through automated threat detection, while DeepCam promoted real-time behavioral analysis capabilities. However, these systems lacked adequate bias testing and fairness safeguards, producing discriminatory outcomes that disproportionately affected Black customers across hundreds of retail locations nationwide.
The documented harm extended beyond individual customer experiences to broader community impact, as affected stores became less welcoming environments for Black shoppers and reinforced harmful stereotypes about criminality. Civil rights organizations filed multiple complaints and lawsuits alleging that the AI systems violated anti-discrimination laws by creating a disparate impact on protected classes. The incidents sparked broader conversations about algorithmic accountability in retail and the need for bias auditing in commercial AI deployments.
Root Cause
AI surveillance systems were trained on biased datasets and deployed without adequate testing for racial bias, resulting in algorithms that systematically flagged Black customers as higher risk for shoplifting or suspicious behavior.
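This failure mode is easy to demonstrate. The sketch below (Python, fully synthetic data; the variable names and rates are hypothetical, and this is not any vendor's actual code) trains a classifier on historically biased security-stop labels and shows that it reproduces the disparity even though the behavioral features are identically distributed across groups.

```python
# Hypothetical demonstration: a model trained on biased historical labels
# reproduces the bias, even when behavior carries no real signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)           # 0 and 1: two demographic groups
behavior = rng.normal(size=(n, 5))      # behavioral features, same distribution for both

# Historical labels: group 1 was stopped by security at 3x the rate of
# group 0 for identical behavior -- the bias lives in the labels.
stop_rate = np.where(group == 1, 0.15, 0.05)
stopped = rng.random(n) < stop_rate

# Group membership leaks into the model (directly here; via proxies in practice)
X = np.column_stack([behavior, group])
model = LogisticRegression().fit(X, stopped)

scores = model.predict_proba(X)[:, 1]
print("mean risk score, group 0:", scores[group == 0].mean())  # ~0.05
print("mean risk score, group 1:", scores[group == 1].mean())  # ~0.15
```

In deployed systems the group variable is rarely an explicit feature; skin tone, clothing, neighborhood, or camera placement can act as proxies with the same effect.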
Mitigation Analysis
Comprehensive bias testing with datasets representing the full range of customer demographics could have identified racial disparities before deployment. Regular algorithmic audits and monitoring of fairness metrics would detect bias in production systems. Requiring human verification before a customer is flagged or approached would reduce automated discrimination.
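As a concrete illustration of what fairness-metric monitoring could look like, here is a minimal audit sketch. The log format, group labels, and numbers are hypothetical, and the four-fifths threshold is borrowed from EEOC hiring guidance as a rule of thumb, not a legal standard for retail surveillance.

```python
# Hypothetical audit: compare per-group flag rates from production decision
# logs and apply a four-fifths-style disparate-impact check. (The EEOC rule
# is framed around favorable selections; for an adverse outcome like being
# flagged, the ratio of lowest to highest flag rate plays the same role.)
from collections import defaultdict

def flag_rates(decisions):
    """decisions: iterable of (group, was_flagged) pairs from system logs."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, was_flagged in decisions:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparity_check(rates, threshold=0.8):
    """Flag a disparity when the lowest-to-highest rate ratio falls below 0.8."""
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold

# Made-up example logs: 5% of group A flagged vs. 12% of group B
logs = ([("A", True)] * 50 + [("A", False)] * 950
        + [("B", True)] * 120 + [("B", False)] * 880)
ratio, disparity = disparity_check(flag_rates(logs))
print(f"ratio={ratio:.2f}, investigate={disparity}")  # ratio=0.42, investigate=True
```

An audit like this only detects unequal flag rates; deciding whether a disparity reflects bias rather than a legitimate factor still requires human review of the flagged cases.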
Lessons Learned
The incident demonstrates how AI systems can amplify and systematize existing human biases at scale, making algorithmic bias testing and fairness auditing essential for any AI deployment affecting protected classes.
Sources
How Artificial Intelligence Can Deepen Racial and Economic Inequalities
American Civil Liberties Union · Jul 13, 2021 · company statement
U.S. stores are using AI cameras to watch shoppers; civil rights groups concerned
Reuters · Nov 16, 2021 · news