Clearview AI Builds 40+ Billion Image Facial Recognition Database Without Consent

Critical

Clearview AI scraped 40+ billion facial images without consent to build a comprehensive surveillance database, resulting in $50+ million in fines and settlements across multiple jurisdictions for privacy violations.

Category
Privacy Leak
Industry
Technology
Status
Ongoing
Date Occurred
Jan 1, 2020
Date Reported
Jan 18, 2020
Jurisdiction
International
AI Provider
Other/Unknown
Model
Clearview AI facial recognition system
Application Type
API integration
Harm Type
privacy
Estimated Cost
$50,000,000
People Affected
3,000,000,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
Regulatory Body
UK ICO, Italian DPA, French CNIL, Canadian Privacy Commissioner
Fine Amount
$52,000,000
facial_recognition · privacy_violation · mass_surveillance · biometric_data · data_scraping · BIPA · GDPR · consent · law_enforcement

Full Description

Clearview AI, founded by Hoan Ton-That and Richard Schwartz in 2017, developed a facial recognition system by scraping billions of publicly available photos from social media platforms including Facebook, Instagram, Twitter, and YouTube, as well as other services such as Venmo. The company's technology could identify individuals by matching faces against this massive database, providing law enforcement and private entities with unprecedented surveillance capabilities.

The company's existence became widely known in January 2020, when The New York Times published an exposé revealing that Clearview had provided its technology to over 600 law enforcement agencies across the United States. By 2024, the database had grown to contain over 40 billion images, making it one of the largest facial recognition databases in existence. Ton-That publicly described his vision of a future in which anonymity in public spaces would be eliminated, stating that his technology would make it possible to identify anyone, anywhere.

Clearview's business model relied on providing access to law enforcement agencies, immigration authorities, and some private companies. The company marketed its services as a tool for solving crimes and finding missing persons, but critics raised concerns about the potential for abuse and the creation of a pervasive surveillance infrastructure. Internal documents revealed that the company had also provided access to countries with poor human rights records and that its technology had been used by authoritarian governments.

The privacy violations prompted immediate legal and regulatory action across multiple jurisdictions. In the United States, the ACLU filed lawsuits in Illinois, California, New York, and Virginia, challenging the company's practices under various privacy laws, including the Illinois Biometric Information Privacy Act.
The litigation resulted in a significant settlement in 2022, with Clearview agreeing to restrict sales to most private entities and to limit some government uses within the United States. Internationally, regulators imposed substantial fines and restrictions. The UK's Information Commissioner's Office fined Clearview £7.5 million and ordered the deletion of UK residents' data. Italy's data protection authority imposed a €20 million fine, while France's CNIL levied a €20 million penalty. Canada's Privacy Commissioner found the company's practices violated Canadian privacy law and ordered it to cease operations in Canada.

Despite these regulatory actions, Clearview AI continued operations, focusing in particular on contracts with law enforcement agencies and on international expansion. The company maintained that its data collection practices were legal because the images were publicly available, though courts and regulators consistently rejected this argument. The incident highlighted fundamental questions about consent, privacy, and the limits of surveillance technology in democratic societies.

Root Cause

Clearview AI systematically scraped billions of publicly available photos from social media platforms, dating apps, and websites without user consent or notification to build a comprehensive facial recognition database, violating privacy laws across multiple jurisdictions.

Mitigation Analysis

This incident highlights the need for data provenance tracking to verify consent for training data, automated consent verification systems before data collection, and regulatory frameworks specifically governing biometric data collection. Technical controls like differential privacy and purpose limitation could have reduced harm, but the fundamental business model required systematic privacy violations.
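The consent-verification control described above can be illustrated with a minimal sketch. All names here (`ImageRecord`, `admit_to_training_set`) are hypothetical, not Clearview's or any real pipeline's API; the point is simply that ingestion is gated on a recorded opt-in and a verifiable source, rather than treating "publicly available" as implied consent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageRecord:
    """Hypothetical provenance record attached to each candidate image."""
    url: str
    subject_consent: bool  # explicit opt-in recorded at collection time
    provenance: str        # documented source of the image, e.g. "user_upload"

def admit_to_training_set(record: ImageRecord) -> bool:
    """Admit an image only if consent and provenance are both documented.

    Note that mere public availability is not treated as consent: a
    scraped image with no recorded opt-in is rejected regardless of
    where it was found.
    """
    return record.subject_consent and bool(record.provenance)

# A scraped photo with no recorded consent is rejected;
# an explicitly opted-in upload passes the gate.
scraped = ImageRecord("https://example.com/photo.jpg",
                      subject_consent=False, provenance="web_scrape")
opted_in = ImageRecord("https://example.com/selfie.jpg",
                       subject_consent=True, provenance="user_upload")
```

As the analysis notes, such a gate would have blocked the database's growth entirely, which is precisely why controls of this kind conflict with a business model built on indiscriminate scraping.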

Litigation Outcome

The ACLU lawsuit settled in 2022, with Clearview agreeing to restrict sales to private entities and to accept some limits on government use. Multiple international regulatory fines totaled over $50 million.

Lessons Learned

This case demonstrates that claims of 'public availability' do not override consent requirements for biometric data collection, and that privacy violations at scale can result in significant regulatory consequences across multiple jurisdictions simultaneously. It also illustrates how AI surveillance technologies can create systemic privacy harms that traditional legal frameworks struggle to address effectively.

Sources

ACLU Settles Groundbreaking Lawsuit Against Clearview AI
ACLU · May 9, 2022 · company statement
ICO fines facial recognition database company Clearview AI Inc
UK Information Commissioner's Office · May 23, 2022 · regulatory action