
Clearview AI Scraped Billions of Photos for Facial Recognition Without Consent

Critical

Clearview AI scraped over 3 billion facial images from social media without consent to build a surveillance database sold to law enforcement. The company faced over $21 million in regulatory fines and a $50 million class action settlement.

Category
surveillance
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2017
Date Reported
Jan 18, 2020
Jurisdiction
International
AI Provider
Other/Unknown
Application Type
API integration
Harm Type
privacy
Estimated Cost
$50,000,000
People Affected
3,000,000,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
Regulatory Body
Multiple EU Data Protection Authorities, UK ICO
Fine Amount
$21,500,000
facial_recognition, mass_surveillance, privacy_violation, data_scraping, biometric_data, GDPR, law_enforcement, social_media

Full Description

Clearview AI, founded in 2017 by Hoan Ton-That and Richard Schwartz, developed a facial recognition system by scraping billions of photographs from public social media platforms, including Facebook, Instagram, Twitter, YouTube, and Venmo, without user consent or platform authorization. The company built a database of over 3 billion facial images linked to their original online sources, creating an unprecedented surveillance tool.

Clearview marketed its technology primarily to law enforcement agencies across the United States, claiming it could identify individuals from a single photograph with high accuracy. Its client list reportedly included the FBI, the Department of Homeland Security, and hundreds of local police departments. The technology was also sold to private security companies and used for commercial purposes beyond law enforcement.

The existence and scope of Clearview's operations became public in January 2020 through investigative reporting by The New York Times. The revelation sparked immediate controversy over privacy rights, consent, and the implications of mass facial recognition surveillance. Major platforms including Facebook, Twitter, Google, and LinkedIn sent cease-and-desist letters demanding that Clearview stop scraping their sites, citing violations of their terms of service.

Regulatory authorities worldwide took swift action against Clearview AI. The UK's Information Commissioner's Office announced a provisional fine of over £17 million ($21.5 million) in November 2021 for violations of GDPR and UK data protection law, finalized at £7.5 million in May 2022. Data protection authorities in France, Italy, and Greece each imposed fines of €20 million. The Australian Privacy Commissioner found Clearview in breach of Australian privacy law and ordered the deletion of Australian citizens' data.
In the United States, the American Civil Liberties Union filed a class action lawsuit in Illinois under the state's Biometric Information Privacy Act, arguing that Clearview violated individuals' biometric privacy rights. In May 2022, Clearview agreed to settle the lawsuit for $50 million and restrict its services primarily to federal law enforcement agencies, agreeing to stop selling to most commercial entities and private companies. Despite regulatory pressure and legal settlements, Clearview AI continues to operate with a modified business model focused on federal law enforcement and military applications. The company has reportedly expanded its database and improved its technology while facing ongoing scrutiny from privacy advocates and regulatory bodies worldwide.

Root Cause

Clearview AI systematically scraped billions of photographs from social media platforms including Facebook, Instagram, Twitter, and YouTube without user consent or platform authorization to build a facial recognition database. The company then sold access to this database to law enforcement agencies and private companies, enabling mass surveillance capabilities.

Mitigation Analysis

This incident could have been prevented through stronger data governance controls including explicit consent mechanisms for biometric data collection, automated content scraping detection by platforms, mandatory impact assessments for biometric processing, and clearer regulatory frameworks prohibiting mass biometric surveillance without judicial oversight. Platform-level technical controls to prevent scraping and stronger enforcement of terms of service violations would have limited the data collection scope.
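One of the platform-level controls described above, automated scraping detection, can be illustrated with a minimal sliding-window rate limiter that flags clients whose request rate looks like bulk collection rather than normal browsing. This is a hypothetical sketch, not any platform's actual implementation; the `ScrapeDetector` class name and the thresholds are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

class ScrapeDetector:
    """Hypothetical sliding-window rate limiter: flags clients whose
    request volume suggests automated bulk scraping."""

    def __init__(self, max_requests: int = 100, window_seconds: float = 60.0):
        # Thresholds are illustrative; real platforms tune these per endpoint.
        self.max_requests = max_requests
        self.window = window_seconds
        self._hits = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id: str, now: float = None) -> bool:
        """Record one request; return False if the client has exceeded
        the per-window budget and should be throttled or challenged."""
        if now is None:
            now = time.monotonic()
        hits = self._hits[client_id]
        # Discard timestamps that have aged out of the sliding window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False  # likely bulk scraping: throttle, CAPTCHA, or block
        hits.append(now)
        return True
```

In practice such counters would be combined with the other controls listed above (terms-of-service enforcement, bot fingerprinting, and legal deterrents), since a rate limit alone is easily evaded by distributing requests across many IP addresses.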

Litigation Outcome

Class action lawsuit settled for $50 million in 2022, with Clearview agreeing to restrict usage primarily to federal agencies and to stop selling to most commercial entities.

Lessons Learned

The Clearview AI case demonstrates the urgent need for comprehensive biometric privacy regulations and stronger enforcement mechanisms for unauthorized data collection. It highlights how existing platform terms of service are insufficient to prevent systematic data scraping for surveillance purposes, requiring both technical and legal solutions.

Sources

ICO fines facial recognition database company Clearview AI Inc
UK Information Commissioner's Office · May 23, 2022 · regulatory action