San Francisco Police Used AI Surveillance Cameras Despite Voter-Approved Ban
Severity
Medium
San Francisco police circumvented a voter-approved facial recognition ban by accessing private cameras with AI capabilities, violating citizen privacy protections and prompting legal challenges.
Category
surveillance
Industry
Government
Status
Ongoing
Date Occurred
Mar 1, 2022
Date Reported
May 12, 2022
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
other
Harm Type
privacy
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Regulatory Body
San Francisco Board of Supervisors
facial_recognition · surveillance · privacy · government · local_policy · civil_liberties · law_enforcement
Full Description
In 2019, San Francisco voters passed the Surveillance Technology Ordinance, making it the first major U.S. city to ban government use of facial recognition technology. The measure was designed to protect citizen privacy and prevent discriminatory policing practices associated with AI surveillance systems. The ban specifically prohibited city agencies, including the San Francisco Police Department, from using facial recognition technology for identification or tracking purposes.
Despite this clear voter mandate, reports emerged in May 2022 that SFPD had been accessing private security cameras equipped with facial recognition capabilities. The department argued that since they were not directly operating the facial recognition systems but merely accessing feeds from private business cameras, they were not technically violating the ban. This interpretation represented a significant loophole in the ordinance that undermined its intended privacy protections.
The revelation sparked immediate controversy among privacy advocates, civil liberties organizations, and the city supervisors who had championed the original ban. Critics argued that SFPD's actions violated both the letter and the spirit of the voter-approved measure, since officers were still benefiting from AI-powered facial recognition to identify and track individuals. The Electronic Frontier Foundation and other organizations condemned the practice as an end run around the will of voters and the oversight the ordinance was meant to establish.
Investigation revealed that SFPD had been accessing cameras from private businesses in areas including Union Square and other high-traffic locations. The department defended its actions as necessary for public safety and crime prevention, particularly during periods of increased retail theft and public disorder. However, privacy advocates argued that this surveillance capability posed significant risks to civil liberties and could disproportionately impact marginalized communities.
The San Francisco Board of Supervisors initiated investigations into SFPD's surveillance practices and began considering amendments to close loopholes in the original ordinance. Legal challenges were filed by civil rights organizations seeking to enforce the ban and prevent future violations. The incident highlighted the challenges of regulating emerging AI technologies and the need for more comprehensive legislative frameworks.
The controversy ultimately led to stricter enforcement mechanisms and clearer definitions of prohibited surveillance technologies. SFPD was required to discontinue its use of third-party facial recognition systems and implement stronger compliance procedures. The incident became a case study in the challenges of governing AI surveillance technologies at the local level.
Root Cause
SFPD exploited loopholes in the facial recognition ban by accessing private business cameras with AI capabilities rather than using city-owned systems, circumventing the intent of voter-approved privacy protections.
Mitigation Analysis
Stronger legal frameworks with explicit definitions of prohibited AI surveillance technologies could have prevented this violation. Clear auditing mechanisms and regular compliance monitoring of police technology use, including third-party access, would have detected the breach earlier. Mandatory disclosure of all AI tools accessed by law enforcement could ensure accountability.
Lessons Learned
Local AI governance requires precise technical definitions and robust enforcement mechanisms to prevent circumvention. Voter-approved technology bans must anticipate workarounds and include comprehensive oversight of both direct and indirect AI system access.
Sources
San Francisco Police Still Using Face Recognition Despite Ban
Electronic Frontier Foundation · May 12, 2022 · news
SFPD Found Ways Around Facial Recognition Ban
SFGate · May 11, 2022 · news