Chinese AI Traffic Camera Falsely Identified Bus Advertisement as Jaywalker
Medium
In November 2018, a Chinese AI traffic enforcement camera mistakenly identified a businesswoman's face on a bus advertisement as a jaywalker, publicly displaying her photo on a "shame screen" designed to deter traffic violations.
Category
Safety Failure
Industry
Government
Status
Resolved
Date Occurred
Nov 1, 2018
Date Reported
Nov 21, 2018
Jurisdiction
China
AI Provider
Other/Unknown
Application Type
embedded
Harm Type
reputational
People Affected
1
Human Review in Place
No
Litigation Filed
No
facial_recognition · traffic_enforcement · false_positive · china · public_shaming · surveillance · liveness_detection
Full Description
In November 2018, an AI-powered traffic enforcement system in Ningbo, Zhejiang Province, China, made headlines for a significant misidentification error. The system, designed to catch jaywalkers and publicly shame them by displaying their photos on large screens, incorrectly identified a prominent businesswoman as a traffic violator. The woman in question was Dong Mingzhu, chairwoman of major appliance manufacturer Gree Electric.
The incident occurred when the facial recognition camera detected what it interpreted as a person illegally crossing the street. In reality, the 'person' was Dong Mingzhu's face printed on an advertisement on the side of a passing public bus. The AI system captured the image and, following its programmed protocol, displayed the photo on a public shame screen along with personal information, publicly labeling what it believed to be a jaywalker.
The error was discovered when observers noticed the discrepancy and reported it to authorities. The incident quickly gained attention on Chinese social media, highlighting the limitations of the facial recognition technology being deployed for traffic enforcement. The system's inability to distinguish between a real person and a printed advertisement exposed fundamental flaws in the AI's image processing capabilities.
China has been a global leader in implementing AI-powered surveillance and enforcement systems, with facial recognition technology widely deployed in urban areas for various purposes including traffic monitoring, crime prevention, and social credit scoring. This incident occurred during a period of rapid expansion of such systems across Chinese cities, where public shaming through digital billboards had become a common enforcement mechanism for traffic violations.
The Ningbo traffic police acknowledged the error and reportedly upgraded their system to better distinguish between real faces and images. The incident served as a wake-up call for AI system developers and government agencies about the importance of robust testing and validation before deploying automated enforcement systems that can impact citizens' reputations and privacy.
Root Cause
The facial recognition system lacked the capability to distinguish between real human faces and printed images on advertisements, leading to false positive identification and automated public shaming.
Mitigation Analysis
This incident could have been prevented through liveness detection algorithms that verify active human presence, mandatory human review before public display of violations, or depth-sensing cameras to distinguish 2D images from 3D faces. The system should have included confidence thresholds and multi-factor verification before automated punishment.
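The safeguards above can be sketched as a decision gate that sits between detection and any public display. This is a minimal illustrative sketch, not the Ningbo system's actual logic: the `Detection` record, field names, and threshold values are all hypothetical, and a real deployment would calibrate thresholds through validation testing.

```python
from dataclasses import dataclass

# Hypothetical record produced by the camera pipeline (illustrative fields).
@dataclass
class Detection:
    face_confidence: float     # classifier confidence that a face was matched
    liveness_score: float      # score from a (hypothetical) liveness model
    depth_variation_mm: float  # depth spread across the face region (stereo/ToF)

# Illustrative thresholds; real values would come from validation testing.
FACE_CONF_MIN = 0.95
LIVENESS_MIN = 0.90
DEPTH_MIN_MM = 10.0  # a printed advertisement is nearly flat; a real face is not

def enforcement_decision(d: Detection) -> str:
    """Gate any public display behind multiple independent checks."""
    if d.face_confidence < FACE_CONF_MIN:
        return "discard"       # low-confidence match: take no action
    if d.liveness_score < LIVENESS_MIN or d.depth_variation_mm < DEPTH_MIN_MM:
        return "discard"       # likely a photo, screen, or advertisement
    return "human_review"      # never display publicly without an officer's sign-off

# The bus-advertisement case: a confident face match that is flat and non-live.
bus_ad = Detection(face_confidence=0.98, liveness_score=0.12, depth_variation_mm=0.4)
print(enforcement_decision(bus_ad))  # -> discard
```

Note that even a detection passing every automated check routes to human review rather than straight to the shame screen, reflecting the report's point that no human review was in place before automated punishment.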
Lessons Learned
This incident demonstrates the critical importance of liveness detection and multi-modal verification in facial recognition systems, especially when used for automated enforcement with public consequences. It highlights the risks of deploying AI systems without adequate safeguards against false positives in high-stakes applications.
Sources
Chinese facial recognition AI mistakes bus ad for jaywalker
BBC · Nov 21, 2018 · news
Chinese AI facial recognition system mistakes bus ad for famous businesswoman jaywalker
South China Morning Post · Nov 21, 2018 · news