CBP Facial Recognition Systems Show Racial and Demographic Bias in Border Screening
Severity
High
CBP's facial recognition systems at US border crossings demonstrated significant bias, with higher error rates for people of color, women, and elderly travelers. GAO investigations revealed systematic disparities affecting millions of border crossers annually.
Category
Bias
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2019
Date Reported
Aug 1, 2019
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Embedded
Harm Type
Operational
People Affected
274,000,000
Human Review in Place
Yes
Litigation Filed
No
Regulatory Body
Government Accountability Office
facial_recognition · government · bias · border_security · discrimination · CBP · demographics
Full Description
US Customs and Border Protection (CBP) deployed facial recognition technology across major border crossings and airports starting in 2019 as part of the Biometric Entry-Exit Program, processing hundreds of millions of travelers annually. The system was designed to verify identities by comparing live facial scans against passport photos and databases of known individuals. By August 2019, Government Accountability Office investigations revealed that these systems demonstrated significant racial and demographic bias affecting an estimated 274 million people who cross US borders each year.
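Verification systems of this kind typically compute a similarity score between a live face embedding and the embedding of the document photo, accepting the traveler only if the score clears a fixed threshold. The sketch below is a minimal illustration with hypothetical function names and an arbitrary threshold, not a description of CBP's actual implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_traveler(live_embedding: np.ndarray,
                    passport_embedding: np.ndarray,
                    threshold: float = 0.6) -> bool:
    """Accept the traveler if the live capture matches the document photo.

    `threshold` is a hypothetical operating point; raising it reduces false
    accepts at the cost of more false rejects, which send genuine travelers
    to manual secondary inspection.
    """
    return cosine_similarity(live_embedding, passport_embedding) >= threshold
```

A false rejection occurs when a genuine traveler's score falls below the threshold, which is the failure mode the GAO found to be unevenly distributed across demographic groups.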
GAO investigations and academic studies documented significant disparities in the system's performance across demographic groups. The technology showed notably higher false rejection rates for people of color, particularly those with darker skin tones, as well as for women and elderly travelers. These failures occurred because the facial recognition algorithms were trained on datasets with insufficient demographic diversity, producing systematic errors for faces that differed from the predominantly white, male, and younger faces in the training data. The systems struggled in particular with darker skin tones, facial features associated with different ethnicities, and age-related facial changes.
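Disparities of this kind are usually quantified by disaggregating the false rejection rate (genuine travelers incorrectly rejected) by demographic group. A minimal sketch, assuming verification outcomes are logged with demographic labels (the record fields here are hypothetical):

```python
from collections import defaultdict

def false_rejection_rates(records):
    """Compute the false rejection rate (FRR) for each demographic group.

    Each record is assumed to look like (hypothetical fields):
        {"group": "label", "genuine": True, "accepted": False}
    where `genuine` means the traveler truly matched their document.
    """
    genuine = defaultdict(int)
    rejected = defaultdict(int)
    for r in records:
        if r["genuine"]:
            genuine[r["group"]] += 1
            if not r["accepted"]:
                rejected[r["group"]] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}
```

Comparing the resulting per-group rates is what reveals the kind of disparity described above, for example one group being rejected several times more often than another despite presenting valid documents.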
The operational impact was substantial, forcing affected travelers into secondary screening processes that caused significant delays and potential complications with border entry. Non-white travelers, women, and elderly individuals experienced disproportionately longer processing times due to false rejections requiring manual verification. The bias created a two-tiered system where certain demographic groups faced systematically different treatment at border crossings, potentially affecting travel patterns and creating discriminatory barriers to entry. Civil rights organizations documented cases where the technology's failures led to increased scrutiny and questioning of travelers from affected groups.
Following the GAO's findings, CBP acknowledged the performance disparities but maintained that the technology provided overall security benefits. The agency stated it was working with vendors to improve algorithmic accuracy across demographic groups and implementing additional testing procedures. CBP also indicated plans to collect more diverse training data and establish bias mitigation strategies, though specific timelines and implementation details remained limited in public disclosures.
The incident highlighted broader systemic issues with government adoption of biometric technologies without adequate bias testing. The GAO's recommendations included requirements for demographic performance testing before deployment, regular auditing of system accuracy across different groups, and establishment of bias mitigation protocols. This case became a key reference point for subsequent discussions about algorithmic accountability in government systems and contributed to growing calls for federal standards on AI bias testing in high-stakes applications.
The controversy occurred amid broader national discussions about facial recognition technology and civil liberties, with several cities and states considering or implementing restrictions on government use of such systems. The CBP incident provided concrete evidence of how algorithmic bias could systematically disadvantage minority communities in critical government services, influencing ongoing policy debates about the appropriate use and regulation of AI systems in public sector applications.
Root Cause
Facial recognition algorithms trained on datasets with insufficient representation of diverse demographics, leading to higher error rates for underrepresented groups. The systems struggled with darker skin tones, facial features associated with different ethnicities, and age-related changes.
Mitigation Analysis
More diverse training datasets including balanced representation across race, gender, and age groups could have reduced disparities. Regular algorithmic auditing and bias testing across demographic groups would have identified these issues earlier. Mandatory human review for all flagged cases and performance monitoring disaggregated by demographic characteristics could have mitigated discriminatory outcomes.
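One way such auditing could be operationalized is a periodic check that flags any group whose false rejection rate exceeds the best-performing group's rate by more than an agreed ratio. The sketch below is illustrative only; the 1.5x ratio is an arbitrary example, not a GAO or CBP standard:

```python
def audit_disparity(frr_by_group, max_ratio=1.5):
    """Flag groups whose FRR exceeds the lowest observed FRR by `max_ratio`.

    `frr_by_group` maps a demographic label to its false rejection rate,
    e.g. the output of the disaggregated monitoring sketched above.
    """
    baseline = min(frr_by_group.values())
    flagged = []
    for group, frr in frr_by_group.items():
        if (baseline > 0 and frr / baseline > max_ratio) or (baseline == 0 and frr > 0):
            flagged.append(group)
    return flagged

# Example: audit_disparity({"group_a": 0.010, "group_b": 0.034}) -> ["group_b"]
```

Running a check like this on a regular schedule, and before any new model is deployed, is the kind of disaggregated performance monitoring the mitigation analysis describes.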
Lessons Learned
Government deployment of AI systems requires comprehensive bias testing across all demographic groups before implementation. Regular auditing and diverse training data are essential for preventing discriminatory outcomes in high-stakes applications affecting millions of people.
Sources
Facial Recognition Technology: Current and Planned Uses by Federal Agencies
Government Accountability Office · Aug 24, 2020 · regulatory action
Federal study confirms racial bias in many facial recognition systems
Washington Post · Dec 19, 2019 · news