AI-Powered Border Surveillance Systems Generate False Alerts on Legitimate Asylum Seekers

Severity
High

AI surveillance towers deployed by CBP along the US-Mexico border generated false alerts flagging legitimate asylum seekers as threats. The systems demonstrated algorithmic bias, leading to unnecessary detentions and civil rights concerns.

Category
Bias
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2022
Date Reported
Mar 15, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Embedded
Harm Type
Legal
People Affected
500
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
border_security, surveillance, immigration, algorithmic_bias, false_positives, civil_rights, asylum_seekers, government_ai, CBP

Full Description

U.S. Customs and Border Protection (CBP) has deployed an extensive network of AI-powered surveillance systems along the US-Mexico border as part of its modernization efforts. These systems, developed by companies including Anduril Industries and Elbit Systems, utilize advanced sensors, thermal imaging, and machine learning algorithms to detect and classify border crossings. The Integrated Fixed Towers (IFT) program and Autonomous Surveillance Towers (AST) represent a multi-billion dollar investment in automated border security technology.

Beginning in 2022, reports emerged of these AI systems generating significant numbers of false positive alerts, particularly involving legitimate asylum seekers and humanitarian cases. Civil rights organizations documented instances where families with children, unaccompanied minors, and individuals clearly seeking to surrender to authorities were flagged by the AI systems as potential threats or smugglers. The algorithms appeared to exhibit bias in their threat assessment protocols, disproportionately targeting individuals based on group size, movement patterns, or demographic characteristics.

The false alerts had serious humanitarian consequences, leading to prolonged detentions, delayed asylum processing, and in some cases, separation of families. Border Patrol agents reported being overwhelmed by the volume of AI-generated alerts, many of which proved to be false positives upon investigation. This created operational inefficiencies and diverted resources from genuine security threats. Immigration attorneys documented cases where clients faced additional scrutiny and processing delays as a direct result of being initially flagged by automated surveillance systems.

The American Civil Liberties Union and other civil rights organizations filed lawsuits challenging the accuracy and constitutional implications of the AI surveillance systems. They argued that the technology violated due process rights and disproportionately impacted vulnerable populations seeking asylum. The litigation highlighted the lack of transparency in how the AI algorithms made threat determinations and the absence of adequate human oversight in the alert review process.

CBP maintained that the systems enhanced border security capabilities while acknowledging the need for ongoing refinements to reduce false positives. The incident exposed broader concerns about the deployment of AI surveillance technology in immigration enforcement without adequate bias testing or human rights safeguards. Congressional hearings examined the procurement process for these systems and questioned whether sufficient due diligence had been conducted regarding their accuracy and potential for discriminatory outcomes. The Government Accountability Office initiated a review of CBP's AI acquisition and deployment practices.

Root Cause

The AI surveillance systems exhibited algorithmic bias in threat detection, generating disproportionately high false-positive rates for individuals from specific demographic groups and for those exhibiting movement patterns associated with asylum-seeking behavior. The systems also lacked adequate training data representing legitimate border crossings.
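The kind of disparity described here can be surfaced with a simple per-group false-positive audit. The following is a minimal sketch assuming hypothetical encounter records with a group label, an alert flag, and a ground-truth outcome; the field names and example figures are illustrative and do not come from CBP data.

```python
# Hypothetical sketch: measuring false-positive disparity across groups.
# Field names ("group", "alerted", "actual_threat") and the toy records
# are illustrative assumptions, not CBP data.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Return per-group FPR: alerts raised on non-threat encounters,
    divided by all non-threat encounters in that group."""
    false_pos = defaultdict(int)
    negatives = defaultdict(int)
    for r in records:
        if not r["actual_threat"]:          # ground truth: not a threat
            negatives[r["group"]] += 1
            if r["alerted"]:                # system still raised an alert
                false_pos[r["group"]] += 1
    return {g: round(false_pos[g] / negatives[g], 2) for g in negatives}

# Toy example: family groups are flagged far more often than single adults.
records = [
    {"group": "family_group", "alerted": True,  "actual_threat": False},
    {"group": "family_group", "alerted": True,  "actual_threat": False},
    {"group": "family_group", "alerted": False, "actual_threat": False},
    {"group": "single_adult", "alerted": True,  "actual_threat": False},
    {"group": "single_adult", "alerted": False, "actual_threat": False},
    {"group": "single_adult", "alerted": False, "actual_threat": False},
]
print(false_positive_rate_by_group(records))
# {'family_group': 0.67, 'single_adult': 0.33} -- a gap like this is the
# bias pattern described above.
```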

Mitigation Analysis

Implementation of mandatory human review protocols for all AI-generated alerts could have prevented wrongful detentions. Bias testing across different demographic groups and movement patterns during system development would have identified discriminatory outcomes. Real-time monitoring dashboards tracking false positive rates by demographic categories could enable rapid bias detection and correction.
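A minimal sketch of how these safeguards could fit together is shown below, assuming a hypothetical alert pipeline in which every AI-generated alert must pass human review before it stands, and a monitoring check flags any category whose false-positive share drifts well above the overall rate. Class names, fields, and the disparity tolerance are illustrative assumptions, not features of any deployed CBP system.

```python
# Hypothetical sketch of the two safeguards described above: a mandatory
# human-review gate for every alert, and a monitoring check that flags a
# demographic category whose false-positive rate exceeds the overall rate
# by a configured tolerance. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    category: str              # e.g. "family_group", "single_adult"
    reviewed: bool = False
    confirmed_threat: bool = False

class AlertPipeline:
    def __init__(self, disparity_tolerance=1.5):
        self.alerts = []
        self.disparity_tolerance = disparity_tolerance

    def raise_alert(self, alert):
        # No enforcement action here: the alert only enters a review queue.
        self.alerts.append(alert)

    def human_review(self, alert_id, confirmed):
        # A human reviewer, not the model, decides whether the alert stands.
        for a in self.alerts:
            if a.alert_id == alert_id:
                a.reviewed, a.confirmed_threat = True, confirmed

    def flagged_categories(self):
        """Return categories whose false-positive share exceeds the overall
        reviewed rate by more than the configured tolerance."""
        reviewed = [a for a in self.alerts if a.reviewed]
        if not reviewed:
            return []
        overall_fp = sum(not a.confirmed_threat for a in reviewed) / len(reviewed)
        flagged = []
        for cat in {a.category for a in reviewed}:
            subset = [a for a in reviewed if a.category == cat]
            fp = sum(not a.confirmed_threat for a in subset) / len(subset)
            if overall_fp and fp > overall_fp * self.disparity_tolerance:
                flagged.append(cat)
        return flagged
```

Keeping the review decision outside the model and comparing each category's false-positive share against the overall rate are the simplest forms of the human-oversight and bias-monitoring controls this analysis calls for.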

Litigation Outcome

The ACLU filed a lawsuit challenging the accuracy and civil rights implications of the AI surveillance systems at the border; the case remains pending.

Lessons Learned

The incident demonstrates the critical need for bias testing and human rights impact assessments before deploying AI surveillance systems in immigration enforcement. It highlights how algorithmic bias can exacerbate existing vulnerabilities for asylum seekers and underscores the importance of maintaining human oversight in high-stakes automated decision-making systems.

Sources

ACLU Challenges Biased AI Surveillance at US-Mexico Border
American Civil Liberties Union · Feb 28, 2023 · company statement