AI Translation Errors in Border Control Led to Wrongful Detentions
Severity
High
AI translation tools used by U.S. border control agents produced incorrect translations of Arabic and other languages, leading to wrongful detentions of travelers based on false criminal associations created by translation errors.
Category
Bias
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2023
Date Reported
Aug 15, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Embedded
Harm Type
Legal
People Affected
100
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
Tags
translation, border_control, detention, bias, due_process, immigration, arabic, government_ai, civil_rights
Full Description
U.S. Customs and Border Protection (CBP) and other immigration agencies have increasingly relied on AI-powered translation tools to process the growing volume of multilingual communications and documents at ports of entry. These systems are embedded in various border control applications, including the CBP One mobile app used for asylum appointments and general translation needs during inspections. However, reports emerged in 2023 documenting systematic errors in these AI translation systems, particularly when processing Arabic, Farsi, and other Middle Eastern languages.
The errors typically manifested as contextual mistranslations that created false associations with criminal activity or terrorism. For example, common Arabic phrases or names were incorrectly translated in ways that suggested connections to illegal activity, leading border agents to flag travelers for additional screening, detention, and interrogation. In several documented cases, travelers were held for hours or days based solely on these mistranslations, and some were ultimately denied entry despite having valid documentation and no actual security concerns.
The problem was compounded by the lack of human oversight in the translation process. Border agents, often lacking language expertise, relied heavily on the AI-generated translations without verification. The high-pressure environment at border crossings discouraged questioning the AI outputs, and supervisory review of AI-assisted decisions was not systematically required. This created a feedback loop where incorrect AI translations directly led to enforcement actions without adequate human validation.
Civil rights organizations and immigration lawyers began documenting these cases, revealing a pattern of discriminatory impact on Arabic-speaking travelers and others from Middle Eastern countries. The American Civil Liberties Union and other advocacy groups filed complaints highlighting due process violations and the disproportionate impact on specific ethnic and linguistic communities. Legal challenges focused on the lack of transparency in AI decision-making and the absence of meaningful human review before detention actions were taken on the basis of AI translations.
The incidents have broader implications for AI use in law enforcement and immigration contexts. They highlight the particular risks when AI systems trained on limited or biased datasets are deployed in high-stakes government applications without adequate oversight. The translation errors appear to stem both from technical limitations in natural language processing for certain languages and from potential bias in training data that associated certain terms or phrases with security threats without proper contextual understanding.
Root Cause
AI translation systems embedded in border control applications produced inaccurate translations of Arabic and other languages, particularly failing to understand context and cultural nuances, leading to false associations with criminal or terrorist activity when none existed.
Mitigation Analysis
This incident could have been prevented through mandatory human review of all AI translations before taking enforcement action, especially for high-stakes decisions like detention. Additional controls should include cultural competency testing of translation models, bias auditing for different languages and dialects, and requiring multiple translation sources for critical communications. Training border agents on AI limitations and requiring supervisor approval for AI-assisted detentions would also reduce harm.
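The "multiple translation sources" and mandatory-review controls described above can be sketched in code. The following is a minimal, hypothetical illustration (the similarity threshold and routing labels are assumptions, not drawn from any real CBP system): two or more independent translation outputs are compared, and no enforcement action is authorized automatically; agreement merely determines whether standard or escalated human review is required.

```python
# Hypothetical sketch of a multi-source translation check with a
# human-review gate. An enforcement action is never triggered by the
# AI output alone: agreeing translations still require a reviewer,
# and disagreeing translations are escalated to senior review.
from difflib import SequenceMatcher

AGREEMENT_THRESHOLD = 0.85  # assumed cutoff for "the engines agree"


def translations_agree(candidates: list[str]) -> bool:
    """True only if every pair of candidate translations is similar."""
    normalized = [c.strip().lower() for c in candidates]
    for i in range(len(normalized)):
        for j in range(i + 1, len(normalized)):
            ratio = SequenceMatcher(None, normalized[i], normalized[j]).ratio()
            if ratio < AGREEMENT_THRESHOLD:
                return False
    return True


def route_decision(candidates: list[str]) -> str:
    """Route every case to a human; disagreement forces escalation."""
    if translations_agree(candidates):
        return "human_review"        # reviewer confirms before any action
    return "escalated_human_review"  # conflicting outputs: senior review


# Conflicting translations of the same utterance must be escalated.
print(route_decision(["he is a farmer", "he is a fighter"]))
# Consistent translations still go through a reviewer, never straight
# to enforcement.
print(route_decision(["he is a farmer", "he is a farmer"]))
```

The key design choice mirrored from the mitigation above is that the function's output space contains no "act automatically" branch: the AI comparison only chooses between levels of human review.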
Lessons Learned
This incident demonstrates the critical need for human oversight when AI systems are used in high-stakes government decisions, particularly those affecting individual liberty and due process rights. The discriminatory impact on specific linguistic communities highlights how AI bias can compound existing inequities in law enforcement and immigration contexts.
Sources
AI Translation Errors Create Civil Rights Crisis at Border
ACLU · Aug 15, 2023 · company statement
Border Agents' AI Tools Mistranslate Languages, Leading to Wrongful Detentions
Washington Post · Sep 12, 2023 · news