Palantir's AI System Used by ICE for Immigration Enforcement Targeting
Severity
High
Palantir's AI system enabled ICE to analyze data from schools and social services to identify and target undocumented immigrants for deportation, raising significant civil liberties concerns.
Category
Bias
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2017
Date Reported
May 2, 2017
Jurisdiction
US
AI Provider
Other/Unknown
Model
Investigative Case Management (ICM)
Application Type
agent
Harm Type
legal
People Affected
30,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
ongoing
Regulatory Body
Congressional oversight committees
Tags
government_surveillance, immigration_enforcement, data_integration, civil_liberties, palantir, ice, algorithmic_bias
Full Description
In 2017, U.S. Immigration and Customs Enforcement (ICE) expanded its use of Palantir Technologies' Investigative Case Management (ICM) system, an AI-powered platform that aggregated and analyzed vast amounts of personal data to identify and track undocumented immigrants. The system integrated data from multiple sources including school enrollment records, social services databases, utility company records, license plate readers, and financial transaction data to create comprehensive profiles of individuals and their networks.
The ICM system utilized machine learning algorithms to identify patterns and connections that human analysts might miss, enabling ICE agents to locate undocumented immigrants more efficiently. The platform could cross-reference various databases to predict where individuals might be found, identify family members and associates, and prioritize enforcement actions. This capability significantly enhanced ICE's operational efficiency, contributing to a marked increase in immigration arrests and deportations during this period.
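The cross-referencing described above can be illustrated with a minimal sketch. This is not Palantir's actual implementation; the data sources, field names, and linkage key (a shared address) are all hypothetical, chosen only to show how records from unrelated databases can be joined into a single profile.

```python
# Illustrative sketch (hypothetical data and fields, not the real ICM
# system): linking records from separate databases on a shared address.
from collections import defaultdict

# Hypothetical records from three unrelated sources.
school_records = [
    {"guardian": "A. Doe", "address": "12 Elm St"},
]
utility_records = [
    {"customer": "A. Doe", "address": "12 Elm St"},
    {"customer": "B. Roe", "address": "9 Oak Ave"},
]
plate_reads = [
    {"plate": "XYZ123", "address": "12 Elm St"},
]

def build_profiles(*sources):
    """Group records from all sources by shared address to form profiles."""
    profiles = defaultdict(list)
    for source_name, records in sources:
        for record in records:
            profiles[record["address"]].append((source_name, record))
    return dict(profiles)

profiles = build_profiles(
    ("school", school_records),
    ("utility", utility_records),
    ("lpr", plate_reads),
)
# The address "12 Elm St" now links a school record, a utility account,
# and a license-plate read: three seemingly unrelated data points.
```

Even this toy join shows why combining service-delivery data with enforcement data is so consequential: a single shared attribute is enough to connect a household across systems that were never designed to talk to each other.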
Civil liberties organizations, including the ACLU, documented how the system's use of data from schools and social services created a chilling effect on immigrant communities. Families became afraid to enroll children in school, seek medical care, or access social services for fear of being identified and targeted for deportation. The Immigrant Defense Project and other advocacy groups documented cases where individuals were arrested after accessing public services, suggesting the AI system was successfully connecting seemingly unrelated data points.
The controversy intensified when internal Palantir documents revealed that some employees had raised concerns about the company's work with ICE, leading to internal protests and employee departures. Public advocacy campaigns, including demonstrations at Palantir offices and shareholder actions, pressured the company to reconsider its government contracts. Palantir, however, defended its work as supporting legitimate law enforcement activities, maintaining that it provided technology tools rather than making enforcement decisions.
Congressional oversight committees launched investigations into the use of AI systems for immigration enforcement, questioning both the effectiveness and civil liberties implications of such technologies. The investigations revealed that the system processed data on millions of individuals, including U.S. citizens and legal residents who were not targets of enforcement actions but whose information was captured in the analysis. Multiple civil rights organizations filed lawsuits challenging the data collection and algorithmic targeting practices, arguing they violated constitutional protections and civil rights laws.
Root Cause
Palantir's AI system aggregated data from multiple sources including schools, social services, and utility companies to create profiles enabling targeted enforcement actions against immigrant communities without proper privacy protections or bias testing.
Mitigation Analysis
Proper algorithmic bias testing could have identified discriminatory targeting patterns. Data minimization principles should have limited collection to immigration-specific sources. Human rights impact assessments and civil liberties review processes could have prevented misuse of social services data for enforcement targeting.
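One form the algorithmic bias testing mentioned above could take is a selection-rate disparity check. The sketch below uses the "four-fifths rule" heuristic common in disparate-impact analysis; the audit data and the 0.8 threshold are illustrative assumptions, not anything drawn from the actual system.

```python
# Illustrative bias test (hypothetical data): does a prioritization model
# flag one demographic group at a disproportionate rate?

def selection_rates(decisions):
    """decisions: list of (group, was_selected) -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: (demographic group, flagged for enforcement?)
audit = [("a", True), ("a", True), ("a", False),
         ("b", True), ("b", False), ("b", False)]

ratio = disparate_impact_ratio(audit)
# Under the four-fifths heuristic, a ratio below 0.8 would flag a
# potentially discriminatory targeting pattern for human review.
```

A check like this is cheap to run on audit logs, which is the point of the mitigation analysis: the discriminatory patterns were detectable, had anyone been required to look.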
Litigation Outcome
Multiple ongoing civil rights lawsuits challenging the data collection and targeting practices
Lessons Learned
Government use of AI for enforcement activities requires robust civil liberties protections and algorithmic accountability measures. Data integration across social services and enforcement agencies creates significant risks for vulnerable populations and can undermine public trust in essential services.
Sources
Palantir Knows Everything About You
The Nation · Apr 2, 2018 · news
Palantir and ICE: A Dangerous Partnership
ACLU · Aug 15, 2019 · regulatory action