Palantir AI Platform Deployed Across UK NHS Without Adequate Public Consultation
Severity
High
Palantir's AI platform was deployed across the UK NHS to process millions of patient records during COVID-19 without adequate public consultation or GDPR compliance, raising significant privacy concerns about a surveillance company handling sensitive health data.
Category
data_governance
Industry
Healthcare
Status
Resolved
Date Occurred
Mar 1, 2020
Date Reported
Apr 21, 2020
Jurisdiction
UK
AI Provider
Other/Unknown
Application Type
other
Harm Type
privacy
People Affected
66,000,000
Human Review in Place
Unknown
Litigation Filed
No
Regulatory Body
Information Commissioner's Office
Tags
NHS, healthcare, privacy, GDPR, emergency_procurement, surveillance, patient_data, COVID-19, data_protection
Full Description
In March 2020, during the early stages of the COVID-19 pandemic, the UK's National Health Service rapidly deployed Palantir Technologies' AI-powered data analytics platform to help manage the health crisis. The deployment was part of the NHS's COVID-19 data store, designed to aggregate and analyze patient data across the health system to support resource planning and patient care coordination. However, the implementation proceeded without standard procurement processes, public consultation, or adequate data protection safeguards typically required for such sensitive data handling.
Palantir, a US-based company with deep ties to intelligence agencies and surveillance operations, gained access to process millions of NHS patient records through its Foundry platform. The system was designed to integrate data from multiple NHS sources including patient records, hospital capacity data, and COVID-19 testing results. Privacy campaigners and digital rights organizations immediately raised concerns about the lack of transparency in the procurement process and the appropriateness of allowing a surveillance-focused company to handle sensitive health data of UK citizens.
The deployment violated several key principles of the EU's General Data Protection Regulation (GDPR), which remained applicable in the UK. Critics highlighted the absence of a proper data protection impact assessment (DPIA), which is mandatory under GDPR for high-risk processing of personal data. Additionally, there was no meaningful public consultation about the use of patient data for AI analytics, despite the sensitive nature of health information and the scale of data processing involved affecting virtually the entire UK population.
Investigations by privacy advocates revealed that the NHS had granted Palantir broad access to patient data without implementing adequate technical and organizational measures to protect privacy. The company's business model, built around providing data analytics to intelligence agencies and law enforcement, raised questions about data governance and the potential for function creep beyond the stated COVID-19 response purposes. The Information Commissioner's Office received multiple complaints about the deployment and initiated inquiries into data protection compliance.
The incident highlighted significant gaps in the UK's emergency procurement processes for AI systems handling personal data. The rapid deployment under emergency conditions bypassed normal oversight mechanisms, creating precedent for future health crises where privacy protections might be similarly compromised. The controversy contributed to broader debates about the appropriate role of private technology companies in public health data management and the need for stronger safeguards when deploying AI systems in healthcare settings.
Root Cause
NHS rapidly deployed Palantir's AI data analytics platform during COVID-19 emergency without following standard procurement processes, public consultation, or conducting adequate data protection impact assessments required under GDPR.
Mitigation Analysis
Implementation of mandatory data protection impact assessments, public consultation requirements for health data platforms, and enhanced oversight of emergency procurement could have prevented these privacy concerns. Requiring explicit consent mechanisms and data minimization principles would have reduced the scope of potential harm. Independent privacy audits before deployment would have identified risks early.
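To illustrate the data minimization principle mentioned above, the following is a minimal, hypothetical sketch (not any system actually used by the NHS or Palantir): records are stripped to an explicit allow-list of fields needed for capacity planning, and the direct identifier is replaced with a salted one-way hash. All field names, the salt handling, and the `minimize` helper are illustrative assumptions.

```python
import hashlib

# Hypothetical example only; field names and salt are illustrative.
SALT = "rotate-me-per-dataset"  # in practice, a managed secret, not a literal
ALLOWED_FIELDS = {"age_band", "region", "test_result", "admission_date"}

def pseudonymize_id(nhs_number: str) -> str:
    """Replace a direct identifier with a truncated salted one-way hash."""
    return hashlib.sha256((SALT + nhs_number).encode()).hexdigest()[:16]

def minimize(record: dict) -> dict:
    """Keep only allow-listed fields; swap the identifier for a pseudonym."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    out["patient_ref"] = pseudonymize_id(record["nhs_number"])
    return out

raw = {
    "nhs_number": "9434765919",
    "name": "Jane Doe",          # direct identifier: dropped entirely
    "age_band": "60-69",
    "region": "London",
    "test_result": "positive",
    "admission_date": "2020-03-28",
}
shared = minimize(raw)  # only this reduced record would leave the controller
```

Note that salted hashing is pseudonymization, not anonymization: under GDPR the output remains personal data, so the measures above reduce exposure but do not remove the need for a DPIA or a lawful basis.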
Lessons Learned
Emergency conditions do not justify bypassing fundamental data protection requirements when deploying AI systems. Healthcare organizations must maintain robust privacy safeguards even during crisis response, and public consultation remains essential when private companies gain access to population-scale health data.
Sources
NHS coronavirus app: concerns over privacy and role of Palantir
The Guardian · Apr 21, 2020 · news
Palantir and the NHS: a data scandal in the making?
openDemocracy · May 11, 2020 · news