
AI Video Interview Platforms Discriminated Against Candidates with Disabilities Through Biased Scoring Algorithms

Severity
High

AI video interview platforms by HireVue, myInterview, and Pymetrics systematically discriminated against candidates with disabilities by penalizing atypical speech, facial expressions, and eye movements. The EEOC issued guidance and multiple lawsuits were filed.

Category
Bias
Industry
HR / Recruiting
Status
Ongoing
Date Occurred
Jan 1, 2019
Date Reported
May 12, 2021
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Other
Harm Type
Legal
People Affected
10,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
Regulatory Body
Equal Employment Opportunity Commission (EEOC)
Tags
disability_discrimination, hiring_bias, facial_recognition, speech_analysis, EEOC, ADA_violation, algorithmic_auditing, employment_screening

Full Description

Beginning around 2019, AI-powered video interview platforms from HireVue, myInterview, and Pymetrics saw widespread adoption among Fortune 500 companies for automated candidate screening. These platforms used machine learning algorithms to analyze candidates' facial expressions, voice patterns, word choice, and eye movements to generate employability scores, often without human oversight in initial screening phases.

Research by disability rights organizations and academic institutions documented systematic discrimination against candidates with various disabilities. Candidates with autism spectrum disorders were penalized for limited eye contact and atypical facial expressions. Those with speech disabilities, including stuttering or vocal cord paralysis, received lower scores from speech pattern analysis. Candidates with ADHD or anxiety disorders were marked down for fidgeting or unusual movement patterns. The algorithms had been trained primarily on neurotypical behavioral patterns and lacked accommodation protocols.

In May 2021, the Electronic Privacy Information Center (EPIC) filed a complaint with the Federal Trade Commission highlighting these discriminatory practices. At the same time, disability advocacy groups documented cases in which qualified candidates with disclosed disabilities were rejected at higher rates under AI screening than under traditional interviews. The Wall Street Journal and other major publications investigated and confirmed these patterns across multiple platforms. The Equal Employment Opportunity Commission responded in May 2022 with technical assistance documents specifically addressing AI bias in hiring, emphasizing that employers using AI tools remain liable for discriminatory outcomes even when relying on third-party vendors.
The EEOC guidance clarified that reasonable accommodations must be provided in AI-assisted hiring processes and that employers must monitor for disparate impact on protected classes. HireVue faced the most scrutiny, ultimately retiring its facial analysis features in January 2021 following sustained pressure. However, the company continued using voice and language analysis that advocacy groups argued still discriminated against candidates with speech disabilities.

Multiple class action lawsuits were filed against major employers including Unilever, Goldman Sachs, and Hilton, alleging violations of the Americans with Disabilities Act through their use of these platforms. The incident highlighted broader issues with AI bias in employment screening and led to increased regulatory attention on algorithmic discrimination. Several states began considering legislation requiring algorithmic auditing for bias in hiring tools, while the EEOC increased enforcement actions related to AI-enabled discrimination.

Root Cause

AI algorithms trained on neurotypical behavioral patterns penalized candidates with disabilities whose speech, facial expressions, or eye movements deviated from trained norms. The systems lacked accommodation features and failed to account for disability-related differences in communication.

Mitigation Analysis

Human review of AI scores, especially for candidates requesting accommodations, could have identified discriminatory patterns. Bias testing across disability groups during algorithm development would have revealed systematic discrimination. Alternative assessment methods for candidates with disclosed disabilities and algorithmic auditing for disparate impact could have prevented widespread harm.
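The disparate-impact auditing described above can be approximated with a simple selection-rate comparison. A minimal sketch, using the "four-fifths rule" from the EEOC's Uniform Guidelines as a screening heuristic; all numbers and group labels below are illustrative assumptions, not data from the incident:

```python
# Hypothetical audit sketch: compare selection rates between an affected
# group and a reference group, flagging ratios below the EEOC's
# four-fifths (80%) threshold. Illustrative only.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / total if total else 0.0

def impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's rate to the reference group's rate."""
    return protected_rate / reference_rate if reference_rate else 0.0

def flags_disparate_impact(protected_rate: float,
                           reference_rate: float,
                           threshold: float = 0.8) -> bool:
    """True if the impact ratio falls below the four-fifths threshold."""
    return impact_ratio(protected_rate, reference_rate) < threshold

if __name__ == "__main__":
    # Assumed example: 30 of 120 candidates with disclosed disabilities
    # advanced, versus 450 of 900 candidates in the reference group.
    protected = selection_rate(30, 120)   # 0.25
    reference = selection_rate(450, 900)  # 0.50
    print(f"impact ratio: {impact_ratio(protected, reference):.2f}")
    print("flag for review:", flags_disparate_impact(protected, reference))
```

A check like this only flags aggregate disparity; it would not replace the human review of individual accommodation requests discussed above, and it requires that disability status be known or disclosed, which is itself a limitation.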

Litigation Outcome

Multiple discrimination complaints were filed with the EEOC, and class action lawsuits were initiated against major employers using these platforms.

Lessons Learned

AI systems trained on majority populations can systematically exclude minority groups without explicit bias testing. Regulatory frameworks must evolve to address algorithmic discrimination, particularly in high-stakes applications like employment screening where bias can perpetuate systemic exclusion.

Sources