Workday AI Hiring Tool Faces Expanded Class Action for Race, Age, and Disability Discrimination
Severity
High
Workday's AI hiring platform faces an expanded class action lawsuit alleging systematic discrimination against racial minorities, older workers, and disabled applicants across hundreds of client companies. The case could set significant legal precedent for AI employment discrimination liability.
Category
Bias
Industry
HR / Recruiting
Status
Litigation Pending
Date Occurred
Jan 1, 2020
Date Reported
Jan 15, 2025
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
API Integration
Harm Type
Legal
Estimated Cost
$50,000,000
People Affected
100,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
hiring, recruitment, bias, discrimination, class-action, workday, employment-law, ai-ethics, algorithmic-bias
Full Description
The Mobley v. Workday class action lawsuit, originally filed in 2022, has expanded significantly in 2025 with additional plaintiffs and broader allegations of systematic discrimination. The lawsuit now includes hundreds of job seekers who claim Workday's AI-powered hiring algorithms unlawfully screened them out based on race, age, and disability status. The platform, used by major corporations including Tesla, Goldman Sachs, and Delta Air Lines, processes millions of job applications annually through automated screening tools.
The expanded complaint alleges that Workday's algorithms were trained on historical hiring data that embedded decades of workplace discrimination, causing the AI system to perpetuate and amplify these biases. Plaintiffs claim the system systematically ranked candidates from protected classes lower than similarly qualified white, younger, and non-disabled applicants. Internal documents reportedly show that Workday was aware of potential bias issues but failed to implement adequate safeguards or conduct proper algorithmic auditing.
The case has grown to include over 100,000 potential class members who applied for jobs through Workday's platform between 2020 and 2024. Plaintiffs are seeking damages estimated at $50 million, along with injunctive relief requiring Workday to redesign its algorithms and implement bias monitoring systems. The lawsuit alleges violations of Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act.
The legal implications extend beyond Workday to the broader AI industry. The case represents one of the first major class action lawsuits targeting AI bias in hiring at scale, potentially establishing precedent for how courts will handle algorithmic discrimination claims. Employment law experts note that the case could clarify whether companies can be held liable for the discriminatory outcomes of third-party AI tools they deploy.
Workday has denied the allegations, arguing that its tools are designed to reduce bias in hiring and that any disparate outcomes result from legitimate job-related factors. The company claims it conducts regular bias testing and provides clients with guidance on fair hiring practices. However, plaintiffs' attorneys argue that these measures are inadequate given the scale and systematic nature of the alleged discrimination.
The case is being closely watched by both the tech industry and civil rights advocates as it could establish important legal standards for AI accountability in employment decisions. The outcome may influence how companies approach AI deployment in hiring and what safeguards are legally required to prevent discriminatory outcomes.
Root Cause
The AI hiring algorithms were trained on historical hiring data that reflected existing workplace discrimination patterns, causing the system to perpetuate and amplify bias against protected classes in the screening and ranking of job candidates.
Mitigation Analysis
Regular bias testing and algorithmic auditing could have identified discriminatory patterns. Human review of AI recommendations, especially for protected class candidates, would have caught systematic exclusions. Diverse training data and fairness constraints in model development could have prevented biased outcomes from the start.
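The kind of bias testing described above is often operationalized with the EEOC's "four-fifths rule": if a group's selection rate falls below 80% of the highest group's rate, the outcome is flagged for adverse-impact review. A minimal sketch of such an audit is shown below; the group names and counts are illustrative assumptions, not data from the case record, and a production audit would also apply statistical significance tests.

```python
# Hypothetical adverse-impact audit sketch using the EEOC four-fifths rule.
# All group names and counts below are illustrative, not from the lawsuit.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applied)} -> {group: selection rate}"""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` times
    the highest group's rate (the classic four-fifths red flag)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    # Illustrative automated-screening outcomes per demographic group
    outcomes = {
        "group_a": (300, 1000),  # 30% pass the automated screen
        "group_b": (180, 1000),  # 18% pass -> ratio 0.6, flagged
    }
    print(four_fifths_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

Run periodically against screening logs, a check like this surfaces exactly the kind of systematic exclusion the plaintiffs allege; flagged disparities would then be escalated to human review.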
Lessons Learned
The case demonstrates that AI systems can perpetuate historical discrimination at unprecedented scale when deployed without adequate bias testing and human oversight. It highlights the legal risks companies face when using third-party AI tools for employment decisions and the need for proactive algorithmic auditing in high-stakes applications.
Sources
Workday AI Hiring Lawsuit Expands with Discrimination Claims
Reuters · Jan 15, 2025 · news
AI Hiring Tools Face Largest Discrimination Lawsuit Yet
Washington Post · Jan 16, 2025 · news