NYC Bias Audits Reveal Disparities in Automated Hiring Systems Under Local Law 144

Severity
Medium

NYC's Local Law 144 mandating bias audits of automated hiring tools revealed significant demographic disparities in AI screening systems. Multiple companies' tools showed substantially different selection rates across protected groups, highlighting systemic bias in employment algorithms.

Category
Bias
Industry
HR / Recruiting
Status
Ongoing
Date Occurred
Date Reported
Jul 5, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
other
Harm Type
discrimination
Human Review in Place
No
Litigation Filed
No
Regulatory Body
NYC Department of Consumer and Worker Protection
hiring_bias, algorithmic_discrimination, nyc_law_144, employment_ai, bias_audit, regulatory_compliance, workplace_discrimination

Full Description

New York City's Local Law 144, which took effect in July 2023, requires employers using automated employment decision tools (AEDTs) to conduct annual bias audits and publicly post the results. The law represents one of the first comprehensive municipal regulations of AI hiring systems and mandates disclosure of selection rates by race, ethnicity, and gender. As companies began complying with the audit requirements, several bias audits revealed troubling disparities in automated hiring systems.

The bias audits, conducted by third-party auditors, examined selection rates across different demographic groups for various automated hiring tools, including resume screening algorithms, video interview analysis systems, and skills assessment platforms. Multiple companies reported selection rate disparities that exceeded acceptable thresholds, with some tools showing significantly lower selection rates for certain racial and ethnic groups compared to the most selected group. The audits measured both the selection rate (the percentage of candidates selected) and the impact ratio (the ratio of selection rates between groups).

The revealed disparities highlighted systematic issues with AI hiring tools that had been operating without transparency or oversight. Many of the biased systems appeared to be perpetuating historical hiring patterns and workplace inequalities embedded in training data. The automated tools were making screening decisions based on patterns that correlated with demographic characteristics, effectively discriminating against protected groups even when those characteristics weren't explicitly considered by the algorithms.

The implementation of Local Law 144 created unprecedented transparency in algorithmic hiring practices, but also revealed the widespread nature of bias in employment AI systems. Companies found to have disparate impact were required to either modify their tools to reduce bias or provide alternative selection processes.
The law established ongoing monitoring requirements and gave job candidates new rights to request information about how automated tools evaluate their applications. However, enforcement mechanisms remained limited, and many advocacy groups argued the law's impact ratio thresholds were too permissive to adequately protect against discrimination.
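The two metrics described above can be sketched in a few lines. The candidate counts below are hypothetical, and the 0.8 benchmark referenced in the comments is the EEOC's "four-fifths rule," a common screening heuristic for disparate impact rather than a threshold mandated by Local Law 144 itself:

```python
def selection_rates(groups):
    """Map each group to its selection rate: candidates selected / candidates applied."""
    return {g: selected / applied for g, (selected, applied) in groups.items()}

def impact_ratios(rates):
    """Ratio of each group's selection rate to that of the most selected group."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical audit data: group -> (candidates selected, candidates applied)
audit = {
    "group_a": (120, 400),  # 30% selection rate
    "group_b": (45, 300),   # 15% selection rate
    "group_c": (60, 240),   # 25% selection rate
}

rates = selection_rates(audit)
ratios = impact_ratios(rates)
# group_b's impact ratio is 0.15 / 0.30 = 0.5, well below the 0.8
# "four-fifths" benchmark often used to flag potential disparate impact.
```

An audit report published under the law presents essentially these two tables, broken out by race/ethnicity and by sex, for each automated tool in use.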

Root Cause

AI hiring systems trained on historical hiring data perpetuated existing biases and discriminatory patterns, lacking adequate bias detection and mitigation measures during development and deployment.

Mitigation Analysis

Required bias audits under Local Law 144 created visibility into algorithmic discrimination, but stronger controls are needed: diverse training datasets, algorithmic fairness testing during development, ongoing monitoring of selection rates by demographic group, and human oversight of hiring decisions. Regular audits and algorithmic impact assessments should be mandatory and proactive rather than reactive.

Lessons Learned

The NYC bias audit requirement demonstrates the critical need for proactive algorithmic auditing and transparency in employment AI systems. The widespread disparities revealed suggest bias is endemic in hiring algorithms, requiring systematic regulatory oversight and technical remediation rather than voluntary compliance.