Proctorio AI Exam Proctoring System Flagged Students of Color Disproportionately
Severity
High
Proctorio's AI exam proctoring software disproportionately flagged students of color as potential cheaters due to biased facial recognition algorithms. Multiple universities discontinued the service after documented bias incidents.
Category
Bias
Industry
Education
Status
Resolved
Date Occurred
Mar 1, 2020
Date Reported
Oct 15, 2020
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Other
Harm Type
Reputational
People Affected
50,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Settled
Tags
facial_recognition, educational_technology, algorithmic_bias, remote_proctoring, racial_bias, civil_rights, higher_education
Full Description
During the COVID-19 pandemic, Proctorio became one of the most widely adopted AI-powered exam proctoring systems as universities shifted to remote learning. The software used facial recognition, eye-tracking, and behavioral analysis to monitor students during online exams, automatically flagging suspicious behavior for instructor review. However, research and student reports revealed systematic bias against students of color, particularly those with darker skin tones.
The bias manifested in multiple ways. Proctorio's facial recognition system had difficulty detecting and tracking faces with darker skin tones, leading to higher rates of flagging for 'face not detected' violations. The gaze-tracking technology similarly struggled with darker eyes, incorrectly interpreting normal eye movements as suspicious looking away from the screen. Students in non-traditional testing environments, such as shared living spaces common among lower-income students, were also disproportionately flagged for background noise or movement.
Documented cases emerged across multiple universities. At the University of Illinois, Black students reported being flagged at significantly higher rates than white students for identical behaviors. Miami University saw similar patterns, with students of color receiving academic integrity violations at disproportionate rates. The University of Vermont conducted an internal analysis that confirmed the bias patterns, leading them to discontinue Proctorio in 2021.
Student advocacy groups and researchers documented these patterns through surveys and data requests. The Algorithmic Justice League's research on commercial facial analysis systems had documented error rates of up to 34% for darker-skinned women, compared with less than 1% for lighter-skinned men, a pattern consistent with the detection failures students of color experienced with Proctorio. Students reported psychological distress from being repeatedly flagged, with some developing anxiety about taking exams and others facing academic probation or course failures based on AI flags.
Proctorio's response to criticism was notably aggressive. The company filed DMCA takedown requests against researchers and critics, including a Miami University student who had posted excerpts of the software's code, and sought to have critical commentary and student testimonials removed from the internet. Proctorio also sued Ian Linkletter, a learning technology specialist at the University of British Columbia, after he tweeted links to the company's training videos while criticizing the software. These legal tactics drew widespread condemnation from academic freedom advocates.
By 2022, over 20 universities had discontinued their contracts with Proctorio, citing bias concerns, student complaints, and privacy issues. Some institutions implemented human review requirements for all AI flags, while others moved away from automated proctoring entirely. The incident highlighted the risks of deploying biased AI systems in high-stakes educational settings and the importance of algorithmic auditing in EdTech products.
Root Cause
Proctorio's facial recognition and gaze-tracking algorithms were trained on datasets with insufficient representation of darker skin tones and diverse facial features, leading to higher false positive rates for students of color. The AI also flagged non-traditional testing environments and assistive technologies as suspicious behavior.
Mitigation Analysis
Bias testing with diverse demographic datasets during development could have identified disparate impact before deployment. Human review of all AI flags before academic penalties, algorithmic auditing requirements, and transparent appeal processes would have prevented unjust academic consequences. Regular bias monitoring and third-party algorithmic audits could have detected these patterns earlier.
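The "regular bias monitoring" described above can be as simple as comparing AI flag rates across demographic groups and checking the ratios against a disparate-impact threshold. The following is a minimal, hypothetical sketch of that idea; the group labels, data, and function names are illustrative assumptions, not Proctorio's actual logs or methodology.

```python
# Hypothetical bias-monitoring sketch: compare per-group flag rates and
# compute disparate-impact ratios. All data here is illustrative.
from collections import defaultdict

def flag_rates(records):
    """Fraction of exam sessions flagged by the AI, per demographic group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        if was_flagged:
            flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}

def disparate_impact(rates, reference_group):
    """Ratio of each group's flag rate to the reference group's rate.
    The four-fifths rule from employment-selection analysis treats a
    selection-rate ratio below 0.8 as evidence of adverse impact; for
    flagging, where a *higher* rate is the harm, ratios well above 1.0
    for any group warrant investigation."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Illustrative records: (demographic group, whether the session was flagged)
sessions = [("A", True), ("A", False), ("A", False), ("A", False),
            ("B", True), ("B", True), ("B", True), ("B", False)]

rates = flag_rates(sessions)           # {"A": 0.25, "B": 0.75}
ratios = disparate_impact(rates, "A")  # group B flagged 3x as often
```

Run against real proctoring logs, a report like this, reviewed each term, would have surfaced the disparities years before student complaints forced the issue.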
Litigation Outcome
A class action lawsuit was filed against multiple universities that used Proctorio; some cases were settled out of court, and the institutions involved implemented policy changes.
Lessons Learned
This incident demonstrates how AI bias can perpetuate educational inequities and underscores the importance of thorough bias testing before deploying AI in academic settings. A company's aggressive legal response to bias criticism can amplify its own reputational damage, reinforcing the need for transparent accountability mechanisms in EdTech.
Sources
Students revolt against surveillance technologies as schools reopen
The Washington Post · Nov 12, 2020 · news
Students are rebelling against eye-tracking exam surveillance tools
The Verge · Apr 9, 2021 · news
Colleges Drop Proctorio After Bias Complaints
Inside Higher Ed · May 11, 2021 · news
Students Are Fighting Back Against Proctoring Surveillance Apps
Vice · Oct 15, 2020 · news