
AI Proctoring System False Flags Lock Out Thousands During JEE and NEET Exams in India

Severity
Medium

AI proctoring systems falsely flagged thousands of Indian students during JEE and NEET entrance exams for normal behaviors, causing mid-exam lockouts that jeopardized educational futures.

Category
operational_failure
Industry
Education
Status
Under Investigation
Date Occurred
Apr 1, 2023
Date Reported
Apr 15, 2023
Jurisdiction
India
AI Provider
Other/Unknown
Application Type
other
Harm Type
operational
Estimated Cost
$50,000,000
People Affected
15,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Regulatory Body
National Testing Agency
Tags
ai_proctoring, education, false_positives, india, entrance_exams, algorithmic_bias, student_rights

Full Description

In April 2023, AI-powered proctoring systems deployed for India's Joint Entrance Examination (JEE) and National Eligibility cum Entrance Test (NEET) experienced widespread failures that locked out approximately 15,000 students mid-examination. These exams are critical gateway tests for admission to prestigious engineering and medical colleges in India, with millions of students competing for limited seats.

The AI proctoring software was designed to monitor student behavior through webcams and flag suspicious activities that might indicate cheating. However, the system's algorithms proved overly sensitive, triggering false positives for routine student behaviors. Students reported being flagged for brief eye movements away from the screen, adjusting their posture, or background noise from family members or traffic. Once flagged multiple times, the system automatically locked students out of their exams without human intervention.

The scale of the problem became apparent as thousands of students and parents flooded social media and news outlets with complaints. Many students described the devastating impact of being unable to complete exams they had prepared for years to take. The lockouts occurred across multiple test centers nationwide, indicating a systemic issue with the AI system's calibration rather than isolated technical problems.

The National Testing Agency (NTA), which oversees these examinations, faced intense public pressure and criticism from educators, parents, and political leaders. Student organizations filed multiple legal challenges demanding re-examinations for affected candidates. The incident highlighted the risks of deploying AI systems in high-stakes environments without adequate human oversight or appeals processes.
Educational experts estimated that the career impact on affected students could result in lifetime earning losses exceeding $50 million collectively, as admission to top-tier institutions significantly influences future opportunities in India's competitive job market.

Root Cause

AI proctoring algorithms had overly sensitive behavioral detection parameters that falsely classified normal student behaviors like looking away briefly or ambient noise as suspicious activity, triggering automatic lockouts without human oversight.
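To see why an over-sensitive threshold plus automatic lockout scales into mass failures, consider a minimal back-of-the-envelope sketch. All numbers here are illustrative assumptions, not parameters of the actual system: we assume each routine behavior (a glance away, a posture shift, ambient noise) has some small chance of being falsely flagged, and that the system locks a student out after a fixed number of flags with no human check.

```python
from math import comb

def lockout_probability(n_events: int, p_false_flag: float, lockout_after: int) -> float:
    """Probability that an honest student accumulates at least
    `lockout_after` false flags over `n_events` routine behaviors,
    modeled as a binomial tail (each behavior flagged independently)."""
    return sum(
        comb(n_events, k) * p_false_flag**k * (1 - p_false_flag) ** (n_events - k)
        for k in range(lockout_after, n_events + 1)
    )

# Assumed numbers for illustration: a 3-hour exam with ~60 routine
# behaviors, a 5% false-flag rate per behavior, lockout after 3 flags.
p = lockout_probability(n_events=60, p_false_flag=0.05, lockout_after=3)
print(f"Chance an honest student is locked out: {p:.1%}")
```

Under these assumed numbers, a majority of entirely honest students would be locked out, which is consistent with the nationwide pattern described above: the failure is a property of the calibration, not of any individual test center.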

Mitigation Analysis

Implementation of human review protocols for AI flags before lockout, calibration of behavioral detection algorithms with diverse student populations, establishment of appeals processes during exams, and backup proctoring methods could have prevented mass lockouts. Real-time monitoring dashboards showing flag rates across demographics would have revealed the systematic false positive problem.
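The two key mitigations named above, routing AI flags to a human reviewer instead of an automatic lockout, and monitoring per-center flag rates to surface systemic miscalibration, can be sketched as follows. This is a hypothetical design, not the NTA's actual system; all class names, thresholds, and fields are assumptions for illustration.

```python
from collections import defaultdict

class ProctoringGate:
    """Hypothetical gate between the AI flagger and any lockout action:
    flags are queued for human review, and per-center flag rates are
    tracked so a systemic false-positive spike is visible in real time."""

    def __init__(self, alert_rate: float = 0.10):
        # If more than `alert_rate` of a center's students are flagged,
        # suspect miscalibration rather than mass cheating (assumed threshold).
        self.alert_rate = alert_rate
        self.flagged_students = defaultdict(set)  # center -> flagged student ids

    def on_ai_flag(self, center: str, student_id: str, reason: str) -> str:
        """Never lock out automatically: record the flag and queue it."""
        self.flagged_students[center].add(student_id)
        return f"QUEUED for human review: {student_id} ({reason})"

    def center_flag_rate(self, center: str, enrolled: int) -> float:
        """Share of a center's enrolled students flagged so far."""
        return len(self.flagged_students[center]) / max(enrolled, 1)

    def systemic_alert(self, enrollments: dict[str, int]) -> list[str]:
        """Centers whose flag rate exceeds the alert threshold."""
        return [c for c, n in enrollments.items()
                if self.center_flag_rate(c, n) > self.alert_rate]
```

For example, if 30 of 100 students at one center are flagged within the first hour while other centers sit near 1%, the dashboard view (`systemic_alert`) points at calibration, and human reviewers absorb the flags instead of the lockout mechanism.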

Lessons Learned

This incident demonstrates the critical importance of extensive testing and calibration of AI systems before deployment in high-stakes scenarios, particularly those affecting educational access and social mobility. The lack of real-time human oversight and appeals mechanisms amplified the harm caused by algorithmic failures.