Ofqual A-Level Grading Algorithm Downgraded Nearly 40% of Grades, Disproportionately Harming Disadvantaged Schools
Critical
Ofqual's 2020 A-level grading algorithm downgraded nearly 40% of teacher-predicted grades, systematically disadvantaging students from state schools and disadvantaged backgrounds. After mass protests and legal challenges, the government abandoned the results.
Category
Bias
Industry
Education
Status
Resolved
Date Occurred
Aug 13, 2020
Date Reported
Aug 13, 2020
Jurisdiction
UK
AI Provider
Other/Unknown
Model
Ofqual Standardisation Algorithm
Application Type
other
Harm Type
reputational
People Affected
280,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
Regulatory Body
House of Commons Education Committee
education · algorithmic_bias · standardization · social_inequality · public_sector · grading · COVID-19 · Ofqual
Full Description
In August 2020, England's Office of Qualifications and Examinations Regulation (Ofqual) deployed a standardisation algorithm to determine A-level grades after COVID-19 cancelled traditional exams. The algorithm was designed to prevent grade inflation by using historical school performance data to moderate teacher predictions. However, the system created systematic bias against disadvantaged students.
The algorithm downgraded nearly 40% of teacher-predicted A-level grades, affecting approximately 280,000 students. The impact was not randomly distributed: students at state schools and colleges with historically lower performance saw dramatic downgrades, while private schools largely retained their predicted grades. In some cases, entire cohorts at disadvantaged schools were marked down by multiple grades, devastating their university prospects.
The methodology relied heavily on each school's performance over the previous three years, combined with a teacher-determined rank order of students within each subject at each school. For smaller cohorts (fewer than 15 students), the algorithm gave progressively more weight to teacher assessments, with the smallest cohorts receiving their teacher-predicted grades largely unchanged. This favoured private schools, which typically teach small classes, while larger cohorts at state schools and colleges serving disadvantaged communities were moderated almost entirely by historical data.
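The mechanism described above can be sketched as a simplified model. This is an illustrative reconstruction, not Ofqual's actual code: the function name, the small-cohort rule (a hard fallback rather than the real sliding blend), and the data shapes are assumptions, and the real model also adjusted for each cohort's prior attainment.

```python
def standardise(historical_dist, teacher_grades, rank_order, small_cohort_cutoff=15):
    """Illustrative sketch: map a centre's teacher-determined rank order
    onto its historical grade distribution.

    historical_dist -- list of (grade, fraction) pairs, best grade first
    teacher_grades  -- dict mapping student id to teacher-assessed grade
    rank_order      -- student ids in teacher-determined rank order, best first
    """
    n = len(rank_order)
    if n == 0:
        return {}
    # Small cohorts: fall back to teacher-assessed grades. (The real model
    # used a sliding blend between teacher and statistical grades as cohort
    # size shrank, rather than this all-or-nothing cutoff.)
    if n < small_cohort_cutoff:
        return {s: teacher_grades[s] for s in rank_order}
    # Larger cohorts: apply the historical distribution to this year's
    # cohort size, then hand out grades in rank order -- teacher-predicted
    # grades play no role beyond the ranking itself.
    predicted = []
    for grade, frac in historical_dist:
        predicted.extend([grade] * round(frac * n))
    # Pad or trim rounding drift with the lowest grade.
    predicted = predicted[:n] + [historical_dist[-1][0]] * (n - len(predicted))
    return {s: g for s, g in zip(rank_order, predicted)}
```

The sketch makes the failure mode concrete: in a 20-student cohort where the school historically awarded 20% A grades, only the four top-ranked students can receive an A, even if teachers predicted an A for all twenty.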
Public outcry began immediately after results were released on August 13, 2020. Students, parents, and educators protested that the algorithm had embedded social and economic inequality into the grading system. High-achieving students from disadvantaged backgrounds found their university places in jeopardy, while students from privileged schools maintained their expected grades despite identical teacher predictions.
Facing mounting pressure, legal challenges, and criticism from MPs across party lines, Education Secretary Gavin Williamson announced on August 17, 2020, that the government would abandon the algorithmic results. All students would receive either their teacher-assessed grades or their algorithmic grades, whichever was higher. Ofqual's chief regulator Sally Collier resigned shortly after, and the entire standardisation process was deemed a policy failure that had reinforced educational inequality through algorithmic bias.
Root Cause
The standardisation algorithm used historical school performance data to moderate teacher predictions, systematically penalizing students from schools with historically lower performance while preserving grades at high-performing schools, creating algorithmic bias against disadvantaged students.
Mitigation Analysis
Human review of algorithmic outputs, especially for edge cases and appeals, could have identified the disparate impact early. Bias testing across different school types and demographic groups before deployment would have revealed the systematic discrimination. A phased rollout with monitoring of grade distributions by school type could have prevented mass harm.
Litigation Outcome
Multiple legal challenges were filed but became moot when the government withdrew the algorithmic results and reverted to teacher-assessed grades.
Lessons Learned
High-stakes algorithmic decision-making requires rigorous bias testing and equity analysis before deployment. Historical data can perpetuate systemic inequalities when used without careful consideration of fairness principles. Public sector algorithms affecting individual life outcomes need transparent governance and rapid correction mechanisms.
Sources
A-levels and GCSEs: U-turn as teacher estimates to be used for exam results
BBC News · Aug 17, 2020 · news
Almost 40% of English students have A-level results downgraded
The Guardian · Aug 13, 2020 · news
How an algorithm failed 280,000 students and shook public trust
Financial Times · Aug 20, 2020 · news
Education Committee inquiry into COVID-19 impact
UK Parliament · Sep 15, 2020 · regulatory action