UK Ofqual A-Level Grading Algorithm Downgraded 40% of Students, Disproportionately Harming Disadvantaged Students

Critical

Ofqual's A-level grading algorithm in 2020 downgraded nearly 40% of students' predicted grades, systematically discriminating against disadvantaged students. Mass protests forced the government to abandon the algorithm and revert to teacher assessments within days.

Category
Bias
Industry
Education
Status
Resolved
Date Occurred
Aug 13, 2020
Date Reported
Aug 13, 2020
Jurisdiction
UK
AI Provider
Other/Unknown
Model
Ofqual Standardisation Algorithm
Application Type
other
Harm Type
reputational
People Affected
280,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
settled
Regulatory Body
UK Parliament Education Select Committee
algorithmic_bias · education · UK · standardization · socioeconomic_discrimination · government_algorithm · public_policy

Full Description

In August 2020, England's Office of Qualifications and Examinations Regulation (Ofqual) deployed a controversial standardisation algorithm to determine A-level grades for approximately 700,000 students whose exams were cancelled due to COVID-19. The algorithm was designed to moderate teacher-predicted grades using historical performance data from each school, aiming to prevent grade inflation and maintain statistical consistency with previous years.

The algorithm's methodology proved deeply flawed and discriminatory. It relied heavily on a school's performance over the previous three years, meaning students at schools with lower past achievement were systematically downgraded regardless of their individual merit or potential. The model gave greater weight to historical data for larger class sizes, while smaller classes (more common in private schools) had their teacher predictions accepted with minimal adjustment. This created a structure that systematically favored elite institutions.

When results were released on August 13, 2020, the scale of the bias became immediately apparent. Nearly 40% of students received grades lower than their teacher predictions, with the downgrades disproportionately affecting students from state schools and disadvantaged backgrounds. Students from the most deprived areas saw a downgrade rate of 44.2%, compared with 37.9% in the most affluent areas. Private school students experienced significantly fewer downgrades, with some elite schools even seeing grades upgraded while comparable state schools faced mass downgrades.

The public and political backlash was swift and intense. Students, parents, and teachers organized protests, with many arguing the algorithm had institutionalized educational inequality. High-achieving students from underperforming schools found themselves unable to secure university places despite years of academic excellence.
The hashtag #AlgorithmicBias trended on social media as stories emerged of identical predicted grades being treated differently based solely on school type. Facing mounting pressure and threats of legal action, Education Secretary Gavin Williamson announced on August 17, 2020, that the government would abandon the algorithm-based grades entirely. Students would receive the higher of either their calculated grade or their mock exam results, with most ultimately receiving their teacher-assessed grades. The reversal affected university admissions across the UK, forcing institutions to manage significant overcapacity as thousands more students than expected met their offers.
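The cohort-size weighting described above can be sketched as follows. Note this is a simplified illustration, not Ofqual's published model: the thresholds (5 and 15), the linear blend, and the use of single numeric grade points are all assumptions made for clarity, whereas the actual standardisation worked on whole grade distributions per subject per school.

```python
def standardised_weight(cohort_size: int, small: int = 5, large: int = 15) -> float:
    """Weight given to the school's historical results vs. the teacher
    prediction. Thresholds are illustrative assumptions, not Ofqual's."""
    if cohort_size <= small:
        return 0.0   # small classes: teacher prediction accepted as-is
    if cohort_size >= large:
        return 1.0   # large classes: historical data dominates
    # linear blend between the two regimes
    return (cohort_size - small) / (large - small)


def moderated_grade(teacher_points: float, historical_points: float,
                    cohort_size: int) -> float:
    """Blend a teacher-predicted grade with the school's historical
    outcome, using grade points (e.g. A* = 6 ... U = 0)."""
    w = standardised_weight(cohort_size)
    return w * historical_points + (1 - w) * teacher_points
```

The sketch makes the discriminatory mechanism concrete: a student predicted an A (5 points) in a class of 3 keeps the A, while an identical prediction in a class of 20 at a school whose historical average is a C (3 points) is pulled all the way down to that average.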

Root Cause

The algorithm used historical school performance data to moderate teacher predictions, creating systemic bias against students from schools with historically lower performance. The model prioritized institutional track record over individual merit and failed to account for the unique circumstances of COVID-19 disruption.

Mitigation Analysis

Robust bias testing across groups such as school type and socioeconomic status could have identified the discriminatory patterns before deployment. Human review mechanisms for appeals and edge cases were inadequate. The algorithm should have included fairness constraints and been validated against known disparities in educational outcomes. Alternative approaches, such as portfolio-based assessment or expanded teacher moderation, could have been implemented.
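The pre-deployment bias test described above can be sketched as a simple disparity audit: compute the downgrade rate per group (school type, deprivation quintile) and flag any gap above a tolerance. The record format, group labels, and 5-percentage-point threshold are illustrative assumptions, not part of any Ofqual process.

```python
from collections import defaultdict


def downgrade_rates(records):
    """records: iterable of (group, predicted_points, awarded_points).
    Returns {group: fraction of entries downgraded}."""
    totals = defaultdict(int)
    downs = defaultdict(int)
    for group, predicted, awarded in records:
        totals[group] += 1
        if awarded < predicted:
            downs[group] += 1
    return {g: downs[g] / totals[g] for g in totals}


def flag_disparity(rates, max_gap=0.05):
    """Flag when the gap between the worst- and best-treated group
    exceeds max_gap (threshold is an illustrative assumption)."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap
```

Run against the reported outcome (44.2% downgrades in the most deprived areas vs. 37.9% in the most affluent), this audit would have flagged a 6.3-point gap before results day.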

Litigation Outcome

Multiple legal challenges were filed but became largely moot when the government reversed the algorithm-based grades and reverted to teacher assessments.

Lessons Learned

The Ofqual incident demonstrates how algorithms can amplify and institutionalize existing societal biases when historical data reflects systemic inequalities. It highlights the critical importance of algorithmic fairness testing and the need for human oversight in high-stakes automated decision-making systems affecting individual life outcomes.

Sources

Education Select Committee inquiry into coronavirus impact
UK Parliament · Sep 1, 2020 · regulatory action