Ofqual’s Algorithm Under Review
The Office for Statistics Regulation, the national statistics regulator, will review the algorithm Ofqual used to determine A-level grades for the thousands of students who were unable to sit exams because of the pandemic. There was a large public outcry when the algorithm-determined grades, many of which were lower than teacher-predicted grades, were first issued: in England, 36% of entries were awarded a lower grade than teachers predicted, and 3% were down two grades. After days of protests, the Government made a U-turn on 17th August, confirming that the original teacher-predicted grades would be reinstated; the only exception was where a student had received a higher grade from the algorithm.
What led to the downgrading?
Ofqual predicted each student's A-level results as follows. It asked teachers to supply an estimated grade for each pupil in every subject, together with a ranking of that pupil against every other pupil at the school given the same estimated grade. These details were then fed into an algorithm that also factored in the school's performance in each subject over the previous three years: for each of those years, the number of students receiving every grade from A* to U was recorded and averaged into a percentage distribution. That distribution was then imposed on the current cohort, with each student's position in the teacher ranking determining where they fell within it, largely irrespective of their individual predicted grade. The aim was to keep results broadly in line with previous years, avoiding grade inflation by teachers and the pressure such inflation would place on higher education institutions.
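To make the mechanics concrete, below is a minimal Python sketch of this kind of standardisation. It is an illustrative simplification, not Ofqual's actual model: the function names, the quota rounding, and the handling of students left over by rounding are all assumptions made for the example.

```python
from collections import Counter

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst


def historical_distribution(past_results):
    """Average each grade's share across the previous years' results.

    `past_results` is one list of awarded grades per prior year.
    """
    totals = Counter()
    for year in past_results:
        counts = Counter(year)
        for grade in GRADES:
            totals[grade] += counts[grade] / len(year)
    return {grade: totals[grade] / len(past_results) for grade in GRADES}


def standardise(ranked_students, past_results):
    """Impose the school's historical grade distribution on the current
    cohort, ordered by the teacher-supplied ranking (best first)."""
    shares = historical_distribution(past_results)
    n = len(ranked_students)
    awarded, start = {}, 0
    for grade in GRADES:
        quota = round(shares[grade] * n)  # assumed rounding rule
        for student in ranked_students[start:start + quota]:
            awarded[student] = grade
        start += quota
    # Assumption: anyone left over by rounding falls into the lowest grade.
    for student in ranked_students[start:]:
        awarded[student] = GRADES[-1]
    return awarded


# A ranked cohort of four students and three years of school history:
cohort = ["Asha", "Ben", "Cara", "Dev"]  # teacher ranking, best first
history = [
    ["A", "B", "C", "U"],
    ["A", "A", "B", "C"],
    ["B", "B", "C", "D"],
]
print(standardise(cohort, history))
# {'Asha': 'A', 'Ben': 'B', 'Cara': 'C', 'Dev': 'U'}
```

Notice that the teachers' estimated grades never appear in `standardise`: for large cohorts, the only teacher input that mattered was the ranking, which is why so many individual predictions could be overridden.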
Why the public outcry?
The application of Ofqual’s algorithm meant that existing inequalities within the UK school system were exacerbated. Not only was a bright student from an underperforming school likely to have their results downgraded; downgrading also affected state schools to a far greater degree than the private sector. In Scotland, where a similar moderation exercise was run, the pass rate for students from the most deprived backgrounds was reduced by 15.2 percentage points, compared with only 6.9 percentage points for the wealthiest students.
Additionally, Ofqual’s formula meant that where fewer than 15 students studied a subject at a school, teacher predictions were given more weight, and where there were fewer than 5 students, the teacher-predicted grade alone decided the student’s fate. Given that class sizes in private schools are generally much smaller than in state schools, this was another way in which Ofqual’s formula exacerbated existing inequalities.
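The cohort-size thresholds can be summarised as a simple branch. Again, this is only a sketch: `grading_basis` is a hypothetical helper, and the exact weighting Ofqual applied between 5 and 14 entries is described rather than computed, since this piece does not spell it out.

```python
def grading_basis(cohort_size):
    """Which inputs decided a student's grade, per the thresholds above."""
    if cohort_size < 5:
        return "teacher-predicted grade only"
    elif cohort_size < 15:
        return "teacher prediction weighted together with the statistical model"
    else:
        return "statistical model alone (historical distribution plus ranking)"


for size in (3, 10, 30):
    print(size, "->", grading_basis(size))
```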
If this fiasco teaches us anything, it is that there needs to be a greater public understanding of how algorithms work. Ofqual’s recent failings make clear that where the input data is flawed or biased, the output will at best be flawed and biased too, and at worst the algorithm will amplify those inequities. Hopefully the upcoming review by the Office for Statistics Regulation will reveal how we can better protect against such mishaps in the future.