Philip blames an algorithm for potentially losing his place to study law at university.
The 18-year-old, whose full name CNN is not disclosing because he feared repercussions from universities, was among more than 300,000 pupils in England, Wales and Northern Ireland who woke on August 13 to critically important A-level exam results, which are broadly equivalent to the US high school diploma.
These exams were canceled this summer due to the pandemic. Student marks were instead determined by an algorithm, the Direct Centre Performance Model, chosen by the government’s exam regulator. The model drew on data including teachers’ predicted grades, their rankings of students and schools’ historical results to produce the grades. A subsequent outcry over alleged algorithmic bias against pupils from more disadvantaged backgrounds has now left teenagers and experts alike calling for greater scrutiny of such technology.
The teachers at Philip’s west London school predicted he would gain two A grades and a B in his exams, which would have comfortably secured his spot to study law at Exeter University.
On August 13, the student sat at home trying to access the website that would confirm whether or not he had a university place.
“I was upstairs trying to get [the website] to load and my Mum was downstairs doing the same thing,” he told CNN. “She got it open and shouted out. And they’d declined me.
“I didn’t feel too good,” Philip added. “Yeah, I was pretty cross about it. But everyone I was with was in a similar situation.”
The model awarded Philip a B grade and two Cs. The teenager was not alone: close to 40% of grades in England were downgraded from teacher-predicted marks, with pupils at state-funded schools hit harder by the system than their private school peers. Many subsequently lost their places at university.
Uproar followed, with some teenagers protesting outside the UK Department for Education. Videos from the student protests were widely shared online, including those in which teenagers chanted: “F**k the algorithm!”
Following several days of negative headlines, Education Secretary Gavin Williamson announced that students would be awarded teacher-predicted grades, instead of marks allocated by the model.
The chosen algorithm was meant to guarantee fairness by ensuring that the grade distribution for the 2020 cohort followed the pattern of previous years, with a similar number of high and low marks. It drew on teacher-predicted grades and teachers’ rankings of students to determine grades. But, crucially, it also took into account the historical performance of schools, which benefited students from more affluent backgrounds.
Private schools in England, which charge parents fees, typically have smaller classes, with grades that could not easily be standardized by the model. The algorithm thus gave more weight to the teacher-predicted grades for these cohorts, which are often wealthier and whiter than their downgraded peers at state schools.
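The broad shape of that weighting logic can be sketched in a few lines. This is a hypothetical illustration only, not the Direct Centre Performance Model itself: the small-class threshold, the grade scale and the historical distribution below are all invented for demonstration.

```python
# Hypothetical sketch of grade standardization, NOT the actual Direct Centre
# Performance Model: the small-class threshold, grade scale and historical
# distribution used here are all invented for illustration.

SMALL_COHORT = 5  # invented cutoff below which teacher predictions are trusted

def standardize(ranked_students, predicted, historical_distribution):
    """Assign grades to one subject's students, ranked best to worst.

    ranked_students: names ordered by teacher ranking, best first
    predicted: teacher-predicted grade for each student
    historical_distribution: (grade, share) pairs summing to 1.0, best
        grade first, derived from the school's results in past years
    """
    n = len(ranked_students)
    if n < SMALL_COHORT:
        # Small classes: fall back on the teachers' predicted grades
        return {s: predicted[s] for s in ranked_students}

    # Larger classes: force grades to follow the school's historical curve,
    # regardless of what teachers predicted for this year's pupils
    grades = {}
    i = 0
    for grade, share in historical_distribution:
        count = round(share * n)
        for student in ranked_students[i:i + count]:
            grades[student] = grade
        i += count
    # Any students left over by rounding receive the lowest historical grade
    lowest_grade = historical_distribution[-1][0]
    for student in ranked_students[i:]:
        grades[student] = lowest_grade
    return grades
```

In this toy version, a ten-student state school class is graded entirely along the school’s historical curve, whatever the teachers predicted, while a four-student private school class simply receives its predicted grades.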
“One of the complexities that we have is that there are lots of ways an algorithm can be fair,” said Helena Webb, senior researcher at Oxford University’s Department of Computer Science.
“You can see an argument where [the government] said [it] wanted to get results that look similar to last year’s. And at a country-wide level, that could be argued as [being] fair. But it completely misses what was fair for individuals.
“Obviously this algorithm is reflecting and mirroring what has happened in previous years,” she added. “So it doesn’t [reflect] the fact that schools might [improve.] And of course that’s going to have worse effects on state schools than on very well known private schools which have consistently higher grades.”
“What’s made me angry is the way [they] treated state schools,” said Josh Wicks, 18, a pupil from Chippenham in Wiltshire, western England. His marks were downgraded from two A*s and an A to three As.
“The algorithm thought that if the school hadn’t achieved [high grades] before, [pupils] couldn’t get them now,” he told CNN. “I just think it’s patronizing.”
The political storm has left ministers in Boris Johnson’s government scrambling for explanations, following heavy criticism of its handling of the coronavirus pandemic. Covid-19 has killed more than 41,000 people in the UK, making it the worst-hit country in Europe.
Why are some algorithms accused of bias?
Algorithms are used across every part of society today, from social media and visa application systems, to facial recognition technology and exam grading.
The technology can be liberating for cash-strapped governments and for corporations chasing innovation. But experts have long warned of algorithmic bias, and as automated processes become more widespread, so do accusations of discrimination.
“The A-levels thing is the tip of the iceberg,” said Cori Crider, co-founder of Foxglove, an organization that challenges the alleged abuse of digital technology. Crider told CNN that the algorithms replicated the biases found in the raw data used.
But Crider warned against the impulse to simply blame policy issues on the technology.
“Anybody who tells you it’s a tech problem is [lying],” she said.
“What happened [with the exams] is that a political choice was made to minimize grade inflation. That’s a political choice, not a tech one.”
Foxglove and the Joint Council for the Welfare of Immigrants recently challenged the British Home Office over its use of an algorithm designed to stream visa applications. The activist groups alleged that the algorithm was biased against applicants from certain countries, making it automatically more likely that such applicants would be denied a visa.
Foxglove alleged that the screening system suffered from a feedback loop, “where past bias and discrimination, fed into a computer program, reinforce future bias and discrimination.”
“We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure,” a UK Home Office spokesperson told CNN.
“But we do not accept the allegations Joint Council for the Welfare of Immigrants made in their Judicial Review claim and whilst litigation is still ongoing it would not be appropriate for the department to comment any further.”
Crider said the problems Foxglove found with past data leading to biased algorithms were evident elsewhere, pointing to the debate over predictive policing programs in the United States.
In June, the California city of Santa Cruz banned predictive policing over concerns that the analytic software officers used in their work was discriminating against people of color.
“We have technology that could target people of color in our community – it’s technology that we don’t need,” Mayor Justin Cummings told Reuters news agency in June.
“Part of the problem is the data being fed in,” Crider said.
“Historical data is being fed in [to algorithms] and they are replicating the [existing] bias.”
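The feedback loop Crider describes can be shown with a toy simulation, in which a system trained on skewed records directs more scrutiny at the over-represented group and so generates still more records about it. The group labels, batch size and scrutiny split below are invented; real systems are far more complicated.

```python
# Toy illustration of the feedback loop critics describe. A system trained on
# skewed historical records directs most of its scrutiny at the group the data
# over-represents, so the new records it generates skew the data even further.
# The group labels, batch size and 8-vs-2 split are invented for demonstration.

def simulate(rounds, history):
    """Run a few train-deploy-collect cycles and track group A's share."""
    shares = []
    for _ in range(rounds):
        share_a = history.count("A") / len(history)
        # "Deployment": more scrutiny goes to whichever group the model
        # currently scores higher, so that group produces more new records.
        if share_a > 0.5:
            history = history + ["A"] * 8 + ["B"] * 2
        else:
            history = history + ["A"] * 2 + ["B"] * 8
        shares.append(round(history.count("A") / len(history), 2))
    return shares
```

Starting from records that are 60% group A, three rounds of this loop push the share to 75% even though nothing about the groups’ underlying behavior changed; only the data collection shifted.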
Webb agrees. “A lot of [the issue] is about the data that the algorithm learns from,” she said. “For example, a lot of facial recognition technology has come out … the problem is, a lot of [those] systems were trained on a lot of white, male faces.
“So when the software comes to be used it’s very good at recognizing white men, but not so good at recognizing women and people of color. And that comes from the data and the way the data was put into the algorithm.”
Webb added that she believed the problems could partly be mitigated through “a greater attention to inclusivity in datasets” and a push to add a greater “multiplicity of voices” around the development of algorithms.
Activists and experts told CNN they hoped recent debates around algorithms would lead to greater oversight of the technology.
“There’s a lack of regulatory oversight over how these systems are used,” Webb said, adding that companies could also choose to self-regulate.
Some companies are becoming notably more vocal on the issue.
“Some technologies risk repeating the patterns developed by our biased societies,” Instagram CEO Adam Mosseri wrote in a statement in June on the company’s diversity efforts. “While we do a lot of work to help prevent subconscious bias in our products, we need to take a harder look at the underlying systems we’ve built, and where we need to do more to keep bias out of these decisions.”
Facebook, which owns Instagram, subsequently created new teams to review bias in company systems.
“I would like to see democratic pushback on [the use of algorithms],” Crider said. “Are there areas in public life where it’s not acceptable to have these systems at all?”
While the debate continues in boardrooms and academia, these automated systems continue to determine people’s lives in numerous and subtle ways.
For Philip, the UK government’s scrapping of the exams algorithm has left him in limbo.
“We emailed Exeter [University] and phoned and they’re in a kind of mess,” he said, adding that he was hopeful he could win his place back. “I think I’ll just defer now anyway.”
He said he was grateful to be given his predicted grades but said the experience had gone “pretty badly.”
“[The government] had months to sort this out,” he said. “I get that there’s a lot of things going on with the health stuff but […] it’s a pretty poor showing.”