The results fiasco gave algorithms a bad name. Here's what we need to learn

Like any technology, algorithms can only produce desirable outcomes when used alongside expert human judgment.

Over the past two weeks, UK school results have been announced for exams that no student sat because of Covid-19. Ofqual and its equivalents outside England were given the unenviable task of developing algorithms to award a "probable" grade for each untaken exam. At A-Level, the algorithm downgraded 39% of results from the grades predicted by teachers, with tens of thousands of students downgraded by at least two grades. Following a very similar outcome in Scotland, which announced its exam results a week before the A-Levels, there was an immediate and intense backlash, leading to an inevitable government U-turn back to teacher-predicted grades.

The process has been described as an "algoshambles" and The Sun delivered a verdict of "Grade A fiasco". The original A-Level and GCSE results have already been reissued and, earlier this week, BTEC results were delayed so they could be regraded.

So, was this a case of a rogue algorithm taking our young people’s future into its own hands, or just old-fashioned bad decision-making?

The Ofqual algorithm has been described as a "black box", but this is misleading – its authors published a dense 300-page technical explanation. This revealed that the algorithm was designed to avoid grade inflation and to control for discrepancies in predicted grades between schools. It used three years of historic school and student performance data, plus teachers' rank ordering of their students. Crucially, it did not use the teacher-predicted grades themselves. The historic grade distributions were then applied to the 2020 students – so, if 2% of the students in a class typically got A* grades, around 2% of the class of 2020 would get an A*, regardless of what their teachers had predicted. The exception was small cohorts of students, where there was limited historic data – for these, predicted grades were used.
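
To make the mechanics concrete, here is a minimal sketch of that allocation logic. It is our simplified reconstruction, not Ofqual's published model: the names, the cohort-size threshold and the rounding rule are all assumptions.

```python
SMALL_COHORT = 15  # assumed threshold below which predicted grades are used

def allocate_grades(ranked_students, historic_distribution, predicted_grades):
    """Assign grades to a cohort ranked from strongest to weakest.

    historic_distribution maps grades to shares summing to 1.0, e.g.
    {"A*": 0.02, "A": 0.18, "B": 0.40, "C": 0.30, "D": 0.10}, derived
    from the school's previous three years of results.
    """
    n = len(ranked_students)
    if n < SMALL_COHORT:
        # Limited historic data: fall back to teacher-predicted grades.
        return {s: predicted_grades[s] for s in ranked_students}

    grades = {}
    i = 0
    cumulative = 0.0
    for grade, share in historic_distribution.items():
        cumulative += share
        # Fill grades down the ranking until the historic share is used up.
        while i < n and i < round(cumulative * n):
            grades[ranked_students[i]] = grade
            i += 1
    # Any rounding remainder gets the lowest grade in the distribution.
    lowest = list(historic_distribution)[-1]
    while i < n:
        grades[ranked_students[i]] = lowest
        i += 1
    return grades
```

On this sketch, a 50-strong cohort with a 2% historic A* rate hands an A* to only its single top-ranked student, whatever the teachers predicted.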

Scrutiny of the results quickly found that outstanding students in schools with poor historic performance could be penalised, because the algorithm leant so heavily on historic school performance. And because predicted grades were used for small cohorts, the approach advantaged small schools and niche subjects – both more common in private schools – leading to accusations of privileging the already privileged.

Given the importance of algorithms in modern marketing, what should marketing leaders learn from the so-called "algoshambles"? As data specialists, we don't think the answer is to run screaming from algorithms. The best leaders blend analysis with critical judgment, using data to avoid poor decisions and bias. But there are some key questions leaders should ask to avoid damage to their customers, businesses and brands.

Is an algorithm the right solution to the problem?

Arguably, the A-Level problem was intractable: you can't perfectly grade exams that were never taken. Algorithms, like students, take time to reach peak performance – they improve through iteration, testing and feedback – so the "run once and done" approach was a recipe for error.

What is the impact of outliers?

Naïve use of statistics focuses on averages, but we also need to recognise the outliers. In this case, a lot of the anger about the A-Level results was driven by those cases where students were heavily downgraded compared to teachers’ predictions.
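
A toy illustration (the numbers are invented, not taken from the actual results data) of how a mild-looking average can hide the cases that cause the anger:

```python
import statistics

# Hypothetical grade adjustments (awarded minus predicted) for ten students:
# most barely move, one is heavily downgraded.
adjustments = [0, 0, -1, 0, 0, 0, -1, 0, 0, -3]

print(statistics.mean(adjustments))            # -0.5: sounds like "half a grade"
print(min(adjustments))                        # -3: the story that leads the news
print(sum(1 for a in adjustments if a <= -2))  # 1 student in 10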

Check your requirements

Focusing the algorithm on avoiding grade inflation led to a design that seemed to be as much about grading schools as individuals. When using data, make sure your requirements align with the fundamental business question. Are all stakeholders agreed about what that is?

Explain yourself

Ofqual published a 300-page explanation of the algorithm but left its interpretation to the media and to individuals. That is the appearance of transparency without adequate explanation. It may not always be easy to explain an algorithm, but describing in simple language what it is doing will go a long way towards mitigating criticism.

Have a plan for bias

The desire to be fair meant the algorithm was bypassed for small cohorts of students, but this built bias into the results. Don't just check for bias: have a plan for managing and reducing it when it appears.
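
One way to make that plan concrete (a hypothetical audit with made-up data; Ofqual published no check in this form) is to compare outcomes between the groups your method treats differently, against a tolerance agreed before release:

```python
import statistics

# Hypothetical audit: did small cohorts (graded on teacher predictions)
# systematically outperform large ones (graded by the algorithm)?
# Each record is (cohort_size, average_points); the data is invented.
results = [(8, 48), (9, 52), (7, 50), (120, 40), (150, 38), (200, 41)]

small = [pts for size, pts in results if size < 15]
large = [pts for size, pts in results if size >= 15]

gap = statistics.mean(small) - statistics.mean(large)
TOLERANCE = 2.0  # acceptable gap, agreed with stakeholders beforehand

if gap > TOLERANCE:
    print(f"Bias flag: small cohorts average {gap:.1f} points higher")
```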

Consider the human impact of your decision

Is society prepared to accept an algorithmic decision? We still find the logic of an algorithm harder to understand than the decision-making of a teacher – one may be no more accurate than the other, but the human decision is more acceptable.

This mess has been bad for students, but it’s also given algorithms a bad name. It’s a timely reminder of the power and influence of data analysis, and society’s wariness of that power. We’re algorithmic optimists – done well, data science creates value for people and brands. But it needs to be balanced with informed judgment, ethical decision-making and clear leadership.

Olivia Hawkins is consulting director at Wunderman Thompson Data. Alex Steer is chief data officer at Wunderman Thompson EMEA
