Last month I watched Claire Beale of Campaign interview Lorraine Candy of Elle at Magnetic’s annual Spark event.
Speaking "editor-in-chief to editor-in-chief", Beale asked Candy what role human editors had in an age of data and algorithms. Candy’s answer was simple: "We [editors] are the walking algorithms".
Her contention was well received in the room, but perhaps unsurprising given her role. Should we believe her?
AI is already smart, and it is only getting smarter.
AI is also becoming mainstream. The Turing Test was once only known to those who studied technical and philosophical issues of intelligence. Now it’s the subject of Hollywood movies.
In these movies, artificial intelligences are so sophisticated that we don’t just believe they are people; we fall in love with them, as in Her and Ex Machina.
These films aren’t set in a distant science fiction future. They are set now. In the hit series Mr Robot an FBI agent confides her deepest thoughts and hopes to Alexa, the voice of Amazon Echo – a product you could buy right now for less than £150.
In a world where falling in love with computers is made to look plausible, should we not also believe that algorithms and AI could edit magazines better than, or at least as well as, the current "walking algorithms" do? Is it not a more humble and achievable task in comparison?
To understand our answer, we should turn to one of the staple questions of philosophical AI – the frame problem.
In essence the issue is this. Computers can now beat all human comers at chess, and even at the more complex Go. Yet the same computer would not know to leave the room if a fire started – something even the worst chess player in the world would know to do.
AI is great at specific intelligence. Humans are very good at general intelligence.
This matters when we consider how algorithms and AI interact with human beings.
If the problem is one of specific intelligence, then a computer will be better. This is bad news for your accountant: as this BBC tool shows, their job has a 95% chance of automation (advertising’s is 3%, so rest easy).
AI can trawl the tax rules, work out your optimum position and file the paperwork with far greater efficiency. Computers are good at repetitive tasks that demand great accuracy and that humans find boring. They are also good at crunching vast numbers of options to find the best one.
Where AI really struggles is when it needs to read situations – and, most of all, when it needs to read people.
The AI editor we all know best is probably Amazon’s Book Recommendations. It’s at the heart of Amazon’s appeal. I’ve spent a lot of time telling Amazon which books I already own and what I think of them.
I’m impressed by its recommendations. It has alerted me to books I didn’t know were coming out and directed me to authors I didn’t know. This despite my tastes spanning populist pulp to recondite treatises.
However, when I really want a recommendation I prefer to ask Profs Noys and Price, friends of mine who teach in the English Department at The University of Chichester.
The reason is simple. Amazon knows more books and knows which ones I’ve read – but this isn’t the same as knowing me. The profs can make leaps in recommendations Amazon just can’t. Yes, I trust their personal expertise and deep knowledge – but that is only half the battle, and not the half where the battle is won.
What they also know is my mood, where I am in my life, if I’m reading while travelling alone for business or holidaying with family. They know if I want to be educated or entertained, whether my mind needs emptying or filling.
More perceptively, they can suggest when my tastes need correcting or challenging. They don’t just want to sell me a book. They want me to be a better reader and a better person.
This is why they are my editors and Amazon is only an algorithm.
Editors choose content for human reasons – and this demands the wide and general intelligence human beings are good at.
Algorithms choose content for narrower, more purely commercial reasons. Amazon only recommends books that it can sell me, for example.
This is why I, and many others, including a new generation of younger readers fully immersed in digital culture, are turning to magazines.
It is not just that we trust people to build a better product – because trust can be misplaced and granted based on prejudice. It is that editors, human editors, know how to make human connections.
They know that their magazine brand is not merely a commercial exchange, but a social one. It is an exchange based on the crucial but intangible value of picking stories audiences want and need to read about.
Editors know when to stretch readers, when to leap forward, when to move back. Editors know how to make human leaps. And, for the moment, algorithms don’t.
Dr Nick Southgate is a behavioural economist and was one of the keynote speakers at Magnetic’s Spark conference in September 2016.