Social media companies stepped up to the plate during the 2020 U.S. presidential election.
During the four suspenseful days it took to count the votes, Twitter slapped multiple warning labels on Trump’s false claims of victory and voter fraud. And until the AP called the race, YouTube, Facebook and Instagram prominently displayed notices that election results were not yet finalized.
That’s a big step up from 2016, when the words “misinformation” and “election interference” weren’t mainstays in the national vocabulary. This time, the platforms actually prepared for 2020. Facebook and YouTube banned political advertising the week prior to the election, and Twitter slowed down retweets and warned users if they were about to share misinformation.
It almost felt like social media companies were finally taking responsibility for their role in democracy and civil discourse. Maybe they suddenly found their conscience, or maybe they figured a Joe Biden win was inevitable. Either way, the advertising industry may have felt a sense of hope from this sudden display of maturity.
But the bar from 2016 was pretty low, and the problems still run deep and wide.
First of all, the platforms are far from off the hook. It’s still unclear how interference from Russia and China played out during and leading up to the election, and misinformation about the legitimacy of the outcome continues to spread. Type “Biden Loses” into Google, and the top two search hits are false videos on YouTube.
No one understands the dangers of social media more than advertisers, who have been burned time and again as they find their messages adjacent to false and divisive speech.
Now that the platforms have proven not just their capacity but their willingness to stop amplifying misinformation, we can’t go back to letting false content run rampant -- no matter how profitable it may be for them.
And it’s not as if the platforms lack the technical ability to flag or remove false information. Facebook banned ads for hand sanitizer, face masks and test kits when COVID-19 hit in March, and YouTube began demonetizing videos referencing COVID-19 altogether. The platforms are still flagging and removing false information about the virus and a potential vaccine.
Good on them, but should it really have taken a global pandemic and an unprecedented election to institute basic fact-checking measures?
Some will argue that while advertisers line the coffers of social media giants, they don’t really have an impact on content moderation policies. After all, Facebook’s gangbusters Q3 earnings proved that a major advertiser boycott didn’t make much of a dent in revenues. Facebook has about 10 million advertisers on its platform, and only a small handful are Fortune 1000 brands willing to take a stand.
But social media platforms are starting to show progress, even if only in baby steps, and we can’t let them turn back now. Just because they stepped up during the election doesn’t mean we stop paying attention or asking questions.
So instead of praising the platforms for last week’s victory, let’s continue to hold their feet to the fire on content moderation, misinformation and platform safety. The election is over, but the core issues aren’t going away, and advertisers have fought too long and hard a battle to give up now.