TikTok is calling for social-media platforms to collaborate on the removal of harmful content, shortly after revealing the enormous volume of videos it had to remove in the first six months of 2020 for violating its policies on nudity, minor safety, violence and more.
The social video app's interim head, Vanessa Pappas, yesterday (22 September) penned a letter to the leaders of nine rival social and content platforms proposing a memorandum of understanding under which the platforms would warn each other about trending harmful content.
The instigators of harmful content often post across multiple platforms to widen the reach of their messages, much as legitimate publishers cross-post.
The platforms TikTok has reached out to are Facebook, Google, Instagram, Pinterest, Reddit, Snapchat, Twitch, Twitter, and YouTube, a TikTok spokesperson revealed to Campaign Asia-Pacific.
TikTok's suggestion is to create a "hashbank" of violent and graphic content, reducing the need for each platform to conduct its own investigation and thereby expediting the removal process.
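The mechanics of such a hash bank can be illustrated with a minimal sketch. The class name and exact-match fingerprinting below are assumptions for illustration only; real systems of this kind typically use perceptual hashes so that re-encoded or cropped copies of a video still match, and TikTok's letter does not specify an implementation.

```python
import hashlib

class SharedHashBank:
    """Toy model of a cross-platform hash bank: one platform contributes
    fingerprints of known harmful content so that others can match
    uploads without repeating the investigation."""

    def __init__(self):
        self._hashes = set()

    def fingerprint(self, content: bytes) -> str:
        # Exact-match fingerprint for illustration; production systems
        # favour perceptual hashes that survive re-encoding.
        return hashlib.sha256(content).hexdigest()

    def add(self, content: bytes) -> None:
        self._hashes.add(self.fingerprint(content))

    def is_known_harmful(self, content: bytes) -> bool:
        return self.fingerprint(content) in self._hashes

# One platform flags a video; another can block re-uploads of the
# identical file without investigating it again.
bank = SharedHashBank()
bank.add(b"flagged-video-bytes")
print(bank.is_known_harmful(b"flagged-video-bytes"))  # True
print(bank.is_known_harmful(b"new-upload-bytes"))     # False
```

The key design point is that only fingerprints are shared, never the content itself, so platforms can warn each other without redistributing harmful material.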
"By working together...we could significantly reduce the chances of people encountering it and enduring the emotional harm that viewing such content can bring—no matter the app they use," TikTok said in a blog post authored by Trust & Safety representatives Cormac Keenan, Arjun Narayan Bettadapur Manjunath and Jeff Collins.
The trust and safety executives admitted that the current "whack-a-mole" approach to removing unsafe content as it moves from one app to another is limited, and that a formal, collaborative approach to early identification and notification amongst companies would be "more effective".
Platforms already work together on the removal of content such as child sexual abuse material, TikTok said. But there is a "critical need" to cooperate in protecting people from extremely violent and graphic content, such as depictions of suicide, it added.
"We are committed to working with others across the industry, as well as experts, academics, and non-profit organisations as we develop a framework and plan to bring this group to fruition," it said. "Our users deserve it."
The announcement was made shortly after the platform released its global Transparency Report for the first six months of 2020, which details the volume and nature of content it took down for violating its community guidelines or terms of service.
Between 1 January and 30 June, TikTok said it removed 104,543,719 videos across the globe, more than double the figure for the previous six months. Even that whopping total accounts for less than 1% of all videos uploaded to TikTok in the period, the company said, an indication of just how popular the app is.
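The scale claim can be sanity-checked with quick arithmetic: if the removals were under 1% of uploads, the implied upload volume for the half-year is at least ten billion videos.

```python
removed = 104_543_719

# TikTok says removals were under 1% of uploads, so uploads must
# exceed removed / 0.01.
implied_min_uploads = removed / 0.01
print(f"{implied_min_uploads:,.0f}")  # 10,454,371,900
```

That is roughly 10.5 billion uploads in six months, consistent with the article's point about the app's popularity.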
TikTok said 96.4% of the removed videos were found and taken down before a user reported them, and 90.3% before they received any views.
The category with the most removals was adult nudity and sexual activities, at 30.9% of total takedowns, followed by minor safety at 22.3% and illegal activities at 19.6%. Other categories included suicide and self-harm, violent content, hate speech and dangerous individuals.
The five countries/markets with the largest volumes of removed videos were India, the US, Pakistan, Brazil and the UK.
India also accounted for the lion's share of legal requests for content removal in the time span, with 1,187 law-enforcement requests for user information (TikTok complied with 79%) and 55 government requests to remove or restrict content.
TikTok was banned in India in late June over security concerns.