In TikTok’s latest Community Guidelines Enforcement Report, the platform said that it removed nearly 82 million videos between April and June 2021 for violating its guidelines and/or terms of service.
That is less than 1% of the total videos uploaded to TikTok within those three months, Eric Han, TikTok’s head of U.S. safety, said in a company writeup of the report.
“Of those videos, we identified and removed 93.0% within 24 hours of being posted and 94.1% before a user reported them. 87.5% of removed content had zero views, which is an improvement since our last report (81.8%),” Han added.
More than 40% of the 81,518,334 videos TikTok removed were taken down due to violations of its very broad “minor safety” policy, which umbrellas everything from “harmful activities by minors” (aka underage drinking, smoking, or illegal drug use, plus dangerous pranks and stunts) to “grooming behavior” committed by adult users to nudity and sex.
Another 20.9% of videos were removed for “illegal activities and regulated goods,” 14% were removed for adult nudity and sex, 7.7% were removed for violent and graphic content, and 6.8% were removed for harassment and bullying.
In the report, TikTok noted that it’s rolling out automatic moderating tech that detects and removes “some categories of violative content.” Out of the 81,518,334 videos TikTok took down between April and June, 16,957,950 of them were removed by this tech.
Also worth mentioning are the 8,542,037 videos TikTok removed for fake activity and/or being posted by spam accounts. The platform’s report specifically mentions that it is working to “evolve and adapt our safeguards by investing in automated defenses to detect, block, and remove inauthentic accounts and engagement.”
As a result, the aforementioned videos were taken down, and TikTok additionally: stopped 148,759,987 fake accounts from being created; prevented 632,416,873 follow requests from fake accounts; terminated 71,935,583 fake accounts that successfully followed other users; prevented a whopping 9,612,942,242 likes from spam accounts; and “corrected” a further 91,812,066 fake likes that’d already been left on videos.
On top of removing violative user-generated content, TikTok rejected 1,829,219 advertisements for violating policies and guidelines, it said.
TikTok’s now letting livestream creators mute individual commenters
Han’s writeup revealed that TikTok is introducing an expanded anti-harassment feature today. With this update, livestreamers (or their approved moderators) will be able to select individual commenters to mute for a few seconds, a few minutes, or the entire duration of the live stream.
Han also noted that TikTok is continuing to crack down on antisemitism.
“As participants to the Malmö International Forum on Holocaust Remembrance and Combating Antisemitism, today we’re proud to reaffirm our commitment to combat antisemitic content on TikTok by continuing to strengthen our policies and enforcement actions,” he said. “We also want to keep expanding our work with NGOs and civil society groups so they can harness the power of TikTok to share their knowledge with new audiences, and direct our community to educational resources so they can learn about the Holocaust and modern-day antisemitism.”
You can read TikTok’s full report here.
Visit Tubefilter for more great stories.