TikTok removed more than 104.54 million videos from its platform in the first half of this year for breaching its community guidelines or terms of service. The number accounts for less than 1% of all videos uploaded on the Chinese app maker’s platform, with the largest volumes removed from India and the US at 37.68 million and 9.82 million, respectively.
Some 96.4% of the videos were identified and removed before users reported them, while 90.3% were removed before they clocked any views, according to TikTok’s latest transparency report released Tuesday. The largest share, at 30.9%, was removed for containing nudity and sexual activities, while 22.3% were taken down for violating minor safety policies and 19.6% for containing illegal activities and regulated goods.
Apart from India and the US, the largest numbers of videos were removed from Pakistan, Brazil, and the UK, at 6.45 million, 5.53 million, and 2.95 million, respectively.
TikTok also complied with “valid” government and law enforcement requests across the globe for user information. Such requests must be submitted with the appropriate legal documents, such as a subpoena, court order, warrant, or emergency request. Amongst these, India submitted the most requests at 1,206, of which TikTok complied with 79%, followed by the US at 290, of which 85% were complied with. Israel made 41 requests, of which TikTok complied with 85%, while Germany submitted 37 requests, but just 16% were complied with.
In limited emergency situations, TikTok said it would disclose user information without legal process. This typically occurred when it had reason to believe the disclosure of information was required to prevent the imminent risk of death or serious physical injury to any person.
China was notably missing from the list of government requests.
In addition, TikTok said it received legal requests from governments and law enforcement agencies as well as IP (intellectual property) rights holders to restrict or remove certain content. These, the company said, would be honoured if made through “proper channels” or required by law.
Amongst these, Russia submitted requests identifying the most accounts, at 259, of which 29% were complied with. India’s requests specified 244 accounts, of which 22% were complied with.
Pointing to its efforts to “connect” its users, TikTok said it promoted content — amidst the global pandemic — through in-app information pages and hosted hashtag challenges with partners such as the World Health Organization and UNICEF India, and well-known individuals such as Bill Nye the Science Guy and The Prince’s Trust. It also developed dedicated pages within its app that enabled users to learn more about Black history, in support of the Black community.
Proposal for global group to safeguard against harmful content
In a separate statement Tuesday, TikTok said its interim head Vanessa Pappas sent a letter to the heads of nine social and content platforms, proposing a Memorandum of Understanding aimed at encouraging companies to warn one another of violent, graphic content on their own platforms.
“Social and content platforms are continually challenged by the posting and cross-posting of harmful content, and this affects all of us [including] our users, our teams, and the broader community,” the company said. “As content moves from one app to another, platforms are sometimes left with a whack-a-mole approach when unsafe content first comes to them. Technology can help auto-detect and limit much, but not all of that, and human moderators and collaborative teams are often on the frontlines of these issues.”
“Each individual effort by a platform to safeguard its users would be made more effective through a formal, collaborative approach to early identification and notification amongst companies,” TikTok said. “By working together and creating a hashbank for violent and graphic content, we could significantly reduce the chances of people encountering it and enduring the emotional harm that viewing such content can bring — no matter the app they use.”
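The “hash bank” TikTok describes is, in essence, a shared registry of content fingerprints that participating platforms can check uploads against. A minimal sketch of the idea follows; note that production systems use perceptual hashes that survive re-encoding and cropping, whereas this illustration uses SHA-256 of the raw bytes as a simplified stand-in, and all class and method names here are hypothetical:

```python
import hashlib


class HashBank:
    """Illustrative shared registry of fingerprints for flagged content.

    Real deployments would use perceptual hashing (robust to re-encoding)
    and a shared service rather than an in-memory set; this sketch only
    shows the register-then-match flow the proposal describes.
    """

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def register(self, content: bytes) -> str:
        """A platform that identifies harmful content adds its fingerprint."""
        digest = hashlib.sha256(content).hexdigest()
        self._hashes.add(digest)
        return digest

    def is_known_harmful(self, content: bytes) -> bool:
        """Other platforms check incoming uploads against the shared bank."""
        return hashlib.sha256(content).hexdigest() in self._hashes


bank = HashBank()
bank.register(b"<bytes of a video flagged on platform A>")

# An identical re-upload on platform B is caught before anyone views it.
print(bank.is_known_harmful(b"<bytes of a video flagged on platform A>"))  # True
print(bank.is_known_harmful(b"<bytes of an unrelated upload>"))            # False
```

The point of such early notification is that each platform no longer has to rediscover the same content independently, which is the “whack-a-mole” problem the letter describes.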
TikTok said it previously launched a fact-checking program across eight markets to help verify misleading content, such as misinformation about COVID-19, elections, and climate change. It also introduced in-app educational public service announcements on hashtags related to important topics in the public discourse, such as the elections, Black Lives Matter, and harmful conspiracies, including QAnon.