Twitch Report Details the Challenges of Curbing Toxic Speech

Video live-streaming service Twitch, which saw a 40 percent increase in channels in 2020, has released its first transparency report, detailing how it works to keep its 26 million daily users safe. The Amazon-owned company has long struggled to curb harassment and hate speech, a problem that intensified during its meteoric growth spurt, especially since live content is harder to moderate. The new report acknowledges that challenge, noting that Twitch has relied on volunteer moderators and user reports as well as its AutoMod tool, introduced in 2016.

Wired reports that, “like other large platforms, Twitch also relies on machine learning to flag potentially problematic content for human review.” But a 2020 GamesIndustry.biz report cited several Twitch employees as saying that “executives at the company didn’t prioritize safety tools and were dismissive of hate speech concerns,” and a 2019 Anti-Defamation League study found that “nearly half of Twitch users surveyed reported facing harassment.”

In 2020, Twitch’s new head of trust and safety, Angela Hession, said that safety was the “number one investment” at the company. That same year, Twitch “released updated versions of its ‘Nudity and Attire,’ ‘Terrorism and Extreme Violence’ and ‘Harassment and Hateful Conduct’ guidelines” and created an eight-person Safety Advisory Council.

Under Hession, “Twitch finally banned depictions of the Confederate flag and blackface.” She added that the company had quadrupled its number of content moderators. The report stated that “AutoMod or active moderators touched over 95 percent of Twitch content throughout the second half of 2020 … [and] people reporting that they received harassment via Twitch direct message decreased by 70 percent in that same period.”

Twitch reported that enforcement actions increased from 788,000 in early 2020 to 1.1 million in late 2020, adding that this “reflects its increase in users.” During that time, user reports rose from 5.9 million to 7.4 million, and channel bans from 2.3 million to 3.9 million.

Twitch stated it sent “2,158 tips to the National Center for Missing & Exploited Children in 2020 (a 65 percent increase between early and late 2020) but had escalated instances of violence to law enforcement just 38 times.” The company also “processed 37 percent more subpoena requests, for a total of 226 throughout the year.”

Twitch plans to build up its “off-services” policies, “which will guide how Twitch investigates and acts on situations where users are implicated in serious crimes, like sexual assault.” But, notes Wired, “it comes a little late: Last year, dozens of women came forward with allegations of inappropriate or harmful behavior against Twitch streamers.”

Still, Hession would not reveal the number of moderators Twitch employs “or whether they are full-time or contractors, like the majority of Facebook’s … [although the report] does note that mods are able to view potentially traumatizing content muted and in black and white, and receive wellbeing services.” “If there’s one infraction where we don’t prevent a child from streaming or we feel there’s toxicity, we will always need to do better,” said Hession.

Wired adds that “Twitch will release an updated accountability report twice a year, offering an opportunity to hold itself to the standards that it has established.”
