The recent explosion of live audio chat platforms, from Clubhouse to game-centric Discord and Twitter’s Spaces, brings a new problem: how to moderate real-time speech. As Discord chief legal officer Clint Smith put it, “audio presents a fundamentally different set of challenges for moderation than text-based communication.” “It’s more ephemeral and it’s harder to research and action,” he added. One problem is that the majority of tools built to detect problematic content are aimed at text comments, not audio.
Reuters reports that this problem is compounded by “a lack of extra clues, like the visual signals of video or accompanying text comments” as well as the fact that “not all companies make or keep voice recordings to investigate reports of rule violations.” Clubhouse, for example, “deletes its recording if a live session ends without an immediate user report, and Discord does not record at all.”
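That report-or-delete behavior is easy to picture in code. Below is a minimal sketch of the described policy, not Clubhouse’s actual implementation: audio is buffered while a session is live, persisted for review only if a report arrives before the session ends, and discarded otherwise. The class names and the persistence path are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LiveSession:
    """Hypothetical report-triggered retention, mirroring the behavior
    described above: keep audio for investigation only if a report
    arrives before the session ends; otherwise delete everything."""
    session_id: str
    _chunks: list[bytes] = field(default_factory=list)
    _reported: bool = False

    def ingest(self, chunk: bytes) -> None:
        # Buffer audio in memory (or short-lived storage) while live.
        self._chunks.append(chunk)

    def report(self, reporter_id: str) -> None:
        # A user report marks the buffered audio for investigation.
        self._reported = True

    def end(self) -> None:
        if self._reported:
            self._persist_for_review()
        # Either way, the live buffer is dropped at session end.
        self._chunks.clear()

    def _persist_for_review(self) -> None:
        # Stand-in for writing to durable, access-controlled storage.
        with open(f"/tmp/{self.session_id}.pcm", "wb") as f:
            f.write(b"".join(self._chunks))
```

Under this policy a report has to land before `end()` runs, which matches the narrow “immediate user report” window described above; a platform like Discord that records nothing at all has no buffer to persist in the first place.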
The latter, pushed to act against toxic content, now “gives users controls to mute or block people and relies on them to flag problematic audio.” Clubhouse offers similar user controls. But these controls can themselves be abused, for example by blocking users in order to harass or exclude them.
“It’s really important for these services to be learning from the rollout of video-streaming to understand they will face all of the same kinds of questions,” said Emma Llansó, director of the Center for Democracy & Technology’s Free Expression Project and a member of Twitch’s Safety Advisory Council. At Clubhouse, where Rooms are capped at 8,000 people, co-founder Paul Davison said the company “has been staffing up its trust and safety team to handle issues in multiple languages and quickly investigate incidents.”
With 10 million weekly active users, Clubhouse has a “full-time staff that only recently reached double digits.” A spokeswoman said it also uses in-house reviewers and third-party services to moderate content but would not comment in more detail. The company has previously said it is “investing in tools to detect and prevent abuse as well as features for users, who can set rules for their rooms, to moderate conversations.”
According to one source, the live audio platform Fireside — a “socially responsible platform” that’s the brainchild of Mark Cuban — will be curated “to avoid the issues other networks have faced.” Twitter is testing Spaces “with 1,000 users that began with women and people from marginalized groups … Hosts are given controls to moderate and users can report problems.”
Andrew McDiarmid of Twitter’s product trust team explained that Twitter is also considering “proactive detection” that would flag objectionable tweets without users needing to report them. The company is “still deciding how to translate existing rules, like labeling misinformation … to the audio arena.”
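Proactive detection on live audio typically leans on the text-oriented tooling the article says already dominates: transcribe the speech, then score the transcript. The sketch below illustrates that common pattern under that assumption; it is not Twitter’s disclosed approach, and the `transcribe` callable, the `BLOCKLIST` terms, and the scoring rule are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator, Tuple

@dataclass
class Flag:
    timestamp: float   # seconds into the live session
    transcript: str
    score: float

# Toy keyword list standing in for a real text-moderation model.
BLOCKLIST = {"threat", "harass"}

def classify(text: str) -> float:
    """Return a toy abuse score in [0, 1] for a transcript snippet."""
    hits = BLOCKLIST & set(text.lower().split())
    return 1.0 if hits else 0.0

def proactive_scan(
    segments: Iterable[Tuple[float, bytes]],
    transcribe: Callable[[bytes], str],   # caller supplies an ASR engine
    threshold: float = 0.9,
) -> Iterator[Flag]:
    """Flag audio segments for human review without waiting for a
    user report, by routing transcripts through a text classifier."""
    for timestamp, chunk in segments:
        text = transcribe(chunk)
        if text and (score := classify(text)) >= threshold:
            yield Flag(timestamp, text, score)
```

Wired to a real speech-to-text engine and classifier, `proactive_scan` would surface `Flag` records to human reviewers in near real time, which only sidesteps the no-recordings problem while a session is still live.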