By Paula Parisi, August 14, 2024
YouTube, which began testing crowdsourced fact-checking in June, is now expanding the experiment by inviting users to try the feature. Likened to the Community Notes accountability method introduced by Twitter and continued under X, YouTube’s as-yet-unnamed feature lets users provide context and corrections to posts that might be misleading or false. “You can sign up to submit notes on videos you find inaccurate or unclear,” YouTube explains, adding that “after submission, your note is reviewed and rated by others.” Notes widely rated as helpful “may be published and appear below the video.” Continue reading YouTube Tests Expanded Community Fact-Checking for Video
By ETCentric Staff, March 11, 2024
Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that would allow them to conduct “good-faith” evaluations of various AI products and services without fear of reprisal. As of last week, more than 300 researchers, academics, creatives, journalists and legal professionals had signed an open letter calling on companies including Meta Platforms, OpenAI and Google to grant access for safety testing and red teaming of systems they say are shrouded in opaque rules and secrecy, even though millions of consumers are already using them. Continue reading Researchers Call for Safe Harbor for the Evaluation of AI Tools
By Paula Parisi, December 1, 2021
Twitter is tweaking its Birdwatch crowdsourced fact-checking feature, adding aliases so contributors can conceal their identities when annotating someone’s tweet. The company says its goal in letting people contribute anonymously is “keeping focus on the content of notes rather than who’s writing them,” reducing bias and tempering polarization. To ensure aliases don’t undermine accountability, all Birdwatch accounts now have profile pages that aggregate past contributions and the ratings those contributions received from other Birdwatchers, building credibility for contributors whose notes and ratings are consistently found helpful by others. Continue reading Twitter Formalizes Its Birdwatch Program with Aliases, Profiles
By Paula Parisi, September 27, 2021
Facebook’s semi-independent Oversight Board is scrutinizing the company’s XCheck (or cross-check) system, which holds famous or powerful users to more lenient behavior rules than other users. The inquiry, which calls out “apparent inconsistencies” in the social media firm’s decision-making, follows an investigative report by The Wall Street Journal. XCheck was initially designed as a quality-control system for sanctions against high-profile users, including celebrities, politicians and journalists. It eventually grew to encompass millions of accounts, some of which were “whitelisted,” rendering them immune to disciplinary action. Continue reading XCheck System Is Scrutinized by Facebook Oversight Board