Facebook, Twitter Turn to Algorithms to Weed Out Bad Actors
August 24, 2018
Facebook revealed a ratings system it has been developing over the past year, assigning users a “reputation score” that estimates their trustworthiness on a scale from zero to one. The idea behind the system is to weed out bad actors, according to Facebook product manager Tessa Lyons, who is in charge of the battle against fake news. Until now, Facebook, like other tech companies, has depended on users to report problematic content, but it found that some users were filing false reports, flagging items as untrue when they were not.
The Washington Post quotes Lyons as saying that it is “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.” The problem is serious enough that the Post deemed the reporting system “a battleground,” with false reporting becoming a tactic among both left-wing and right-wing groups.
Lyons added that “a user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility … nor is there a single unified reputation score that users are assigned,” but that the score is only “one measurement among thousands of new behavioral clues.” Among the clues Facebook considers are “which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.”
In an attempt to fend off interference from Russia and other countries, fake news and bad actors, Facebook and others are turning to untested algorithms “to understand who poses a threat.” Twitter is also going the algorithmic route, factoring in “the behavior of other accounts in a person’s network as a risk factor in judging whether a person’s tweets should be spread.” But the algorithms are “highly opaque,” drawing criticism over the companies’ lack of transparency.
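Neither company has published details, but the Twitter signal described above can be pictured with a small sketch. The Python below is purely illustrative: the blending rule, the weights, and every name in it are assumptions made for the example, not Twitter's actual system.

```python
# Purely illustrative sketch of a network-based risk factor; NOT Twitter's
# actual system. The article says only that the behavior of other accounts in
# a person's network is one risk factor in deciding whether their tweets
# should be spread. The blending rule and all names below are assumptions.

from statistics import mean


def network_risk(own_risk: float, neighbor_risks: list[float],
                 network_weight: float = 0.3) -> float:
    """Blend a user's own risk with the average risk of accounts they interact with.

    Risks are assumed to lie in [0, 1], where higher means more likely to be
    a bad actor (spam, coordinated manipulation, and so on).
    """
    if not neighbor_risks:
        return own_risk
    return (1 - network_weight) * own_risk + network_weight * mean(neighbor_risks)


def spread_multiplier(risk: float) -> float:
    """Scale down distribution as risk grows; 1.0 means normal reach."""
    return max(0.0, 1.0 - risk)


if __name__ == "__main__":
    # A user with a clean history but a risky network sees reduced reach.
    risk = network_risk(own_risk=0.1, neighbor_risks=[0.8, 0.9, 0.7])
    print(round(risk, 2), round(spread_multiplier(risk), 2))  # 0.31 0.69
```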
“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, director of First Draft, a research lab at Harvard University. “But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.”
Tech companies have “a long history of using algorithms to make all kinds of predictions about people,” but “as misinformation proliferates, companies are making increasingly sophisticated editorial choices about who is trustworthy.” “One of the signals we use is how people interact with articles,” said Lyons. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”
She would not, however, list “other signals the company used to determine trustworthiness, citing concerns about tipping off bad actors.”
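Facebook has not disclosed the formula, but the weighting Lyons describes can be sketched in a few lines. The Python below is a hypothetical illustration that weights each user's flags by how often their past flags were confirmed by fact-checkers; the class names and the smoothed-ratio formula are invented for the example and do not come from Facebook.

```python
# Hypothetical sketch of the feedback weighting Lyons describes; NOT Facebook's
# actual algorithm. Users whose past "this is false" flags were confirmed by
# fact-checkers count for more; indiscriminate flaggers count for less.
# All names and the smoothing formula are assumptions made for illustration.

from dataclasses import dataclass


@dataclass
class FlaggerHistory:
    """Outcome record of one user's past false-news flags."""
    flags_confirmed_false: int = 0   # fact-checker agreed the item was false
    flags_rated_true: int = 0        # fact-checker rated the flagged item true

    def record(self, confirmed_false: bool) -> None:
        if confirmed_false:
            self.flags_confirmed_false += 1
        else:
            self.flags_rated_true += 1

    def weight(self) -> float:
        """Reputation-style weight in (0, 1): a Laplace-smoothed hit rate."""
        total = self.flags_confirmed_false + self.flags_rated_true
        return (self.flags_confirmed_false + 1) / (total + 2)


def weighted_flag_score(flaggers: list[FlaggerHistory]) -> float:
    """Sum the weights of everyone who flagged one article.

    A higher score means the flags come from users whose reports were usually
    confirmed, making the article a stronger candidate for fact-checking.
    """
    return sum(f.weight() for f in flaggers)


if __name__ == "__main__":
    reliable = FlaggerHistory(flags_confirmed_false=9, flags_rated_true=1)
    indiscriminate = FlaggerHistory(flags_confirmed_false=2, flags_rated_true=18)
    print(round(reliable.weight(), 2))        # ~0.83
    print(round(indiscriminate.weight(), 2))  # ~0.14
    print(round(weighted_flag_score([reliable, indiscriminate]), 2))  # ~0.97
```

Under this assumed scheme, a flag from a user with a strong track record moves an article toward fact-checking far more than many flags from users whose reports rarely hold up, which matches the behavior Lyons describes.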