Social Media Platforms Struggle to Subdue Conspiracy Groups
September 22, 2020
Facebook vowed to stop QAnon, a conspiracy theory movement claiming that a satanic cult, led by Democratic politicians and entertainers, engages in child trafficking and cannibalism. Instead, QAnon’s Facebook groups have grown by hundreds of new followers, as have the Facebook pages of a violent militia movement. More disturbing, a study showed that Facebook’s own recommendation engine drove users toward these groups. YouTube is another social platform that reportedly recommends the content of fringe groups.
The New York Times reports that “the stakes are high ahead of the November 3 election,” especially as QAnon groups revere President Trump. Travis View, host of “QAnon Anonymous,” a podcast that explains the movement, stated that “in allowing QAnon groups to … continue to grow, Facebook has created a huge problem for themselves and for society in a more general sense.”
QAnon groups have been sophisticated about changing names and keywords to evade Facebook detection; in response, Facebook says it is “working with external experts on ways to disrupt activity designed to evade our enforcement.”
Facebook banned QAnon groups on August 19, removing 790 of them. But about 100 such groups, tracked by The New York Times for a month, “continued to grow at a combined pace of over 13,600 new followers a week, according to an analysis of data from CrowdTangle, a Facebook-owned analytics platform.”
Members’ comments, likes and posts in those groups grew to over 600,000 a week after the rules went into effect. Concordia University PhD candidate Marc-André Argentino, who studies QAnon, said that 51 Facebook groups identifying themselves as anti-child-trafficking groups were in fact sharing QAnon conspiracies, and that their growth skyrocketed in the weeks after Twitter and Facebook began enforcing their new rules.
Wired reports on how recommendation algorithms fuel the spread of misinformation. Currently, 70 percent of all YouTube watch time takes place via recommendations. To reach a goal of 1 billion hours of video viewed on YouTube, Google’s then-director of engineering Cristos Goodrow and his team tweaked the recommendation algorithm, adding neural net models to drive toward that target.
Although there was evidence that the algorithm “encouraged creators to use misleading tactics … to dupe people into clicking,” the focus remained on driving more hours viewed, which unleashed “all kinds of misinformation, some of it dangerous.”
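As a rough illustration of that objective, here is a minimal sketch in Python of a ranker whose only signal is predicted watch time. The Video fields, the hand-rolled predict_watch_time stand-in and the example titles are all hypothetical; nothing here reflects YouTube’s actual models, only the incentive the Wired piece describes.

```python
# Hypothetical sketch (not YouTube's system): candidate videos are scored by
# predicted watch time and the highest-scoring ones are recommended,
# regardless of whether their content is accurate.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    avg_view_duration: float   # assumed feature: seconds a typical viewer stays
    click_through_rate: float  # assumed feature: fraction of impressions clicked


def predict_watch_time(video: Video) -> float:
    """Toy stand-in for a learned model: expected seconds watched per impression."""
    return video.click_through_rate * video.avg_view_duration


def recommend(candidates: list[Video], n: int = 3) -> list[Video]:
    """Rank candidates purely by predicted watch time."""
    return sorted(candidates, key=predict_watch_time, reverse=True)[:n]


if __name__ == "__main__":
    candidates = [
        Video("calm-explainer", avg_view_duration=240, click_through_rate=0.02),
        Video("sensational-conspiracy", avg_view_duration=600, click_through_rate=0.08),
        Video("news-clip", avg_view_duration=120, click_through_rate=0.05),
    ]
    for v in recommend(candidates):
        print(v.video_id, round(predict_watch_time(v), 1))
```

In this toy example the sensational video ranks first simply because it holds attention longest; the objective contains no notion of truthfulness, which is the dynamic critics say pushed misleading content upward.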
Just before the 2016 U.S. presidential election, former Google engineer Guillaume Chaslot coded a web-scraper program and found that YouTube’s recommendations “heavily favored Trump as well as anti-Clinton material” by optimizing for whoever “was most willing to tell fantastic lies.”
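Chaslot’s method can be sketched, under assumptions, as a crawl that starts from seed videos, follows the “up next” chains, and tallies which videos the platform keeps surfacing. Everything below (the FAKE_RECOMMENDATIONS graph, fetch_recommendations, the seed names, the hop limit) is invented so the example runs offline; it is not Chaslot’s actual code, and a real audit would scrape watch pages or query an API instead of the canned data.

```python
# Hypothetical sketch of a recommendation audit: breadth-first walk over
# "up next" chains, counting how often each video is suggested.
from collections import Counter, deque

# Invented recommendation graph standing in for scraped "up next" lists.
FAKE_RECOMMENDATIONS = {
    "seed-election-query": ["candidate-rally", "wild-claim-1"],
    "candidate-rally": ["wild-claim-1", "debate-clip"],
    "wild-claim-1": ["wild-claim-2", "candidate-rally"],
    "wild-claim-2": ["wild-claim-1"],
    "debate-clip": ["candidate-rally"],
}


def fetch_recommendations(video_id: str) -> list[str]:
    """Stand-in for fetching a watch page's recommended-videos sidebar."""
    return FAKE_RECOMMENDATIONS.get(video_id, [])


def audit(seeds: list[str], max_hops: int = 3) -> Counter:
    """Follow recommendation chains from the seeds and count each suggestion."""
    counts: Counter = Counter()
    queue = deque((seed, 0) for seed in seeds)
    seen = set(seeds)
    while queue:
        video_id, depth = queue.popleft()
        if depth >= max_hops:
            continue
        for rec in fetch_recommendations(video_id):
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, depth + 1))
    return counts


if __name__ == "__main__":
    for video, hits in audit(["seed-election-query"]).most_common():
        print(f"{video}: recommended {hits} time(s)")
```

The tally of most-recommended videos is the kind of evidence such an audit produces: if certain videos dominate the counts across many seeds, the recommendation system is amplifying them.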
On October 22, 2016, YouTube hit its 1 billion hours goal. After the Las Vegas shooting, YouTube put measures in place to delete disinformation and promote reliable news sources. It struggled, however, to define and contain “borderline” material, even though some of it was toxic.
But wild-eyed conspiracy videos can still garner millions of views before YouTube takes them down, in a “by-now familiar game of social media whack-a-mole.” The conclusion now is that “the recommendation system may be less important, for good and ill, to the spread of misinformation today.” As one Google engineer said, “now that society is so polarized, I’m not sure YouTube alone can do much.”
“The time to do this was years ago,” he said.