Lawmakers See Solution in Regulating Facebook’s Algorithm
October 15, 2021
U.S. lawmakers agitated by the recent testimony of Facebook whistleblower Frances Haugen and related media reports are homing in on the social network’s News Feed algorithm as ripe for regulation, although First Amendment questions loom. The past year has seen Congress introduce or reintroduce no fewer than five bills that expressly focus on the software that decides who sees what content on social media platforms. Beyond the U.S., efforts to regulate such algorithms are gaining momentum in the European Union, Britain and China.
The end goal is to hold social media companies, not only Facebook and its platforms such as Instagram, accountable for the effects of their code on society.
“It’s heartening to see Congress finally beginning to focus on the heart of the problem,” Representative Tom Malinowski (D-New Jersey), co-author of an algorithm regulation bill, told The Washington Post. “The heart of the problem is not that there’s bad stuff posted on the Internet. It’s that social networks are designed to make the bad stuff spread.”
Representative Frank Pallone (D-New Jersey) and others are introducing the Justice Against Malicious Algorithms Act (JAMA), which would narrow the Section 230 liability shield of the Communications Decency Act and make Internet platforms “liable when they ‘knowingly or recklessly’ use algorithms to recommend content that leads to physical or ‘severe emotional’ harm,” Engadget reports.
The algorithmic approach marks a strategic shift from earlier congressional hearings, which focused more on content “moderation,” the social networks’ judgment calls on whether to ban or allow certain types of posts.
“Lawmakers on the left wanted tech giants to crack down more aggressively on hate speech, conspiracy theories and falsehoods, while those on the right wanted to tie the tech giants’ hands to prevent what they claim is a form of censorship,” WaPo writes. Both approaches were checked by the First Amendment, which restricts the government’s ability to regulate companies’ speech.
Some feel that shifting the spotlight to something more abstract, like the algorithms also used by TikTok, YouTube and Twitter, would be a more neutral line of attack, one that could help unite the parties in their cause.
“One of the consequences of how Facebook is picking out that content today is that it’s optimizing for content that gets engagement, or reaction,” Haugen told CBS’s “60 Minutes.” “But its own research is showing that content that is hateful, that is divisive, that is polarizing — it’s easier to inspire people to anger than it is to other emotions.”