By Paula Parisi, November 9, 2023
A second Meta Platforms whistleblower has come forward, testifying this week before a Senate subcommittee that the company’s social networks were potentially harming teens and that his warnings to that effect were ignored by top leadership. Arturo Bejar, a Facebook engineering director from 2009 to 2015 and an Instagram consultant from 2019 to 2021, told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law that Meta officials failed to take steps to protect underage users on the platforms. Bejar follows former Facebook whistleblower Frances Haugen, who provided explosive Senate testimony in 2021. Continue reading Second Meta Whistleblower Testifies to Potential Child Harm
By Paula Parisi, September 1, 2022
A first-of-its-kind U.S. proposal to protect children online cleared the California Legislature Tuesday and was sent to the desk of Governor Gavin Newsom. The California Age-Appropriate Design Code Act will require social media platforms to implement guardrails for users under 18. The new rules will curb risks — such as allowing strangers to message children — and require changes to recommendation algorithms and ad targeting where minors are concerned. The bill was drafted following Facebook whistleblower Frances Haugen’s 2021 congressional testimony about the negative effects of social media on children’s mental health. Continue reading California’s Online Child Safety Bill Could Set New Standards
By Paula Parisi, March 4, 2022
A group of state attorneys general has announced an investigation into TikTok and the potential harm it may cause younger users. The fact-finding mission resembles one launched by top state legal officers last year into Meta Platforms. The bipartisan group is exploring whether TikTok is violating state consumer protection laws with engagement tactics that may cause minors to become “hooked” on the app. Kids in the age of social media “feel like they need to measure up to the filtered versions of reality that they see on their screens,” said California Attorney General Rob Bonta. Continue reading State AGs Launch Investigation into Effects of TikTok on Kids
By Paula Parisi, January 26, 2022
Facebook finds itself the subject of yet more unflattering allegations, this time that the company gouged people in developing countries by charging them for services it had promised would be free in deals struck with local cellular carriers. Internal documents are said to have surfaced indicating that after promising to let low-income citizens in places like Pakistan, Indonesia and the Philippines use a pared-down version of Facebook along with some Internet browsing without incurring data charges, the Meta Platforms company wound up charging, in total, millions of dollars a month. Continue reading Facebook Caught in Fee Controversy for Free Mobile Service
By Paula Parisi, December 17, 2021
The metaverse is in its early days, but many are already concerned as they anticipate the content moderation problems that have bedeviled traditional social media increasing exponentially in virtual worlds. The confluence of realistic immersive environments, the anonymity of avatars and the potential for deepfakes is enough to give anyone pause. Throw in machine learning that will make today’s ad targeting seem primitive and it’s an even more volatile mix. Experts agree that the very qualities that make the metaverse appealing — false facades and hyperreality — make it potentially more dangerous than the digital platforms of today. Continue reading Policing the Metaverse Looms as a Challenge for Tech Firms
By Paula Parisi, December 15, 2021
British lawmakers are seeking “major changes” to the forthcoming Online Safety Bill, which cracks down on Big Tech but, in their view, does not go far enough. Expansions under discussion include legal consequences for tech firms and new rules for online fraud, advertising scams and deepfake (AI-generated) adult content. Comparing the Internet to the “Wild West,” Damian Collins, chairman of the joint committee that issued the report, went so far as to suggest corporate directors be subject to criminal liability if their companies withhold information or fail to comply with the act. Continue reading UK Lawmakers Are Taking Steps to Toughen Online Safety Bill
By Paula Parisi, December 14, 2021
The U.S. Senate has introduced the bipartisan Platform Accountability and Transparency Act (PATA), which if passed into law would allow independent researchers to sue Big Tech for failing to provide requested data. The move follows last week’s Instagram hearing, where leaked internal research suggested the platform has negative effects on the mental health of teens. On December 6, an international coalition of more than 300 scientists sent an open letter to Mark Zuckerberg — CEO of Meta Platforms, the company that owns Instagram and Facebook — requesting the social behemoth voluntarily share research. Continue reading Senate Wants Social Firms to Pay for Holding Back Research
By Paula Parisi, December 10, 2021
Meta Platforms is restructuring its internal research department, drawing on employees from individual divisions like Facebook, WhatsApp and Instagram to staff a centralized unit that will provide services to the entire company. The research will span everything from societal topics of politics, equity, health and climate to credibility topics like misinformation and account safety. The new division will be managed by Meta head of research Pratiti Raychoudhury. Additionally, Meta is deploying the new Few-Shot Learner artificial intelligence system to help moderate content, identify trends, monitor data and implement rules. Continue reading Meta Reorganizes Research Team and Deploys ‘Few-Shot’ AI
By Paula Parisi, December 3, 2021
The U.S. House of Representatives is signaling intent to proceed with legislation to scale back the Section 230 liability shield for Big Tech. The move follows a frontal assault by Australia’s Parliament on that country’s version of the law, and global saber-rattling against protections that prevent social platforms from being held legally accountable for user-posted content that harms others. At a Wednesday hearing on various Section 230 bills, House Energy and Commerce Committee chairman Frank Pallone (D-New Jersey) said that while the protections were vital to Internet growth, they have also enabled anti-social behavior. Continue reading Government Questions Liability Shield Offered by Section 230
By Paula Parisi, November 23, 2021
Twitter has earned praise for transparency after it published “unflattering” research findings. The company analyzed “millions of Tweets” in an attempt to measure how its recommendation algorithms handle political content, and subsequently reported that those algorithms amplify more content from right-wing politicians and media outlets than from left-wing sources. The findings, released in late October, were well-received at a time when social platforms are quick to tout positive findings but just as quick to discredit critical data, as was the case with Facebook and whistleblower Frances Haugen. Continue reading Twitter Earns Praise for Transparency in Its Research Findings
By Bella Chen, November 18, 2021
After Facebook promised in July that it would limit its algorithms that track online behavior of users under 18 as a step toward curtailing a method used by advertisers to target children and teenagers, the social giant is again being accused of collecting such data. Facebook was found harvesting data of young users through its ad delivery system, according to a report published by advocacy groups Fairplay, Global Action Plan and Reset Australia. The research suggests that Facebook is maintaining the ability to track younger users so that it can maximize engagement and increase advertising revenue. Continue reading Facebook Is Criticized for Continuing to Collect Data of Teens
By Paula Parisi, November 10, 2021
Facebook whistleblower Frances Haugen’s meetings with European Union officials have accelerated lawmakers’ plans to rein in Big Tech. Officials are calling for quick action to strengthen and enact measures of a 2020 bill that would impose strict obligations on social media companies. As currently drafted, the bill would require technology platforms to monitor and mitigate risks from illegal content or suffer stiff fines. Likening Europe to “a digital Wild West,” EU digital commissioner Thierry Breton said, “Speed is everything” and EU members must pass the new tech legislation in the first half of 2022. Continue reading FB Whistleblower Testimony Accelerates EU Regulatory Push
By Paula Parisi, October 28, 2021
Executives from Snap, TikTok and YouTube tried to distance themselves from Facebook and one another in a Tuesday Senate hearing about online safety for young users. In a combative exchange lasting nearly four hours, the participating social platforms tried to make the case that they are already taking steps to protect minors, while lawmakers countered that their staffs were able to find posts featuring inappropriate content on the platforms, sometimes while logged in as teens. “Being different from Facebook is not a defense,” said Senator Richard Blumenthal (D-Connecticut). Continue reading Social Platforms Face Government Questions on Teen Safety
By Paula Parisi, October 21, 2021
Riding the momentum of Washington hearings and media criticism, legislators are pushing forward various bills to regulate Big Tech. Senators Amy Klobuchar (D-Minnesota) and Chuck Grassley (R-Iowa) led colleagues in pushing legislation that would prevent tech platforms from favoring their own products and services, lending weight to efforts already progressing in the House. House Energy and Commerce Committee leaders have put forward their own proposal to prevent social media companies from boosting circulation of harmful content. At the forefront are initiatives to limit the collection of personal information from minors, as well as restrictions on marketing to children. Continue reading Bipartisan Congressional Effort Afoot for Tougher Tech Laws
By Paula Parisi, October 20, 2021
Although Facebook leadership has suggested that artificial intelligence will solve the company’s challenge of keeping hate speech and violent content at bay, AI may not be a thoroughly effective near-term solution. That evaluation comes as part of a new examination of internal Facebook documents that allegedly indicate the social media company removes only a small percentage — quantified as low single digits — of posts deemed to violate its hate-speech rules. When the algorithms are uncertain whether content violates the rules, that content is merely shown to users less frequently rather than flagged for further scrutiny. Continue reading Facebook Said to Inflate AI Takedown Rates for Hate Speech