By Paula Parisi, February 1, 2022
Spotify is taking steps to clarify its position regarding COVID-19 misinformation and stabilize its fluctuating stock price after Neil Young and Joni Mitchell yanked their music from the streaming service over objections to vaccine remarks on “The Joe Rogan Experience” podcast. “These issues are incredibly complex,” Spotify CEO Daniel Ek said on Sunday, when the company published platform rules and announced the creation of a COVID-19 Hub to provide “easy access to data-driven facts” from “scientists, physicians, academics and public health authorities” from around the world. Spotify will not be removing the offending content, Ek said. Continue reading Spotify Acts on Boycott, Posts COVID Facts After Stock Falls
By Paula Parisi, December 20, 2021
TikTok is tweaking its For You feed so users won’t be recommended too much of the same content. The move follows a global spate of regulatory hearings that took social media companies to task for promoting content that intentionally polarizes adults and potentially harms children. In addition to “diversifying recommendations” in order to “interrupt repetitive patterns” around topics that may provide negative reinforcement, like “loneliness or weight loss,” the popular ByteDance platform said it is also introducing a new feature that will allow users to avoid specific topics or creators. Continue reading TikTok Adjusts Feed to Curb Repetition, Offers Users Control
By Paula Parisi, December 17, 2021
The metaverse is in its early days, but many are already concerned as they anticipate the content moderation problems that have bedeviled traditional social media increasing exponentially in virtual worlds. The confluence of realistic immersive environments, the anonymity of avatars and the potential for deepfakes is enough to give anyone pause. Throw in machine learning that will make today’s ad targeting seem primitive and it’s an even more volatile mix. Experts agree that the very qualities that make the metaverse appealing — false facades and hyperreality — make it potentially more dangerous than the digital platforms of today. Continue reading Policing the Metaverse Looms as a Challenge for Tech Firms
By Paula Parisi, December 15, 2021
Meta Platforms shareholders are pressing for changes that address allegations of harm caused by its social platforms, which include Facebook and Instagram, as well as overall governance reforms. An investor group that includes the New York State Common Retirement Fund and the Illinois State Treasurer filed eight resolutions to be considered at the company’s annual shareholder meeting, which is expected to be held in May. The proposals include board oversight of efforts to reduce harmful content, analysis of the company’s metaverse investment and a review of the audit and risk committee, according to public reports. Continue reading Investors Plan Changes for Vote at Meta Shareholder Meeting
By Paula Parisi, December 14, 2021
The U.S. Senate has introduced the bipartisan Platform Accountability and Transparency Act (PATA), which if passed into law would allow independent researchers to sue Big Tech for failing to provide requested data. The move follows last week’s Instagram hearing, where leaked internal research suggested the platform harms the mental health of teens. On December 6, an international coalition of more than 300 scientists sent an open letter to Mark Zuckerberg — CEO of Meta Platforms, the company that owns Instagram and Facebook — requesting that the social behemoth voluntarily share research. Continue reading Senate Wants Social Firms to Pay for Holding Back Research
By Paula Parisi, December 10, 2021
Meta Platforms is restructuring its internal research department, drawing on employees from individual divisions like Facebook, WhatsApp and Instagram to staff a centralized unit that will provide services to the entire company. The research will span everything from societal topics of politics, equity, health and climate to credibility topics like misinformation and account safety. The new division will be managed by Meta head of research Pratiti Raychoudhury. Additionally, Meta is deploying the new Few-Shot Learner artificial intelligence system to help moderate content, identify trends, monitor data and implement rules. Continue reading Meta Reorganizes Research Team and Deploys ‘Few-Shot’ AI
By Paula Parisi, December 1, 2021
Twitter is tweaking its Birdwatch crowdsourced fact-check feature, adding aliases so contributors can conceal their identities when notating someone’s tweet. The company says its goal in having people append anonymously is “keeping focus on the content of notes rather than who’s writing them,” reducing bias and tempering polarization. To ensure aliases don’t overshadow accountability, all Birdwatch accounts now have profile pages that aggregate past contributions, and the ratings those contributions received from other Birdwatchers, accruing credibility to contributors whose notes and ratings are consistently found helpful by others. Continue reading Twitter Formalizes Its Birdwatch Program with Aliases, Profiles
By Paula Parisi, October 20, 2021
Although Facebook leadership has suggested that artificial intelligence will solve the company’s challenge of keeping hate speech and violent content at bay, AI may not be a thoroughly effective near-term solution. That evaluation comes as part of a new examination of internal Facebook documents that allegedly indicate the social media company removes only a small percentage — quantified as low single digits — of posts deemed to violate its hate-speech rules. When the algorithms are uncertain whether content violates the rules, the posts are merely shown to users less frequently rather than flagged for further scrutiny. Continue reading Facebook Said to Inflate AI Takedown Rates for Hate Speech
By Paula Parisi, October 15, 2021
U.S. lawmakers agitated by the recent testimony of Facebook whistleblower Frances Haugen and related media reports are homing in on the social network’s News Feed algorithm as ripe for regulation, although First Amendment questions loom. The past year has seen Congress introduce or reintroduce no fewer than five bills that expressly focus on software coding that decides who sees what content on social media platforms. In addition to the U.S., laws advancing the idea of regulating such algorithms are gaining momentum in the European Union, Britain and China. Continue reading Lawmakers See Solution in Regulating Facebook’s Algorithm
By Paula Parisi, October 12, 2021
Facebook vice president of global affairs Nick Clegg, in a round of Sunday morning news appearances, defended his company’s position amid senatorial attacks, discussing new safety tools and emphasizing the company’s repeated requests for congressional guidelines. The new measures include tools to steer users away from harmful content, curb political content and put programming power in the hands of parents. Instagram in particular will invite adult supervision over accounts belonging to minors. Clegg stressed Instagram Kids for those 13 and under as part of the solution. Continue reading Facebook Vies with Whistleblower to Spin Latest News Cycle
By Paula Parisi, October 5, 2021
Whistleblower Frances Haugen said on “60 Minutes” Sunday night that Facebook was cognizant of problems with its apps, including Instagram, that allowed misinformation to spread and caused societal harm, especially among young girls. On the CBS news show, Haugen revealed that she was the source of documents leaked to The Wall Street Journal that led to congressional inquiry. She also filed eight complaints with the Securities and Exchange Commission alleging Facebook hid research from investors and the public. The former product manager worked for nearly two years on the civic integrity team before exiting the social network in May. Continue reading Whistleblower Contends Facebook Values Profits Over Safety
By Paula Parisi, October 4, 2021
Google Lens visual search will be updated to incorporate the company’s new AI technology, the Multitask Unified Model (MUM), which understands context and draws from various formats, including text, images and videos. With MUM, users will be able to incorporate text in order to specify queries on visual search. For instance, you could use your phone to snap a photo of a favorite shirt using the Google Lens feature — or find a shirt you like through Google Search — then tap the Lens icon on the open image and type in “socks with this pattern” to search with specificity. Continue reading Google Search Will Use MUM AI to Combine Text and Images
By Paula Parisi, October 1, 2021
A third of U.S. adults continue to get their news regularly from Facebook, though the number has declined from 36 percent in 2020 to 31 percent in 2021. This reduction marks an overall drop in the number of Americans who say they get their news from any social media source — a figure that dropped by 5 percentage points year-over-year (from 53 percent in 2020 to just under 48 percent this year). TikTok was the only major platform to gain during this period. The general decline comes as social media companies face criticism for not doing enough to stem the flow of misinformation on their platforms, Pew Research notes. Continue reading Top Social Platforms Losing Some Traction as News Sources
By Paula Parisi, September 16, 2021
Facebook apologized to researchers this week for data released years ago but only recently revealed to inaccurately represent how U.S. users interact with posts and links. Reaching out via email and on a conference call with 47 people, the social media giant attempted to mitigate the harm to academics and analysts, who had already spent about two years studying what they now say — and Facebook seems to agree — is flawed data about how misinformation spreads on its platform. The problem: Facebook had underreported the number of U.S. users, and their data, by about half. Continue reading Facebook Apologizes for Providing Researchers Flawed Data
By Paula Parisi, September 15, 2021
Twitter is testing a new feature that allows bots to self-identify with a label on their account profiles. Although the feature will allow users to differentiate automated accounts that perform legitimate services — such as retweeting news, providing customer service, PSAs or community alerts — it will not flag the problematic “bad bots” that spread misinformation and spam. Last year, Twitter asked developers to specify whether an account was a bot, who was powering it and its intended use. The new labels designating “good bots” will be issued to more than 500 accounts for testing and feedback before they are made available to all developers. Continue reading Twitter Asks Developers to ID ‘Good Bots’ Using New Badge