TikTok Adjusts Feed to Curb Repetition, Offers Users Control
December 20, 2021
TikTok is tweaking its For You feed so users won’t be recommended too much of the same content. The move follows a global spate of regulatory hearings that took social media companies to task for promoting content that intentionally polarizes adults and potentially harms children. In addition to “diversifying recommendations” in order to “interrupt repetitive patterns” around topics that may provide negative reinforcement, like “loneliness or weight loss,” the popular ByteDance platform said it is introducing a new feature that will let users avoid specific topics or creators.
In a carefully worded statement that does not mention the word “algorithm,” TikTok said it “considers a range of engagement signals, such as likes, follows, and videos watched, to show people other videos they might be interested in.”
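TikTok does not detail how those signals are combined, but recommenders of this type commonly reduce them to a single interest score per candidate video. The minimal sketch below illustrates that general idea only; the signal fields, weights, and function names are hypothetical assumptions, not TikTok’s actual system.

```python
from dataclasses import dataclass

@dataclass
class EngagementSignals:
    """Per-user engagement evidence for one candidate video.
    Fields loosely mirror the signals TikTok names publicly
    (likes, follows, videos watched); the shape is assumed."""
    liked_similar: bool      # user has liked similar videos
    follows_creator: bool    # user follows the video's creator
    watch_fraction: float    # share of similar videos watched, 0..1

# Illustrative weights; a production system would learn these from data.
WEIGHTS = {"liked_similar": 1.0, "follows_creator": 2.0, "watch_fraction": 3.0}

def interest_score(s: EngagementSignals) -> float:
    """Collapse engagement signals into one score used for ranking."""
    return (WEIGHTS["liked_similar"] * s.liked_similar
            + WEIGHTS["follows_creator"] * s.follows_creator
            + WEIGHTS["watch_fraction"] * s.watch_fraction)

# A video from a followed creator that the user tends to watch through
# outranks a cold candidate.
print(interest_score(EngagementSignals(True, True, 0.8)))    # ~5.4
print(interest_score(EngagementSignals(False, False, 0.1)))  # ~0.3
```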
Couching the news as a “safety update,” the video sharing platform says it is “testing ways to avoid recommending a series of similar content — such as around extreme dieting or fitness, sadness, or breakups — to protect against viewing too much of a content category.”
But The Wall Street Journal flatly states, “TikTok said it would adjust its recommendation algorithm.” The work, TikTok says, is “informed by ongoing conversations with experts across medicine, clinical psychology, and AI ethics, members of our Content Advisory Council, and our community.”
TikTok concedes that “Getting these systems and tools right will take time and iteration” as it seeks to “ensure our system is making a diversity of recommendations.” But WSJ clarifies, “TikTok isn’t diversifying its algorithm because people are complaining of seeing one too many cute puppy videos — it’s doing so because regulators are cracking down on tech and questioning the harmful impacts of unchecked recommendation algorithms, particularly when it comes to teen mental health.”
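Neither TikTok nor the Journal describes the mechanics, but one common way to “interrupt repetitive patterns” is a re-ranking pass over an already scored feed that breaks up long runs of a single content category. The sketch below is an illustration of that pattern under assumed data shapes, not TikTok’s implementation.

```python
def diversify(ranked, max_run=2):
    """Greedy re-rank: take the best remaining item unless it would
    extend the current same-category run past `max_run`; in that case
    take the next-best item from a different category. Items are
    deferred, never dropped. `ranked` is assumed sorted best-first,
    each item a dict with a "category" key (an assumed shape)."""
    remaining = list(ranked)
    out, run_cat, run_len = [], None, 0
    while remaining:
        # Index of the first item that does not over-extend the run;
        # fall back to the top item if every candidate matches run_cat.
        idx = next((i for i, it in enumerate(remaining)
                    if it["category"] != run_cat or run_len < max_run), 0)
        item = remaining.pop(idx)
        run_len = run_len + 1 if item["category"] == run_cat else 1
        run_cat = item["category"]
        out.append(item)
    return out

# Example: a third back-to-back "dieting" video gets interrupted.
feed = diversify([
    {"id": 1, "category": "dieting"},
    {"id": 2, "category": "dieting"},
    {"id": 3, "category": "dieting"},
    {"id": 4, "category": "pets"},
])
print([v["category"] for v in feed])
# ['dieting', 'dieting', 'pets', 'dieting']
```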
Facebook and Instagram are cited in the WSJ article as having had their executives “hauled into Congress and questioned about how their apps have been directing users to dangerous content — including to topics like pro-anorexia and eating disorder content, for example.”
Lawmakers on both sides of the pond are taking the position that “the time for self-policing is over,” but that shouldn’t stop social media firms from trying to improve.
TikTok says it’s “working on a feature that would let users pick words or hashtags associated with content they don’t wish to see on their video feed.”
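TikTok hasn’t said how that feature would match content, but functionally it amounts to a user-maintained blocklist applied to each candidate video’s caption and hashtags. A minimal sketch, with a hypothetical video shape assumed for illustration:

```python
def build_mute_filter(muted_terms):
    """Return a predicate that hides any video whose hashtags or
    caption contain a term the user muted (case-insensitive).
    The video dict shape here is an assumption for illustration."""
    muted = {t.lstrip("#").lower() for t in muted_terms}
    def allowed(video):
        tags = {h.lstrip("#").lower() for h in video["hashtags"]}
        words = set(video["caption"].lower().split())
        return muted.isdisjoint(tags) and muted.isdisjoint(words)
    return allowed

# Example: the user mutes weight-loss content.
allowed = build_mute_filter(["#weightloss", "diet"])
candidates = [
    {"caption": "my new diet plan", "hashtags": ["#weightloss", "#fitness"]},
    {"caption": "puppy learns a trick", "hashtags": ["#dogs"]},
]
print([v["caption"] for v in candidates if allowed(v)])
# ['puppy learns a trick']
```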