By Paula Parisi, July 2, 2024
Deepfake videos are becoming increasingly problematic, not only in spreading disinformation on social media but also in enterprise attacks. Now researchers at Drexel University College of Engineering say they have developed an advanced algorithm with a 98 percent accuracy rate in detecting deepfake videos. Named MISLnet for the school’s Multimedia and Information Security Lab, where it was invented, the algorithm uses machine learning to recognize and extract the “digital fingerprints” of video generators including Stable Video Diffusion, VideoCrafter and CogVideo. Continue reading Drexel Claims Its AI Has 98 Percent Rate Detecting Deepfakes
By Paula Parisi, June 20, 2024
YouTube is experimenting with a feature that allows viewers to add contextual “Notes” under videos, similar to what X does with its Community Notes. The Google-owned company says the intent is to provide clarity around things like “when a song is meant to be a parody,” when newly reviewed products are available for purchase, or “when older footage is mistakenly portrayed as a current event.” However, the timing, ahead of a pivotal U.S. presidential election and amid concerns about deepfakes and misinformation, is no doubt intentional. The pilot will initially be available on mobile in the United States. Continue reading YouTube to Tackle Misinformation with Crowdsourced Notes
By ETCentric Staff, March 20, 2024
YouTube has added new rules requiring those uploading realistic-looking videos that are “made with altered or synthetic media, including generative AI” to label them using a new tool in Creator Studio. The new labeling “is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube says, listing examples of content that requires disclosure: the “likeness of a realistic person,” including voice as well as image, “altering footage of real events or places,” and “generating realistic scenes” of fictional major events, “like a tornado moving toward a real town.” Continue reading YouTube Adds GenAI Labeling Requirement for Realistic Video
By ETCentric Staff, February 20, 2024
OpenAI has debuted a generative video model called Sora that could be a game changer. In OpenAI’s demonstration clips, Sora depicts both fantasy and natural scenes with photorealistic intensity that makes the images appear to be photographed. Although Sora is said to be currently limited to one-minute clips, it is only a matter of time until that expands, which suggests the technology could have a significant impact on all aspects of production — from entertainment to advertising to education. Concerned about Sora’s disinformation potential, OpenAI is proceeding cautiously, and initially making it available only to a select group to help it troubleshoot. Continue reading OpenAI’s Generative Video Tech Is Described as ‘Eye-Popping’
By Paula Parisi, November 8, 2023
CBS is launching a unit charged with identifying misinformation and avoiding deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation, CBS News and Stations. CBS plans to hire forensic journalists and will expand training and invest in technologies to assist them in their role. In addition to flagging deepfakes, CBS News Confirmed will also report on them. Continue reading CBS News Confirmed: New Fact-Checking Unit Examining AI
By Paula Parisi, August 18, 2023
After announcing a partnership with OpenAI last month, the Associated Press has issued guidelines for using generative AI in news reporting, urging caution in using artificial intelligence. The news agency has also added a new chapter in its widely used AP Stylebook pertaining to coverage of AI, a story that “goes far beyond business and technology” and is “also about politics, entertainment, education, sports, human rights, the economy, equality and inequality, international law, and many other issues,” according to AP, which says stories about AI should “show how these tools are affecting many areas of our lives.” Continue reading AP Is Latest Org to Issue Guidelines for AI in News Reporting
By Paula Parisi, June 8, 2023
The European Union wants deepfakes and other AI-generated content labeled, and is pressing signatories to its Code of Practice on Online Disinformation to adopt technology that will clearly identify output that is generated or manipulated by machines. “The new AI technologies can be a force for good” that offer “new avenues for increased efficiency and creative expression. But, as always, we have to mention the dark side,” EU values and transparency commissioner Vera Jourova said, citing “new risks and the potential for negative consequences for society.” Continue reading EU Urges Tech Companies to Label All AI-Generated Content
By Paula Parisi, May 18, 2023
A new government agency that licenses artificial intelligence above a certain capability, regular testing, and independent audits were some of the ideas to spring from a three-hour Senate judiciary subcommittee hearing to explore ways in which the government might regulate the nascent field. OpenAI co-founder and CEO Sam Altman advocated for all of the above, stressing the need for external validation by independent experts, strict cybersecurity, and a “whole of society approach” to combating disinformation. While Altman emphasized AI’s advantages, he warned “if this technology goes wrong, it can go quite wrong.” Continue reading Politicians and Tech Leaders Gather to Discuss Regulating AI
By Paula Parisi, May 10, 2023
For those worried about AI creep — the insidious influence of artificial intelligence over everything from school classwork to career aspirations — Princeton University undergrad Edward Tian has an app for that. Tian has received $3.5 million in funding for an invention called GPTZero, which analyzes text to identify the work of generative AI. The 10-person team claims the tool has a 99 percent accuracy rate for human text and about 85 percent accuracy for text written by AI. The company is now talking to media leaders about partnerships for AI detection and analysis. Continue reading GPTZero Fights Online AI Disinformation, School Plagiarism
By Paula Parisi, February 16, 2023
Social media companies appear to be reducing efforts to combat misinformation at a time when the capability to foist false narratives is reaching new levels of sophistication. As a result of staff cuts at Alphabet, Google’s YouTube subsidiary is reportedly left with one person overseeing worldwide misinformation policy. Twitter eliminated its safety and trust division, while Meta also made changes to its disinformation filtering. Meanwhile, The Guardian has unearthed an Israeli misinformation contractor operating under the name “Team Jorge” that claims to have manipulated more than 30 presidential elections worldwide. Continue reading Disinformation Rising on Social Platforms as Policing Wanes
By Paula Parisi, July 13, 2022
Meta Platforms has unveiled Sphere, an AI-powered tool designed to verify open web content. “Building on Meta AI’s research and advancements, we’ve developed the first model capable of automatically scanning hundreds of thousands of citations at once to check whether they truly support the corresponding claims,” Meta says, noting that Sphere has “a dataset of 134 million public webpages — an order of magnitude larger and significantly more intricate” than any previously used for this sort of research. Sphere is open sourced, which means third parties may be able to tailor its fact-checking algorithms for specialized uses, such as legal, medical and architectural applications. Continue reading Meta’s New Sphere AI Tool Filters Web Content for Accuracy
By Paula Parisi, June 27, 2022
As the U.S. approaches the 2022 midterm elections, social media platforms are being criticized for dropping the ball on misinformation safeguards. Meta Platforms’ Facebook has triggered alarm over plans to scrap CrowdTangle, a relevance filter Facebook has promoted as a discovery tool. Advocacy groups have described CrowdTangle as “indispensable” to finding false information online. Meta is accused of reducing CrowdTangle support and losing interest in election security overall as it shifts focus from the real world to the metaverse. CrowdTangle is cross-platform, and used to analyze content on Twitter and Reddit, among others. Continue reading Concern Expressed Over Meta Scrapping CrowdTangle Filter
By Paula Parisi, June 21, 2022
The European Union unveiled a new code of practice for disinformation, a glimpse at the regulation Big Tech companies will be dealing with under upcoming digital content laws. Meta Platforms, Twitter, TikTok and Google have agreed to the new rules, which update voluntary guidelines. The revised standards direct social media companies to avoid advertising adjacent to intentionally false or misleading content. EU policymakers have said they will make parts of the new code mandatory under the Digital Services Act. Platforms agreeing to comply with the new rules must submit implementation reports by early 2023. Continue reading European Union Creates Code of Practice on Disinformation
By Paula Parisi, June 2, 2022
As various states undergo primary elections and the nation gears up for midterm elections in the fall, the social network misinformation machines are becoming more active, too. Connecticut is actively addressing the problem with a marketing budget of nearly $2 million to counter unfounded rumors. The state is also creating a new position to monitor the disinformation mill. Salaried at $150,000 per year, the job involves combing fringe sites like Gettr, Rumble and 4chan as well as mainstream social media sites to weed out falsehoods before they go viral, alerting platforms to remove or flag such posts. Continue reading States Fight Misinformation on Social Media Before Midterms
By Rob Scott, April 26, 2022
Twitter’s board has accepted billionaire Elon Musk’s offer to purchase the social media company for $44 billion, a valuation that reflects his April 14 offer of $54.20 per share. “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Musk, the CEO of Tesla and SpaceX, who earlier revealed a desire to make Twitter a private company. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans.” Continue reading Twitter Accepts Musk’s $44 Billion Offer to Acquire Company