Drexel Claims Its AI Detects Deepfakes With 98 Percent Accuracy

Deepfake videos are becoming increasingly problematic, not only in spreading disinformation on social media but also in enterprise attacks. Now researchers at Drexel University College of Engineering say they have developed an advanced algorithm with a 98 percent accuracy rate in detecting deepfake videos. Called the MISLnet algorithm, for the school’s Multimedia and Information Security Lab where it was invented, the platform uses machine learning to recognize and extract the “digital fingerprints” of video generators including Stable Video Diffusion, VideoCrafter and CogVideo.

The MISLnet algorithm “can learn to detect new AI generators after studying just a few examples of their videos.”

“According to Live Science, the MISLnet algorithm represents a significant new milestone in detecting fake images and video content because many of the ‘digital breadcrumbs’ that existing systems look for in regular digitally edited media are not present in entirely AI-generated media,” writes PetaPixel.

The new MISLnet algorithm “has been trained using a method called a constrained neural network, which can differentiate between normal and unusual values at the sub-pixel level of images or video clips, rather than searching for the common indicators of image manipulation,” PetaPixel explains.
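The key idea behind a constrained convolutional layer is that the network's first-layer filters are forced to behave as prediction-error filters: each filter predicts a pixel from its neighbors and outputs the residual, suppressing image content so that low-level statistical traces stand out. The sketch below is an illustrative NumPy construction of that constraint, not code from MISLnet; the 5x5 filter size and the exact projection step are assumptions for demonstration.

```python
import numpy as np

def constrain_filter(w):
    """Project a filter onto the constrained (prediction-error) set:
    the centre tap is fixed to -1 and the remaining taps are rescaled
    to sum to 1, so the filter outputs a prediction residual rather
    than image content. This projection is typically re-applied after
    every training step."""
    w = w.copy()
    c = w.shape[0] // 2
    w[c, c] = 0.0
    s = w.sum()
    if s != 0:
        w *= 1.0 / s          # off-centre taps now sum to 1
    w[c, c] = -1.0            # centre tap predicts-and-subtracts the pixel
    return w

def conv2d_valid(img, k):
    """Plain 'valid' 2-D cross-correlation (no padding)."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
k = constrain_filter(rng.standard_normal((5, 5)))

# Because all taps sum to zero, a perfectly flat patch yields a
# zero residual -- only deviations from the predicted value survive.
flat = np.full((9, 9), 0.5)
print(np.allclose(conv2d_valid(flat, k), 0.0))  # True
```

A classifier trained on these residuals, rather than raw pixels, is what lets the system pick up sub-pixel statistical anomalies instead of the familiar splicing or editing indicators.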

“With AI-generated video, there is no evidence of image manipulation frame-to-frame,” as would be found in video manipulated by Photoshop or other traditional image editing software, Drexel Associate Professor of Engineering Matthew Stamm tells Live Science, adding that “for a detection program to be effective it will need to be able to identify new traces left behind by the way generative AI programs construct their videos.”

Stamm and his team explain in a scientific paper why common deepfake detection approaches don’t work with video generators like Stable Diffusion, Sora and Pika, noting “forensic traces in synthetic video are substantially different than those in synthetic images.” The team writes that “synthetic video traces can be learned” and “video-level detection can be performed to boost performance over frame-level detection.”

“Stamm’s lab has been active in efforts to flag digitally manipulated images and videos for more than a decade,” Drexel News reported in April, adding that “the group has been particularly busy in the last year, as editing technology is being used to spread political misinformation.”

Wired reports on how some Fortune 500 companies have begun testing software from GetReal Labs “that can spot a deepfake of a real person in a live video call.”

Related:
Deepfakes Will Cost $40 Billion by 2027 as Adversarial AI Gains Momentum, VentureBeat, 7/1/24
GetReal Labs Emerges From Stealth to Tackle Deepfakes, SecurityWeek, 6/28/24
