Big Tech, Academics Launch Deepfake Detection Challenge
December 16, 2019
A coalition of Big Tech companies and academics has banded together to fight deepfakes. Facebook, Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics at Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the State University of New York at Albany have now launched the Deepfake Detection Challenge, first announced in September. The problem is serious; deepfakes have already been used to swindle companies and could sway public opinion during upcoming elections.
VentureBeat reports, “Amsterdam-based cybersecurity startup Deeptrace found 14,698 deepfake videos on the Internet during its most recent tally in June and July, up from 7,964 last December — an 84 percent increase within only seven months.” The Deepfake Detection Challenge is intended to spur researchers to develop open source detection tools. Facebook has invested $10+ million; AWS “is contributing up to $1 million in service credits and offering to host entrants’ models if they choose, and Google’s Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.”
“Deepfake techniques … have significant implications for determining the legitimacy of information presented online,” wrote Facebook chief technology officer Mike Schroepfer in a blog post. “Yet the industry doesn’t have a great data set or benchmark for detecting them.” The goal, he added, is “to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.”
Using an already-released dataset, UC Berkeley and USC researchers developed a way to identify deepfakes with greater than 90 percent accuracy. But Pinscreen chief executive Hao Li pointed out that “synthesis techniques are constantly evolving such that at some point, it might become nearly impossible to distinguish AI fakes from reality.”
The coalition hired a vendor to create videos “depicting realistic scenarios in a variety of settings, poses, and backgrounds, with accompanying labels describing whether they were manipulated with AI.” Then “tampered videos were created based on a subset of the original footage using a range of machine learning techniques,” including face swapping and voice alterations.
The resulting dataset contains 100,000+ videos and was tested through a “targeted technical working session … at the International Conference on Computer Vision,” said Facebook AI research manager Christian Ferrer. “Ensuring that cutting-edge research can be used to detect deepfakes depends on large-scale, close-to-reality, useful, and freely available data sets,” he said. “Since that resource didn’t exist, we’ve had to create it from scratch.”
Once registered, those competing in the Deepfake Detection Challenge can “download the corpus to train deepfake-detecting AI models.” Their final designs will be submitted “into a black box validation environment, which hosts a mechanism that scores the model’s effectiveness against test sets.” The Challenge is set to run through the end of March 2020.