Scientists and Military Look for Key to Identifying Deepfakes
October 19, 2018
The term “deepfakes” describes the use of artificial intelligence and computer-generated tricks to make a person (usually a well-known celebrity or politician) appear to do or say “fake” things. For example, actor Alden Ehrenreich’s face was recently replaced by Harrison Ford’s face in footage from “Solo: A Star Wars Story.” The technique can be meant simply for entertainment or for more sinister purposes. The more convincing deepfakes become, the more unease they create among AI scientists and the military and intelligence communities. As a result, new methods are being developed to help combat the technology.
The Verge reports that the Han Solo switcheroo was done using footage from YouTuber derpfakes, “who rose to prominence in the community around the machine learning-based tool by using it to substitute actor (and human meme) Nicolas Cage into famous film scenes.” But the result leaves something to be desired, with “facial expressions and lip movements [that] just don’t match up, and Ford’s face just … weirdly floating over Ehrenreich.”
Wired notes that computer scientist Siwei Lyu found that his team’s deepfake videos, created with a machine learning algorithm, also “felt eerie” and not quite right. Examining them more closely, he realized the digital human’s eyes were always open, because “the images that the program learned from didn’t include many with closed eyes,” which created a bias.
Lyu later wrote that such deepfake programs may well miss “physiological signals intrinsic to human beings … such as breathing at a normal rate, or having a pulse.” Within weeks of putting a draft of his results online, the team received “anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally.” Lyu stated that “blinking can be added to deepfake videos by including face images with closed eyes or using video sequences for training.”
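In practice, a blink check of the kind Lyu’s finding points to can be sketched in a few lines: measure how open the eyes are in each frame and flag footage where the subject almost never blinks. The sketch below assumes per-frame eye landmarks are already available from any off-the-shelf face-landmark detector; the threshold and blink-rate floor are illustrative values, not figures from Lyu’s research.

```python
# Minimal sketch: flag videos whose subject rarely or never blinks.
# Assumes a list of per-frame eye landmarks from any face-landmark detector;
# the 0.2 threshold and 10-blinks-per-minute floor are illustrative only.

import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, ordered clockwise."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(per_frame_ears, fps, closed_threshold=0.2):
    """Count closed-eye episodes (runs of frames below the threshold)."""
    blinks, closed = 0, False
    for ear in per_frame_ears:
        if ear < closed_threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= closed_threshold:
            closed = False
    minutes = len(per_frame_ears) / (fps * 60.0)
    return blinks / minutes if minutes else 0.0

def looks_suspicious(per_frame_ears, fps, minimum_rate=10.0):
    """People blink roughly 15-20 times a minute; far fewer is a red flag."""
    return blinks_per_minute(per_frame_ears, fps) < minimum_rate
```

As the anonymous emails Lyu received demonstrate, any single check like this can be defeated once forgers train on footage that includes closed eyes, which is why detection work keeps moving to combinations of signals.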
Although deepfakes can improve over time, Lyu said he wants to “make the process more difficult, more time-consuming,” noting that it’s easy now to download software and Google images of Hillary Clinton, for example, to create a fake. “The line between what is true and what is false is blurry,” he said.
Military and intelligence communities are particularly concerned, which is why MediFor (Media Forensics), a DARPA program that started in 2016, funds Lyu’s research. The project “aims to create an automated system that looks at three levels of tells, fuses them, and comes up with an ‘integrity score’ for an image or video.”
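DARPA has not published how that fusion works, but the idea of rolling many detectors at several levels into one score can be illustrated with a simple weighted combination. The three “levels,” the detector examples, and the weights below are hypothetical stand-ins, not MediFor’s actual method.

```python
# Illustrative sketch only: fusing per-detector scores at three levels of
# "tells" into a single integrity score. Detector examples and weights are
# hypothetical; DARPA has not published a formula.

from statistics import fmean

def integrity_score(digital, physical, semantic, weights=(0.4, 0.3, 0.3)):
    """Each argument is a list of detector outputs in [0, 1], where 1 means
    'no sign of manipulation'. Returns a single fused score in [0, 1]."""
    levels = [fmean(digital), fmean(physical), fmean(semantic)]
    return sum(w * s for w, s in zip(weights, levels))

# Example: strong digital evidence of tampering drags the score down.
score = integrity_score(
    digital=[0.2, 0.3],   # e.g. compression or sensor-noise inconsistencies
    physical=[0.8],       # e.g. lighting and shadows look plausible
    semantic=[0.9],       # e.g. weather matches the claimed date and place
)
print(f"integrity score: {score:.2f}")
```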
DARPA “hopes to have prototype systems it can test at scale.” “What you might see in a few years’ time is things like fabrication of events,” said DARPA program manager Matt Turek. “Not just a single image or video that’s manipulated but a set of images or videos that are trying to convey a consistent message.”
At the Los Alamos National Lab, cyber scientist Juston Moore is “worried that if evidentiary standards don’t (or can’t) evolve with the fabricated times, people could easily be framed.” “The algorithms can create images of faces that don’t belong to real people, and they can translate images in strange ways, such as turning a horse into a zebra,” said Moore. “It could be that you don’t trust any photographic evidence anymore, which is not a world I want to live in.”