The Reel Thing: Machine Learning Powers Restoration Engine
August 27, 2018
During last week’s The Reel Thing at the Academy’s Linwood Dunn Theater in Hollywood, Video Gorillas managing director and chief executive Jason Brahms, formerly a Sony Cloud Media Services executive, and chief technology officer Alex Zhukov described the Bigfoot “Frame Compare” solution, which leverages machine learning to speed up preservation, asset management, and mastering workflows. The engine, whose development dates back to 2007, relies on a proprietary, patented technology, the frequency domain descriptor (FDD).
“We knew early on that pixel-matching methods had limitations and would not support many media and entertainment use cases,” Brahms said. “So we looked at existing interest point matching methods.” Such methods (SURF/SIFT) can match zoomed or cropped frames in reference masters to film scans, and can match VFX-heavy shots in a reference master against film scans, clean plates, and green-screen elements. But with more research, the team found that slow processing speeds and large index sizes made SURF/SIFT solutions unable to scale, and that finding led to the development of FDD, which is now a patented computer vision/visual analysis tool.
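For context on how such interest point methods behave in practice, here is a minimal sketch of SIFT-based frame matching with OpenCV. It is purely illustrative of the style of method the team says it evaluated, not Video Gorillas’ pipeline; the file names, ratio threshold, and matcher settings are assumptions.

```python
# Illustrative sketch (not Video Gorillas' code): matching a reference-master
# frame against a film-scan frame with SIFT interest points.
import cv2

master = cv2.imread("master_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical paths
scan = cv2.imread("scan_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_m, desc_m = sift.detectAndCompute(master, None)  # ~128 floats per keypoint
kp_s, desc_s = sift.detectAndCompute(scan, None)

# Brute-force matching with Lowe's ratio test is tolerant of zoom/crop
# differences, but the float descriptors make indexes large and matching
# slow at library scale -- the limitation the article describes.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(desc_m, desc_s, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} interest-point matches between master frame and scan frame")
```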
Brahms described the proprietary technology as first breaking an interest point and its surrounding area into subareas, then creating an FDD for each subarea by applying a discrete Fourier transform (DFT). Then, “frequency domain features are coded bitwise by comparing them to predefined thresholds.”
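As a rough illustration of that description (and emphatically not the patented implementation), the sketch below splits a patch into subareas, runs a DFT on each, and codes a few frequency-domain magnitudes bitwise against fixed thresholds. The grid size, chosen coefficients, and threshold values are all assumptions.

```python
# A minimal FDD-like descriptor sketch under assumed parameters.
import numpy as np

def fdd_like_descriptor(patch: np.ndarray, grid: int = 4,
                        thresholds=(1.0, 0.5, 0.25)) -> np.ndarray:
    """Return a packed binary descriptor for a square grayscale patch."""
    bits = []
    h, w = patch.shape
    sh, sw = h // grid, w // grid
    for gy in range(grid):
        for gx in range(grid):
            sub = patch[gy * sh:(gy + 1) * sh, gx * sw:(gx + 1) * sw].astype(np.float64)
            spectrum = np.abs(np.fft.fft2(sub))
            spectrum /= spectrum[0, 0] + 1e-9        # normalize by the DC term
            # Take a few low-frequency magnitudes as the subarea's features...
            feats = [spectrum[0, 1], spectrum[1, 0], spectrum[1, 1]]
            # ...and code each bitwise against a predefined threshold.
            bits.extend(int(f > t) for f, t in zip(feats, thresholds))
    return np.packbits(np.array(bits, dtype=np.uint8))

patch = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.uint8)
print(fdd_like_descriptor(patch))  # compact bit string, cheap to store and compare
```

The appeal of such bitwise coding is that descriptors stay small and can be compared with fast bit operations, which is consistent with the scaling concerns Brahms raised about SURF/SIFT.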
After testing FDD development on an episode of “Designing Women” in 2011, the team filed a patent application in 2013, and released the first Bigfoot version in 2014. The patent was granted in 2016, and Bigfoot version 1 rolled out commercially in 2017. Zhukov reported that FotoKem, CBS Digital and Visual Data Media Services are among the facilities with Bigfoot, and that the company also worked on Netflix’s “The Other Side of the Wind.”
In addition to FDD and machine learning technology that finds and processes “interest points,” Bigfoot features purpose-built, use-case-centric UIs. “Front-end validation UIs for Conform and Compare use cases, built to work in a web browser, are used by operators to validate and approve Bigfoot results,” explained Brahms. Bigfoot can be deployed in a cloud environment or on premises. Zhukov added that, “once the Bigfoot index exists, it can continue to support a myriad of ongoing use cases related to media asset management workflows.”
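To make that reuse concrete, here is a hedged sketch of a per-frame descriptor index queried by Hamming distance. The FrameIndex class, its in-memory dictionary, and the distance cutoff are hypothetical stand-ins for whatever index structure Bigfoot actually maintains.

```python
# Hypothetical frame index: once per-frame binary descriptors are stored,
# later jobs can look frames up without re-scanning the source material.
import numpy as np

class FrameIndex:
    def __init__(self):
        self._entries = {}  # (title, frame_number) -> packed bit descriptor

    def add(self, title: str, frame_number: int, descriptor: np.ndarray) -> None:
        self._entries[(title, frame_number)] = descriptor

    def query(self, descriptor: np.ndarray, max_distance: int = 8):
        """Return indexed frames within a Hamming distance of the query."""
        hits = []
        for key, stored in self._entries.items():
            distance = int(np.unpackbits(stored ^ descriptor).sum())
            if distance <= max_distance:
                hits.append((key, distance))
        return sorted(hits, key=lambda item: item[1])

# Example use with a dummy 48-bit descriptor (illustrative only).
index = FrameIndex()
index.add("example_title", 1201, np.packbits(np.zeros(48, dtype=np.uint8)))
print(index.query(np.packbits(np.zeros(48, dtype=np.uint8))))
```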
Digging down into the Bigfoot Compare process, Zhukov explained that it offers a “differential analysis of picture cuts/versions, including frames unique between the two cuts; frames that are common in the same sequence; and frames that are common but have been shifted/moved.” For restoration and remastering use cases, the tool can also reverse-engineer EDLs by comparing frames from a master to film scans and reconstructing the timeline from the result.
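A simplified sketch of a differential comparison along those lines follows: it aligns two cuts’ per-frame hashes with Python’s difflib and labels runs as common, shifted, or unique. The hash values and the SequenceMatcher alignment are assumptions used only to illustrate the idea, not Bigfoot’s method.

```python
# Illustrative differential cut comparison over per-frame hashes.
from difflib import SequenceMatcher

cut_a = ["h1", "h2", "h3", "h4", "h5"]  # per-frame descriptors/hashes of cut A
cut_b = ["h0", "h2", "h3", "h4", "h9"]  # the same for cut B

events = []
for tag, a0, a1, b0, b1 in SequenceMatcher(None, cut_a, cut_b).get_opcodes():
    if tag == "equal":
        shifted = (a0 != b0)  # common run that moved within the timeline
        events.append(("common, shifted" if shifted else "common", (a0, a1), (b0, b1)))
    else:
        events.append(("unique", (a0, a1), (b0, b1)))

# Each event is effectively an edit decision: common runs map back to source
# timecode ranges, which is the sense in which an EDL can be reconstructed
# from frame matches between a master and film scans.
for event in events:
    print(event)
```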
“I think there’s a lot of opportunity there,” said Brahms. “There are some additional conform use cases such as matching VFX plates, reference picture conform, trailer reconstruction from scans, newsreel reconstruction, and matching stock footage.” Zhukov also described early results of current research on a neural network-based tool to extrapolate missing frames of stock footage.
“Bigfoot allows you to push more material through your workflow,” concluded Brahms. “With the conform step no longer a bottleneck, you gain lots of efficiencies. It decreases time to market and justifies ROI, while being non-intrusive to the existing operation and requiring only slight modifications to the workflow.”