Meta Rolls Out Watermarking, Behavioral and Concept Models

Meta’s FAIR (Fundamental AI Research) team has unveiled recent work spanning transparency and safety, agents, and machine learning architectures. The projects include Meta Motivo, a foundation model for controlling the behavior of virtual embodied agents, and Video Seal, an open-source model for video watermarking. All were developed in the unit’s pursuit of advanced machine intelligence, helping “models to learn new information more effectively and scale beyond current limits.” Meta announced it is sharing the new FAIR research, code, models and datasets so the research community can build upon its work.

“We aim to democratize access to state-of-the-art technologies that transform our interaction with the physical world, which is why we’re committed to fostering a collaborative and open ecosystem that accelerates progress and discovery,” Meta explains in a blog post.

Video Seal is a comprehensive framework for neural video watermarking, Meta says, explaining it “adds a watermark (with an optional hidden message) into videos that is imperceptible to the naked eye and can later be uncovered to determine a video’s origin.”
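The workflow Meta describes has two halves: an embedder hides a short message in the frames with changes too small to notice, and an extractor later reads the bits back to establish provenance. The toy sketch below illustrates that embed/extract loop with a simple quantization trick on key-selected pixels; it is not the Video Seal API or its neural embedder and extractor, and the key, step size and message length are illustrative assumptions.

```python
# Toy embed/extract loop in the spirit of Video Seal's description: hide a short
# message in a frame with small pixel changes, then recover the bits later.
# NOT the Video Seal model or API; a quantization-based stand-in for illustration.
import numpy as np

STEP = 8  # quantization step; pixel changes stay below STEP (imperceptibility proxy)

def select_positions(key, num_bits, frame_shape, pixels_per_bit=32):
    # The "key" deterministically picks which pixels carry each message bit.
    rng = np.random.default_rng(key)
    flat = rng.permutation(frame_shape[0] * frame_shape[1])
    chosen = flat[: num_bits * pixels_per_bit].reshape(num_bits, pixels_per_bit)
    return np.unravel_index(chosen, frame_shape)

def embed(frame, bits, key):
    rows, cols = select_positions(key, len(bits), frame.shape)
    marked = frame.astype(float).copy()
    for i, bit in enumerate(bits):
        # Quantize carrier pixels so their offset within STEP encodes the bit.
        base = np.floor(marked[rows[i], cols[i]] / STEP) * STEP
        marked[rows[i], cols[i]] = base + (STEP * 0.75 if bit else STEP * 0.25)
    return np.clip(marked, 0, 255)

def extract(frame, num_bits, key):
    rows, cols = select_positions(key, num_bits, frame.shape)
    bits = []
    for i in range(num_bits):
        # Majority vote over each bit's carrier pixels.
        offsets = np.mod(frame[rows[i], cols[i]], STEP)
        bits.append(int(np.mean(offsets > STEP * 0.5) > 0.5))
    return bits

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)  # one grayscale frame
message = [1, 0, 1, 1, 0, 0, 1, 0]                         # hidden 8-bit message

marked = embed(frame, message, key=42)
print("max pixel change:", np.abs(marked - frame).max())   # stays below STEP
print("recovered bits:  ", extract(marked, num_bits=8, key=42))
```

In the real system the hidden message survives compression and editing because neural networks learn robust embeddings; the parity trick above only survives a lossless channel, which is why it is framed purely as an illustration of the workflow.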

TechCrunch points out that “the commoditization of generative AI has led to an absolute explosion of fake content online,” citing statistics from ID verification platform Sumsub that indicate “a 4x increase in deepfakes worldwide from 2023 to 2024,” with deepfakes accounting for 7 percent of all fraud this year. Offenses range “from impersonations and account takeovers to sophisticated social engineering campaigns.”

Although Meta officially disbanded its Responsible AI team in late 2023, the company “has been keeping a close eye when it comes to AI-generated content popping up on its platforms,” reports Silicon Republic. “Earlier this year, the business revealed plans to set up a dedicated team to combat disinformation and AI misuse ahead of the EU elections” in June.

“Video Seal joins Meta’s other watermarking tools, Audio Seal and Watermark Anything (which was also recently re-released under a permissive license),” Silicon Republic notes.

Meta describes Motivo as “a first-of-its-kind behavioral foundation model to control a virtual physics-based humanoid agent” that moves more naturally and realistically in “a wide range of whole-body tasks.”

Meta suggests the research “could pave the way for fully embodied agents in the metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences.”

Meta also introduced a new model “called the Large Concept Model (LCM), which aims to ‘decouple reasoning from language representation,’” per Reuters, which calls it “a significant departure from a typical LLM,” in that “rather than predicting the next token, the LCM is trained to predict the next concept or high-level idea.”
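The distinction Reuters draws is in the training objective: an LLM scores a probability distribution over the next discrete token, while the LCM regresses toward the embedding of the next sentence-level “concept.” The sketch below (not Meta’s LCM code) contrasts the two objectives; the GRU backbone, tensor sizes, random stand-in embeddings, and MSE loss are illustrative assumptions, not the LCM architecture.

```python
# Toy contrast between next-token and next-concept objectives (illustrative only).
import torch
import torch.nn as nn

vocab_size, token_dim, concept_dim, seq_len, batch = 1000, 64, 256, 12, 4

# --- Next-token objective (typical LLM) --------------------------------------
token_ids = torch.randint(0, vocab_size, (batch, seq_len))
tok_embed = nn.Embedding(vocab_size, token_dim)
tok_model = nn.GRU(token_dim, token_dim, batch_first=True)
to_logits = nn.Linear(token_dim, vocab_size)

hidden, _ = tok_model(tok_embed(token_ids[:, :-1]))          # predict token t+1 from tokens <= t
token_loss = nn.functional.cross_entropy(
    to_logits(hidden).reshape(-1, vocab_size),               # distribution over discrete tokens
    token_ids[:, 1:].reshape(-1),
)

# --- Next-concept objective (LCM-style) ---------------------------------------
# Each "concept" is a fixed-size embedding of a whole sentence (the "high-level
# idea" in the article); random vectors stand in for real sentence encodings.
concepts = torch.randn(batch, seq_len, concept_dim)
con_model = nn.GRU(concept_dim, concept_dim, batch_first=True)

pred, _ = con_model(concepts[:, :-1])                          # predict embedding of sentence t+1
concept_loss = nn.functional.mse_loss(pred, concepts[:, 1:])   # regress in embedding space

print(f"next-token loss:   {token_loss.item():.3f}")
print(f"next-concept loss: {concept_loss.item():.3f}")
```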
