Snapchat Previews Instant AR Filters, GenAI Developer Tools

Snap Inc. teased a new on-device AI model capable of creating real-time AR filters within the Snapchat app. At last week’s Augmented World Expo in Long Beach, California, Snap co-founder and CTO Bobby Murphy explained that the model, which runs directly on smartphones, can re-render frames on the fly guided by text prompts. The unnamed prototype “can instantly bring your imagination to life in AR,” Snap says, explaining that “this early prototype makes it possible to type in an idea for a transformation and generate vivid AR experiences in real time.”

According to TechCrunch, Murphy said that today’s generative AI image diffusion models, while exciting for image creation, are still too slow for augmented reality applications, which must run in real time. That, Murphy said, is why Snap’s internal AI development teams have been working on their own acceleration solutions.

“This and future real time on device generative ML models speak to an exciting new direction for augmented reality, and is giving us space to reconsider how we imagine rendering and creating AR experiences altogether,” Murphy said.

CNET reports Snap has made AI the centerpiece of a “new business strategy” that will leverage generative capability in two ways: “it will invoke real-time AR experiences in the phone app, and it’ll be used in developer tools to help make new AR experiences faster.”

TechCrunch reports “Snapchat users will start to see Lenses with this generative model in the coming months, and Snap plans to bring it to creators by the end of the year.”

The Verge offers an example, describing “a person’s clothing and background transforming in real time based on the prompt ‘50s sci-fi film.’”

Snap says its Lens Studio 5.0 development tool is out of beta and is now Snap AR’s official pro platform. Snap introduced the GenAI Suite of tools for Lens Studio, “enabling AR creators to generate custom ML models and assets to power their Lenses,” according to a company announcement.

The new tools could shave weeks or even months “of time creating new models from scratch, making it possible to build high-quality Lenses faster.”

“Some of the tools now available with the latest Lens Studio update include new face effects that let creators write a prompt or upload an image to create a custom lens that completely transforms a user’s face,” The Verge writes, explaining it “also includes a feature, called Immersive ML, that applies a ‘realistic transformation over the user’s face, body, and surroundings in real time.’”
