Twelve Labs Creating AI That Can Search and Analyze Video

Twelve Labs has raised $30 million in funding for its efforts to train video-analyzing models. The San Francisco-based company has received strategic investments from Databricks and SK Telecom, as well as Snowflake Ventures and HubSpot Ventures. Twelve Labs targets customers working with video across a variety of fields, including media and entertainment, professional sports leagues, content creation and general business use. The funding coincides with the release of Twelve Labs’ new video foundation model, Marengo 2.7, which applies a multi-vector approach to video understanding.

Using Twelve Labs’ models, “users can search through videos for specific moments, summarize clips, or ask questions like ‘When did the person in the red shirt enter the restaurant?’” reports TechCrunch, describing the company’s AI as having “a powerful set of capabilities” that previously attracted investors including Nvidia, Samsung and Intel.

Twelve Labs’ technology lets customers “input natural language prompts that can ‘pinpoint and extract’ exact moments in vast libraries of videos,” allowing them to “quickly locate scenes that would otherwise be lost amid hours of footage,” SiliconANGLE writes.

“Video is the fastest-growing — and most data-intensive — medium, yet most organizations aren’t going to devote human resources to cull through all their video archives,” Twelve Labs co-founder Jae Lee told TechCrunch, adding that “even if you tried manually tagging, it wouldn’t solve the issue. Finding a specific moment or angle in videos can be like looking for a needle in a haystack.”

Twelve Labs has built its AI model specifically to map “text-to-video content, identifying actions, objects, and sounds,” ReadWrite explains, emphasizing that this focus sets it apart from “general-purpose models from companies like Google and OpenAI.”

The company offers “custom models tailored to specific needs,” with APIs that can be used to build apps to “search across video footage, images, and audio” for use cases ranging from advertising placement and content moderation to creating highlight reels, according to ReadWrite.
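To make the search workflow concrete, here is a minimal sketch of what a natural-language video-search request to such an API might look like. This is purely illustrative: the field names (`index_id`, `query`, `search_options`) and request shape are assumptions for this example, not Twelve Labs’ actual API schema. The code only assembles the request body rather than sending it.

```python
# Hypothetical sketch of a natural-language video-search request.
# Field names and structure are assumptions for illustration only,
# not Twelve Labs' actual API schema.
import json


def build_search_request(index_id: str, query: str, options=None):
    """Assemble a JSON body for a semantic video-search call.

    index_id -- which indexed video library to search (hypothetical)
    query    -- a natural-language prompt describing the moment to find
    options  -- which modalities to search over (hypothetical values)
    """
    return {
        "index_id": index_id,
        "query": query,
        "search_options": options or ["visual", "audio"],
    }


body = build_search_request(
    "my-footage-index",
    "When did the person in the red shirt enter the restaurant?",
)
print(json.dumps(body, indent=2))
```

The idea the article describes is that a prompt like the one above, rather than manual tags or timestamps, is what locates the matching moments across hours of footage.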

Twelve Labs describes its two multimodal foundation models, Marengo and Pegasus, as bringing “human-like understanding to videos, enabling precise semantic search, summarization, analysis, Q&A, and more.”

As part of their investments, Databricks and Snowflake “will deliver Twelve Labs’ capabilities to users through interoperability with their vector databases,” while Snowflake is developing an advanced integration with Twelve Labs that will leverage Snowflake Cortex AI, Twelve Labs says in an announcement.
