MIT Spinoff Liquid Eschews GPTs for Its Fluid Approach to AI

AI startup Liquid, founded by alums of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), has released its first models. Called Liquid Foundation Models, or LFMs, the multimodal family approaches “intelligence” differently from the pre-trained transformer models that dominate the field. Instead, the LFMs are built from “first principles,” which Liquid describes as “the same way engineers build engines, cars, and airplanes,” explaining that the models are large neural networks with computational units “steeped in theories of dynamic systems, signal processing and numeric linear algebra.”
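
The founders’ earlier academic work centers on “liquid time-constant” networks, recurrent models whose hidden units evolve as small differential equations with input-dependent time constants. As a rough illustration of that dynamical-systems flavor, here is a minimal sketch of one such cell. It is not Liquid’s actual LFM architecture, which the announcement does not detail; the equation, shapes, and parameters below are simplifying assumptions.

```python
import numpy as np

# Minimal sketch of a "liquid time-constant" (LTC) cell, the concept behind
# liquid neural networks. Illustrative only: not Liquid AI's LFM internals.

def ltc_step(x, u, W_in, W_rec, b, tau, dt=0.1):
    """Advance the hidden state x by one Euler step given input u."""
    # Input- and state-dependent nonlinearity; it modulates each unit's
    # effective time constant, which is the "liquid" part of the model.
    f = np.tanh(W_rec @ x + W_in @ u + b)
    # LTC-style dynamics: dx/dt = -(1/tau + f) * x + f * A, with A = 1 here.
    dxdt = -(1.0 / tau + f) * x + f
    return x + dt * dxdt

rng = np.random.default_rng(0)
n_hidden, n_in = 8, 3
x = np.zeros(n_hidden)
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)
tau = np.ones(n_hidden)

for t in range(100):  # drive the cell with a slow sine input
    u = np.sin(0.1 * t) * np.ones(n_in)
    x = ltc_step(x, u, W_in, W_rec, b, tau)
print(x.round(3))
```

Because each unit’s decay rate depends on the current input, the cell can adapt how quickly it responds as the signal changes, which is the behavior the “liquid” label refers to.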

“This unique blend allows us to leverage decades of theoretical advances in these fields in our quest to enable intelligence at every scale,” Liquid explains in its announcement.

“LFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals,” the team says, noting that the fluidity of its name “pays homage to our roots in dynamic and adaptive learning systems.”

Exploring model-building beyond Generative Pre-trained Transformers (GPTs) has paid off, according to VentureBeat, which writes that “the new LFM models already boast superior performance to other transformer-based ones of comparable size such as Meta’s Llama 3.1-8B and Microsoft’s Phi-3.5 3.8B.”

The Liquid LFMs debut in three sizes: the lightweight LFM 1.3B, “for resource-constrained environments”; the medium-sized LFM 3B, “optimized for edge deployment”; and the high-end LFM 40B MoE, “a ‘Mixture-of-Experts’ model similar to Mistral’s Mixtral,” reports VentureBeat.
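
A Mixture-of-Experts layer keeps many “expert” sub-networks but routes each token through only a few of them, so total parameter count can grow without a proportional increase in per-token compute. Below is a hedged toy sketch of that general routing idea; the expert count, top-k value, and shapes are illustrative assumptions, not details of LFM 40B MoE or Mixtral.

```python
import numpy as np

# Toy sketch of Mixture-of-Experts (MoE) routing. The expert count, top_k,
# and dimensions are made up for illustration.

def moe_forward(x, experts, router_w, top_k=2):
    """Route token vector x to its top_k experts and mix their outputs."""
    logits = router_w @ x                 # router score for each expert
    top = np.argsort(logits)[-top_k:]     # indices of the top_k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts
    # Only the selected experts run for this token.
    return sum(w * experts[i](x) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: np.tanh(M @ x) for M in expert_mats]
router_w = rng.normal(size=(n_experts, d))

x = rng.normal(size=d)
y = moe_forward(x, experts, router_w)
print(y.shape)  # (16,)
```

Because only a few experts execute per token, an MoE model can carry 40B total parameters while keeping per-token inference cost closer to that of a much smaller dense model.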

Liquid shares performance charts showing how the LFMs compare favorably to other systems. Its three models “are designed to offer state-of-the-art performance while optimizing for memory efficiency, with Liquid’s LFM-3B requiring only 16 GB of memory compared to the more than 48 GB required by Meta’s Llama-3.2-3B model,” VentureBeat writes.

The Boston-based startup’s founding team of MIT CSAIL researchers includes Ramin Hasani, Mathias Lechner, Alexander Amini and Daniela Rus, “said to be pioneers in the concept of ‘liquid neural networks,’” which are quite different from the GPT-based models “we know and love today, such as OpenAI’s GPT series and Google LLC’s Gemini,” according to SiliconANGLE.
