By Paula Parisi, September 27, 2024
Meta’s Llama 3.2 release includes two new multimodal LLMs, one with 11 billion parameters and one with 90 billion — considered small- and medium-sized — and two lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices. Included are pre-trained and instruction-tuned versions. In addition to text, the multimodal models can interpret images, supporting apps that require visual understanding. Meta says the models are free and open source. Alongside them, the company is releasing “the first official Llama Stack distributions,” enabling “turnkey deployment” with integrated safety. Continue reading Meta Unveils New Open-Source Multimodal Model Llama 3.2
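For readers who want to experiment with the multimodal checkpoints, a minimal sketch follows. It assumes the gated Hugging Face release of the 11B vision-instruct model ("meta-llama/Llama-3.2-11B-Vision-Instruct") and a recent transformers version that includes the Mllama classes; neither detail comes from the article itself.

```python
# Minimal sketch: asking the Llama 3.2 11B vision-instruct model about an image.
# Assumes access to the gated Hugging Face repo and a transformers release
# that ships MllamaForConditionalGeneration.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed model ID
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chart.png")  # any local image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe what this chart shows."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```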
By ETCentric Staff, March 19, 2024
Apple researchers have gone public with new multimodal methods for training large language models using both text and images. The results are said to enable AI systems that are more powerful and flexible, which could have significant ramifications for future Apple products. The new models, which Apple calls MM1, scale up to 30 billion parameters. The researchers identify multimodal large language models (MLLMs) as “the next frontier in foundation models,” which exceed the performance of LLMs and “excel at tasks like image captioning, visual question answering and natural language inference.” Continue reading Apple Unveils Progress in Multimodal Large Language Models
By ETCentric Staff, February 27, 2024
Qualcomm raised the curtain on a variety of artificial intelligence, 5G, and Wi-Fi technologies at Mobile World Congress Barcelona, which runs through Thursday. The San Diego-based chip designer unveiled an AI Hub it says will help developers create voice-, text- and image-based applications using pre-optimized AI models. Qualcomm’s flagship AI chips, the mobile Snapdragon 8 Gen 3 processor and the PC-centric Snapdragon X Elite, were announced last year. With the first splash of products now heading to market, the company is promising to push the boundaries of 5G and 6G. Continue reading MWC: Qualcomm Unveils AI Hub and Promotes 5G, 6G Tech
By Paula Parisi, October 11, 2023
Startup Reka AI is releasing its first artificial intelligence assistant, Yasa-1, in preview. The multimodal AI is described as “a language assistant with visual and auditory sensors.” The year-old company says it “trained Yasa-1 from scratch,” including pretraining foundation models “from ground zero,” then aligning them and optimizing them for its training and server infrastructures. “Yasa-1 is not just a text assistant, it also understands images, short videos and audio (yes, sounds too),” said Reka AI co-founder and Chief Scientist Yi Tay. Yasa-1 is available via Reka’s APIs and as Docker containers for on-site or virtual private cloud deployment. Continue reading Yasa-1: Startup Reka Launches New AI Multimodal Assistant
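The article does not document the API itself, so what follows is only a hypothetical sketch of how a request to a hosted multimodal assistant might look; the endpoint URL, payload fields, and "yasa-1" model name are placeholders rather than Reka's published schema.

```python
# Hypothetical sketch of calling a hosted multimodal assistant over REST.
# The endpoint, payload fields, and auth header are placeholders; the article
# does not describe Reka's actual API.
import os
import requests

API_URL = "https://api.reka.ai/chat"  # placeholder endpoint
headers = {"Authorization": f"Bearer {os.environ['REKA_API_KEY']}"}

payload = {
    "model": "yasa-1",  # assumed model name
    "messages": [{
        "role": "user",
        "content": "What instrument is playing in this clip?",
        "media_url": "https://example.com/clip.mp3",  # audio input, per Yasa-1's pitch
    }],
}

resp = requests.post(API_URL, json=payload, headers=headers, timeout=60)
resp.raise_for_status()
print(resp.json())
```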
By Paula Parisi, August 24, 2023
Meta Platforms is releasing SeamlessM4T, the world’s “first all-in-one multilingual multimodal AI translation and transcription model,” according to the company. SeamlessM4T can perform speech-to-text, speech-to-speech, text-to-speech, and text-to-text translations for up to 100 languages, depending on the task. “Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta claims, adding that SeamlessM4T “implicitly recognizes the source languages without the need for a separate language identification model.” Continue reading Meta’s Multimodal AI Model Translates Nearly 100 Languages
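As a rough illustration of the text-to-text path, here is a sketch assuming the Hugging Face port of the model (the "facebook/hf-seamless-m4t-medium" checkpoint and the SeamlessM4TModel class in transformers); the checkpoint name and class are assumptions beyond what the article states.

```python
# Sketch of text-to-text translation with SeamlessM4T, assuming the
# Hugging Face port of the model and a transformers version that ships
# the SeamlessM4T classes.
from transformers import AutoProcessor, SeamlessM4TModel

checkpoint = "facebook/hf-seamless-m4t-medium"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(checkpoint)
model = SeamlessM4TModel.from_pretrained(checkpoint)

# English in, French out; generate_speech=False keeps the output as text tokens.
inputs = processor(text="Where is the nearest train station?",
                   src_lang="eng", return_tensors="pt")
tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(tokens[0].tolist()[0], skip_special_tokens=True))
```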