By ETCentric Staff, March 19, 2024
Apple researchers have gone public with new multimodal methods for training large language models on both text and images. The results are said to enable AI systems that are more powerful and flexible, which could have significant ramifications for future Apple products. The new family of models, which Apple calls MM1, scales up to 30 billion parameters. The researchers identify multimodal large language models (MLLMs) as “the next frontier in foundation models,” exceeding the performance of text-only LLMs and “excel[ling] at tasks like image captioning, visual question answering and natural language inference.” Continue reading Apple Unveils Progress in Multimodal Large Language Models
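Apple has not released MM1 weights or code, so any hands-on illustration has to use a stand-in. The sketch below is an assumption-laden example built on the openly available Salesforce BLIP captioning checkpoint from Hugging Face, not Apple’s model; it shows the image-plus-text pattern the MM1 paper targets: captioning an image, optionally conditioned on a text prompt as a step toward visual question answering.

```python
# Minimal sketch of image captioning with an open vision-language model.
# NOTE: MM1 is not publicly available; "Salesforce/blip-image-captioning-base"
# is a stand-in used purely to illustrate the task, not Apple's method.
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample COCO image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Unconditional captioning: the model describes the image on its own.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))

# Prompt-conditioned generation, closer to visual question answering:
inputs = processor(images=image, text="a photo of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```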
By Paula Parisi, December 21, 2023
As the pressure ratchets up for AI companies to go beyond the wow factor and make money, Stability AI has formalized three subscription tiers as it seeks to expand commercial use of its open-source, multimodal core models. The Stability AI Membership offerings include a free tier for personal and research (i.e., non-commercial) use, a professional tier that costs $20 a month, and a custom-priced enterprise tier for large outfits. The company says that with the three tiers it is “striking a balance between fostering competitiveness and maintaining openness in AI technologies.” Continue reading Stability AI Is Offering Paid Membership for Commercial Users
By Paula Parisi, December 15, 2023
Microsoft is releasing Phi-2, a text-to-text small language model (SLM) that outperforms some LLMs yet is light enough to run on a mobile device or laptop, according to Microsoft CEO Satya Nadella. Microsoft says the 2.7 billion-parameter SLM beat Meta Platforms’ Llama 2 and France-based Mistral AI’s Mistral 7B (each with 7 billion parameters), emphasizing that its complex reasoning and language comprehension are exceptional for a model with fewer than 13 billion parameters. For now, Microsoft is making it available “for research purposes only” under a custom license. Continue reading Microsoft Says Phi-2 Can Outperform Large Language Models
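Since the article stresses that Phi-2 is small enough to run locally, here is a minimal sketch of loading it for text generation via Hugging Face Transformers. The “microsoft/phi-2” checkpoint name and the “Instruct:/Output:” prompt format follow the model’s public release; treat the exact flags as assumptions (trust_remote_code, for instance, was required when the checkpoint first shipped and may be unnecessary in newer library versions).

```python
# Minimal sketch: running the ~2.7B-parameter Phi-2 locally for text generation.
# Assumptions: the "microsoft/phi-2" Hugging Face checkpoint and its
# "Instruct:/Output:" prompt format; trust_remote_code=True was needed at
# release and may no longer be required in newer transformers versions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float32,  # fits in laptop RAM; use float16 on a GPU
    trust_remote_code=True,
)

prompt = "Instruct: Explain in one sentence why the sky is blue.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```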