Microsoft Testing Bing Generative Search for User Feedback

Microsoft has begun rolling out Bing generative search, making it available for “a small percentage of user queries.” The company says it will solicit user feedback and undertake further testing prior to a broader rollout. Google began dabbling in what it called the Search Generative Experience last summer, then upped the ante by adding a search-optimized version of its Gemini model this spring. Google’s rollout was not without controversy, something Microsoft will surely try to avoid. Microsoft says its new AI-driven search functionality “combines the foundation of Bing’s search results with the power of large and small language models (LLMs and SLMs).”

Apple Unveils OpenELM Tech Optimized for Local Applications

The trend toward small language models that can run efficiently on a single device instead of requiring cloud connectivity has emerged as a focus for Big Tech companies involved in artificial intelligence. Apple has released the OpenELM family of open-source models as its entry in that field. OpenELM uses “a layer-wise scaling strategy” to efficiently allocate parameters within each layer of the transformer model, resulting in what Apple claims is “enhanced accuracy.” The “ELM” stands for “Efficient Language Models,” and one media outlet describes it as “the future of AI on the iPhone.”
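
To make the “layer-wise scaling” idea more concrete, here is a minimal sketch of how a parameter budget can vary with depth rather than repeating one uniform transformer block. The function name, ratios, and linear interpolation below are illustrative assumptions for this article, not Apple’s published OpenELM configuration.

```python
# Sketch of layer-wise scaling: earlier layers get fewer attention heads and a
# narrower feed-forward network, later layers get more, so parameters are
# allocated unevenly across depth. Values here are illustrative assumptions.

def layer_wise_config(num_layers: int,
                      model_dim: int = 1280,
                      head_dim: int = 64,
                      min_heads_ratio: float = 0.5,
                      max_heads_ratio: float = 1.0,
                      min_ffn_mult: float = 2.0,
                      max_ffn_mult: float = 4.0):
    """Return a per-layer (num_heads, ffn_dim) plan that grows with depth."""
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        heads_ratio = min_heads_ratio + t * (max_heads_ratio - min_heads_ratio)
        ffn_mult = min_ffn_mult + t * (max_ffn_mult - min_ffn_mult)
        num_heads = max(1, round(heads_ratio * model_dim / head_dim))
        ffn_dim = int(round(ffn_mult * model_dim))
        configs.append((num_heads, ffn_dim))
    return configs

if __name__ == "__main__":
    for layer, (heads, ffn) in enumerate(layer_wise_config(num_layers=8)):
        print(f"layer {layer}: heads={heads}, ffn_dim={ffn}")
```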

Microsoft Small Language Models Are Ideal for Smartphones

Microsoft, which has been developing small language models (SLMs) for some time, has announced its most capable SLM family yet, Phi-3. SLMs can accomplish some of the same functions as LLMs, but are smaller and trained on less data. That smaller footprint makes them well suited to running locally, which means they’re ideal for smartphones, where in theory they would not even need an Internet connection. Microsoft claims the Phi-3 open models can outperform “models of the same size and next size up across a variety of benchmarks that evaluate language, coding and math capabilities.”
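
For readers who want to try an SLM locally, the sketch below loads Microsoft’s Phi-3 Mini checkpoint from the Hugging Face Hub and generates text on the local machine, with no cloud inference API involved once the weights are downloaded. Treat the exact model id, prompt, and generation settings as assumptions to verify against the model card rather than a definitive recipe.

```python
# Minimal local-inference sketch using the Hugging Face transformers pipeline.
# Depending on your transformers version, loading this checkpoint may require
# trust_remote_code=True; check the model card on the Hub.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/Phi-3-mini-4k-instruct")

prompt = "Explain in two sentences why small language models suit smartphones."
result = generator(prompt, max_new_tokens=96, do_sample=False)
print(result[0]["generated_text"])
```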

Microsoft Says Phi-2 Can Outperform Large Language Models

Microsoft is releasing Phi-2, a text-to-text small language model (SLM) that outperforms some LLMs yet is light enough to run on a mobile device or laptop, according to Microsoft CEO Satya Nadella. The 2.7 billion-parameter SLM beat Meta Platforms’ Llama 2 and France-based Mistral AI’s Mistral 7B (each with 7 billion parameters), says Microsoft, which emphasizes that its complex reasoning and language comprehension are exceptional for a model with fewer than 13 billion parameters. For now, Microsoft is making it available “for research purposes only” under a custom license.
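
A quick back-of-the-envelope calculation shows why a 2.7 billion-parameter model is plausible on a laptop or phone: weight memory is roughly parameter count times bytes per parameter. The precisions listed below are common storage formats in general, not Phi-2’s official deployment formats.

```python
# Rough weight-memory estimate for a 2.7B-parameter model at common precisions.
PARAMS = 2.7e9

bytes_per_param = {"fp32": 4, "fp16/bf16": 2, "int8": 1, "int4": 0.5}

for precision, nbytes in bytes_per_param.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision:>9}: ~{gib:.1f} GiB of weights")

# fp16 lands around 5 GiB and 4-bit quantization near 1.3 GiB, which is why
# such a model is practical outside the data center.
```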