By Paula Parisi, June 18, 2024
Nvidia is expanding its substantive influence in the AI sphere with Nemotron-4 340B, a family of open models designed to generate synthetic LLM training data for commercial applications across numerous fields. Through what Nvidia is calling a “uniquely permissive” free open model license, Nemotron-4 340B provides a scalable way for developers to build LLMs. Synthetic data is artificially generated data designed to mimic the characteristics and structure of data found in the real world. The offering is being called “groundbreaking” and an important step toward the democratization of artificial intelligence. Continue reading Nvidia’s Open Models to Provide Free Training Data for LLMs
By Paula Parisi, June 7, 2024
Stability AI has added another audio product to its lineup, releasing the open-source text-to-audio generator Stable Audio Open 1.0 for sound design. The new model can generate up to 47 seconds of samples and sound effects, including drum beats, instrument riffs, ambient sounds, foley and production elements. It also supports generating variations of audio samples and changing their style. Stability AI — best known for the image generator Stable Diffusion — in September released Stable Audio, a commercial product that can generate sophisticated music tracks of up to three minutes. Continue reading Stability AI Releases Free Sound FX Tool, Stable Audio Open
By Paula Parisi, May 15, 2024
IBM has released a family of its Granite AI models to the open-source community. The series of decoder-only Granite code models are purpose-built to write computer code for enterprise developers, with training in 116 programming languages. The Granite models range in size from 3 to 34 billion parameters, in base and instruction-tuned variants. They offer a range of uses, from modernizing older code with new languages to optimizing programs for on-device memory constraints, such as those encountered when targeting mobile devices. In addition to generation, the models can repair and explain code. Continue reading IBM Introduces Granite LLMs for Enterprise Code Developers
By ETCentric Staff, April 26, 2024
The trend toward small language models that can efficiently run on a single device instead of requiring cloud connectivity has emerged as a focus for Big Tech companies involved in artificial intelligence. Apple has released the OpenELM family of open-source models as its entry in that field. OpenELM uses “a layer-wise scaling strategy” to efficiently allocate parameters within each layer of the transformer model, resulting in what Apple claims is “enhanced accuracy.” The “ELM” stands for “Efficient Language Models,” and one media outlet describes it as “the future of AI on the iPhone.” Continue reading Apple Unveils OpenELM Tech Optimized for Local Applications
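The layer-wise scaling idea is that instead of giving every transformer layer the same width, the model widens gradually from the first layer to the last. A minimal sketch of that allocation scheme follows; the function name, head counts and ratios are illustrative assumptions, not Apple's actual configuration:

```python
def layerwise_dims(num_layers, head_dim=64, min_heads=4, max_heads=16,
                   min_ffn_ratio=1.0, max_ffn_ratio=4.0):
    """Linearly scale attention heads and feed-forward width from the
    first transformer layer to the last, instead of a uniform allocation."""
    dims = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        heads = round(min_heads + t * (max_heads - min_heads))
        ffn_ratio = min_ffn_ratio + t * (max_ffn_ratio - min_ffn_ratio)
        dims.append({"heads": heads,
                     "attn_dim": heads * head_dim,
                     "ffn_dim": int(ffn_ratio * heads * head_dim)})
    return dims

cfg = layerwise_dims(num_layers=8)
print(cfg[0]["heads"], cfg[-1]["heads"])  # early layers are narrower than late ones
```

Under this scheme a fixed parameter budget is spent where it reportedly helps accuracy most, rather than spread evenly across depth.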
By ETCentric Staff, April 25, 2024
Microsoft, which has been developing small language models (SLMs) for some time, has announced its most capable SLM family, Phi-3. SLMs can accomplish some of the same functions as LLMs, but are smaller and trained on less data. That smaller footprint makes them well suited to run in a local environment, which means they’re ideal for smartphones, where in theory they would not even need an Internet connection to run. Microsoft claims the Phi-3 open models can outperform “models of the same size and next size up across a variety of benchmarks that evaluate language, coding and math capabilities.” Continue reading Microsoft Small Language Models Are Ideal for Smartphones
By ETCentric Staff, April 22, 2024
Pursuant to his goal of “building the world’s leading AI,” Meta Platforms CEO Mark Zuckerberg announced Friday that Meta AI is upgrading to Llama 3 concurrent with a rollout of its open-source chatbot across the company’s social platforms, integrating it into the search boxes atop WhatsApp, Instagram, Facebook and Messenger. There is also a website, meta.ai, for those who prefer browser access. Reports of Meta upgrading its social AI capabilities began leaking out early last week, albeit on a more limited test scale than what Zuckerberg announced, which, excepting Threads, is cross-platform. Continue reading Meta AI Assistant Is Launching Across Platforms with Llama 3
By ETCentric Staff, March 29, 2024
Databricks, a San Francisco-based company focused on cloud data and artificial intelligence, has released a generative AI model called DBRX that it says sets new standards for performance and efficiency in the open-source category. The mixture-of-experts (MoE) architecture contains 132 billion parameters and was pre-trained on 12T tokens of text and code data. Databricks says it provides the open community and enterprises that want to build their own LLMs with capabilities previously limited to closed model APIs. Compared to other open models, Databricks claims DBRX outperforms alternatives including Llama 2 70B and Mixtral on certain benchmarks. Continue reading Databricks DBRX Model Offers High Performance at Low Cost
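A mixture-of-experts model holds many expert sub-networks but activates only a few per token, which is how a 132-billion-parameter model can run far cheaper than a dense one of the same size. The following is a generic sketch of top-k expert routing, not Databricks' actual implementation; the expert count, dimensions and random weights are all illustrative:

```python
import numpy as np

def moe_forward(x, experts, router_w, k=4):
    """Route one token vector to its top-k experts and mix their
    outputs by softmax-normalized router scores."""
    logits = router_w @ x                      # one score per expert
    top = np.argsort(logits)[-k:]              # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over the selected experts
    return sum(w * experts[i](x) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
# Toy "experts": each is just a random linear map for illustration.
experts = [(lambda W: (lambda v: W @ v))(rng.standard_normal((d, d)))
           for _ in range(n_experts)]
router_w = rng.standard_normal((n_experts, d))
y = moe_forward(rng.standard_normal(d), experts, router_w)
print(y.shape)  # (8,)
```

Only k of the n_experts linear maps run per token, so compute scales with the active parameters rather than the total parameter count.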
By ETCentric Staff, March 19, 2024
Elon Musk’s xAI has released its Grok chatbot and open-sourced part of the underlying Grok-1 model architecture for any developer or entrepreneur to use for purposes including commercial applications. Musk unveiled Grok in November and announced that it would be publicly released this month. The chatbot itself is available to premium subscribers of the X social network, who can ask the cheeky AI questions and get answers with a snarky attitude inspired by “The Hitchhiker’s Guide to the Galaxy” sci-fi novel. The training data for Grok’s foundation LLM is said to include X social posts. Continue reading Grok-1 Architecture Open-Sourced for General Release by xAI
By ETCentric Staff, February 16, 2024
Stability AI, purveyor of the popular Stable Diffusion image generator, has introduced a completely new model called Stable Cascade. Now in preview, Stable Cascade uses a different architecture than Stable Diffusion’s SDXL, one the UK company’s researchers say is more efficient. Cascade builds on a compression architecture called Würstchen (German for “sausage”) that Stability began sharing in research papers early last year. Würstchen is a three-stage process that includes two-step encoding. It uses fewer parameters, which translates to less training data, greater speed and reduced costs. Continue reading Stability AI Advances Image Generation with Stable Cascade
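The efficiency gain comes from generating in a heavily compressed latent space first, then expanding back to pixels in stages. The sketch below shows the three-stage cascade purely as shape transformations; the stages here are stubs and every dimension is a made-up illustration, since the real stages are large learned models:

```python
import numpy as np

# Illustrative only: the three cascade stages shown as shape transformations.
# Real stages are trained networks; the compression ratios below are invented
# to convey the idea of diffusing in a tiny latent space first.

def stage_c(prompt_embedding):
    """Text-conditioned diffusion in a highly compressed latent space."""
    return np.zeros((16, 24, 24))            # tiny semantic latent

def stage_b(c_latent):
    """Diffusion decoder: expand the semantic latent toward image scale."""
    return np.zeros((4, 256, 256))           # intermediate latent

def stage_a(b_latent):
    """Learned decoder (no diffusion): latent back to pixels."""
    return np.zeros((3, 1024, 1024))         # RGB image

img = stage_a(stage_b(stage_c(np.zeros(768))))
print(img.shape)  # (3, 1024, 1024)
```

Because the expensive text-conditioned diffusion runs only on the tiny Stage C latent, training and inference cost drops relative to diffusing at image resolution.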
By ETCentric Staff, February 9, 2024
Apple has released MGIE, an open-source AI model that edits images using natural language instructions. MGIE, short for MLLM-Guided Image Editing, can also modify and optimize images. Developed in conjunction with the University of California, Santa Barbara, MGIE is Apple’s first AI model. The multimodal MGIE, which understands text and image input, can also crop, resize, flip and add filters based on text instructions. Apple says its instruction set is easier than those of other AI editing programs, and simpler and faster to learn than a traditional program like Apple’s own Final Cut Pro. Continue reading Apple Launches Open-Source Language-Based Image Editor
By Paula Parisi, December 7, 2023
IBM and Meta Platforms have launched the AI Alliance, a coalition of companies and educational institutions committed to responsible, transparent development of artificial intelligence. The group launched this week with more than 50 global founding participants from industry, startup, academia, research and government. Among the members and collaborators: AMD, CERN, Cerebras, Cornell University, Dell Technologies, Hugging Face, Intel, Linux Foundation, NASA, Oracle, Red Hat, Sony Group, Stability AI, the University of Tokyo and Yale Engineering. The group’s stated purpose is “to support open innovation and open science in AI.” Continue reading IBM and Meta Debut AI Alliance for Safe Artificial Intelligence
By Paul Bennun, December 4, 2023
Stability AI, developer of Stable Diffusion (one of the leading visual content generators, alongside Midjourney and DALL-E), has introduced SDXL Turbo — a new AI model that demonstrates more of the latent possibilities of the common diffusion generation approach: images that update in real time as the user’s prompt updates. Real-time generation was always a theoretical possibility with previous diffusion models, but more efficient generation algorithms and the steady accretion of GPUs and TPUs in developers’ data centers now make the experience feel magical. Continue reading Stability AI Intros Real-Time Text-to-Image Generation Model
By Paula Parisi, October 31, 2023
The United Nations has formed an advisory board on artificial intelligence composed of 39 members from government, academia and industry who will “undertake analysis and advance recommendations for the international governance of AI.” The move comes as U.S. legislators and tech industry players are also prioritizing model governance. “Globally coordinated AI governance is the only way to harness AI for humanity while addressing its risks and uncertainties,” the UN announced in unveiling the initiative, co-chaired by Carme Artigas, Spain’s secretary of state for digitalization and AI, and James Manyika, SVP of research, technology and society at Google. Continue reading Google, Microsoft, Sony Tapped for UN AI Governance Board
By Paula Parisi, August 29, 2023
Hugging Face has collected $235 million in an investment round that includes contributions from Amazon, IBM, Google, Nvidia, Salesforce, AMD, Intel and Qualcomm. The New York-based startup creates and distributes open-source tools for artificial intelligence development, carving an AI-centric niche similar to the more general programming approach taken by the Microsoft-owned GitHub. The incoming cash infusion — earmarked for talent recruitment — gives Hugging Face a lofty $4.5 billion valuation that experts say indicates momentum for open source in what has to date been a highly competitive AI sector. Continue reading Big Tech Firms Propel Hugging Face to $4.5 Billion Valuation
By Paula Parisi, July 20, 2023
This week, Meta Platforms released Llama 2, the next generation of its open-source large language model that is free for research and commercial use. Llama 2’s pretrained and fine-tuned language models are available in sizes ranging from 7 to 70 billion parameters. Meta also named Microsoft Azure its “preferred partner for Llama 2,” offering it through the Azure AI model catalog for use with cloud-native tools that leverage content filtering and safety features. Meta says Llama 2 is “also optimized to run locally on Windows,” providing developers a seamless workflow across enterprise and consumer platforms. Continue reading Meta Unveils Llama 2 LLM with Microsoft as Preferred Partner