By Paula Parisi, December 7, 2023
IBM and Meta Platforms have launched the AI Alliance, a coalition of companies and educational institutions committed to responsible, transparent development of artificial intelligence. The group launched this week with more than 50 global founding participants from industry, startup, academia, research and government. Among the members and collaborators: AMD, CERN, Cerebras, Cornell University, Dell Technologies, Hugging Face, Intel, Linux Foundation, NASA, Oracle, Red Hat, Sony Group, Stability AI, the University of Tokyo and Yale Engineering. The group’s stated purpose is “to support open innovation and open science in AI.” Continue reading IBM and Meta Debut AI Alliance for Safe Artificial Intelligence
By Paula Parisi, July 24, 2023
Cerebras Systems has unveiled the Condor Galaxy 1, the first of nine planned networked AI supercomputers, delivering 4 exaflops of AI compute across 54 million cores. Cerebras says the CG-1 greatly accelerates AI model training, completing its first training run, a large language model for Abu Dhabi-based G42, in only 10 days. Cerebras and G42 have partnered to offer the Santa Clara, California-based CG-1 as a cloud service, positioning it as an alternative to Nvidia’s DGX GH200 cloud supercomputer. The companies plan to release CG-2 and CG-3 in early 2024. Continue reading Cerebras, G42 Partner on a Supercomputer for Generative AI
By Paula Parisi, November 16, 2022
Cerebras Systems has unveiled its Andromeda AI supercomputer. With 13.5 million cores, it can calculate at the rate of 1 exaflop — roughly one quintillion (1 followed by 18 zeroes) operations — per second using a 16-bit floating point format. Andromeda’s brain is built of 16 linked Cerebras CS-2 systems, AI computers that use giant Wafer-Scale Engine 2 chips. Each chip has hundreds of thousands of cores, but is more compact and powerful than servers that use standard CPUs, according to Cerebras, which is making Andromeda available for commercial and academic research. Continue reading Cerebras Supercomputer Calculates at 1 Exaflop per Second
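A back-of-envelope check of those headline numbers (illustrative only; real sustained throughput depends on workload and precision):

```python
# Implied per-core throughput from the article's figures:
# 1 exaflop (10**18 FP16 operations per second) across 13.5 million cores.
total_ops_per_sec = 1e18
cores = 13.5e6
per_core = total_ops_per_sec / cores
print(f"~{per_core / 1e9:.0f} GFLOPs per core")  # roughly 74 GFLOPs per core
```

That per-core figure is modest by GPU standards; the design's claimed advantage comes from keeping all those cores and their memory on the same wafers rather than communicating across separate chips.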
By Paula Parisi, September 19, 2022
Nvidia, Intel and ARM have published a draft specification for a common AI interchange format aimed at faster and more efficient system development. The proposed “8-bit floating point” standard, known as FP8, could accelerate both training and inference by reducing memory usage and optimizing interconnect bandwidth. The lower-precision number format is a key factor in driving efficiency. Transformer networks in particular benefit from 8-bit floating-point precision, and a common interchange format should facilitate interoperability advances for both hardware and software platforms. Continue reading Nvidia, Intel and ARM Publish New FP8 AI Interchange Format
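The draft describes two FP8 encodings, E4M3 and E5M2. To illustrate how coarse 8-bit precision is, here is a minimal Python sketch that rounds a value to a simulated E5M2 format (5 exponent bits, 2 stored mantissa bits); `quantize_e5m2` is a hypothetical helper for illustration only and ignores subnormals, infinities and NaN:

```python
import math

def quantize_e5m2(x):
    """Round a float to a simulated FP8 E5M2 value (sketch only:
    ignores subnormals, infinities and NaN)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(abs(x))        # abs(x) == m * 2**e, with 0.5 <= m < 1
    q = round(m * 8) / 8 * 2.0 ** e  # keep 3 significant bits (1 implicit + 2 stored)
    q = min(q, 57344.0)              # clamp to E5M2's largest finite value
    return math.copysign(q, x)

print(quantize_e5m2(3.14159))  # pi rounds all the way down to 3.0
```

With only two mantissa bits, the representable values near pi are 3.0 and 3.5, which is why FP8 training typically relies on scaling techniques to keep values in a usable range.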
By Paula Parisi, March 24, 2022
Nvidia CEO Jensen Huang announced a host of new AI tech geared toward data centers at the GTC 2022 conference this week. Available in Q3, the H100 Tensor Core GPUs are built on the company’s new Hopper GPU architecture. Huang described the H100 as the next “engine of the world’s AI infrastructures.” Hopper debuts in Nvidia DGX H100 systems designed for enterprise. With data centers, “companies are manufacturing intelligence and operating giant AI factories,” Huang said, speaking from a real-time virtual environment in the firm’s Omniverse 3D simulation platform. Continue reading Nvidia Introduces New Architecture to Power AI Data Centers
By Debra Kaufman, August 25, 2021
Deep learning requires a complicated neural network composed of computers wired together into clusters at data centers, with cross-chip communication using a lot of energy and slowing down the process. Cerebras has a different approach. Instead of making chips by printing dozens of them onto a large silicon wafer and then cutting them out and wiring them to each other, it is making the largest computer chip in the world, the size of a dinner plate. Texas Instruments tried this approach in the 1960s but ran into problems. Continue reading Cerebras Chip Tech to Advance Neural Networks, AI Models
By Debra Kaufman, November 19, 2020
Cerebras Systems and its partner, the Department of Energy’s National Energy Technology Laboratory (NETL), revealed that the CS-1 system, built around a single massive chip with an innovative design, is more than 10,000 times faster than a graphics processing unit (GPU). The CS-1, based on Cerebras’ Wafer-Scale Engine (WSE) and its 400,000 AI cores, was first announced in November 2019. The partnership between the Energy Department and Cerebras includes deployments with the Argonne National Laboratory and Lawrence Livermore National Laboratory. Continue reading The Cerebras CS-1 Chip Is 10,000 Times Faster Than a GPU
By Debra Kaufman, August 26, 2019
Los Altos, CA-based startup Cerebras, dedicated to advancing deep learning, has created a computer chip almost nine inches (22 centimeters) on each side — huge by the standards of today’s chips, which are typically the size of postage stamps or smaller. The company plans to offer this chip to tech companies to help them improve artificial intelligence at a faster clip. The Cerebras Wafer-Scale Engine (WSE), which took three years to develop, has impressive stats: 1.2 trillion transistors, 46,225 square millimeters, 18 gigabytes of on-chip memory and 400,000 processing cores. Continue reading Cerebras Builds Enormous Chip to Advance Deep Learning
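The published WSE figures are internally consistent; a quick back-of-envelope check (illustrative only, using the article's numbers):

```python
import math

# Sanity-check the published Wafer-Scale Engine dimensions.
area_mm2 = 46225
side_mm = math.sqrt(area_mm2)    # 215 mm, i.e. 21.5 cm per side
side_in = side_mm / 25.4         # about 8.5 inches per side
density = 1.2e12 / area_mm2      # ~26 million transistors per mm^2
print(f"{side_mm:.0f} mm per side ({side_in:.1f} in), "
      f"{density / 1e6:.0f}M transistors/mm^2")
```

A square root of 46,225 mm² gives a die about 21.5 cm (roughly 8.5 inches) on a side, in the ballpark of the “almost nine inches” cited above.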
By Debra Kaufman, December 12, 2018
Amazon revealed last month that it had spent the previous few years building a chip for use in its worldwide data centers. It’s not alone; Apple and Google also seek to design and manufacture their own chips as part of a cost-saving strategy. Intel, which thus far hasn’t faced much competition, will feel the impact as its own customers cut into the semiconductor industry’s annual $412 billion in sales. Amazon’s massive need for chips means it will likely continue to buy from Intel, though now with a better bargaining position. Continue reading Tech Companies Challenge Intel by Building Their Own Chips