By Paula Parisi, December 6, 2023
IBM has produced two quantum computing systems to meet its 2023 roadmap, one based on a chip named Condor, which at 1,121 functioning qubits is the largest transmon-based quantum processor released to date. Transmon chips use a type of superconducting qubit engineered to be more resistant to noise than other qubit designs, which are notoriously unstable. The second IBM system uses three Heron chips, each with 133 qubits. The more modestly scaled Heron and its successor, Flamingo, play a vital role in IBM’s quantum roadmap, which the company says has advanced significantly with these releases. Continue reading IBM Announces Significant Advances in Quantum Computing
By Paula Parisi, November 17, 2023
Aurora, built by Intel and Hewlett Packard Enterprise, is the latest supercomputer to come online at the Department of Energy’s Argonne National Laboratory outside Chicago, and is among a new breed of exascale supercomputers that draw on artificial intelligence. When fully operational in 2024, Aurora is expected to be the first such machine to achieve two quintillion operations per second (two exaflops). Brain analytics and the design of batteries that last longer and charge faster are among the vast potential uses of exascale machines. Continue reading Aurora Supercomputer Targets 2 Quintillion Ops per Second
By Paula Parisi, November 3, 2023
The UK government plans to invest at least £225 million (about $273 million) in AI supercomputing with the aim of bringing Great Britain into closer parity with AI leaders the U.S. and China. Among the new machines coming online is Dawn, which was built by the University of Cambridge Research Computing Services, Intel and Dell and is being hosted by the Cambridge Open Zettascale Lab. “Dawn Phase 1 represents a huge step forward in AI and simulation capability for the UK, deployed and ready to use now,” said Dr. Paul Calleja, director of Research Computing at Cambridge. Continue reading United Kingdom Investing $273 Million in AI Supercomputing
By Paula Parisi, October 12, 2023
Europe is moving forward in the supercomputer space, with two new exascale machines set to come online. Jupiter will be installed at the Jülich Supercomputing Centre in western Germany, with assembly set to start as early as Q1 2024. Scotland will be home to the UK’s first exascale supercomputer, to be hosted at the University of Edinburgh, with installation commencing in 2025. An exascale supercomputer can run calculations at speeds of one exaflop (1,000 petaflops) or greater. Once complete, the two new machines are expected to rank among the world’s most powerful supercomputers. Continue reading Germany, UK to Host Europe’s First Exascale Supercomputers
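For readers keeping the units straight, the conversion this entry relies on (1 exaflop = 1,000 petaflops = 10^18 floating-point operations per second) can be sketched in a few lines of Python. This is purely illustrative: the helper name is hypothetical, and the figures plugged in are drawn from machines covered elsewhere in this digest.

    # Illustrative only: the exascale unit relationships cited in this entry.
    PETAFLOP = 10**15  # one quadrillion floating-point operations per second
    EXAFLOP = 10**18   # 1,000 petaflops, i.e. one quintillion operations per second

    def is_exascale(peak_ops_per_second: float) -> bool:
        """Hypothetical helper: does a peak speed clear the exascale bar?"""
        return peak_ops_per_second >= EXAFLOP

    print(is_exascale(2 * EXAFLOP))      # Aurora's 2-exaflop target -> True
    print(is_exascale(1.5 * EXAFLOP))    # Frontier's projected 1.5 exaflops -> True
    print(is_exascale(700 * PETAFLOP))   # HiPerGator's 700 petaflops -> False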
By Paula Parisi, July 24, 2023
Cerebras Systems has unveiled Condor Galaxy 1 (CG-1), the first in a planned network of nine interconnected AI supercomputers; CG-1 delivers 4 exaflops of AI compute across 54 million cores. Cerebras says the CG-1 greatly accelerates AI model training, completing its first run on a large language model trained for Abu Dhabi-based G42 in only 10 days. Cerebras and G42 have partnered to offer the Santa Clara, California-based CG-1 as a cloud service, positioning it as an alternative to Nvidia’s DGX GH200 cloud supercomputer. The companies plan to release CG-2 and CG-3 in early 2024. Continue reading Cerebras, G42 Partner on a Supercomputer for Generative AI
By Paula Parisi, May 31, 2023
Nvidia CEO Jensen Huang’s keynote at Computex Taipei marked the official launch of the company’s Grace Hopper Superchip, a breakthrough in accelerated processing designed for giant-scale AI and high-performance computing applications. Huang also raised the curtain on Nvidia’s new supercomputer, the DGX GH200, which connects 256 Grace Hopper Superchips into a single data-center-sized GPU with 144 terabytes of scalable shared memory for building massive AI models at the enterprise level. Google, Meta and Microsoft are among the first in line to gain access to the DGX GH200, positioned as “a blueprint for future hyperscale generative AI infrastructure.” Continue reading Nvidia Announces a Wide Range of AI Initiatives at Computex
By Paula Parisi, May 22, 2023
Meta Platforms has shared additional details on its next generation of AI infrastructure. The company has designed two custom silicon chips: one for training and running AI models, which could eventually power metaverse functions like virtual reality and augmented reality, and another tailored to optimize video processing. Meta publicly discussed its internal chip development last week ahead of a Thursday virtual event on AI infrastructure. The company also showcased an AI-optimized data center design and talked about phase two of deployment of its 16,000-GPU supercomputer for AI research. Continue reading Meta In-House Chip Designs Include Processing for AI, Video
By Paula Parisi, May 15, 2023
Ten years ago AMD introduced the concept of smaller, interconnected chips that together work like one digital brain. Sometimes called “chiplets,” they’re generally less expensive to build than one large chip, and when grouped into bundles they have often outperformed single large chips. In addition to AMD, companies including Apple, Amazon, Intel, IBM and Tesla have embraced the chiplet formula, which leverages advanced packaging technology, an integral part of building advanced semiconductors. Experts now predict that packaging will become an even greater focus in coming years as the global chip wars heat up. Continue reading Advanced Packaging for ‘Chiplets’ a Focus of CHIPS Funding
By Paula Parisi, March 29, 2023
Nvidia is launching new cloud services to help businesses leverage AI at scale. Under the banner Nvidia AI Foundations, the company is providing tools to let clients build and run their own generative AI models, custom trained on data specific to the intended task. The individual cloud offerings are Nvidia NeMo for language models and Nvidia Picasso for visual content, including images, video and 3D. Speaking at Nvidia’s annual GPU Technology Conference (GTC) last week, CEO Jensen Huang said “the impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models.” Continue reading Nvidia Introduces Cloud Services to Leverage AI Capabilities
By Paula Parisi, March 16, 2023
The demand for artificial intelligence from enterprises as well as consumers is putting tremendous pressure on cloud service providers to supply the vast data center resources required to train the models and deploy the resulting apps. Microsoft recently opened up about the pivotal role it played in getting OpenAI’s ChatGPT to the release phase via its Azure cloud computing platform, linking “tens of thousands” of Nvidia A100 GPUs to train the model. Microsoft is already upgrading Azure with Nvidia’s new H100 chips and latest InfiniBand networking to accommodate the next generation of AI supercomputers. Continue reading Microsoft Believes Azure Platform Is Unlocking ‘AI Revolution’
By Paula Parisi, November 18, 2022
Microsoft has entered into a multi-year deal with Nvidia to build what they’re calling “one of the world’s most advanced supercomputers,” powered by Microsoft Azure’s advanced supercomputing infrastructure combined with Nvidia GPUs, networking and a full stack of AI software, to help enterprises train, deploy and scale AI, including large, state-of-the-art models. “AI is fueling the next wave of automation across enterprises and industrial computing, enabling organizations to do more with less as they navigate economic uncertainties,” Microsoft cloud and AI group executive VP Scott Guthrie said of the alliance. Continue reading Microsoft, Nvidia Partner on Azure-Hosted AI Supercomputer
By Debra Kaufman, July 23, 2020
The University of Florida (UF) and Nvidia joined forces to enhance the former’s HiPerGator supercomputer with DGX SuperPOD architecture. Set to go online by early 2021, HiPerGator will deliver 700 petaflops (a petaflop is one quadrillion floating-point operations per second), making it the fastest academic AI supercomputer. UF and Nvidia said HiPerGator will enable the application of AI to a range of studies, including “rising seas, aging populations, data security, personalized medicine, urban transportation and food insecurity.” Continue reading Nvidia and University of Florida Partner on AI Supercomputer
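As a quick sanity check on the figure above: a petaflop is 10^15 operations per second, so HiPerGator’s quoted 700 petaflops works out to 7 x 10^17 operations per second, or 0.7 exaflops. A brief Python illustration (variable names are ours, not UF’s or Nvidia’s):

    PETAFLOP = 10**15                        # one quadrillion floating-point operations per second
    hipergator_peak = 700 * PETAFLOP         # 7e17 operations per second, about 0.7 exaflops
    print(f"{hipergator_peak:.1e} ops/sec")  # prints 7.0e+17 ops/sec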
By Debra Kaufman, May 21, 2020
At Microsoft’s Build 2020 developer conference, the company debuted a supercomputer built in collaboration with, and exclusively for, OpenAI on Azure. It’s the result of an agreement whereby Microsoft would invest $1 billion in OpenAI to develop new technologies for Microsoft Azure and extend AI capabilities. OpenAI agreed to license some of its IP to Microsoft, which would then sell it to partners as well as train and run AI models on Azure. Microsoft stated that the supercomputer is the fifth most powerful in the world. Continue reading Microsoft Announces Azure-Hosted OpenAI Supercomputer
By Debra Kaufman, September 5, 2019
Dell EMC and Intel introduced Frontera, an academic supercomputer that replaces Stampede2 at the University of Texas at Austin’s Texas Advanced Computing Center (TACC). The companies announced plans to build the computer in August 2018, with funding from a $60 million National Science Foundation grant. According to Intel, Frontera’s peak performance can reach 38.7 quadrillion floating point operations per second (38.7 petaflops), making it one of the fastest such computers for modeling, simulation, big data and machine learning. Continue reading Academic Supercomputer Is Unveiled by Intel and Dell EMC
By Emily Wilson, May 9, 2019
This week, AMD announced a partnership with Cray to build a supercomputer called Frontier, which the two companies predict will become the world’s fastest supercomputer, capable of “exascale” performance when it is released in 2021. All told, they expect Frontier to deliver 1.5 exaflops, performing roughly 50 times faster than today’s top supercomputers and faster than the top 160 currently available supercomputers combined. Frontier will be built at Oak Ridge National Laboratory in Tennessee. Continue reading AMD’s New Frontier Will Be World’s Fastest Supercomputer