By Paula Parisi, October 10, 2023
Dell Technologies is expanding its Generative AI Solutions portfolio to help enterprise customers add GenAI to their workflows. The expansion includes support for advanced infrastructure and collaborative data solutions that optimize and help secure intelligence gathering and utilization. Dell takes a “validated design” approach to optimization and acceleration, testing different hardware configurations designed to fit the needs of various use cases. Dell has partnered with Nvidia on a validated GenAI design for model customization, and with Starburst on data lakehouse solutions that tap multi-cloud data for AI end-use. Continue reading Dell Partnering with Nvidia and Starburst for GenAI Solutions
By Rob Scott, January 5, 2023
Nvidia announced during CES this week that it plans to roll out its RTX Video Super Resolution feature in February for web video content viewed through the Google Chrome and Microsoft Edge browsers. The company promises AI upscaling up to 4K quality, but the feature requires a PC with an Nvidia 30- or 40-series GPU. The technology — which can upscale video with resolutions between 360p and 1440p, including 1080p, at frame rates up to 144Hz — has already been available on the Nvidia Shield TV and Shield TV Pro streaming media players. However, introducing browser support should significantly increase its audience. Continue reading CES: Nvidia’s AI Upscaling Tech to Tackle Blurry Web Video
By Paula Parisi, September 27, 2022
Nvidia Research is introducing a new AI model that largely automates the process of creating virtual worlds, making it easier for developers to populate games and VR experiences with a diverse array of 3D buildings, vehicles, characters and more. Trained using only 2D images, GET3D generates 3D shapes with high-fidelity textures and complex geometric details. GET3D can generate “a virtually unlimited number of 3D shapes based on the data it’s trained on,” according to Nvidia, which says the objects can be used in 3D representations of buildings or the great outdoors, in games or the metaverse. Continue reading Nvidia Debuts New AI Model That Quickly Generates Objects
By Paula Parisi, September 22, 2022
“Computing is advancing at incredible speeds. Acceleration is propelling this rocket, and its fuel is AI,” Nvidia founder and CEO Jensen Huang said in his 2022 GTC conference keynote, announcing two new AI services: the Nvidia NeMo large language model service, which helps customize LLMs, and the Nvidia BioNeMo LLM service, aimed at bio researchers. Nvidia also unveiled its GeForce RTX 40 Series GPUs, shipping in Q4. Powered by the company’s new Ada Lovelace architecture, the two new models — GeForce RTX 4090 and GeForce RTX 4080 — offer better ray tracing performance and AI-based neural graphics. Continue reading Nvidia Introduces AI-Powered GPUs and Cloud LLM Services
By Paula Parisi, May 10, 2022
Nvidia has begun previewing its latest H100 Tensor Core GPU, promising “an order-of-magnitude performance leap for large-scale AI and HPC” over previous iterations, according to the company. Nvidia founder and CEO Jensen Huang announced the Hopper architecture earlier this year, and IT professionals’ website ServeTheHome recently had a chance to see an H100 SXM5 module demonstrated. Consuming up to 700W to deliver 60 FP64 Tensor teraflops, the module — which features 80 billion transistors and has 8448/16896 FP64/FP32 cores in addition to 528 Tensor cores — is described as “monstrous” in the best way. Continue reading Nvidia Touts New H100 GPU and Grace CPU Superchip for AI
By Paula Parisi, October 14, 2021
Microsoft and Nvidia have trained what they describe as the most powerful AI-driven language model to date, the Megatron-Turing Natural Language Generation model (MT-NLG), which has “set the new standard for large-scale language models in both model scale and quality,” the firms say. As the successor to the companies’ Turing NLG 17B and Megatron-LM, the new MT-NLG has 530 billion parameters, or “3x the number of parameters compared to the existing largest model of this type” and demonstrates unmatched accuracy in a broad set of natural language tasks. Continue reading Microsoft and Nvidia Debut World’s Largest Language Model
By Debra Kaufman, August 18, 2021
Samsung is using Synopsys’ DSO.ai tool to design some of its next-gen Exynos mobile processors for 5G and AI, which will be used in its own smartphones and in other devices. Synopsys chair and co-chief executive Aart de Geus said this is the first example of a “real commercial processor design with AI.” Google, IBM and Nvidia are among the other companies that have discussed designing chips with AI. Synopsys, which works with dozens of companies, also has years of expertise in creating advanced designs to train an AI algorithm. Continue reading Samsung First to Design Commercial Semiconductor with AI
By Debra Kaufman, November 23, 2020
Last month Nvidia launched Maxine, a software development kit containing technology the company claims will cut the bandwidth requirements of video-conferencing software by a factor of ten. A neural network creates a compressed version of a person’s face which, when sent across the network, is decompressed by a second neural network. The software can also make helpful corrections to the image, such as rotating a face to look straight forward or replacing it with a digital avatar. Nvidia is now waiting for software developers to productize the technology. Continue reading Nvidia Cuts Video-Conferencing Bandwidth by Factor of Ten
By Debra Kaufman, September 3, 2020
Nvidia debuted its 28-billion-transistor Ampere-based 30 Series graphics chips for PC gamers, arriving just ahead of the next-generation consoles Microsoft and Sony plan to unveil by the holidays. The 30 Series GeForce RTX chips (available September 17) comprise the RTX 3070 ($500), 3080 ($700) and 3090 ($1,500), with second-generation RTX (real-time ray-tracing graphics). According to chief executive Jensen Huang, there are “hundreds of RTX games” in development, joining “Minecraft,” “Control” and “Wolfenstein: Youngblood,” which already feature RTX. Continue reading Nvidia Debuts GeForce RTX Chip Series with Lower Latency
By Debra Kaufman, January 7, 2019
At Nvidia’s CES 2019 press conference, founder/chief executive Jensen Huang was enthused about gaming. “Usually I also focus on AI and self-driving cars,” he said. “We have a lot of announcements about that. But today it’s all about gaming.” One big announcement was the company’s new GeForce RTX 2060, which is based on the Turing architecture and supports both ray tracing and artificial intelligence. The RTX 2060, priced at $349, will be available January 15 “from every major OEM, system builder and graphics card partner.” Continue reading Nvidia Debuts Next-Gen Gaming with Ray-Tracing, AI at CES
By Debra Kaufman, August 22, 2018
At SIGGRAPH 2018, Nvidia debuted its new Turing architecture, bringing ray tracing, a rendering technique, to professional and consumer graphics cards. Considered the Holy Grail by many industry pros, ray tracing works by modeling light in real time as it intersects with objects, making it ideal for photorealistic lighting and VFX. Until now, real-time ray tracing has been impractical because it requires an immense amount of expensive computing power; Nvidia’s professional Turing card costs $10,000. Continue reading Nvidia Ray-Tracing Technology a Quantum Leap in Rendering
By Debra Kaufman, August 15, 2018
Nvidia unveiled its new Turing architecture during a keynote at SIGGRAPH 2018, as well as three new Quadro RTX workstation graphics cards aimed at professionals. Nvidia calls the Turing architecture its “greatest leap since the invention of the CUDA GPU in 2006.” The RTX chips are the first to use the company’s ray tracing rendering method, which results in more realistic imagery. Also at SIGGRAPH, Porsche showed off car designs accomplished with Epic Games’ Unreal Engine and Nvidia’s RTX chips. Continue reading Nvidia Quadro RTX Chips Offer AI and Real-Time Ray Tracing