By Paula Parisi, September 19, 2022
Nvidia, Intel and ARM have published a draft specification for a common AI interchange format aimed at faster and more efficient system development. The proposed “8-bit floating point” standard, known as FP8, could accelerate both the training and operation of AI systems by reducing memory usage and optimizing interconnect bandwidth. The lower-precision number format is a key factor in driving efficiency. Transformer networks, in particular, benefit from 8-bit floating-point precision, and a common interchange format should facilitate interoperability advances for both hardware and software platforms. Continue reading Nvidia, Intel and ARM Publish New FP8 AI Interchange Format
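To make that concrete, below is a minimal sketch (not part of the spec) of what an FP8 value in the draft’s E4M3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7) can represent, and of rounding a full-precision number to its nearest FP8 neighbor. The helper names are ours.

```python
# Minimal sketch, assuming the draft's E4M3 layout; names are illustrative.
import numpy as np

def e4m3_values():
    """Enumerate the finite, non-negative values an E4M3 float can encode."""
    vals = set()
    for exp in range(16):
        for man in range(8):
            if exp == 15 and man == 7:
                continue                               # code point reserved for NaN
            if exp == 0:
                vals.add((man / 8) * 2.0 ** -6)        # subnormals, no implicit 1
            else:
                vals.add((1 + man / 8) * 2.0 ** (exp - 7))  # normals
    return np.array(sorted(vals))                      # 0.0 up to 448.0

def quantize_to_e4m3(x):
    """Round each element of x to the nearest E4M3-representable value."""
    grid = e4m3_values()
    grid = np.concatenate([-grid[::-1], grid])         # mirror for the sign bit
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return grid[np.abs(grid[None, :] - x).argmin(axis=1)]

print(quantize_to_e4m3([0.1, 1.0, 3.14159, 500.0]))    # 500 clamps toward max 448
```

The payoff described in the draft is visible here: each value occupies a single byte instead of two or four, which is where the memory and bandwidth savings come from.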
By Paula Parisi, August 29, 2022
Google has launched an AI Test Kitchen and is inviting users to sign up to test experimental AI-powered systems and provide feedback before the applications are deployed for commercial use. First up is the Language Model for Dialogue Applications (LaMDA), which has shown promising early results. The AI Test Kitchen has begun a gradual rollout to small groups of users in the U.S. on Android, with plans to include iOS in the coming weeks. According to Google, “as we move ahead with development, we feel a great responsibility to get this right.” Continue reading Google Debuts AI Test Kitchen, LaMDA Language Generator
By Paula Parisi, July 26, 2022
OpenAI is expanding its beta outreach for DALL-E 2 by inviting an additional one million waitlisted people to join the AI imaging platform over the coming weeks. DALL-E users will receive 50 credits during their first month of use and 15 credits every subsequent month, with each credit redeemable for an original DALL-E-prompted generation (returning four images) or an edit or variation prompt (which returns three images). Additional credits may be purchased in 115-credit increments for $15. Starting this month, users receive the right to commercialize their DALL-E images. However, the move highlights the legal implications of AI-generated art and possible copyright infringement. Continue reading Legal Questions Loom as OpenAI Widens Access to DALL-E
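For a sense of scale, here is a quick back-of-envelope sketch using only the numbers quoted above; the variable names are ours.

```python
# Per-image cost implied by the published pack price: $15 for 115 credits.
pack_price, pack_credits = 15.00, 115

per_credit = pack_price / pack_credits       # ~$0.130 per prompt
per_image_generation = per_credit / 4        # a generation returns four images
per_image_variation = per_credit / 3         # an edit/variation returns three

print(f"${per_credit:.3f}/credit, "
      f"${per_image_generation:.3f}/image (generation), "
      f"${per_image_variation:.3f}/image (edit or variation)")
```

That works out to roughly three to four cents per generated image at the paid rate.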
By Paula Parisi, May 17, 2022
Alphabet-backed research lab DeepMind last week released a new AI system called Gato that takes a step toward artificial general intelligence (AGI), technology that enables machines to undertake and learn the same tasks as humans. Described by DeepMind as a “general purpose” AI, Gato’s debut coincides with LinkedIn co-founder Reid Hoffman and DeepMind co-founder Mustafa Suleyman securing $225 million to fund Inflection AI, an artificial intelligence company the pair launched earlier this year to simplify communication between humans and computers. Neither the investors nor the valuation were disclosed. Continue reading DeepMind’s Gato Moves Toward Artificial General Intelligence
By Debra Kaufman, March 9, 2020
Transforming 2D objects into 3D ones is a challenge that has stymied numerous artificial intelligence labs, including those at Facebook, Nvidia and startup Threedy.ai. Now, a Microsoft Research team says it has created the first “scalable” training technique for deriving 3D models from 2D data. The technology can, furthermore, learn to generate better shapes when trained exclusively with 2D images. The Microsoft team took advantage of rendering software, of the kind featured in industrial renderers, that produces 2D images from 3D shape data. Continue reading Microsoft Develops Scalable 2D-to-3D Conversion Technique
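A minimal sketch of the training idea described above, in PyTorch: a generator proposes a 3D shape (here a toy voxel grid), a differentiable projection “renders” it to 2D, and the loss is computed only against 2D images, so gradients reach the 3D representation without any 3D ground truth. Microsoft’s actual renderer and losses are far more sophisticated; every name below is illustrative.

```python
import torch
import torch.nn as nn

def render_silhouette(voxels):
    # Differentiable orthographic projection: a pixel is "on" if any voxel
    # along the depth axis is occupied; the soft product keeps gradients alive.
    return 1.0 - torch.prod(1.0 - voxels, dim=-1)

generator = nn.Sequential(nn.Linear(64, 512), nn.ReLU(),
                          nn.Linear(512, 16 * 16 * 16), nn.Sigmoid())
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

target_2d = torch.rand(16, 16)                 # stand-in for a real 2D photo
for step in range(100):
    z = torch.randn(1, 64)
    voxels = generator(z).view(16, 16, 16)     # proposed 3D shape
    image = render_silhouette(voxels)          # rendered back down to 2D
    loss = ((image - target_2d) ** 2).mean()   # supervision is 2D-only
    opt.zero_grad(); loss.backward(); opt.step()
```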
By Debra Kaufman, July 16, 2019
Social networks, dating services, photo websites and surveillance cameras are just some of the sources feeding a growing number of databases compiling people’s faces. According to privacy advocates, Microsoft and Stanford University are among the many groups gathering images, with one such repository holding two million images. These photos are used to train neural networks in pattern recognition, in the quest to create cutting-edge facial recognition platforms. Some companies have been collecting images for more than 10 years. Continue reading Privacy Concerns Grow Over Facial Recognition Data Sets
By Debra Kaufman, July 15, 2019
At VentureBeat’s Transform 2019 conference in San Francisco, Intel vice president/chief technology officer of AI products Amir Khosrowshahi and IoT general manager Jonathan Ballon discussed the evolution of AI adoption. A Lopez Research survey revealed that 86 percent of companies believe AI will be strategic to their business, but only 36 percent of them report having made “meaningful progress.” Khosrowshahi pointed out that more companies than ever have access to the necessary data, tools and training. Continue reading AI Development Accelerates, Chips Speed Model Training
By Debra Kaufman, April 22, 2019
Facebook is bringing back FMV (full motion video) games, which use pre-recorded video files to display action. Thanks to the work of Facebook AI Research scientists, the new FMV games are much improved: a system can extract controllable characters from real-world videos and then direct their motion, generating new image sequences. Facebook AI Research scientists, in collaboration with Tel Aviv University, also unveiled a system that converts the audio of one singer into the voice of another, without supervision. Continue reading Facebook Uses AI to Improve Games, Swap Singers’ Voices
By Debra Kaufman, April 4, 2019
Amazon introduced AWS Deep Learning Containers, a collection of Docker images preinstalled with popular deep learning frameworks, with the aim of making it simpler to get AI-enabled apps running on Amazon Web Services. AWS general manager of deep learning Dr. Matt Wood noted that the company has “done all the hard work of building, compiling, and generating, configuring, optimizing all of these frameworks,” taking that burden off of app developers. The container images are all “preconfigured and validated by Amazon.” Continue reading AWS Tool Aims to Simplify the Creation of AI-Powered Apps
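As a rough sketch of the workflow such images enable, the snippet below launches a training script inside a prebuilt container via the Docker CLI. The image URI is hypothetical; real Deep Learning Container URIs are published by AWS.

```python
# Rough sketch: run a training script in a prebuilt container via the Docker
# CLI. The image URI below is hypothetical; real URIs are published by AWS.
import os
import subprocess

IMAGE = "public.ecr.aws/example/tensorflow-training:latest"  # hypothetical

subprocess.run([
    "docker", "run", "--rm", "--gpus", "all",
    "-v", f"{os.getcwd()}/src:/workspace",     # mount local training code
    IMAGE,
    "python", "/workspace/train.py",           # the framework is preinstalled
], check=True)
```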
By Debra Kaufman, March 7, 2019
Google has unveiled GPipe, an open-source library that makes training deep neural networks more efficient under Lingvo, a TensorFlow framework for sequence modeling. According to Google AI software engineer Yanping Huang, “in GPipe … we demonstrate the use of pipeline parallelism to scale up DNN training,” noting that larger DNN models “lead to better task performance.” Huang and his colleagues published a paper, “Efficient Training of Giant Neural Networks Using Pipeline Parallelism.” Continue reading Google GPipe Library Speeds Deep Neural Network Training
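A minimal sketch of the scheduling idea behind pipeline parallelism: split the network into sequential stages (each of which would live on its own accelerator) and split the mini-batch into micro-batches, so an earlier stage can work on micro-batch i+1 while a later stage works on micro-batch i. This single-process simulation shows only the schedule; GPipe itself also re-materializes activations to save memory, and the names here are ours, not the library’s API.

```python
import torch
import torch.nn as nn

stages = [nn.Sequential(nn.Linear(32, 32), nn.ReLU()),
          nn.Sequential(nn.Linear(32, 32), nn.ReLU()),
          nn.Linear(32, 8)]                     # three toy pipeline stages

def gpipe_forward(stages, batch, n_micro=4):
    micro = list(batch.chunk(n_micro))          # split mini-batch into micro-batches
    k, m = len(stages), len(micro)
    buf = [None] * k                            # buf[s]: activation leaving stage s
    outputs = []
    for t in range(m + k - 1):                  # one pipeline "clock tick" per loop
        for s in reversed(range(k)):            # later stages consume before earlier write
            if s == 0:
                x = micro[t] if t < m else None
            else:
                x, buf[s - 1] = buf[s - 1], None
            if x is None:
                continue
            y = stages[s](x)
            if s == k - 1:
                outputs.append(y)               # micro-batch exits the pipeline
            else:
                buf[s] = y
    return torch.cat(outputs)

out = gpipe_forward(stages, torch.randn(16, 32))  # same result as a plain forward pass
```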
By Debra Kaufman, December 5, 2018
Intel revealed that it has made progress on training AI models with anonymized, encrypted data. Industries such as healthcare, which need a way to use AI tools on sensitive, personally identifiable information, have been waiting for just such a capability. At the NeurIPS 2018 conference in Montreal, Intel showed off its open-source HE-Transformer, which works as a backend to its nGraph neural network compiler and allows AI models to work on encrypted data. HE-Transformer is based on a homomorphic encryption library from Microsoft Research. Continue reading Intel Describes Tool to Train AI Models with Encrypted Data
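A toy illustration of the homomorphic property that makes this possible: with textbook RSA, multiplying two ciphertexts yields a valid encryption of the product of the plaintexts, so a server can compute on data it never decrypts. The lattice-based schemes used by real libraries, including the one behind HE-Transformer, also support encrypted addition; the tiny keypair below is for demonstration only and is wildly insecure.

```python
# Textbook RSA toy: p=61, q=53 give n=3233; e=17, d=2753 (insecure demo key).
n, e, d = 3233, 17, 2753

enc = lambda m: pow(m, e, n)   # the "client" encrypts
dec = lambda c: pow(c, d, n)   # the "client" decrypts

a, b = 7, 6
c = (enc(a) * enc(b)) % n      # the "server" multiplies ciphertexts only...
assert dec(c) == a * b         # ...yet the result decrypts to a * b = 42
```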