By Debra Kaufman, May 27, 2021
IBM’s AI research unit has debuted Project CodeNet, a dataset for developing machine learning models that work with source code. The name is a nod to ImageNet, the influential photo dataset that spurred advances in computer vision and deep learning. Building “AI for code” systems has been challenging because software developers are constantly discovering new problems and exploring different solutions. IBM’s researchers took that into account by designing Project CodeNet as a multi-purpose dataset. Continue reading IBM Project CodeNet Employs AI Tools to Program Software
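As an illustration of one kind of task a code dataset like this could support, here is a minimal Python sketch (not IBM’s method) that trains a toy classifier to guess a snippet’s programming language; the sample snippets, labels and scikit-learn pipeline are placeholders invented for this example.

# Hypothetical sketch: classify source-code snippets by language.
# The toy samples below stand in for submissions a real experiment
# would read from a CodeNet-style archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = [
    ("#include <stdio.h>\nint main(void){printf(\"hi\\n\");}", "C"),
    ("def main():\n    print(\"hi\")", "Python"),
    ("public class Main{public static void main(String[] a){System.out.println(\"hi\");}}", "Java"),
    ("fn main(){println!(\"hi\");}", "Rust"),
]
texts, labels = zip(*samples)

# Character n-grams capture syntax cues such as braces, keywords and punctuation.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["fun main() { println(\"hi\") }"]))  # best guess on unseen code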
By Debra Kaufman, October 2, 2019
Both IBM and Google recently advanced the development of Text-to-Speech (TTS) systems that generate high-quality digital speech. OpenAI has found that, since 2012, the compute needed to train TTS models has grown more than 300,000-fold. IBM created a far less compute-intensive model for speech synthesis, stating that it can synthesize speech in real time and adapt to new speaking styles with little data. Google and Imperial College London built a generative adversarial network (GAN) that produces high-quality synthetic speech. Continue reading Google and IBM Create Advanced Text-to-Speech Systems
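To make the GAN idea concrete, here is a minimal PyTorch sketch of adversarial training for waveform generation; the layer sizes, clip length and random “real” audio are placeholders, and this is not the architecture Google and Imperial College London published.

# Hypothetical sketch: a generator maps a noise vector to a short waveform,
# while a discriminator learns to tell generated clips from real ones.
import torch
import torch.nn as nn

WAVE_LEN = 1024  # samples per training clip (placeholder)

generator = nn.Sequential(
    nn.Linear(128, 512), nn.ReLU(),
    nn.Linear(512, WAVE_LEN), nn.Tanh(),   # waveform in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(WAVE_LEN, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),                     # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_audio = torch.rand(16, WAVE_LEN) * 2 - 1   # stand-in for real speech clips

for step in range(100):
    # Discriminator: push real clips toward 1, generated clips toward 0.
    fake = generator(torch.randn(16, 128)).detach()
    d_loss = bce(discriminator(real_audio), torch.ones(16, 1)) + \
             bce(discriminator(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its output as real.
    fake = generator(torch.randn(16, 128))
    g_loss = bce(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()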
By Debra Kaufman, March 27, 2017
The traditional bluescreen/greenscreen method of extracting foreground content from the background in film and video production may be on its way out. That’s due to research Adobe is conducting in collaboration with the Beckman Institute for Advanced Science and Technology to develop a new system based on deep convolutional neural networks. A recent paper, “Deep Image Matting,” reports that the method uses a dataset of 49,300 training images to teach the algorithm to distinguish and eliminate backgrounds. Continue reading Adobe’s AI-Enabled System Could Replace Greenscreen Tech
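For a sense of how a learned matting system works, here is a minimal PyTorch sketch in which a small convolutional network takes an RGB image plus a trimap and predicts a per-pixel alpha matte; the layers, random tensors and loss are illustrative placeholders, not the network described in the “Deep Image Matting” paper.

# Hypothetical sketch: predict an alpha matte from image + trimap.
import torch
import torch.nn as nn

matting_net = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),    # RGB + trimap = 4 input channels
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(), # alpha in [0, 1]
)

image = torch.rand(1, 3, 128, 128)        # stand-in for a training photo
trimap = torch.rand(1, 1, 128, 128)       # known foreground/background/unknown regions
target_alpha = torch.rand(1, 1, 128, 128) # stand-in for the ground-truth matte

pred_alpha = matting_net(torch.cat([image, trimap], dim=1))
loss = nn.functional.l1_loss(pred_alpha, target_alpha)  # simple alpha-prediction loss
loss.backward()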