Intel Describes Tool to Train AI Models with Encrypted Data

Intel revealed that it has made progress on an encrypted, privacy-preserving method of model training. Industries such as healthcare, which need a way to apply AI tools to sensitive, personally identifiable information, have been waiting for just such a capability. At the NeurIPS 2018 conference in Montreal, Intel showed off its open-source HE-Transformer, a backend to its nGraph neural network compiler that allows AI models to work on encrypted data. HE-Transformer is built on a homomorphic encryption library from Microsoft Research.
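The "HE" in HE-Transformer stands for homomorphic encryption, whose key property is that arithmetic performed on ciphertexts carries through to the underlying plaintexts. As an illustration of that property only, and not of Intel's or Microsoft's actual implementation, here is a toy additively homomorphic Paillier scheme in Python, with deliberately tiny, insecure key sizes:

```python
import secrets
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p, q):
    """Generate a Paillier key pair from two primes (toy sizes, insecure)."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                        # standard simplification for the generator
    mu = pow(lam, -1, n)             # modular inverse; valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:            # r must be coprime with n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pk, sk = keygen(1789, 1861)          # toy primes; real keys use ~2048-bit moduli
c1, c2 = encrypt(pk, 20), encrypt(pk, 22)
# Multiplying ciphertexts adds the plaintexts -- the homomorphic property
# that lets computation proceed without ever decrypting the data.
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 42
```

Schemes used in practice for neural networks (such as CKKS, which HE-Transformer targets) additionally support multiplication and approximate real-number arithmetic, but the principle is the same: the server computing on the data never sees it in the clear.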

Nvidia Reveals Use of Neural Networks to Create Virtual City

Nvidia used processing power and neural networks to create a very convincing virtual city, which attendees at this year’s NeurIPS AI conference in Montreal can tour. Nvidia’s system, which uses existing videos of scenery and objects to create these interactive environments, also makes it easier for artists to build similar virtual worlds. Bryan Catanzaro, Nvidia’s vice president of applied deep learning, said generative models are key to making the creation of virtual worlds cost-effective.

IBM, Harvard University Develop New Tool for AI Translation

At the IEEE Conference on Visual Analytics Science and Technology in Berlin, researchers from IBM and Harvard University presented Seq2Seq-Vis, a tool for debugging machine translation systems. Translation tools rely on neural networks, which are opaque, making it difficult to determine how mistakes were made; this is known as the “black box problem.” Seq2Seq-Vis allows deep-learning app creators to visualize an AI’s decision-making process as it translates a sequence of words from one language to another.
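One of the signals a sequence-to-sequence debugger can surface is the attention weights linking each output token back to the input tokens the model focused on. A minimal, text-only sketch of that alignment view, using made-up weights rather than a real model’s output:

```python
# Crude text rendering of an attention alignment: for each output token,
# show which input token received the most attention weight.
# The weights below are invented for illustration, not from a trained model.
source = ["das", "haus", "ist", "klein"]
target = ["the", "house", "is", "small"]
attention = [                      # rows: target tokens, cols: source tokens
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.75, 0.10, 0.05],
    [0.05, 0.10, 0.80, 0.05],
    [0.05, 0.05, 0.10, 0.80],
]

def top_alignment(attn, src, tgt):
    """Map each target token to its highest-weight source token."""
    pairs = []
    for t, row in zip(tgt, attn):
        best = max(range(len(src)), key=lambda j: row[j])
        pairs.append((t, src[best], row[best]))
    return pairs

for tgt_tok, src_tok, w in top_alignment(attention, source, target):
    print(f"{tgt_tok:>6} <- {src_tok:<6} ({w:.2f})")
```

When a translation goes wrong, a diffuse or misplaced row in this matrix is often the first clue, which is the kind of inspection a visual tool makes practical at scale.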

Facebook’s VideoStory Relies on AI to Automate Storytelling

Facebook’s video clips average more than 8 billion views a day, but people with poor Internet connections or disabilities often cannot access them. That led Facebook to create VideoStory, described in a research paper titled “VideoStory: A Dataset for Telling the Stories of Social Media Videos.” The paper, to be delivered at the Conference on Empirical Methods in Natural Language Processing, noted that “automatically telling the stories using multi-sentence descriptions of videos would allow bridging this gap.”

EA Announces New AI-Powered, Cloud-Native Gaming Tech

Electronic Arts unveiled Project Atlas, its “cloud-native gaming” technology, via a Medium blog post by chief technology officer Ken Moss. Although he did not say when it would be fully deployed and functional, Moss described Project Atlas as designed to “harness the massive power of cloud computing and artificial intelligence and putting it into the hands of game makers in a powerful, easy to use, one-stop experience.” The game engine combines rendering, game logic, physics, animation, audio, and more.

Accounting, Finance Industries Demand Explainable AI Tools

As artificial intelligence-based tools become more widespread across business, cloud service companies are debuting tools that explain the algorithms they use, providing more transparency and assuring users of ethical behavior. That’s because regulated industries are demanding it. Capital One and Bank of America, for example, are interested in using AI to improve fraud detection, but want to know how the algorithms work before they implement such tools.

The Reel Thing: Machine Learning Powers Restoration Engine

During last week’s The Reel Thing at the Academy’s Linwood Dunn Theater in Hollywood, Video Gorillas managing director/chief executive Jason Brahms, formerly a Sony Cloud Media Services executive, and chief technology officer Alex Zhukov described Bigfoot “Frame Compare,” a solution that leverages machine learning to speed up preservation, asset management and mastering workflows. The engine, whose development dates back to 2007, relies on a proprietary, patented technology called frequency domain descriptor (FDD).

IBM Creates Machine-Learning Aided Watermarking Process

IBM now has a patent-pending, machine learning-enabled watermarking process that promises to deter intellectual property theft. Marc Ph. Stoecklin, IBM’s manager of cognitive cybersecurity intelligence, described how the process embeds unique identifiers into neural networks to create “nearly imperceptible” watermarks. The process, recently highlighted at the ACM Asia Conference on Computer and Communications Security (ASIACCS) 2018 in Korea, might soon be productized, either within IBM or as an offering for its clients.

Google, Nvidia Train Neural Networks to Post-Process Video

Google researchers have created a machine learning system that adds color to black-and-white videos and can choose which specific objects, people and pets receive the color treatment. The technology is based on a convolutional neural network, an architecture well suited to object tracking and video stabilization. Meanwhile, Nvidia has debuted an algorithm that smoothly slows down video after it has been captured, using a neural network to create the “in between” frames required for jitter-free motion.
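The naive way to synthesize “in between” frames is a linear cross-fade, which produces ghosting on anything that moves; the point of a learned approach like Nvidia’s is to replace that blend with a motion-aware model. A minimal sketch of the baseline, using plain nested lists as tiny grayscale frames:

```python
# Naive baseline for frame interpolation: a linear cross-fade between two
# frames. Neural approaches instead estimate per-pixel motion, avoiding the
# ghosting that a plain average produces on moving objects.
def interpolate(frame_a, frame_b, t):
    """Blend two frames (nested lists of pixel values) at time t in [0, 1]."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def slow_down(frames, factor):
    """Insert factor-1 interpolated frames between each consecutive pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, factor):
            out.append(interpolate(a, b, k / factor))
    out.append(frames[-1])
    return out

frames = [[[0, 0]], [[8, 8]]]          # two tiny 1x2 grayscale "frames"
print(slow_down(frames, 4))            # pixel values ramp from 0 to 8
```

A 4x slowdown of a two-frame clip yields five frames, with pixel values stepping evenly between the originals; a trained network would instead move content along estimated motion paths rather than fading it.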

OpenAI Beats Human-Player Team at Complex Video Game

OpenAI, an artificial intelligence research group backed by Elon Musk, stated that its software can beat “teams of five skilled human players” in Valve’s video game “Dota 2.” If verified, the achievement would be a milestone in computer science and a leap beyond the work of other AI researchers mastering complex games. IBM’s software mastered chess in the late 1990s, and Alphabet’s DeepMind created software that dominated “Go” in 2016. “Dota 2” is a multiplayer sci-fi fantasy game in which teams advance through exploration.

Intel AI Lab Reveals Plans to Open-Source More NLP Libraries

The Intel AI Lab, which has open-sourced a library for natural language processing, plans to open-source more such libraries to help developers and researchers speed up the process of giving virtual assistants and chatbots functions such as named entity recognition, intent extraction and semantic parsing. With the new libraries, developers can also publish research, train and deploy artificial intelligence, and reproduce the latest innovations from the AI community. Intel’s first conference for AI developers was held May 23-24 in San Francisco.

Nvidia Emphasizes Software at Technicolor Experience Event

At the Technicolor Experience Center in Culver City, Nvidia held an event highlighting its decisive move into software, spanning artificial intelligence, virtual reality and other areas. Greg Estes, vice president of developer programs, noted that the company has 850,000 developers around the world, in universities and labs as well as at companies like Adobe. Its developer program provides hands-on training in AI and parallel computing, impacting the media and entertainment industry as well as smart cities, autonomous vehicles and more.

NAB Program to Look at Machine Intelligence, Content Creation

As part of the Next-Generation Media Technologies education track at the upcoming NAB Show in Las Vegas, a half-day conference produced by Rochelle Winters will examine the latest trends in Machine Intelligence and Content Creation (Tuesday, April 10, 9:00 am – 12:00 pm). The program will examine how studios, creative service companies and filmmakers are using machine learning, deep learning and artificial intelligence to help produce content. Leading technologists, production execs and content creators will share the latest research and case studies involving machine intelligence.

Google’s Machine-Generated Speech Will Sound More Human

According to members of Google’s Brain and Machine Perception teams, researchers at the tech giant have developed “ways to make machine-generated speech sound more natural to humans,” even providing examples of the more expressive speech in a company blog post, reports VentureBeat. Google also announced the release of its Cloud Text-to-Speech services, which could “be used to bring more natural speech to devices, apps or digital services that utilize voice control or voice computing,” the article explains.