Privacy Concerns Grow Over Facial Recognition Data Sets

Social networks, dating services, photo websites and surveillance cameras are just some of the sources feeding a growing number of databases of people’s faces. According to privacy advocates, Microsoft and Stanford University are among the many groups gathering images, with one such repository holding two million of them. The photos are used to train neural networks in pattern recognition, in the quest to create cutting-edge facial recognition platforms. Some companies have been collecting images for more than 10 years.

AI Development Accelerates, Chips Speed Model Training

At VentureBeat’s Transform 2019 conference in San Francisco, Intel vice president/chief technology officer of AI products Amir Khosrowshahi and IoT general manager Jonathan Ballon discussed the evolution of AI adoption. A Lopez Research survey revealed that 86 percent of companies believe AI will be strategic to their business, but only 36 percent of them report having made “meaningful progress.” Khosrowshahi pointed out that more companies than ever have access to the necessary data, tools and training.

Facebook Uses AI to Improve Games, Swap Singers’ Voices

Facebook is bringing back FMV (full motion video) games, which use pre-recorded video files to display action. Thanks to work by Facebook AI Research scientists, the new FMV games are much improved, with a system that can extract controllable characters from real-world videos and then control their motion, thus generating new image sequences. Facebook AI Research scientists, in collaboration with Tel Aviv University, also unveiled a system that, unsupervised, converts audio of one singer to the voice of another.

AWS Tool Aims to Simplify the Creation of AI-Powered Apps

Amazon introduced AWS Deep Learning Containers, a collection of Docker images preinstalled with popular deep learning frameworks, with the aim of making it more seamless to get AI-enabled apps onto Amazon Web Services. AWS general manager of deep learning Dr. Matt Wood noted that the company has “done all the hard work of building, compiling, and generating, configuring, optimizing all of these frameworks,” taking that burden off of app developers. The container images are all “preconfigured and validated by Amazon.”
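In practice, using one of these containers amounts to pulling a prebuilt image and layering training code on top. The Dockerfile below is only an illustrative sketch: the ECR registry path and tag follow the general AWS Deep Learning Containers naming pattern but are placeholders, and `train.py` is a hypothetical script, not part of any AWS catalog entry.

```dockerfile
# Illustrative only: the base image URI mimics the AWS Deep Learning
# Containers naming pattern; check the AWS catalog for real tags.
FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:1.13-cpu-py36-ubuntu16.04

# Add your own training script on top of the preconfigured framework.
COPY train.py /opt/ml/code/train.py

CMD ["python", "/opt/ml/code/train.py"]
```

Because the framework and its dependencies are already compiled and optimized in the base image, the application layer a developer maintains stays small.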

Google GPipe Library Speeds Deep Neural Network Training

Google has unveiled GPipe, an open-source library that makes training large deep neural networks more efficient; it is distributed under Lingvo, a TensorFlow framework for sequence modeling. According to Google AI software engineer Yanping Huang, “in GPipe … we demonstrate the use of pipeline parallelism to scale up DNN training,” noting that larger DNN models “lead to better task performance.” Huang and his colleagues published a paper on “Efficient Training of Giant Neural Networks Using Pipeline Parallelism.”
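The core idea behind pipeline parallelism is to partition a network into sequential stages and split each mini-batch into micro-batches that flow through the stages, so that on real hardware different stages can work on different micro-batches at once. The toy sketch below is not GPipe itself, just a minimal single-process illustration that micro-batching produces the same result as running the whole batch; the stage functions are stand-ins for network layers.

```python
# Toy micro-batching sketch (not GPipe): simple element-wise functions
# stand in for the sequential stages of a partitioned network.
stages = [
    lambda xs: [x * 2 for x in xs],   # "stage 1"
    lambda xs: [x + 1 for x in xs],   # "stage 2"
    lambda xs: [x ** 2 for x in xs],  # "stage 3"
]

def run(batch):
    """Push a whole batch through every stage in order."""
    for stage in stages:
        batch = stage(batch)
    return batch

def run_pipelined(batch, micro_batches=4):
    """Split the batch into micro-batches and run each through the stages.

    On real accelerators, stages overlap across micro-batches, which is
    where the speedup comes from; here we just show equivalence.
    """
    size = -(-len(batch) // micro_batches)  # ceiling division
    chunks = [batch[i:i + size] for i in range(0, len(batch), size)]
    out = []
    for chunk in chunks:
        out.extend(run(chunk))
    return out

batch = list(range(8))
assert run_pipelined(batch) == run(batch)  # same output, chunked execution
```

Since each stage only ever sees one micro-batch at a time, peak activation memory per stage also drops, which is one reason GPipe can train models too large for a single accelerator.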

Intel Describes Tool to Train AI Models with Encrypted Data

Intel revealed that it has made progress on an anonymized, encrypted method of model training. Industries such as healthcare that need a way to use AI tools on sensitive, personally identifiable information have been waiting for just such a capability. At the NeurIPS 2018 conference in Montreal, Intel showed off its open-source HE-Transformer, which works as a backend to its nGraph neural network compiler and allows AI models to operate on encrypted data. HE-Transformer is built on a homomorphic encryption library from Microsoft Research.
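The property that makes this possible is homomorphic encryption: arithmetic performed on ciphertexts decrypts to the corresponding arithmetic on the plaintexts, so a model can compute on data it never sees in the clear. The sketch below is a toy Paillier-style additively homomorphic scheme, chosen only because it fits in a few lines; it is not the scheme HE-Transformer uses, and the primes are tiny and wholly insecure. (It relies on Python 3.8+ for `pow(x, -1, n)` modular inverses.)

```python
# Toy Paillier cryptosystem: multiplying ciphertexts adds the plaintexts.
# Tiny primes for readability only -- never use anything like this in practice.
import random
from math import gcd

p, q = 101, 103                                    # toy primes
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)       # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)                # precomputed decryption constant

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 58
c_sum = (encrypt(a) * encrypt(b)) % n2             # multiply ciphertexts...
assert decrypt(c_sum) == a + b                     # ...to add the plaintexts
```

A production stack like HE-Transformer uses far more capable lattice-based schemes that also support the multiplications neural network inference requires, but the workflow is the same: the data owner encrypts, the model computes on ciphertexts, and only the data owner can decrypt the result.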