AWS Tool Aims to Simplify the Creation of AI-Powered Apps

Amazon introduced AWS Deep Learning Containers, a collection of Docker images preinstalled with popular deep learning frameworks, with the aim of making it easier to deploy AI-enabled apps on Amazon Web Services. AWS general manager of deep learning Dr. Matt Wood noted that the company has “done all the hard work of building, compiling, and generating, configuring, optimizing all of these frameworks,” taking that burden off of app developers. The container images are all “preconfigured and validated by Amazon.”

Google GPipe Library Speeds Deep Neural Network Training

Google has unveiled GPipe, an open-source library that makes training large deep neural networks more efficient. It is released under Lingvo, a TensorFlow framework for sequence modeling. According to Google AI software engineer Yanping Huang, “in GPipe … we demonstrate the use of pipeline parallelism to scale up DNN training,” noting that larger DNN models “lead to better task performance.” Huang and his colleagues published an accompanying paper, “GPipe: Efficient Training of Giant Neural Networks Using Pipeline Parallelism.”
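The core idea Huang describes, pipeline parallelism, can be sketched in a few lines of plain Python. This is a toy simulation of the schedule, not Google's implementation: the model is split into sequential "stages" and each mini-batch into micro-batches, so that at any clock tick different stages can be working on different micro-batches. In real GPipe each stage runs on its own accelerator; here everything runs in one process, and all names (`pipeline_schedule`, the toy stages) are illustrative assumptions.

```python
def pipeline_schedule(stages, micro_batches):
    """Simulate a GPipe-style pipeline: at clock tick t, stage s works on
    micro-batch t - s. With real devices, the per-tick stage executions
    would run concurrently on separate accelerators."""
    n_stages, n_mb = len(stages), len(micro_batches)
    buf = list(micro_batches)  # buf[i] holds micro-batch i's current activations
    for t in range(n_stages + n_mb - 1):    # pipeline "clock ticks"
        for s in range(n_stages):
            mb = t - s                      # which micro-batch stage s handles now
            if 0 <= mb < n_mb:
                buf[mb] = [stages[s](v) for v in buf[mb]]
    return [v for mb in buf for v in mb]

# A toy two-stage "model": stage 0 doubles, stage 1 adds one.
stages = [lambda v: v * 2, lambda v: v + 1]
result = pipeline_schedule(stages, [[0, 1], [2, 3], [4, 5], [6, 7]])
```

Each micro-batch still passes through the stages in order (stage s touches micro-batch m at tick m + s), so the result matches running the whole batch sequentially; the win is that the stages overlap in time instead of idling.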

Intel Describes Tool to Train AI Models with Encrypted Data

Intel revealed that it has made progress on training AI models with encrypted data, a capability that industries such as healthcare, which must apply AI to sensitive, personally identifiable information, have been waiting for. At the NeurIPS 2018 conference in Montreal, Intel showed off its open-source HE-Transformer, a homomorphic encryption backend to its nGraph neural network compiler that allows AI models to work on encrypted data. HE-Transformer is built on Microsoft SEAL, a Microsoft Research encryption library.
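The property HE-Transformer relies on, homomorphic encryption, means arithmetic performed on ciphertexts carries through to the underlying plaintexts. The toy sketch below illustrates only that idea with a trivial additive-masking scheme: it is not secure and has nothing to do with the lattice-based cryptography in Microsoft SEAL, and every name in it is a made-up illustration.

```python
import random

MODULUS = 2**31 - 1  # toy plaintext/ciphertext space

def keygen():
    """Pick a random masking key (NOT cryptographically secure)."""
    return random.randrange(MODULUS)

def encrypt(key, m):
    return (m + key) % MODULUS

def decrypt(key, c):
    return (c - key) % MODULUS

# Additive homomorphism: summing two ciphertexts yields a ciphertext of the
# sum of the plaintexts (under the sum of the keys). A server could compute
# this sum without ever seeing 7 or 35.
k1, k2 = keygen(), keygen()
c = (encrypt(k1, 7) + encrypt(k2, 35)) % MODULUS
assert decrypt((k1 + k2) % MODULUS, c) == 42
```

Real homomorphic schemes support the additions and multiplications a neural network's linear layers need, which is what lets a compiler backend like HE-Transformer evaluate a model on data it cannot read.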