By Paula Parisi, November 22, 2022
Intel has debuted FakeCatcher, touting it as the first real-time deepfake detector capable of determining whether digital video has been altered to change context or meaning. Intel says FakeCatcher has a 96 percent accuracy rate and returns results in milliseconds by analyzing subtle “blood flow” signals in video pixels, a process called photoplethysmography (PPG) that Intel borrowed from medical research. The company says potential use cases include social media platforms screening uploads to block harmful deepfake videos and global news organizations avoiding inadvertent amplification of deepfakes. Continue reading Intel Promises 96 Percent Accuracy with New Deepfake Filter
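Intel has not published FakeCatcher’s implementation here, but the general idea of remote PPG can be illustrated with a minimal sketch: average the green channel over a face region in each frame and check whether the resulting signal contains a plausible, periodic heart-rate component. The face coordinates, thresholds and synthetic data below are illustrative assumptions, not Intel’s method.

```python
import numpy as np

def rppg_signal(frames, face_box):
    """Average the green channel inside a face region for each frame.

    frames: array of shape (T, H, W, 3), RGB, values in [0, 1]
    face_box: (top, bottom, left, right) pixel bounds of the face region
    """
    t, b, l, r = face_box
    return frames[:, t:b, l:r, 1].mean(axis=(1, 2))  # green channel only

def dominant_frequency_hz(signal, fps):
    """Return the strongest frequency in the signal via an FFT."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Synthetic stand-in for 10 seconds of 30 fps video with a faint 1.2 Hz
# (72 bpm) pulse added to the green channel of the "face" region.
fps, seconds = 30, 10
frames = np.random.rand(fps * seconds, 64, 64, 3) * 0.05 + 0.5
pulse = 0.01 * np.sin(2 * np.pi * 1.2 * np.arange(fps * seconds) / fps)
frames[:, 16:48, 16:48, 1] += pulse[:, None, None]

signal = rppg_signal(frames, face_box=(16, 48, 16, 48))
f = dominant_frequency_hz(signal, fps)
print(f"Dominant frequency: {f:.2f} Hz ({f * 60:.0f} bpm)")
# A real detector would compare such signals across face regions and over time;
# synthesized faces tend to lack this consistent physiological periodicity.
```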
By Debra Kaufman, February 4, 2020
Meet Meena, Google’s new chatbot powered by a neural network. According to the tech giant, Meena was trained on 341 gigabytes of public social-media chatter (8.5 times as much data as OpenAI’s GPT-2) and can talk about anything and even make jokes. With Meena, Google hopes to have built a chatbot that feels more human, a perennial challenge for AI-driven characters, whether in a messaging window or a video game. To measure that, Google created the Sensibleness and Specificity Average (SSA) as a metric for natural conversations. Continue reading Google Debuts Chatbot with Natural Conversational Ability
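Google’s paper defines SSA as the average of two human-judged rates: the share of responses that make sense, and the share that are also specific to the context. A minimal sketch of that arithmetic, using made-up labels:

```python
def ssa(labels):
    """Sensibleness and Specificity Average over human-labeled responses.

    labels: list of (sensible, specific) booleans, one pair per response.
    SSA is the mean of the sensibleness rate and the specificity rate.
    """
    sensibleness = sum(s for s, _ in labels) / len(labels)
    specificity = sum(sp for _, sp in labels) / len(labels)
    return (sensibleness + specificity) / 2

# Example: five responses judged by a human rater (values are illustrative).
labels = [(True, True), (True, False), (True, True), (False, False), (True, True)]
print(f"SSA = {ssa(labels):.0%}")  # (0.8 + 0.6) / 2 = 70%
```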
By Debra Kaufman, June 18, 2019
At its re:MARS AI conference earlier this month, Amazon previewed Alexa Conversations, a module within the Alexa Skills Kit that melds Alexa voice apps to allow users to perform complicated tasks requiring multiple skills — all with fewer lines of code. That’s because a recurrent neural network will be able to “generate dialogue flow” automatically, thus limiting the number of steps a user needs to order food or reserve a ticket. Amazon vice president David Limp dubbed Conversations “the Holy Grail of voice science.” Continue reading Alexa Conversations for Complex Tasks with Less Coding
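Amazon has not detailed the model here, but the “dialogue flow” being automated is essentially multi-turn slot filling. The hand-written flow that Conversations aims to replace looks roughly like the hypothetical sketch below, where every prompt and branch must be coded explicitly (the task, slot names and prompts are invented for illustration):

```python
# A hypothetical, hand-coded slot-filling flow for a food order.
# Alexa Conversations is pitched as generating this branching automatically.
REQUIRED_SLOTS = ["restaurant", "dish", "quantity"]

def next_prompt(filled_slots):
    """Return the next question to ask, or a confirmation when all slots are filled."""
    for slot in REQUIRED_SLOTS:
        if slot not in filled_slots:
            return f"What {slot} would you like?"
    return (f"Ordering {filled_slots['quantity']} x {filled_slots['dish']} "
            f"from {filled_slots['restaurant']}. Shall I place the order?")

# Simulated conversation: the user fills one slot per turn.
state = {}
for slot, value in [("restaurant", "Luigi's"), ("dish", "margherita pizza"), ("quantity", 2)]:
    print("Alexa:", next_prompt(state))
    state[slot] = value
print("Alexa:", next_prompt(state))
```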
By Debra Kaufman, November 16, 2018
Intel just announced its latest invention: the Neural Compute Stick 2 (NCS2), a self-contained neural network accelerator in a thumb-drive form factor. NCS2 is intended to make it faster and easier to embed intelligence into Internet of Things and network edge devices. Edge devices, which include routers, switches, gateways and a range of IoT devices, are defined as any hardware that controls the flow of data at the boundary between two networks. The announcement came just before Intel’s first AI developers’ conference in Beijing. Continue reading Intel Launches Neural Network Stick to Embed AI in IoT Devices
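Developers typically target the stick through Intel’s OpenVINO toolkit by loading a converted model onto the MYRIAD device. A minimal sketch using the older Inference Engine Python API; the model file names and input shape are placeholders, not a specific shipped model:

```python
import numpy as np
from openvino.inference_engine import IECore  # Intel OpenVINO toolkit

ie = IECore()
# Placeholder paths: an IR model produced by OpenVINO's Model Optimizer.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # NCS2 target

input_name = next(iter(net.input_info))
# Placeholder input: a single 224x224 RGB image in NCHW layout.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = exec_net.infer(inputs={input_name: image})
print({name: output.shape for name, output in result.items()})
```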
By Debra Kaufman, July 5, 2018
A research team at Google’s AI unit DeepMind, led by Ali Eslami and Danilo Rezende, has developed a generative query network (GQN): deep neural network software that can render a scene from a perspective the network has never seen. The U.K.-based unit’s software takes a handful of shots of a virtual scene, builds a “compact mathematical representation” of that scene, and then uses the representation to render an image from a new viewpoint unfamiliar to the network. Continue reading DeepMind Intros Intriguing Deep Neural Network Algorithm
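At a schematic level, a GQN has two parts: a representation network that encodes each (image, viewpoint) observation and sums the results into a single scene vector, and a generator that renders an image conditioned on that vector plus a query viewpoint. The sketch below uses untrained linear layers purely to show the data flow; it is not DeepMind’s architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG, VIEW, REPR = 64 * 64 * 3, 7, 256  # flattened image, camera pose, scene code sizes

# Untrained stand-ins for the representation network f and the generator g.
W_f = rng.normal(0, 0.01, (REPR, IMG + VIEW))
W_g = rng.normal(0, 0.01, (IMG, REPR + VIEW))

def represent(image, viewpoint):
    """Encode one observation (image + camera pose) into a scene embedding."""
    return np.tanh(W_f @ np.concatenate([image, viewpoint]))

def generate(scene_code, query_viewpoint):
    """Render a (meaningless, untrained) image for an unseen viewpoint."""
    return W_g @ np.concatenate([scene_code, query_viewpoint])

# A handful of observations of the same scene; their embeddings are summed.
observations = [(rng.random(IMG), rng.random(VIEW)) for _ in range(3)]
scene_code = sum(represent(img, view) for img, view in observations)
rendered = generate(scene_code, rng.random(VIEW))
print(rendered.shape)  # (12288,) i.e. a flattened 64x64x3 image
```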
By Debra Kaufman, July 2, 2018
Niantic, the company that released “Pokémon Go,” just acquired Matrix Mill, a U.K.-based computer vision/machine learning startup, with the goal of expanding its augmented reality capabilities. Niantic chief executive John Hanke also stated that the company this year will release a “major update” to its “Ingress” game as well as a new AR game, “Harry Potter: Wizards Unite,” and reveal additional games in the next few weeks. At an event, developers and journalists were able to try out the platform powering these games. Continue reading Niantic Acquires Matrix Mill to Advance AR Gaming Features
By Debra Kaufman, April 26, 2018
Nvidia debuted a deep learning method that can edit or reconstruct an image that is missing pixels or has holes via a process called “image inpainting.” The model can handle holes of “any shape, size, location or distance from image borders,” and could be integrated into photo editing software to remove undesirable imagery and replace it with realistic digital content, instantly and with great accuracy. Previous AI-based approaches focused on rectangular regions at the image’s center and required post-processing. Continue reading Nvidia’s New AI Method Can Reconstruct an Image in Seconds
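Nvidia’s method builds on “partial convolutions,” in which each output is computed only over known (unmasked) pixels and renormalized by how many of them fall under the kernel; the hole then shrinks layer by layer. A minimal single-channel sketch of that renormalization, as a simplified reading of the technique rather than Nvidia’s code:

```python
import numpy as np

def partial_conv2d(image, mask, kernel):
    """Single-channel partial convolution: hole pixels (mask == 0) are ignored.

    image, mask: 2D arrays of equal shape; mask is 1 where pixels are valid.
    Returns (output, updated_mask); the mask fills in wherever at least one
    valid pixel fell under the kernel window.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    img = np.pad(image * mask, ((ph, ph), (pw, pw)))
    msk = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window_img = img[i:i + kh, j:j + kw]
            window_msk = msk[i:i + kh, j:j + kw]
            valid = window_msk.sum()
            if valid > 0:
                # Renormalize by the fraction of valid pixels under the kernel.
                out[i, j] = (kernel * window_img).sum() * (kh * kw / valid)
                new_mask[i, j] = 1.0
    return out, new_mask

image = np.random.rand(8, 8)
mask = np.ones((8, 8)); mask[3:6, 3:6] = 0        # a 3x3 hole
kernel = np.ones((3, 3)) / 9.0                    # simple averaging kernel
out, new_mask = partial_conv2d(image, mask, kernel)
print(int(mask.sum()), "->", int(new_mask.sum()))  # hole shrinks: 55 -> 63
```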
By Debra Kaufman, April 12, 2018
Tools powered by artificial intelligence and machine learning are also finding a place in animation and visual effects. Nvidia senior solutions architect Rick Grandy noted that the benefit of such tools is that artists don’t have to replicate their own work. Examples include deep learning that generates realistic character motion in real time via game engines, and a phase-functioned neural network for character control that can be trained on motion capture or animation data. Continue reading NAB 2018: Artificial Intelligence Tools for Animation and VFX
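In a phase-functioned neural network, the weights are not fixed: they are regenerated each frame as a function of a cyclic phase variable (where the character is in its gait cycle), by blending several stored weight sets. A simplified sketch of that idea using linear interpolation between control weights; the published work uses a cubic Catmull-Rom blend and much larger layers.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_CONTROL, IN, OUT = 4, 32, 16                 # illustrative sizes
control_weights = rng.normal(0, 0.1, (NUM_CONTROL, OUT, IN))

def weights_for_phase(phase):
    """Blend stored weight sets according to the gait phase in [0, 1)."""
    x = (phase % 1.0) * NUM_CONTROL
    i0 = int(np.floor(x)) % NUM_CONTROL
    i1 = (i0 + 1) % NUM_CONTROL
    t = x - np.floor(x)
    return (1 - t) * control_weights[i0] + t * control_weights[i1]

def character_layer(features, phase):
    """One layer of the motion controller whose weights depend on the phase."""
    return np.maximum(0.0, weights_for_phase(phase) @ features)  # ReLU

features = rng.random(IN)                         # e.g. pose + trajectory inputs
for phase in (0.0, 0.25, 0.5):
    print(phase, character_layer(features, phase)[:3])
```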
By Debra Kaufman, February 20, 2018
The latest project out of Google Brain, the company’s machine learning research lab, has been using AI software to write Wikipedia-style articles by summarizing information on the Internet. But it’s not easy to condense social media, blogs, articles, memes and other digital information into salient articles, and the project’s results have been mixed. The team, in a paper just accepted at the International Conference on Learning Representations (ICLR), describes how difficult it has been. Continue reading Google Brain Leverages AI to Generate Wikipedia-Like Articles
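The approach described in the paper is two-stage: first extract the most relevant passages from the source documents (for example by tf-idf ranking against the article topic), then feed them to a neural abstractive model. A rough sketch of the extractive ranking stage only, with toy documents; the abstractive step would require a trained model and is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "neural network machine learning"
passages = [
    "Neural networks are machine learning models loosely inspired by the brain.",
    "The restaurant serves pizza and pasta every evening.",
    "Deep learning trains multi-layer neural networks on large datasets.",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([topic] + passages)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Keep the top-k passages as input to the abstractive summarizer (k = 2 here).
top = sorted(zip(scores, passages), reverse=True)[:2]
for score, passage in top:
    print(f"{score:.2f}  {passage}")
```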
By Debra Kaufman, December 6, 2017
In May, Google Brain, the company’s machine learning research lab, debuted its AutoML artificial intelligence system that can generate its own AIs. Now, Google has unveiled an AutoML project to automate the design of machine learning models using so-called reinforcement learning. In this system, AutoML is a controller neural network that develops a “child” AI network for a specific task. The near-term goal is for AutoML to create a child that outperforms human-designed equivalents. Down the line, AutoML could improve vision for autonomous vehicles and AI robots. Continue reading Google Intends to Advance Machine Learning With its AutoML
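Reinforcement-learning architecture search, as described, runs as a loop: the controller proposes a child architecture, the child is trained, its validation accuracy becomes the reward, and the controller is updated toward better proposals. The sketch below replaces the controller RNN and child training with random sampling and a fake scoring function, just to show the loop’s shape; everything here is illustrative.

```python
import random

SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5],
}

def sample_child():
    """Stand-in for the controller RNN: sample a child architecture."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Stand-in for training the child; returns a fake validation accuracy."""
    return 0.5 + 0.01 * arch["layers"] + 0.001 * arch["filters"] + random.uniform(0, 0.05)

best_arch, best_reward = None, -1.0
for step in range(20):
    child = sample_child()
    reward = train_and_evaluate(child)   # accuracy acts as the RL reward
    # A real controller would apply a policy-gradient update here.
    if reward > best_reward:
        best_arch, best_reward = child, reward

print(best_arch, round(best_reward, 3))
```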
By Debra Kaufman, September 15, 2017
Apple ARKit for iOS 11, which enables developers to create augmented reality apps, has caught the attention of developers. With the new iPhone 8, iPhone 8 Plus and iPhone X, those same developers now have the best hardware and software for creating new AR apps. IKEA quickly jumped on board, and Apple also showed a multiplayer game using iPhones. Apple ARKit does have drawbacks: it doesn’t detect vertical surfaces, such as walls, and although it works on iPhones as old as the 6s, it really shines on the latest iPhone hardware. Continue reading Apple ARKit and New iPhones Set the Stage for AR Adoption
By Debra Kaufman, August 18, 2017
Google just acquired AIMatter, a Belarus-based startup that will boost the tech giant’s efforts in computer vision, the artificial intelligence sector that helps computers process images as well as, or even better than, humans. AIMatter has already built a neural-network-powered AI platform and SDK that quickly processes images on mobile devices, as well as Fabby, a photo/video editing app that has served as a proof of concept. AIMatter has employees in Minsk, the San Francisco Bay Area, and Zurich, Switzerland. Continue reading Google Purchases AIMatter to Boost Computer Vision Efforts
By ETCentric, February 6, 2017
Facebook’s Lumos computer vision platform, which was originally created to help visually impaired members of the social network’s community, is now being used for a more sophisticated image search. It allows users to find images on Facebook via key words that describe content, rather than a search that is limited to tags and captions. “Facebook trained an ever-fashionable deep neural network on tens of millions of photos,” explains TechCrunch. “The model essentially matches search descriptors to features pulled from photos” and “ranks its output using information from both the images and the original search.” Facebook may apply the tech to videos in the future and potentially raise the bar on its targeted ad offerings. Continue reading Artificial Intelligence Now Powers Photo Searches on Facebook
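The mechanism TechCrunch describes (matching search descriptors against features pulled from photos, then ranking the results) is essentially nearest-neighbor search in a shared embedding space. A minimal sketch with made-up embeddings standing in for the trained network’s outputs:

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 128

# Pretend features extracted from photos by a trained vision model.
photo_features = {f"photo_{i}": rng.normal(size=DIM) for i in range(1000)}

def embed_query(text):
    """Stand-in for a text encoder mapping a query into the same feature space."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=DIM)

def search(query, top_k=5):
    """Rank photos by cosine similarity between query and photo embeddings."""
    q = embed_query(query)
    q = q / np.linalg.norm(q)
    scored = [(float(q @ (feat / np.linalg.norm(feat))), name)
              for name, feat in photo_features.items()]
    return sorted(scored, reverse=True)[:top_k]

print(search("people dancing at a wedding"))
```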
By Debra Kaufman, November 1, 2016
After acquiring the face-tracking and 3D face replacement company MSQRD, Facebook has introduced its augmented reality selfie lenses, dubbed Masks, starting with a Halloween skeleton, witch and pumpkin. Users in the U.S., U.K. and New Zealand, and public figures, will be able to use the iOS version of Masks on Facebook Mentions. The company says it will roll out Masks to Android and other countries in coming months. Facebook also demonstrated stylized filters, which will be a real-time processing option for Live Video. Continue reading Facebook Shows Off AR Masks, Stylized Filters for Live Video
By Rob Scott, January 5, 2016
During the Nvidia keynote at CES 2016, CEO and co-founder Jen-Hsun Huang introduced a new computer for autonomous vehicles called the Drive PX2. Following last year’s Drive CX, the PX2 touts processing power equivalent to 150 MacBook Pros, according to Huang. The lunchbox-sized, water-cooled computer features 12 CPU cores that support eight teraflops and 24 “deep learning” tera operations per second. As a result, the PX2 can reportedly process data in real time from 12 video cameras, radar, lidar and additional sensors to enhance the self-driving car experience. Continue reading CES: Nvidia Unveils New ‘Supercomputer’ for Self-Driving Cars