Google Hopes its Virtual Brain Technology Will Help Improve Products
By Karla Robinson
November 1, 2012
- Over the summer, Google applied its new artificial intelligence software to YouTube videos to recognize cats, faces and other objects, much as the human brain does. The search giant is now applying the same technology to speech recognition, hoping to rival Apple’s Siri.
- “Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another,” Technology Review explains.
- “When such a neural network, as it’s called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something.” (A toy code sketch of this learning process appears after this list.)
- This “learning” process allows the software to determine which features of the data are relevant to the task at hand. In speech recognition, the neural networks help the voice search in Google’s Android OS and its iOS search app reduce errors.
- Currently, the software is applied only to U.S. English, but it will eventually support other languages.
- “Other Google products will likely improve over time with help from the new learning software,” the article suggests. “The company’s image search tools, for example, could become better able to understand what’s in a photo without relying on surrounding text. And Google’s self-driving cars and mobile computer built into a pair of glasses could benefit from software better able to make sense of more real-world data.”
- These neural networks are more flexible because they can take the context of data into account. They also loosely mimic the visual cortex in mammals, which researchers hope will one day bring the technology closer to human intelligence.
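To make the quoted description of “learning” concrete, here is a minimal sketch, not Google’s actual system, of a tiny neural network whose connection strengths change as it is repeatedly exposed to data. The toy task (learning logical AND) and all names are illustrative assumptions; only NumPy is used.

```python
# A toy neural network: connection strengths ("weights") between simulated
# neurons are adjusted as the network is exposed to data, so it learns to
# react correctly to a particular kind of input.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the network should output 1 only when both inputs are 1 (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

# Randomly initialized connections between "neurons".
w_hidden = rng.normal(size=(2, 4))   # input layer -> hidden layer
w_output = rng.normal(size=(4, 1))   # hidden layer -> output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Repeated exposure to the data nudges the weights (the "relationships
# between neurons") so the network's output moves toward the right answer.
learning_rate = 0.5
for step in range(5000):
    hidden = sigmoid(X @ w_hidden)            # forward pass
    output = sigmoid(hidden @ w_output)

    error = y - output                        # how wrong the network is
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ w_output.T) * hidden * (1 - hidden)

    w_output += learning_rate * hidden.T @ grad_output   # adjust connections
    w_hidden += learning_rate * X.T @ grad_hidden

print(np.round(sigmoid(sigmoid(X @ w_hidden) @ w_output), 2))
# After training, the outputs are close to [0, 0, 0, 1]: the network has "learned".
```

Google’s actual systems are vastly larger and trained on speech and images rather than a four-row toy table, but the underlying principle of adjusting connections in response to data is the same.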