Google is open-sourcing SyntaxNet, a neural network framework that provides a foundation for Natural Language Understanding (NLU), and Parsey McParseface, a SyntaxNet-built parser that helps machines understand written English. Offering the code for free lets anyone develop, modify and distribute it, furthering natural language research and potentially making Google’s code the standard. Earlier, Google open-sourced its machine-learning library TensorFlow (which SyntaxNet runs on top of); other companies that have similarly open-sourced code include Amazon and Facebook.
The Wall Street Journal reports that SyntaxNet “helps machines in breaking down sentences into their component parts of speech and in decoding the relationships between words and phrases.” Parsey McParseface, named in playful homage to the U.K. research ship Boaty McBoatface, homes in on the most relevant words. The parser is already in use at Google for search and Smart Reply, as well as the Knowledge Graph.
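To see what “decoding the relationships between words” means in practice, consider a dependency parse, the kind of structure these parsers produce: each word gets a part-of-speech tag, a pointer to its head word, and a grammatical relation. The sketch below is a plain illustration of that data structure, not SyntaxNet’s actual API or output format; the sentence and labels are a made-up example.

```python
# Illustration only: a dependency parse represented as plain data.
# Each token records its part of speech, the index of its head word
# (0 = sentence root), and its grammatical relation to that head.
from collections import namedtuple

Token = namedtuple("Token", ["index", "word", "pos", "head", "relation"])

# "Alice saw Bob" -- "saw" is the root verb; "Alice" is its subject
# (nsubj) and "Bob" its direct object (dobj).
parse = [
    Token(1, "Alice", "NOUN", 2, "nsubj"),
    Token(2, "saw",   "VERB", 0, "root"),
    Token(3, "Bob",   "NOUN", 2, "dobj"),
]

for tok in parse:
    head = "ROOT" if tok.head == 0 else parse[tok.head - 1].word
    print(f"{tok.word:<6} {tok.pos:<5} --{tok.relation}--> {head}")
```

Given a structure like this, a search engine or reply generator can ask directly who did what to whom, rather than treating the sentence as a bag of words.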
The ability of computers to understand natural language is the foundation of virtual assistants, chatbots, automated language translation and “decision-support systems” for healthcare and science.
“Our hope is that people will just use this instead of building their own,” said SyntaxNet product manager Dave Orr. “They don’t have to reinvent the wheel.” Google says Parsey McParseface identifies the relationships between words in a sentence with 94 percent accuracy; the company’s human linguists score about 95 percent on the same task.
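To make the 94 percent figure concrete: dependency parsers are commonly scored per word, by checking whether each word’s predicted head matches the one chosen by a human annotator. The snippet below is our own minimal illustration of that style of scoring, not Google’s evaluation code, and the head lists are hypothetical.

```python
# Hypothetical sketch of per-word parsing accuracy: for each word,
# check whether the predicted head index matches the gold (human-
# annotated) head index. Not Google's evaluation data or code.
def attachment_accuracy(predicted_heads, gold_heads):
    """Fraction of words whose predicted head matches the annotation."""
    assert len(predicted_heads) == len(gold_heads)
    correct = sum(p == g for p, g in zip(predicted_heads, gold_heads))
    return correct / len(gold_heads)

# Heads for "Alice saw Bob" (0 = root). A perfect parse scores 1.0;
# misattaching "Bob" to "Alice" instead of "saw" drops it to 2/3.
print(attachment_accuracy([2, 0, 2], [2, 0, 2]))  # 1.0
print(attachment_accuracy([2, 0, 1], [2, 0, 2]))  # 0.666...
```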
Wired notes that Google is not alone in pursuing a more advanced virtual assistant. In addition to Apple’s Siri, Microsoft has Cortana, Amazon has Alexa on its Echo devices, and the startup Viv was founded by two of Siri’s original developers. Facebook’s M is a project for an assistant that communicates via text.
But a lot of work remains on virtual assistants. WSJ notes that Google’s Parsey McParseface “can’t figure out relationships between sentences” and is stumped by longer sentences. Wired quotes Google research director Fernando Pereira as saying, “We are very far from where we want to be.”
“We want to encourage the research community — and everyone who works on natural language understanding — to move beyond parsing, towards the deeper semantic reasoning that is necessary,” he said.
But progress is still progress. WSJ quotes University of Arizona NLP expert Mihai Surdeanu: “I would say this is an incremental improvement. But there is nothing wrong with that. Most science is incremental.”