IBM, Harvard University Develop New Tool for AI Translation
November 5, 2018
At the IEEE Conference on Visual Analytics Science and Technology in Berlin, IBM and Harvard University researchers presented Seq2Seq-Vis, a tool for debugging machine translation systems. Those systems rely on neural networks, which are opaque, making it difficult to determine how a mistake was made; this difficulty is known as the “black box problem.” Seq2Seq-Vis allows deep-learning app creators to visualize the AI’s decision-making process as it translates a sequence of words from one language to another.
VentureBeat reports that the black box problem is now “one of the serious challenges of the AI industry, especially as deep learning finds its way into more critical domains.” According to IBM Research scientist Hendrik Strobelt, “sequence-to-sequence models can learn to transform an arbitrary-length input sequence into an arbitrary-length output sequence,” adding that this “sequence-to-sequence” method is also used in “question-answering, summarization of long text and image captioning.”
“These models are really powerful and state-of-the-art in most of these tasks,” Strobelt stated, adding that the sequence-to-sequence translation model runs a source string “through several neural networks to map it to the target language and refine the output to make sure it is grammatically and semantically correct.”
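The “sequence-to-sequence” setup Strobelt describes is, at its core, an encoder network that reads an arbitrary-length input and a decoder network that emits the output one token at a time. The sketch below is a minimal, generic encoder-decoder written in PyTorch for illustration only; the vocabulary sizes, dimensions, special-token ids and greedy decoding loop are assumptions, not details of the models behind Seq2Seq-Vis.

```python
# Minimal sketch of a sequence-to-sequence (encoder-decoder) model.
# All sizes and token ids below are illustrative assumptions.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1200, 64, 128
BOS, EOS = 1, 2  # assumed special-token ids

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src_ids, max_len=20):
        # Encode the arbitrary-length source sequence into a hidden state.
        _, hidden = self.encoder(self.src_emb(src_ids))
        # Decode one target token at a time, until EOS or max_len.
        token = torch.full((src_ids.size(0), 1), BOS, dtype=torch.long)
        outputs = []
        for _ in range(max_len):
            dec_out, hidden = self.decoder(self.tgt_emb(token), hidden)
            logits = self.out(dec_out[:, -1])
            token = logits.argmax(dim=-1, keepdim=True)  # greedy choice
            outputs.append(token)
            if (token == EOS).all():
                break
        return torch.cat(outputs, dim=1)

model = Seq2Seq()
src = torch.randint(3, SRC_VOCAB, (1, 7))  # a dummy 7-token source sentence
print(model(src))                          # ids of the generated target tokens
```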
Although neural networks have greatly improved translation results, they have also made the applications more complex. Strobelt compared the pre-neural-network days to being able to look in a book, “find out the rule that was producing the error message” and fix it. Today’s more complex networks don’t lend themselves to rulebooks, and Seq2Seq-Vis is intended to take the rulebook’s place. It works by creating “a visual representation of the different stages of the sequence-to-sequence translation process,” which lets the user “examine the model’s decision process and find where the error is taking place.”
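One common way to render one such stage visually, outside of the tool itself, is a heatmap of attention weights linking each output word to the input words it was based on. The snippet below plots such a heatmap from made-up data; it is a generic illustration, not Seq2Seq-Vis’s actual interface.

```python
# Hedged illustration: plot invented attention weights for a short sentence.
import numpy as np
import matplotlib.pyplot as plt

src_words = ["die", "Katze", "sitzt", "auf", "der", "Matte"]  # input sentence
tgt_words = ["the", "cat", "sits", "on", "the", "mat"]        # output sentence

# Made-up attention weights: one row per output word, one column per input
# word, each row summing to 1 (how much each input word contributed).
attention = np.random.dirichlet(np.ones(len(src_words)), size=len(tgt_words))

fig, ax = plt.subplots()
ax.imshow(attention, cmap="Blues")
ax.set_xticks(range(len(src_words)))
ax.set_xticklabels(src_words, rotation=45)
ax.set_yticks(range(len(tgt_words)))
ax.set_yticklabels(tgt_words)
ax.set_xlabel("input (source) words")
ax.set_ylabel("output (target) words")
ax.set_title("Where the model 'looked' for each output word")
plt.tight_layout()
plt.show()
```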
Seq2Seq-Vis also “shows how each of the words in the input and output sentences map to training examples in the neural networks of the AI model.”
“The most complicated part of the explanation is how to connect the decisions to the training examples,” said Strobelt. “The model doesn’t know more about the world than what was presented by the training data. And so it makes sense to take a look at the training data when debugging a model.”
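A bare-bones way to picture “connecting decisions to training examples” is a nearest-neighbor lookup: keep the hidden states the model produced on its training data and, when a translation goes wrong, retrieve the training sentences whose states most resemble the state behind the bad decision. The sketch below uses random placeholder vectors and invented sentence labels; Seq2Seq-Vis’s actual machinery is more elaborate.

```python
# Hedged sketch of linking a model decision back to training examples
# via nearest-neighbor search over stored hidden states.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each training sentence left behind one 128-dim hidden-state vector.
train_states = rng.normal(size=(5000, 128))
train_sentences = [f"training sentence #{i}" for i in range(5000)]

# Hidden state at the decoding step where the translation went wrong.
query_state = rng.normal(size=128)

# Cosine similarity between the query state and every stored training state.
norms = np.linalg.norm(train_states, axis=1) * np.linalg.norm(query_state)
similarity = train_states @ query_state / norms

# Show the training examples most similar to the model's current state.
for idx in np.argsort(similarity)[-5:][::-1]:
    print(f"{similarity[idx]:.3f}  {train_sentences[idx]}")
```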
Others are also attempting to solve the black box problem, including IBM, which “proposed a separate initiative to increase transparency in AI using factsheets,” as well as “several academic institutions, large tech companies, and DARPA.”