Amazon is working on a new wearable, codenamed Dylan, that reportedly can discern human emotions. The voice-activated gadget, developed by Lab126 (the Amazon hardware group behind the Fire phone and Echo speaker) and the Alexa voice software team, is worn on the wrist and is meant to address health and wellness. According to sources, the wearable includes microphones that pair with software and work with a smartphone app to glean the user’s emotional state from the sound of his or her voice.
Bloomberg reports that, according to documents it has examined, “eventually the technology could be able to advise the wearer how to interact more effectively with others.” Because Amazon permits its teams to experiment broadly, it is unclear whether this project will ever be commercialized.
One source revealed that the company is beta testing the product, “though it’s unclear whether the trial includes prototype hardware, the emotion-detecting software or both.” Although Amazon has publicly professed its aim to “build a more lifelike voice assistant,” it would not comment on Dylan.
But the company filed a U.S. patent in 2017, detailing “a system in which voice software uses analysis of vocal patterns to determine how a user is feeling, discerning among ‘joy, anger, sorrow, sadness, fear, disgust, boredom, stress, or other emotional states’.”
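As a rough illustration of what such vocal-pattern analysis could involve (not Amazon’s actual method, which the patent does not spell out in code), the sketch below extracts simple acoustic features from a voice clip with librosa and feeds them to a generic scikit-learn classifier. The feature set, emotion labels, and training corpus are all assumptions made for the example.

```python
# Illustrative sketch only: classify a speaker's emotional state from vocal
# features, loosely in the spirit of the patent's description. The features,
# labels, and model are assumptions, not Amazon's actual system.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["joy", "anger", "sorrow", "fear", "disgust", "boredom", "stress"]

def vocal_features(path):
    """Summarize a clip as mean MFCCs plus loudness and brightness statistics."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    rms = librosa.feature.rms(y=y)                            # loudness
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
    return np.concatenate([mfcc.mean(axis=1), rms.mean(axis=1), centroid.mean(axis=1)])

def train(clips, labels):
    """Fit a simple classifier on a (hypothetical) corpus of labeled voice clips."""
    X = np.stack([vocal_features(p) for p in clips])
    y = [EMOTIONS.index(label) for label in labels]
    return LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical usage, assuming labeled clips are available:
# model = train(training_clips, training_labels)
# print(EMOTIONS[model.predict([vocal_features("wearer_utterance.wav")])[0]])
```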
Bloomberg states that the patent “suggests Amazon could use knowledge of a user’s emotions to recommend products or otherwise tailor responses,” with the patent’s diagram showing Alexa suggesting chicken soup to a sick woman who says she is hungry. Amazon was also awarded a second patent “that uses techniques to distinguish the wearer’s speech from background noises.”
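In the same hypothetical spirit, tailoring a reply to a detected state could be as simple as a lookup keyed on the user’s request and the inferred condition. The intents, states, and suggestions below are placeholders, not language from the patent.

```python
# Illustrative sketch only: state-aware response selection, loosely in the
# spirit of the patent's chicken-soup example. Intents, states, and replies
# are hypothetical placeholders, not Amazon's design.
SUGGESTIONS = {
    ("hungry", "sick"):   "You sound under the weather. Would you like a chicken soup recipe?",
    ("hungry", "stress"): "How about something quick, like a ten-minute pasta?",
    ("hungry", None):     "Here are some dinner ideas.",
}

def tailor_response(intent, detected_state=None):
    """Return a reply matched to the detected state, falling back to a neutral default."""
    return SUGGESTIONS.get((intent, detected_state), SUGGESTIONS[(intent, None)])

print(tailor_response("hungry", "sick"))  # chicken soup suggestion
print(tailor_response("hungry"))          # neutral fallback
```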
Amazon has long aimed to dominate consumer electronics with embedded speech recognition software; its Echo smart speaker products “have popularized the use of voice commands in the home.” The company is expanding the use of Alexa, developing “wireless earbuds, similar to Apple AirPods, that are expected to include the Alexa voice software … [and] distributing Echo Auto, a dashboard-mounted speaker and microphone array designed to pair with a smartphone,” as well as working on a robot for the U.S. market, codenamed Vesta.
Amazon isn’t alone in using machine learning and voice/image recognition to develop similar products. Microsoft, Google and IBM, among other tech companies, are working on technologies that recognize emotions from audio, images and “other inputs.”
Related:
When Quantum Computing Meets AI: Smarter Digital Assistants and More, The Wall Street Journal, 5/23/19