Machine-learning system tackles speech and object recognition, all at once
by Rob Matheson for MIT News
Boston MA (SPX) Sep 27, 2018

MIT computer scientists have developed a system that learns to identify objects within an image, based on a spoken description of the image. Given an image and an audio caption, the model will highlight in real time the relevant regions of the image being described.

Unlike current speech-recognition technologies, the model doesn't require manual transcriptions and annotations of the examples it's trained on. Instead, it learns words directly from recorded speech clips and objects in raw images, and associates them with one another.

The model can currently recognize only several hundred different words and object types. But the researchers hope that one day their combined speech-object recognition technique could save countless hours of manual labor and open new doors in speech and image recognition.

Speech-recognition systems such as Siri and Google Voice, for instance, require transcriptions of many thousands of hours of speech recordings. Using these data, the systems learn to map speech signals to specific words. Such an approach becomes especially problematic when, say, new terms enter our lexicon and the systems must be retrained.

"We wanted to do speech recognition in a way that's more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don't typically have access to.

"We got the idea of training a model in a manner similar to walking a child through the world and narrating what you're seeing," says David Harwath, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group. Harwath co-authored a paper describing the model that was presented at the recent European Conference on Computer Vision.

In the paper, the researchers demonstrate their model on an image of a young girl with blonde hair and blue eyes, wearing a blue dress, with a white lighthouse with a red roof in the background.

The model learned which pixels in the image corresponded to the words "girl," "blonde hair," "blue eyes," "blue dress," "white lighthouse," and "red roof." When an audio caption was narrated, the model then highlighted each of those objects in the image as they were described.

One promising application is learning translations between different languages, without the need for a bilingual annotator. Of the estimated 7,000 languages spoken worldwide, only 100 or so have enough transcription data for speech recognition.

Consider, however, a situation where two different-language speakers describe the same image. If the model learns speech signals from language A that correspond to objects in the image, and learns the signals in language B that correspond to those same objects, it could assume those two signals - and matching words - are translations of one another.

"There's potential there for a Babel Fish-type of mechanism," Harwath says, referring to the fictitious living earpiece in the "Hitchhiker's Guide to the Galaxy" novels that translates different languages to the wearer.

The CSAIL co-authors are: graduate student Adria Recasens; visiting student Didac Suris; former researcher Galen Chuang; Antonio Torralba, a professor of electrical engineering and computer science who also heads the MIT-IBM Watson AI Lab; and Senior Research Scientist James Glass, who leads the Spoken Language Systems Group at CSAIL.

Audio-visual associations
This work expands on an earlier model developed by Harwath, Glass, and Torralba that correlates speech with groups of thematically related images. In the earlier research, they put images of scenes from a classification database on the crowdsourcing Mechanical Turk platform. They then had people describe the images as if they were narrating to a child, for about 10 seconds. They compiled more than 200,000 pairs of images and audio captions, in hundreds of different categories, such as beaches, shopping malls, city streets, and bedrooms.

They then designed a model consisting of two separate convolutional neural networks (CNNs). One processes images, and one processes spectrograms, a visual representation of audio signals as they vary over time. The top layer of the model compares the outputs of the two networks, mapping speech patterns to image data.
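To make that two-branch design concrete, here is a minimal sketch in PyTorch of how an image CNN and a spectrogram CNN could be wired together and compared at the top. The layer sizes, kernel shapes, and pooling choices are illustrative assumptions, not the architecture the CSAIL team actually used.

```python
# Minimal sketch (assumed shapes and layers, not the authors' exact model):
# one CNN embeds images, another embeds audio spectrograms, and the top of
# the model compares pooled embeddings from the two branches.
import torch
import torch.nn as nn

class ImageBranch(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1),
        )

    def forward(self, images):              # (B, 3, H, W) RGB images
        return self.conv(images)            # (B, D, H', W') spatial feature map

class AudioBranch(nn.Module):
    def __init__(self, embed_dim=512, n_mels=40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, (n_mels, 5), padding=(0, 2)), nn.ReLU(),
            nn.Conv2d(64, embed_dim, (1, 5), stride=(1, 2), padding=(0, 2)),
        )

    def forward(self, specs):                # (B, 1, n_mels, T) spectrograms
        return self.conv(specs).squeeze(2)   # (B, D, T') per-segment features

def global_similarity(img_feat, aud_feat):
    """One score per image/caption pair: average-pool both branches, then dot."""
    img_vec = img_feat.mean(dim=(2, 3))      # (B, D)
    aud_vec = aud_feat.mean(dim=2)           # (B, D)
    return (img_vec * aud_vec).sum(dim=1)    # (B,)

img = ImageBranch()(torch.randn(2, 3, 224, 224))
aud = AudioBranch()(torch.randn(2, 1, 40, 128))
print(global_similarity(img, aud).shape)     # torch.Size([2])
```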

The researchers would, for instance, feed the model caption A and image A, which is a correct pairing. Then, they would feed it a random caption B with image A, which is an incorrect pairing. After comparing thousands of wrong captions with image A, the model learns the speech signals corresponding to image A, and associates those signals with words in the captions. As described in a 2016 study, the model learned, for instance, to pick out the signal corresponding to the word "water," and to retrieve images with bodies of water.
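That matched-versus-mismatched training signal is the sort of objective a margin ranking (contrastive) loss expresses. The sketch below is one assumed way to write such a loss, treating the other captions in a batch as the "wrong" pairings; the paper's exact objective and margin may differ.

```python
# Sketch of a margin ranking loss over matched and mismatched pairs.
# The margin value and the in-batch sampling trick are assumptions for
# illustration, not the paper's exact training objective.
import torch

def ranking_loss(img_vecs, aud_vecs, margin=1.0):
    """img_vecs, aud_vecs: (B, D) pooled embeddings for B matched pairs."""
    scores = img_vecs @ aud_vecs.t()          # (B, B) scores for all pairings
    pos = scores.diag().unsqueeze(1)          # correct pairs lie on the diagonal
    loss = torch.clamp(margin - pos + scores, min=0.0)  # hinge on wrong pairs
    loss = loss - torch.diag(loss.diag())     # zero out the correct-pair terms
    return loss.mean()

# Random features standing in for the outputs of the two CNN branches:
img_vecs = torch.randn(8, 512)
aud_vecs = torch.randn(8, 512)
print(ranking_loss(img_vecs, aud_vecs).item())
```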

"But it didn't provide a way to say, 'This is exact point in time that somebody said a specific word that refers to that specific patch of pixels,'" Harwath says.

Making a matchmap
In the new paper, the researchers modified the model to associate specific words with specific patches of pixels. The researchers trained the model on the same database, but with a new total of 400,000 image-caption pairs. They held out 1,000 random pairs for testing.

In training, the model is similarly given correct and incorrect images and captions. But this time, the image-analyzing CNN divides the image into a grid of cells consisting of patches of pixels. The audio-analyzing CNN divides the spectrogram into segments of, say, one second to capture a word or two.

With the correct image and caption pair, the model matches the first cell of the grid to the first segment of audio, then matches that same cell with the second segment of audio, and so on, all the way through each grid cell and across all time segments. For each cell and audio segment, it provides a similarity score, depending on how closely the signal corresponds to the object.
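Concretely, that pass over every grid cell and every audio segment amounts to computing a similarity score for each (cell, segment) pair. The sketch below shows one way that could be written; the feature shapes and the pooling rule that collapses the grid of scores into a single pair score are assumptions, not necessarily the authors' choices.

```python
# Sketch of the cell-by-segment similarity described above. Feature shapes
# (a 14x14 image grid, 128 audio segments, 512-dim features) are assumptions.
import torch

def matchmap(img_feat, aud_feat):
    """
    img_feat: (D, H, W) - CNN features for one image (H x W grid of cells)
    aud_feat: (D, T)    - CNN features for one spoken caption (T segments)
    returns:  (H, W, T) - similarity of every image cell to every audio segment
    """
    return torch.einsum('dhw,dt->hwt', img_feat, aud_feat)

def pair_score(mm):
    """Collapse a matchmap into one score: best image cell for each audio
    segment, averaged over time (one plausible pooling rule among several)."""
    return mm.amax(dim=(0, 1)).mean()

mm = matchmap(torch.randn(512, 14, 14), torch.randn(512, 128))
print(mm.shape, pair_score(mm).item())        # torch.Size([14, 14, 128]) ...
```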

The challenge is that, during training, the model doesn't have access to any true alignment information between the speech and the image. "The biggest contribution of the paper," Harwath says, "is demonstrating that these cross-modal [audio and visual] alignments can be inferred automatically by simply teaching the network which images and captions belong together and which pairs don't."

The authors dub this automatically learned association between a spoken caption's waveform and the image pixels a "matchmap." After training on thousands of image-caption pairs, the network narrows those alignments down to specific words representing specific objects in that matchmap.

"It's kind of like the Big Bang, where matter was really dispersed, but then coalesced into planets and stars," Harwath says. "Predictions start dispersed everywhere but, as you go through training, they converge into an alignment that represents meaningful semantic groundings between spoken words and visual objects."


Related Links
Massachusetts Institute of Technology