Language barriers may be consigned to history by earpiece gadget

Earpieces underpinned by cloud translation software 'will always be whispering the one language you want to hear'

Charlotte Beale
Saturday 13 February 2016 18:48 GMT


"Language barrier" may be a phrase lost in translation to the next generation.

By 2025, when someone speaks to you in a foreign language, an earpiece will be able to translate their words instantly into your native language, Hillary Clinton’s former innovation adviser Alec Ross has written in The Wall Street Journal.

Monotonous computer voices will be consigned to the past, too. The voice of your interlocutor – its wavelength, frequency and other unique properties – will be recreated by the cloud software supporting your earpiece, as advances in bioacoustic engineering make voice replication possible.

Your response to the foreign-language speaker will in turn be translated into the speaker’s own native language by their earpiece.

The software of the 2020s will be able to handle more than Google Translate’s mere two-way translation.

“You could host a dinner party with eight people at the table speaking eight different languages, and the voice in your ear will always be whispering the one language you want to hear”, writes Mr Ross.

The earpieces won’t necessarily spell the end of foreign language learning, however.

“I can't imagine a time when we don't value the ability to communicate in languages other than our own”, Mr Ross told The Independent. “But I can't help but think that this will have some kind of impact for the future of foreign language learning. Exactly what, I don't know.”

The next generation may “be able to understand anything that is spoken to them”, said Mr Ross.

But “real communication” entails “the nuance that comes with engaging directly without a translator or a piece of hardware”.

While the globalisation of recent decades has depended on English as a common language, such translation technology means billions of the world’s non-English speakers could enter markets and networks previously inaccessible to them.

The machine learning underlying the translation technology is developed by processing billions of translations a day. As computing power increases, “machines will grow exponentially more accurate and be able to parse the smallest detail”, writes Mr Ross.

“More data, more computing power and better software … will fill in the communication gaps in areas including pronunciation and interpreting a spoken response”.

Research into translation and voice biometric technology is largely funded by the defence and intelligence sectors, according to Mr Ross.

Siri, Apple’s speech recognition programme, was spun out from the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA).
