Microsoft Demonstrates Real-Time Speech Translator
Rick Rashid speaks Mandarin Chinese without learning it
Microsoft has demonstrated a software engine that can translate spoken language almost instantly, while preserving the speaker’s intonation and rhythm.
The new feature, expected to be introduced into Bing Translate, was first shown by Microsoft’s senior VP for Research, Rick Rashid, during a presentation in Tianjin, China, late last month. Rashid doesn’t speak Chinese, but thanks to the software he was able to address the local audience without any issues.
Speaking foreign languages without learning them is a theme that has been discussed in science fiction for decades. The technology, developed in partnership with the University of Toronto, could push the boundaries of speech recognition, pioneered by companies like Dragon Systems and made mainstream by Apple’s Siri and its competitors.
Universal translator
In his blog post, Rashid said that recent scientific breakthroughs have made it possible to reduce the number of translation errors and interpret human speech the way the human brain does.
Previously, voice recognition research focused on matching patterns in speech against stored templates. A later approach, so-called ‘hidden Markov modelling’, builds statistical models of speech from samples gathered from thousands of speakers, making recognition more robust to the variations in any one person’s voice.
Although hardware for voice analysis was becoming more powerful and algorithms more sophisticated, the technology still wasn’t dependable enough for widespread use. “In the realm of natural user interfaces, the single most important one – yet also one of the most difficult for computers – is that of human speech,” wrote Rashid.
However, a few years ago researchers from the University of Toronto developed the “deep neural networks” method, which cut the number of translation errors and made Microsoft’s engine viable.
Rashid did have to ‘train’ the software to recognise his voice, but after about an hour of listening to pre-recorded speeches the engine was able to deliver the final part of his presentation in Mandarin, with appropriate vocal emphasis.
Essentially, the engine converts English speech into text, translates that text into Chinese (re-ordering the words to fit Chinese syntax), and then reads the result out loud in a synthesised version of the speaker’s voice.
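For illustration only, here is a minimal sketch of how such a three-stage speech-to-speech pipeline chains together. The function names (recognise_speech, translate_text, synthesise_speech) are hypothetical placeholders, not Microsoft’s actual API.

```python
# Hypothetical sketch of the three-stage pipeline described above.
# None of these functions correspond to a real Microsoft API; they only
# illustrate how recognition, translation and synthesis chain together.

def recognise_speech(english_audio: bytes) -> str:
    """Stage 1: speech recognition -- English audio to English text."""
    raise NotImplementedError("placeholder for a speech-recognition model")

def translate_text(english_text: str, target_language: str = "zh-CN") -> str:
    """Stage 2: machine translation -- English text to re-ordered Chinese text."""
    raise NotImplementedError("placeholder for a translation model")

def synthesise_speech(chinese_text: str, voice_profile: dict) -> bytes:
    """Stage 3: text-to-speech -- Chinese text spoken in the original speaker's voice."""
    raise NotImplementedError("placeholder for a voice-preserving TTS model")

def translate_live(english_audio: bytes, voice_profile: dict) -> bytes:
    # Chain the three stages: recognise, translate, then speak the result aloud.
    english_text = recognise_speech(english_audio)
    chinese_text = translate_text(english_text)
    return synthesise_speech(chinese_text, voice_profile)
```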
“During my October 25 presentation in China, I had the opportunity to showcase the latest results of this work. We have been able to reduce the word error rate for speech by over 30 percent compared to previous methods. This means that rather than having one word in 4 or 5 incorrect, now the error rate is one word in 7 or 8,” wrote Rashid.
“While still far from perfect, this is the most dramatic change in accuracy since the introduction of hidden Markov modeling in 1979, and as we add more data to the training we believe that we will get even better results,” he added.
According to the BBC, several technology companies, including AT&T, NTT Docomo and Google, are currently working on similar translation projects.
You can see the Microsoft technology demo at the end of the presentation here.