Google Conversation Mode automates interpreting

Automated translation of text on your smartphone? That’s so 2010. Now that a brand new year is underway, Google is venturing further into spoken language territory than ever before, enabling millions of Android users to overcome language barriers through its new Conversation Mode feature. And, it comes in time for the one-year anniversary of the company’s launch of the Google Translate app for Android.

The news comes as no surprise. We watched with interest as Eric Schmidt previewed the new feature for German and English a few months ago in Berlin. While development is obviously underway for other languages, Android users will only be able to use Conversation Mode for English and Spanish for now.

Automated interpretation is, in many ways, a no-brainer in Google’s quest to make information available to more people on the planet. We’ve written before about machine interpretation as a means of overcoming the limitations of literacy and the orality of communication.

While Google notes that the feature is still in an experimental phase, the concept is not exactly new. We discussed JAJAH’s Mandarin<>English phone-based speech-to-speech translation offering back in 2008, and our research has repeatedly mentioned machine interpretation as a replacement technology for telephone interpreting. We also recently wrote about new apps that accomplish a similar purpose by connecting users to human telephone interpreters, following in the footsteps of Language Line’s iPhone application.

Unsurprisingly, Google points out that Conversation Mode has trouble coping with some of the same issues human interpreters have wrestled with for centuries — background noise, strong regional accents, and fast-paced speech. Of course, flesh-and-blood interpreters are still the gold standard — so long as they are professionally trained. There is an unfortunate shortage of qualified human interpreters in many parts of the world. Yet, the ubiquity of demand for their services is undeniable, especially in the quest to improve global access to information.

How will Google’s announcement affect the world? We believe the development is a very important one, for several reasons:

* Heightened societal awareness of spoken language access. Machine interpretation has been available for a while, but Google’s involvement will draw plenty of attention to this growing area of technology. As we noted in our annual predictions for 2011 and our report on global product development, expect to see more integrations of this type for devices used by the average consumer.
* A boost in demand for services. In the near term, we don’t expect machine interpretation to replace the hard-working human interpreters who work each day in diplomatic, medical, legal, and business settings throughout the world. Instead, we believe that the increased communication that is likely to result from increased awareness will fuel the demand for high-quality human interpreting.
* Greater visibility in the marketplace. Now that Google is hanging out a shingle, expect to see more language service providers start to pay attention to machine interpretation. According to our most recent study, the global language services market will reach US$29.789 billion in 2011. The data from our segmentation exercise revealed that the interpretation technology sector was worth less than 2% of that total.
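A back-of-the-envelope check puts those two figures together; the sector ceiling below is simply "less than 2%" applied to the projected market total:

```python
market_total = 29.789e9   # projected 2011 language services market, in US dollars
share_cap = 0.02          # interpretation technology: "less than 2%" of that total

# Upper bound implied by the segmentation data
ceiling = market_total * share_cap
print(f"under ${ceiling / 1e6:.0f} million")  # under $596 million
```

In other words, even a generous reading leaves interpretation technology at well under $600 million of a nearly $30 billion market.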


Researchers at Google are tackling one of the most difficult challenges in artificial intelligence — translating poetry

Poetry poses the added problems of length, meter, and rhyme that a computer must solve in order to understand and translate it. At a conference a few months ago, Dmitriy Genzel, a research scientist at Google, presented a paper outlining those problems and described the ways Google’s computers work to solve them.
Researchers have no way around the pick-and-choose process, so the translations are far from instant and no beta is public. But there’s a reason why it’s useful to improve translation software of any kind, Genzel says.
“Most of the content on the Web is not in English anymore,” he says. “So even for English speakers, there’s a huge amount of stuff on the Web that you don’t have access to.”
In poetry, translation is even more difficult. The value of preserving meter and rhyme in poetic translation has long been debated. Vladimir Nabokov famously claimed that, since it is impossible to preserve both the meaning and the form of a poem in translation, one must abandon the form altogether. “But there’s quite a big aspect of [poetry translation] that machines can do pretty well,” Genzel says. Read More...
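The constraint side of that problem — filtering candidate lines by meter and rhyme — can be illustrated with a toy sketch. This is not Genzel’s method, just a naive filter with crude vowel-group syllable counting and a last-letters rhyme test:

```python
import re

def syllable_count(line: str) -> int:
    """Crude syllable estimate: count vowel groups in each word."""
    return sum(len(re.findall(r"[aeiouy]+", w.lower())) for w in line.split())

def rhymes(a: str, b: str, tail: int = 2) -> bool:
    """Naive rhyme test: do the last words share a final letter cluster?"""
    last = lambda s: re.sub(r"[^a-z]", "", s.split()[-1].lower())
    return last(a)[-tail:] == last(b)[-tail:]

def pick_candidates(candidates, target_syllables, rhyme_with):
    """Keep only candidate lines that fit the meter and rhyme scheme."""
    return [c for c in candidates
            if syllable_count(c) == target_syllables and rhymes(c, rhyme_with)]

# A line must have three syllables and rhyme with "to keep".
print(pick_candidates(["dark and deep", "bright and tall"], 3, "to keep"))
# ['dark and deep']
```

A real system would score candidates statistically rather than filter them outright, but the pick-and-choose shape — generate many translations, discard those that break the form — is the same.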


Cloud computing faces a major obstacle in Europe

In the world of ideas, cloud computing has the potential to revolutionize the way people work. By bundling the processing power of thousands of computer servers, a company, for example, could allow two employees from different countries who speak different languages to communicate directly by phone, using voice recognition software to process what is being said and translation programs to interpret it into another language. The result, ideally, would be a seamless conversation, without struggle and without the limitations of speaking a foreign language.
Such cloud-based breakthroughs face a formidable obstacle in Europe, however: strict privacy laws that place rigid limits on the movement of information beyond the borders of the 27-country European Union.
European governments fear that personal information could fall prey to aggressive marketers and cybercriminals once it leaves the jurisdictions of individual members, a concern that may protect consumers but one that hinders the free flow of data essential to cloud computing.
Facing legal obstacles in Europe, the U.S. businesses with the greatest stake in cloud computing — primarily Microsoft, Google, H.P. and Oracle — are lobbying lawmakers to loosen restrictions on cross-border data transfers. Alternatively, some are developing new methods to make cloud computing work within Europe’s complicated legal landscape. Read More...


Language vendors announce end-to-end partnership

This week, three language technology suppliers and a language service provider (LSP) launched a partnership to pull several advanced technologies together into a unified translation and localization solution. The four companies — Acrolinx, Asia Online, Clay Tablet, and Milengo — announced that they would work together to “create an end-to-end translation solution that delivers on the promise of the next generation of translation advances.”
The partnership brings together three products that we have identified as class leaders, including Acrolinx for authoring improvements, Asia Online for machine translation, and Clay Tablet for middleware, all integrated within Milengo’s production center.
While they did not name names, the four companies decided to position themselves mano-a-mano against the single-vendor portfolio-based solution offered by SDL, which recently announced its acquisition of machine translation supplier Language Weaver.
In their bid to offer a single solution, the four have chosen a best-of-breed approach built on products that they say will “all work together seamlessly” because they are standards-based.
Their goal is to deliver on some core supplier capabilities that we have long espoused in our research, including manageability, effectiveness, and efficiency. Read More...


Text-to-speech technology is no longer science fiction

For the past four years, scientists at NIST have been conducting detailed performance evaluations of speech translation systems for the Defense Advanced Research Projects Agency (DARPA). Previous systems used microphones and portable computers. In the most recent tests, the NIST team evaluated three two-way, real-time, voice-translation devices designed to improve communications between the U.S. military and non-English speakers in foreign countries.
Traditionally, the military has relied on human translators for communicating with non-English speakers in foreign countries, but the job is dangerous and skilled translators often are in short supply. And, sometimes, translators may have ulterior motives, according to NIST’s Brian Weiss. The DARPA project, called TRANSTAC (spoken language communication and TRANSlation system for TACtical use), aims to provide a technology-based solution. Currently, the focus is on Pashto, a language native to Afghanistan, but NIST has also assessed machine translation systems for Dari — also spoken in Afghanistan — and Iraqi Arabic.
The new TRANSTAC systems all work much the same way, says project manager Craig Schlenoff. An English speaker talks into the phone. Automatic speech recognition distinguishes what is said and generates a text file that software translates to the target language. Text-to-speech technology converts the resulting text file into an oral response in the foreign language. This process is reversed for the foreign language speaker. Read More...
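The description amounts to a three-stage pipeline — speech recognition, then translation, then speech synthesis — run in the opposite direction for the foreign-language speaker. A minimal sketch with stub components (the function names, the `audio:` string convention, and the toy dictionary below are hypothetical placeholders, not TRANSTAC’s actual interfaces):

```python
from typing import Callable

# Each stage is modeled as a text-to-text function; a real system would
# wrap ASR, machine translation, and TTS engines here.
Stage = Callable[[str], str]

def make_pipeline(asr: Stage, translate: Stage, tts: Stage) -> Stage:
    """Compose speech recognition, translation, and speech synthesis."""
    def run(audio: str) -> str:
        text = asr(audio)             # speech -> source-language text
        translated = translate(text)  # source text -> target-language text
        return tts(translated)        # target text -> synthesized speech
    return run

# Toy stand-ins: strings model audio; a one-entry dictionary models translation.
toy_dict = {"hello": "salaam"}
eng_to_pashto = make_pipeline(
    asr=lambda audio: audio.removeprefix("audio:"),
    translate=lambda text: " ".join(toy_dict.get(w, w) for w in text.split()),
    tts=lambda text: f"audio:{text}",
)

print(eng_to_pashto("audio:hello"))  # audio:salaam
```

The reverse direction reuses the same shape with the stages swapped, which is why one device can serve both speakers.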