Microsoft Translator’s collection of neural network languages grew by one this week.
The company has added Korean, bringing the total number of languages supported by the system to 11. The other languages are English, Arabic, Chinese, French, German, Italian, Japanese, Portuguese, Russian and Spanish.
In addition to infusing its cloud services and business software ecosystem with artificial intelligence, Microsoft is banking on the technology to help break down language barriers.
“Neural Network translation uses the full context of a sentence to translate words based not only on a few words before and after it, but on the full sentence, generating more fluent and more human sounding translations,” wrote the Microsoft Translator team in a blog post. “This new AI-powered technology delivers the most significant improvement in machine translation quality since statistical machine translation became the industry standard 10 years ago.”
Microsoft’s neural network translation technology works in two stages, first modeling a word based on its context within a complete sentence. Then the system translates this model into another language, finding the most appropriate words and word order to properly convey an idea or concept.
The result is more fluent translations that approximate those from human interpreters. Developers can begin incorporating Korean into their projects with the Microsoft Translator API (application programming interface). Microsoft doesn’t charge extra for using neural network models instead of the traditional statistical models.
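As a rough sketch of what such an API call looks like, the snippet below builds a request against Microsoft's public Translator Text REST API (v3). The endpoint, query parameters and header names come from Microsoft's current API documentation, not from this article, and the subscription key is a placeholder:

```python
import json
import urllib.request

# Endpoint and parameter names follow Microsoft's public Translator Text API v3.
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_request(text, to_lang="ko", key="YOUR_SUBSCRIPTION_KEY"):
    """Construct the HTTP POST request for a single translation call."""
    url = f"{ENDPOINT}?api-version=3.0&to={to_lang}"
    body = json.dumps([{"Text": text}]).encode("utf-8")
    headers = {
        "Ocp-Apim-Subscription-Key": key,  # placeholder: requires an Azure key
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Sending the request requires a valid subscription key:
# with urllib.request.urlopen(build_translate_request("Hello, world")) as resp:
#     print(json.load(resp)[0]["translations"][0]["text"])
```

The request body is a JSON array, so several sentences can be translated in one call by appending more `{"Text": ...}` objects.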
Microsoft has some competition in the market for neural network-based translation services.
Google Makes Gains in AI Translation, Too
Earlier this month, Google announced that the Neural Machine Translation technology used in the Google Translate apps and web services would be available in Hindi, Russian and Vietnamese. Neural Machine Translation made its public debut in November 2016, supporting translations between English and eight languages: Chinese, French, German, Japanese, Korean, Portuguese, Spanish and Turkish.
Like Microsoft’s system, Google’s AI-enabled translation technology processes sentences in full rather than piecing together translated phrases. For users, this means better translations overall, according to Barak Turovsky, product lead at Google Translate.
“Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read,” wrote Turovsky in a Nov. 15 announcement. “And this is all possible because of end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.”
Although it covers relatively few languages now, Turovsky revealed that Google plans to roll out Neural Machine Translation to 103 languages over time.
Yesterday, Google announced a major update to SyntaxNet, an open source neural-network framework used to analyze the grammatical structure of sentences. The update enables the TensorFlow machine intelligence software library to generate more sophisticated language models while processing sentences or entire documents.
“Our upgrade makes it, for example, easy to build character-based models that learn to compose individual characters into words (e.g. ‘c-a-t’ spells ‘cat’),” wrote Google research scientists David Weiss and Slav Petrov, in a March 15 blog post.
“By doing so, the models can learn that words can be related to each other because they share common parts (e.g. ‘cats’ is the plural of ‘cat’ and shares the same stem; ‘wildcat’ is a type of ‘cat’),” the scientists wrote.
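SyntaxNet learns these relationships with neural networks over character sequences, but the underlying intuition — that words sharing characters tend to be related — can be illustrated with a toy comparison. The character n-gram similarity below is a hypothetical sketch for illustration only, not SyntaxNet's actual model:

```python
def char_ngrams(word, n=3):
    """Break a word into overlapping character n-grams, with boundary markers."""
    padded = f"<{word}>"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def overlap(a, b):
    """Jaccard similarity of two words' character trigram sets."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    return len(ga & gb) / len(ga | gb)

# 'cat' and 'cats' share the trigrams '<ca' and 'cat', so they score
# higher than the unrelated pair 'cat' / 'dog'.
print(overlap("cat", "cats"))  # shared stem -> nonzero similarity
print(overlap("cat", "dog"))   # no shared trigrams -> 0.0
```

A character-level model exploits the same signal in a learned form: morphologically related words like ‘cat’, ‘cats’ and ‘wildcat’ end up with related representations because they are composed from overlapping character sequences.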