Google’s Neural Machine Translation is Breaking New Ground for AI - IQVIS Inc.

By: Rae Steinbach

Most people are familiar with artificial intelligence (AI) to some degree; if the knowledge doesn’t come from its real-world applications, they have at least some frame of reference from popular fiction. While we may be a great distance from the advanced forms of AI that we see in movies or read about in science fiction novels, the technology has made great strides in the past decade. Perhaps surprisingly, some of the most significant developments have come in the field of machine translation.

A Look at the Past of Artificial Intelligence

The first artificially intelligent systems used narrow AI. This is a type of AI that performs specific tasks by following a set of preprogrammed rules. Examples of this would be the AI you’d find in popular video games or some of the simpler chatbot programs.

In the 1990s, programmers started using algorithms to teach AI systems through a discipline known as machine learning. With machine learning, AI systems didn’t have to be specifically programmed to perform tasks. By exposing the system to enough data, it could analyze the information, find patterns and learn how to perform the intended objective.

In the 21st century, researchers started working on deep learning for machines. With deep learning, the system has a network of artificial neurons designed to learn in a way similar to that of the human brain.

What Makes Deep Learning Different?

With earlier types of machine learning, the system could learn, but it needed human supervision. Not only that, it also needed to be told what it was supposed to take away from the information. Deep learning models itself on the neural networks in the human brain, which allows it to learn more independently.

An artificial neural network has layers of ‘neurons’. As the system learns, it will create neural pathways, much in the same way that the human brain does. As the system is exposed to more data and gains more experience, these pathways grow stronger or weaker depending on how well they help the system achieve the desired result.
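To make the strengthening-and-weakening idea concrete, here is a minimal sketch in Python of a single artificial “neuron” adjusting one connection weight. The function name, learning rate, and toy numbers are all invented for illustration; real networks like GNMT have millions of weights, but each one is nudged by the same basic principle.

```python
# Minimal sketch: one artificial "neuron" whose connection weight is
# strengthened or weakened based on how well it predicts a target.
# All numbers here are illustrative, not taken from any real system.

def train_neuron(inputs, targets, weight=0.0, lr=0.1, epochs=50):
    """Nudge a single weight toward whatever best maps inputs to targets."""
    for _ in range(epochs):
        for x, y in zip(inputs, targets):
            prediction = weight * x
            error = prediction - y
            # The "pathway" grows stronger or weaker in proportion
            # to how much it contributed to the error.
            weight -= lr * error * x
    return weight

# Toy task: learn that the output should be twice the input.
learned = train_neuron(inputs=[1.0, 2.0, 3.0], targets=[2.0, 4.0, 6.0])
print(round(learned, 2))  # converges toward 2.0
```

With enough examples, the weight settles on the value that best achieves the desired result, without anyone programming that value in directly.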

Google’s Breakthrough in Machine Translation

Earlier forms of machine translation operated on a type of narrow AI that performed the task by following a simple set of rules. The program would break the sentence down into smaller parts, translate the fragments, and then use a set of rules to reconstruct the phrase.
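The break-apart-and-reassemble approach can be illustrated with a toy Python sketch. The phrase table and the English/French pair here are invented for illustration; real rule-based systems used far larger dictionaries and more elaborate reordering rules.

```python
# Toy illustration of rule-based translation: split the sentence into
# fragments, look each fragment up, and reassemble. The tiny phrase
# table below is invented for illustration.

PHRASE_TABLE = {
    "the cat": "le chat",
    "sits on": "est assis sur",
    "the mat": "le tapis",
}

def rule_based_translate(sentence):
    """Translate known two-word fragments one at a time, left to right."""
    words = sentence.lower().split()
    out = []
    for i in range(0, len(words), 2):
        fragment = " ".join(words[i:i + 2])
        # Unknown fragments pass through untranslated -- one reason
        # rule-based output so often reads as flawed.
        out.append(PHRASE_TABLE.get(fragment, fragment))
    return " ".join(out)

print(rule_based_translate("The cat sits on the mat"))
# -> "le chat est assis sur le tapis"
```

Anything outside the rules, from idioms to unusual word order, falls straight through the cracks, which is exactly the weakness described below.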

The narrow set of rules could produce translations, but, unlike human translation services, it would often deliver flawed results. This is largely because the technology is not yet able to place translations in their cultural context or comprehend things like idioms and jokes.

What Changed?

Originally, Google Translate used narrow AI programs to perform translations. Then, in September 2016, Google announced a switch to a single system that uses artificial neural networks to provide translations. They called their new tool Google Neural Machine Translation (GNMT). The system continues to learn from experience, and it provides translations that are much more fluent and natural sounding than previous systems that operated on narrow AI.

Researchers at Google expected the system to keep learning and improving; that is part of the point of using deep learning. However, they were surprised to find that the GNMT could learn to perform translations for a language pair it had never encountered. Researchers tested whether a system trained on Japanese/English translation and Korean/English translation would then learn how to perform Japanese/Korean translations without having to be specifically trained for the task.

The GNMT was, in fact, capable of using its experience with Japanese/English and Korean/English translations to develop a system for performing Japanese/Korean translations. Researchers at Google called this a zero-shot translation. It is believed that the GNMT created an artificial language to facilitate translations between the new language pair. The system could then translate the source language into the artificial language, using this to produce the finished translation.
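A loose analogy for this pivot-through-a-shared-representation idea can be sketched in Python. The vocabularies and concept codes below are invented, and GNMT learns its internal representation implicitly rather than storing lookup tables, but the routing is the same: source language in, shared representation in the middle, target language out.

```python
# Toy sketch of "zero-shot" translation through a shared internal
# representation. All vocabularies and concept codes are invented;
# a real neural system learns this mapping implicitly.

EN_TO_CONCEPT = {"water": "C1", "fire": "C2"}
JA_TO_CONCEPT = {"mizu": "C1", "hi": "C2"}   # from Japanese<->English training
KO_TO_CONCEPT = {"mul": "C1", "bul": "C2"}   # from Korean<->English training
CONCEPT_TO_KO = {v: k for k, v in KO_TO_CONCEPT.items()}

def japanese_to_korean(word):
    """Japanese -> shared concept -> Korean, with no direct JA/KO data."""
    concept = JA_TO_CONCEPT[word]
    return CONCEPT_TO_KO[concept]

print(japanese_to_korean("mizu"))  # -> "mul"
```

Neither dictionary was built from Japanese/Korean examples, yet the shared middle layer lets the translation go through, which is the essence of what the researchers observed.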

Instead of going word-for-word and moving from one language to the next, the GNMT came up with its own original creation to solve a problem. That interests researchers because it is a very human-like thing for a machine to do, and it provides a glimpse into the future potential of AI systems that use deep learning.

Rae is a graduate of Tufts University with a combined International Relations and Chinese degree. After spending time living and working abroad in China, she returned to NYC to pursue her career and continue curating quality content. Rae is passionate about travel, food, and writing, of course. She is working at Morningside Translations.
