sciencehabit shares a report from Science Magazine: Automatic language translation has come a long way, thanks to neural networks — computer algorithms that take inspiration from the human brain. But training such networks requires an enormous amount of data: millions of sentence-by-sentence translations to demonstrate how a human would do it. Now, two new papers show that neural networks can learn to translate with no parallel texts — a surprising advance that could make documents in many languages more accessible.
The two new papers, both of which have been submitted to next year's International Conference on Learning Representations but have not yet been peer reviewed, take a different approach: unsupervised machine learning. To start, each constructs a bilingual dictionary without the aid of a human teacher telling it when its guesses are right. That's possible because languages have strong similarities in the ways words cluster around one another. The words for table and chair, for example, are frequently used together in all languages. So if a computer maps out these co-occurrences like a giant road atlas with words for cities, the maps for different languages will resemble each other, just with different names. A computer can then figure out the best way to overlay one atlas on another. Voila! You have a bilingual dictionary. The studies — "Unsupervised Machine Translation Using Monolingual Corpora Only" and "Unsupervised Neural Machine Translation" — were both submitted to the e-print archive arXiv.org.
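The "atlas overlay" idea can be sketched in code. Here is a minimal toy illustration of aligning two word-vector spaces with an orthogonal Procrustes rotation — a simplified, supervised cousin of what the papers do (they bootstrap the alignment with no word-pair supervision at all). The data, dimensions, and variable names are all hypothetical, invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embedding atlases": 50 words in a 10-dimensional space for
# language A, and the same words for language B, simulated here as a
# hidden rotation of A plus a little noise. (Hypothetical data; real
# systems use embeddings trained on monolingual corpora.)
d, n = 10, 50
X = rng.normal(size=(n, d))                   # language A word vectors
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # hidden "true" rotation
Y = X @ Q + 0.01 * rng.normal(size=(n, d))    # language B word vectors

# Orthogonal Procrustes: find the rotation W minimizing ||X W - Y||_F.
# Closed form: W = U V^T, where U S V^T is the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# The recovered overlay should closely match the hidden rotation,
# so nearest neighbors of X @ W in Y act as a bilingual dictionary.
err = np.linalg.norm(W - Q) / np.linalg.norm(Q)
print(f"relative alignment error: {err:.4f}")
```

With clean toy data the recovered map is nearly identical to the hidden rotation; the hard part the papers tackle is finding such a map when no aligned word pairs are given, e.g. via adversarial training followed by iterative refinement.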