The Evolution of Machine Translation

In this post, we explore the evolution of machine translation: its origin, its present, and its likely future.

The term “machine translation” describes a translation process carried out mainly by a computer. Although it makes communication quicker and, in the case of less common language pairs, sometimes possible at all, machine translation is still strongly criticised for neglecting textual meaning whenever it dispenses with the intervention of a human translator. There are, meanwhile, several types of machine translation, some more advanced than others.

Do the more advanced ones already dispense with human intervention? And are the less advanced ones still worth using? Which category does Google Translate fall into? How did this evolution take place? And what will the future be like?

The origin of machine translation

In 1947, Warren Weaver – who worked as a mathematician during World War II and later pioneered machine translation – raised for the first time the possibility of using digital computers to translate documents between natural human languages. Two years later, his colleagues at the Rockefeller Foundation encouraged him to develop his ideas. The result was a memorandum written in July 1949, simply titled Translation. In this influential publication, he set out groundbreaking objectives and methods that fostered machine translation research in the United States and, indirectly, the whole world.

He essentially presented four proposals.

  1. Solve word polysemy(1) by examining the immediate context

If one examines the words in a book, one at a time as through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of the words. (…) But, if one lengthens the slit in the opaque mask, until one can see not only the central word in question but also say N words on either side, then, if N is large enough one can unambiguously decide the meaning of the central word.
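
As a toy illustration of this “window” idea, here is a minimal Python sketch that picks the sense of a polysemous word by comparing the N words on either side of it with lists of cue words. The sense inventory, cue words and example sentence are all invented for illustration – Weaver, of course, proposed no such program:

```python
# A toy rendering of Weaver's "opaque mask" proposal: widen the window
# around a polysemous word until its meaning can be decided.
# The sense inventory and cue words below are invented for illustration.

SENSES = {
    "bank": {
        "riverside": {"river", "water", "shore", "fishing"},
        "financial institution": {"money", "loan", "account", "deposit"},
    }
}

def disambiguate(words, i, n=4):
    """Choose the sense of words[i] whose cue words overlap most with
    the n words on either side of position i."""
    window = set(words[max(0, i - n):i] + words[i + 1:i + 1 + n])
    senses = SENSES[words[i]]
    return max(senses, key=lambda sense: len(senses[sense] & window))

sentence = "we sat on the bank of the river to go fishing".split()
print(disambiguate(sentence, sentence.index("bank")))  # -> riverside
```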

  2. Assume languages are logical

A robot (or computer) constructed with regenerative loops of a certain formal character is capable of deducing any legitimate conclusion from a finite set of premises.

  3. Apply cryptographic methods

Weaver illustrated this proposal with a wartime experiment – the decoding of a Turkish text. An encoded text was given to a mathematician who, without knowing what the original language was, managed to “recreate” the Turkish source text.

However, the weak point of this proposal soon became apparent – it conflates decoding with translation, two distinct activities that merely happen to be performed by the same person in cryptanalysis(2).

  4. Apply language universals(3)

Somewhat utopianly, Weaver believed there were logical properties common to all languages, as well as language universals, that could readily be applied to machine translation systems.

He soon realised that applying this premise to machine translation involved a “tremendous amount of work in the logical structures of languages before one would be ready for any mechanisation.”

Nevertheless, the memorandum promoted machine translation and gave rise to the first theories on the pre- and post-editing of machine-translated texts.

The present of machine translation

In A Statistical Approach to Machine Translation (1990), Brown et al. describe five statistical models of the translation process, together with algorithms for estimating their parameters from large collections of bilingual sentence pairs. Given any sentence pair, these models assign a probability to each possible word-by-word alignment between the two sentences.
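
To make the idea concrete, here is a minimal sketch of the simplest of those five models, commonly known as IBM Model 1, trained by expectation-maximisation. The toy corpus, the uniform initial value and the number of iterations are all illustrative assumptions, not details taken from the paper:

```python
from collections import defaultdict

# IBM Model 1 (sketch): learn word-translation probabilities t(f|e)
# from bilingual sentence pairs via expectation-maximisation (EM).
corpus = [
    ("the house".split(), "la maison".split()),
    ("the book".split(), "le livre".split()),
    ("a book".split(), "un livre".split()),
    ("a house".split(), "une maison".split()),
]

t = defaultdict(lambda: 0.25)            # uniform initialisation of t(f|e)

for _ in range(10):                      # EM iterations
    count = defaultdict(float)           # expected counts c(f, e)
    total = defaultdict(float)           # expected counts c(e)
    for e_sent, f_sent in corpus:
        for f in f_sent:
            # E-step: distribute f's alignment mass over the English words.
            norm = sum(t[(f, e)] for e in e_sent)
            for e in e_sent:
                frac = t[(f, e)] / norm
                count[(f, e)] += frac
                total[e] += frac
    # M-step: re-estimate t(f|e) from the expected counts.
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

# "maison" ends up as the most probable translation of "house".
print(round(t[("maison", "house")], 3), round(t[("la", "house")], 3))
```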

Currently, Google Translate – for example – resorts to these predictive algorithms for its statistical machine translation system. Understandably, this is a process that demands continuous improvement – in other words, the computer must continually be taught to translate better by being fed more translated texts.

Statistics-based machine translation has the advantage of speed, but the disadvantage of depending on the automatic analysis of many previous translations, and it is not always possible to ensure that these are reliable or sufficient.

The future of machine translation

Recently proposed by several researchers, such as Kalchbrenner and Blunsom (2013), neural machine translation – unlike conventional statistical machine translation – aims to build a single neural network that can be jointly tuned to maximise translation performance.

Much more accurate than statistical translation, neural translation loosely mimics the human brain: information passes through several “layers” of artificial neurons, each transforming it before the next takes over.

Neural translation is faster and closer to human translation because it is not merely the result of statistical operations: the system also learns regularities that work like linguistic rules, without these having to be programmed explicitly.
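
As a very rough sketch of that layered processing, the snippet below passes a toy input vector through two layers, each applying a linear map followed by a non-linearity. All dimensions and weights are made up; a real translation network is vastly larger and trained rather than random:

```python
import numpy as np

def layer(x, weights, bias):
    # One "layer": a linear map followed by a non-linearity.
    return np.tanh(weights @ x + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=8)                           # a toy "word embedding"
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)  # parameters of layer 1
W2, b2 = rng.normal(size=(8, 16)), np.zeros(8)   # parameters of layer 2

h = layer(x, W1, b1)    # first layer transforms the input
y = layer(h, W2, b2)    # second layer refines the first one's output
print(y.round(3))
```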

However, some researchers are moving away from this neuro-statistical combination and going further. In Neural Machine Translation by Jointly Learning to Align and Translate (2014), Bahdanau et al. show that jointly learning to align and to translate can significantly improve translation performance compared to the basic encoder-decoder approach. The improvement is most evident in longer sentences, but can be observed in sentences of any length.

This is because, instead of integrating a neural network as one component of an existing system, Bahdanau et al.’s model works on its own, generating a translation directly from the original sentence.
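
The heart of that model is the soft alignment (“attention”) step. The schematic sketch below builds, for one decoding step, a context vector as a weighted average of encoder states; the dot-product scorer and all shapes are simplifying assumptions on our part – the paper scores each pair with a small feed-forward network:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
encoder_states = rng.normal(size=(5, 8))  # 5 source positions, dimension 8
decoder_state = rng.normal(size=8)        # current target-side state

scores = encoder_states @ decoder_state   # one alignment score per source word
weights = softmax(scores)                 # soft alignment; sums to 1
context = weights @ encoder_states        # context vector for this step

print(weights.round(3))  # which source words this step "attends" to
```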

At Letrário, our clients already benefit from state-of-the-art machine translation – either as raw machine output, when they prefer to proofread the text themselves and want a much more affordable option, or combined with revision by a human translator, when the objective is a ready-to-use text.

Are you looking for an affordable, high-quality solution? Do not hesitate to contact Letrário – discover the advantages of our machine translation services.

(1) The coexistence of many possible meanings for a word or phrase.

(2) Set of techniques and methods for deciphering encoded messages without knowledge of the code or key used.

(3) A term used in linguistics, especially in generative grammar, to refer to the common properties of human languages.

Bibliography

Bahdanau, D., Cho, K. and Bengio, Y. (2014). “Neural machine translation by jointly learning to align and translate”. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).

Brown, P. F. et al. (1990). “A statistical approach to machine translation”. In: Computational Linguistics, vol. 16, no. 2, June 1990, pp. 79–85. MIT Press, Cambridge, MA, USA.

Kalchbrenner, N. and Blunsom, P. (2013). “Recurrent continuous translation models”. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, October 2013. Association for Computational Linguistics.

Weaver, W. (1949). “Translation”. In: Locke, W. N. and Booth, A. D. (eds.), Machine Translation of Languages: Fourteen Essays (Cambridge, Mass.: Technology Press of the Massachusetts Institute of Technology, 1955), pp. 15–23.