
1. The History of AI Translation Development
AI translation (machine translation performed by computers) was first developed in the 1950s. Early systems were entirely rule-based; statistical machine translation emerged in the late 1980s, followed by neural machine translation in the 2010s.
In rule-based machine translation, translation rules are created manually from dictionaries and detailed grammar guides. This approach requires an enormous number of rules, which makes initial development and the addition of new languages labor-intensive. The accuracy of these systems was relatively low, and they struggled to translate anything but formulaic sentences.
In statistical machine translation, the computer learns translation rules on its own instead of relying on human-written rules. The system reads many pairs of documents containing an original text and its translation (a corpus), and it learns to statistically associate source words and phrases with the translations that appear most often in that data. As long as new bilingual texts can be collected, it is relatively easy to add new phrases to the system's vocabulary. However, in practice, statistical machine translation struggled to translate between languages with significantly different grammar, such as English and Japanese, and its accuracy was too unreliable for practical business use.
Around the same time, alternative technologies were developed, such as hybrid machine translation, which combined the statistical and rule-based methods, and example-based machine translation, which extracted and spliced together similar phrases from existing bilingual text pairs. These methods improved translation accuracy compared with rule-based or statistical machine translation alone.
Neural machine translation, like statistical machine translation, is trained by feeding a large number of bilingual text pairs into a computer. However, by using neural networks and deep learning, which are types of machine learning, the system extracts and uses far more information for translation than statistical machine translation. Compared with older machine translation methods, accuracy has improved significantly, and the results are notably fluent, producing natural translations that read as if written by a human. Since the advent of neural machine translation, machine translation has attracted widespread attention and is now widely used in everyday life and business.
2. The Differences Between Old and New Machine Translation Technologies
The main difference between traditional automatic translation technologies, such as rule-based and statistical machine translation, and the latest technology, neural machine translation, is the fluency of the translated text. Traditional methods often produce unnatural translations that are easily recognized as machine-generated, and because text is processed sentence by sentence, the flow between sentences can be awkward. In contrast, the translations produced by the latest neural machine translation engines sound natural. When a system such as DeepL translates full paragraphs, the connections between sentences in the translated text are more natural, making it difficult for the reader to tell whether the text was translated by a human or by a machine.
To be fair, some issues have become more prominent with neural machine translation than with older models, notably duplicated and omitted translations. In traditional machine translation, phrases from the original text are translated one by one and reflected directly in the output, so information is rarely repeated or omitted. In neural machine translation, however, parts of the text may be translated twice or dropped entirely. Because the output is so fluent, omissions can be especially hard to notice when reading only the translated text; to find them, an editor must compare the translation against the original source.
3. Features of Popular Machine Translation Services Worldwide
The most widely recognized machine translation services include DeepL, Google, and Microsoft, all of which use neural machine translation technology. Among these, DeepL is particularly noteworthy. Its standout characteristic is fluency, achieved through paragraph-level translation: by translating text at the paragraph level rather than sentence by sentence, it better captures the subject matter and context, resulting in more appropriate terminology, more natural connections between sentences, and ultimately more fluent translated text.
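To make the paragraph-level idea concrete, here is a minimal sketch using the official deepl Python package. This is our own illustration under stated assumptions (the package installed via pip, a placeholder API key), not a description of how DeepL's website or any product is implemented; the point is simply that the unit of text you submit is the unit of context the engine can see.

```python
# A minimal sketch (our illustration, not DeepL's or MTrans' implementation).
# Assumes the official "deepl" Python package (pip install deepl) and a
# placeholder auth key.
import deepl

translator = deepl.Translator("YOUR_AUTH_KEY")  # placeholder key

# Submitting the whole paragraph as one string gives the engine the
# surrounding sentences as context; calling it once per sentence would not.
paragraph = (
    "Machine translation has improved rapidly in recent years. "
    "Neural engines now produce fluent output. "
    "Translating at the paragraph level helps keep terminology consistent."
)

result = translator.translate_text(paragraph, target_lang="JA")
print(result.text)
```

In practice, how much cross-sentence context an engine exploits also depends on the service and its settings, but submitting whole paragraphs rather than isolated sentences is what makes the more natural sentence-to-sentence connections possible.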
For more on the latest trends in machine translation and a comparison of translation results between DeepL, Google, Microsoft, and Amazon, check out the following blog article.
The Latest Trends in Machine Translation, Comparing DeepL and Google Translate
4. The Future of AI Translation Technology
The evolution of AI translation technology has accelerated with the advent of large language models (LLMs), such as ChatGPT by OpenAI. LLMs learn from massive quantities of text data, allowing them to understand language and generate text similar to what a human might produce. This enables automatic translations that go beyond simple language conversion, incorporating the appropriate context, nuance, and cultural background to produce more natural writing. LLMs also have the flexibility to adapt to the translation style and tone requested by their users, providing stylistically appropriate translations across a wide range of circumstances, from casual conversations to formal business documents and even emotionally laden content. For more details on the translation accuracy of LLMs, be sure to check out the blog article below.
What Is the Translation Accuracy of OpenAI's New Model GPT-4.1? A Comparison with DeepL!
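As an illustration of the tone control described above, here is a minimal sketch of prompt-controlled translation. It assumes the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the model name is illustrative, and this is not the implementation behind any particular product.

```python
# A minimal sketch of prompt-controlled translation tone (our illustration,
# not any product's implementation). Assumes the "openai" Python package
# (v1 or later) and the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(text: str, tone: str) -> str:
    """Translate text into Japanese in the requested tone."""
    response = client.chat.completions.create(
        model="gpt-4.1",  # illustrative model name; use whichever model you have access to
        messages=[
            {
                "role": "system",
                "content": f"Translate the user's text into Japanese in a {tone} style.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# The same source sentence, rendered in two different registers.
print(translate("Thanks for getting back to me so quickly!", "formal business"))
print(translate("Thanks for getting back to me so quickly!", "casual, friendly"))
```

Changing only the instruction in the system message is enough to shift the register of the output, which is what allows a single LLM to cover everything from casual chat to formal business documents.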
Unfortunately, AI translation technology also has an issue known as "hallucination." This occurs when AI generates information that does not actually exist or produces erroneous content as the translation result. It is more likely to happen when the context is unclear or when the AI makes too many conjectures, so human review is still necessary for AI translations. Post-editing (human correction) continues to play an important role when generating translations on niche or specialized topics or those that require subtle nuance.
Improved real-time performance is another important advancement made possible by AI translation. AI systems can now generate subtitles for foreign-language audio in real time and translate them simultaneously, something many people have likely encountered in online meetings or seminars. In addition, quantum computers have the potential to dramatically enhance the processing power behind AI translation. Compared with conventional computers, quantum computers can process vast amounts of data in parallel, accelerating machine learning and large-scale data analysis, so AI can be expected to handle translation tasks with even greater accuracy and efficiency.
5. Summary
The foundation for AI translation was laid in the 1950s with rule-based methods, which evolved into statistical translation in the late 1980s and then advanced to neural translation in the 2010s. At present, DeepL, Google, and Microsoft are the leading AI translation service providers, with DeepL in particular offering more natural translations at the paragraph level. With the advent of large language models (LLMs), translations that understand context and nuance have become possible; however, the issue of "hallucination" remains, and human review is still necessary. In the future, real-time capabilities and quantum computing may further improve accuracy.
Our company, Human Science, offers the automatic translation software "MTrans for Office," which incorporates translation engines such as DeepL, Google, Microsoft, and OpenAI. The OpenAI engine not only translates but can also generate, rewrite, and proofread text according to your prompts, improving work efficiency and multilingual support. Try out MTrans for Office with our 14-day free trial, and please contact us for more information.
Features of MTrans for Office
① Unlimited number of file translations and glossary integration for a fixed fee
② One-click translation from Office products!
③ Secure API connection
・For customers who want further security enhancement, we also offer SSO, IP restrictions, and so on.
④ Japanese language support by a Japanese company
・Support for responding to security check sheets
・Payment by bank transfer available
MTrans for Office is an easy-to-use translation software for Office.