Machine Translation (Automatic Translation) Engine Quality Evaluation Case Study

Cross Language Inc.


Objective, third-party evaluation of our in-house machine translation engine helped us understand its quality and consider our future development strategy.


Request from Cross Language Inc.

Cross Language Inc. is a software company that provides translation software and system solutions. They have been developing and selling machine translation software for almost 30 years and are one of the leading machine translation engine manufacturers in Japan. They offer various solutions while incorporating the latest technology.


<Challenges Faced in the Development and Sales of Machine Translation Engines>

・Evaluating quality in-house in parallel with product development takes considerable time and effort.

・To lend credibility to the results, we want objective evaluation from a neutral third party, rather than "according to our own research."

・We also want both the selection of the source texts to be evaluated and the evaluation of the translated output handled objectively by an external vendor, not based on our own criteria.


Previously, Cross Language Inc. had evaluated the quality of its machine translation engines in-house, but doing so in parallel with development demanded considerable effort. There was also concern that results from the company's own research would lack credibility, so, at the client's request, the selection of the source texts to be evaluated was outsourced to an external vendor and the evaluation was conducted as a completely blind test.


As a neutral party that does not develop an engine of its own, Human Science evaluated the translation quality of Cross Language Inc.'s products and compiled the analysis into a quality evaluation report delivered to the client.

 

Human Science Solutions

  • 1. Output translated text for 8 language pairs (Japanese-English, English-Japanese, Japanese-Chinese, Chinese-Japanese, Japanese-Korean, Korean-Japanese, Japanese-Thai, Thai-Japanese), and evaluate quality based on our own standards.

  • 2. Submit a detailed report summarizing the quality evaluation results.

     

    We output translations for the eight language pairs listed above and evaluated the quality of the machine translation results.


    The evaluation was based on our own evaluation criteria, designed around the intended uses of Cross Language Inc.'s engine and the quality aspects its customers emphasize.

    For each language pair, the machine translation output was evaluated by two native translators with expertise in the relevant field.


    Another point we took particular care with was the extraction of the source texts for evaluation.

    Because machine translation engines differ in their strengths by domain and sentence structure, an evaluation based on source texts chosen arbitrarily cannot be considered fully objective.

    Therefore, to ensure fairness, we used a tool we developed to randomly sample source texts from pools of thousands to tens of thousands of sentences, and output translations of the sampled texts.
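The extraction tool itself is not described in this case study; as a rough sketch, random sampling from a large pool of source sentences might look like the following (the function name, parameters, and file format are illustrative assumptions):

```python
import random

def sample_source_sentences(corpus_path: str, sample_size: int, seed: int = 42) -> list[str]:
    """Randomly sample source sentences (one per line) for a blind evaluation set."""
    with open(corpus_path, encoding="utf-8") as f:
        sentences = [line.strip() for line in f if line.strip()]
    # A fixed seed makes the sampled evaluation set reproducible across runs.
    rng = random.Random(seed)
    return rng.sample(sentences, min(sample_size, len(sentences)))
```

Fixing the seed lets the same evaluation set be regenerated later, e.g. when re-running the evaluation after an engine update.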


    The evaluation results were compiled into a report containing detailed assessments from multiple perspectives, such as accuracy, fluency, grammar, and terminology, together with concrete examples of errors and sentences that illustrate the characteristics of each engine.
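The case study does not detail how the per-category scores were combined in the report; one simple way to aggregate the two native evaluators' scores (the category names match the report's perspectives, but the 1–5 scale and function names are assumptions) would be:

```python
from statistics import mean

# Evaluation perspectives named in the report.
CATEGORIES = ("accuracy", "fluency", "grammar", "terminology")

def aggregate_scores(evaluator_a: dict[str, float], evaluator_b: dict[str, float]) -> dict[str, float]:
    """Average the two evaluators' per-category scores for one language pair."""
    return {c: mean((evaluator_a[c], evaluator_b[c])) for c in CATEGORIES}
```

Averaging two independent evaluators is one common way to smooth out individual judgment variance; a real report might also record inter-evaluator disagreement per category.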


     

    Customer Feedback

    This was the first time we requested a quality evaluation from an external vendor. While the results were generally within our expectations, it was extremely helpful to gain an objective understanding of our strengths, areas for improvement, and areas of expertise.

    Evaluation results from a third party carry real weight within the company and should prove useful in shaping our development strategy.


    We aim to build on our strengths and address the weaknesses identified, striving for even higher machine translation accuracy. We are also considering using this report as reference material for our users in the future.


    Rather than dwelling on the results themselves, we intend to use this evaluation as a starting point for improvement and to conduct quality evaluations regularly at each update. Going forward, we also hope to request more detailed evaluations, such as analysis of trends in error counts.

    From Human Science representative


    This is a case in which we, as neutral machine translation consultants who do not develop an engine of our own, helped an engine maker evaluate the quality of its product.


    This was Cross Language Inc.'s first attempt at outsourcing the quality evaluation it had previously conducted in-house. As a project that will inform the company's future development direction, we approached it with a great sense of responsibility.


    Many companies considering the introduction of machine translation use our machine translation quality evaluation service, but there are also cases, such as this one, where an engine manufacturer requests the evaluation. We are very pleased that our quality evaluation results can contribute to the development of future machine translation engines.


    Cross Language Inc. plans to have its products evaluated regularly, including at each update.


    Going forward, as machine translation consultants, we hope to continue supporting the development and sales of Cross Language Inc.'s products through information exchange and quality evaluation from a neutral standpoint.

     

    Related Services

    Localization

    Machine Translation
