Automated Evaluation of Language Translation
Keywords:
Language Translation, BLEU, NLG

Abstract
Human evaluations of machine translation are thorough but expensive. They can take a long time to finish and require human labour that cannot be reused. We propose a method of automated machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost beyond the initial setup. This reduces the cost, the wasted human labour, and the time needed to evaluate translations, which benefits developers because the evaluation is inexpensive to repeat.
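The abstract does not spell out the scoring formula, but the BLEU keyword suggests an n-gram based metric. The sketch below is a rough, illustrative implementation of a BLEU-style score (modified n-gram precision combined with a brevity penalty); the function names and example sentences are assumptions for illustration, not taken from the paper, and smoothing for short sentences is omitted for brevity.

```python
import math
from collections import Counter

def modified_ngram_precision(candidate, references, n):
    """Clipped n-gram precision: each candidate n-gram count is clipped
    by the maximum count of that n-gram in any single reference."""
    cand_ngrams = Counter(tuple(candidate[i:i + n])
                          for i in range(len(candidate) - n + 1))
    if not cand_ngrams:
        return 0.0
    max_ref_counts = Counter()
    for ref in references:
        ref_ngrams = Counter(tuple(ref[i:i + n])
                             for i in range(len(ref) - n + 1))
        for ngram, count in ref_ngrams.items():
            max_ref_counts[ngram] = max(max_ref_counts[ngram], count)
    clipped = sum(min(count, max_ref_counts[ngram])
                  for ngram, count in cand_ngrams.items())
    return clipped / sum(cand_ngrams.values())

def bleu(candidate, references, max_n=4):
    """BLEU-style sentence score: geometric mean of modified n-gram
    precisions multiplied by a brevity penalty. No smoothing, so any
    zero precision yields a zero score."""
    precisions = [modified_ngram_precision(candidate, references, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalise candidates shorter than the closest reference.
    ref_len = min((len(r) for r in references),
                  key=lambda rl: (abs(rl - len(candidate)), rl))
    bp = 1.0 if len(candidate) >= ref_len else math.exp(1 - ref_len / max(len(candidate), 1))
    return bp * math.exp(log_avg)

# Illustrative example: one candidate translation scored against two
# reference translations, using up to bigrams (standard BLEU uses max_n=4).
candidate = "the cat is on the mat".split()
references = ["there is a cat on the mat".split(),
              "the cat sits on the mat".split()]
print(round(bleu(candidate, references, max_n=2), 4))
```

Because the score relies only on n-gram overlap with reference translations, it needs no language-specific resources, which is what makes this style of metric fast, inexpensive, and language-independent.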
License
Copyright (c) IJSRCSEIT
This work is licensed under a Creative Commons Attribution 4.0 International License.