Abstract
We compare three approaches to statistical machine translation (pure phrase-based, factored phrase-based and neural) by performing a fine-grained manual evaluation via error annotation of the systems' outputs. The error types in our annotation are compliant with the multidimensional quality metrics (MQM), and the annotation is performed by two annotators. Inter-annotator agreement is high for such a task, and results show that the best performing system (neural) reduces the errors produced by the worst system (phrase-based) by 54%.
| Original language | English |
|---|---|
| Pages (from-to) | 121-132 |
| Number of pages | 12 |
| Journal | The Prague Bulletin of Mathematical Linguistics |
| Volume | 108 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2017 |