Evaluating Pre-training Objectives for Low-Resource Translation into Morphologically Rich Languages

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



The scarcity of parallel data is a major limitation for Neural Machine Translation (NMT) systems, in particular for translation into morphologically rich languages (MRLs). An important way to overcome the lack of parallel data is to leverage target monolingual data, which is typically more abundant and easier to collect. We evaluate a number of techniques to achieve this, ranging from back-translation to random token masking, on the challenging task of translating English into four typologically diverse MRLs under low-resource settings. Additionally, we introduce Inflection Pre-Training (or PT-Inflect), a novel pre-training objective whereby the NMT system is pre-trained on the task of re-inflecting lemmatized target sentences before being trained on standard source-to-target translation. We find that PT-Inflect surpasses NMT systems trained only on parallel data. While PT-Inflect is outperformed by back-translation overall, combining the two techniques leads to gains in some of the evaluated language pairs.
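To make the PT-Inflect objective concrete, here is a minimal sketch in Python of how re-inflection pre-training pairs could be built from target-language monolingual data: the source side is the lemmatized sentence, the target side is the original inflected sentence. The spaCy pipeline (Spanish, es_core_news_sm) is an illustrative assumption; the paper's actual lemmatization toolchain and target languages are not specified in this record, so treat those names as placeholders.

```python
# Sketch: constructing PT-Inflect pre-training pairs from target monolingual
# data. Source side = lemmatized sentence, target side = original inflected
# sentence; an NMT model pre-trained on these pairs learns to re-inflect.
# The spaCy Spanish pipeline below is an illustrative assumption, not the
# toolchain used in the paper.
import spacy

nlp = spacy.load("es_core_news_sm")  # placeholder lemmatizer for one MRL

def make_inflection_pair(sentence: str) -> tuple[str, str]:
    """Return a (lemmatized source, inflected target) training pair."""
    doc = nlp(sentence)
    lemmatized = " ".join(tok.lemma_ for tok in doc)
    return lemmatized, sentence

def build_pretraining_corpus(monolingual_sentences):
    """Yield one re-inflection pair per target-language sentence."""
    for sent in monolingual_sentences:
        yield make_inflection_pair(sent)

if __name__ == "__main__":
    src, tgt = make_inflection_pair("Las casas blancas fueron vendidas ayer.")
    print(src)  # lemma sequence, e.g. "el casa blanco ser vender ayer ."
    print(tgt)  # original inflected sentence
```

Under this scheme, the encoder-decoder pre-trained on such pairs would subsequently be fine-tuned on the available English-to-MRL parallel data, as the abstract describes.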
Original language: English
Title of host publication: Proceedings of the 13th Language Resources and Evaluation Conference
Editors: Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Publisher: European Language Resources Association (ELRA)
Number of pages: 11
Publication status: Published - Jun-2022
Event: The 13th Conference on Language Resources and Evaluation - Palais du Pharo, Marseille, France
Duration: 20-Jun-2022 – 25-Jun-2022


Conference: The 13th Conference on Language Resources and Evaluation
Abbreviated title: LREC 2022
