Adapting Monolingual Models: Data can be Scarce when Language Similarity is High

Wietse de Vries*, Martijn Bartelds*, Malvina Nissim, Martijn Wieling

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

For many (minority) languages, the resources needed to train large models are not available. We investigate the performance of zero-shot transfer learning with as little data as possible, and the influence of language similarity in this process. We retrain the lexical layers of four BERT-based models using data from two low-resource target language varieties, while the Transformer layers are independently fine-tuned on a POS-tagging task in the model's source language. By combining the new lexical layers and fine-tuned Transformer layers, we achieve high task performance for both target languages. With high language similarity, 10MB of data appears sufficient to achieve substantial monolingual transfer performance. Monolingual BERT-based models generally achieve higher downstream task performance after retraining the lexical layer than multilingual BERT, even when the target language is included in the multilingual model.
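
The approach can be sketched with the Hugging Face transformers library in Python; this is a minimal illustration, not the authors' released code. The checkpoint names are placeholders: "source-bert-pos" stands for a monolingual source-language BERT fine-tuned for POS tagging, and "target-bert-lexical" for the same architecture whose lexical layer (token embeddings) has been retrained on a small amount of target-language text.

# A minimal sketch (assumed checkpoint names): combine a retrained
# target-language lexical layer with Transformer layers fine-tuned on
# source-language POS tagging.
import torch
from transformers import AutoModel, AutoModelForTokenClassification, AutoTokenizer

# Placeholder checkpoints, not the paper's identifiers.
pos_model = AutoModelForTokenClassification.from_pretrained("source-bert-pos")
lexical_model = AutoModel.from_pretrained("target-bert-lexical")
target_tokenizer = AutoTokenizer.from_pretrained("target-bert-lexical")

# Swap in the retrained token embeddings; the fine-tuned Transformer
# layers and the POS classification head stay untouched.
pos_model.set_input_embeddings(lexical_model.get_input_embeddings())

# Zero-shot POS tagging of a target-language sentence with the combined model.
inputs = target_tokenizer("an example sentence in the target variety", return_tensors="pt")
with torch.no_grad():
    logits = pos_model(**inputs).logits
predicted_tag_ids = logits.argmax(dim=-1)

Because only the lexical layer is replaced, the two parts can be trained independently, which is what keeps the target-language data requirement small.
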
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Editors: Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Publisher: Association for Computational Linguistics (ACL)
Pages: 4901–4907
Number of pages: 7
DOIs
Publication status: Published - Aug 2021

Keywords

  • cs.CL
