Abstract
For many (minority) languages, the resources needed to train large models are not available. We investigate the performance of zero-shot transfer learning with as little data as possible, and the influence of language similarity in this process. We retrain the lexical layers of four BERT-based models using data from two low-resource target language varieties, while the Transformer layers are independently fine-tuned on a POS-tagging task in the model's source language. By combining the new lexical layers and fine-tuned Transformer layers, we achieve high task performance for both target languages. With high language similarity, 10MB of data appears sufficient to achieve substantial monolingual transfer performance. Monolingual BERT-based models generally achieve higher downstream task performance after retraining the lexical layer than multilingual BERT, even when the target language is included in the multilingual model.
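The recombination step described in the abstract (a retrained target-language lexical layer plugged into Transformer layers fine-tuned on the source language) can be illustrated with a minimal sketch using the Hugging Face `transformers` library. The checkpoint paths below are hypothetical placeholders, not artifacts released with the paper, and the sketch only swaps the word-embedding layer; it is an illustration of the idea, not the authors' exact pipeline.

```python
# Minimal sketch: combine a target-language lexical layer with Transformer
# layers fine-tuned on a source-language POS-tagging task.
# Checkpoint names are hypothetical placeholders.
from transformers import AutoTokenizer, BertForMaskedLM, BertForTokenClassification

LEXICAL_CKPT = "path/to/bert-lexical-retrained-target"  # embeddings retrained on target-language data
TASK_CKPT = "path/to/bert-pos-finetuned-source"         # Transformer layers fine-tuned on source-language POS tagging

# Tokenizer and input embeddings come from the target-language lexical retraining.
target_tokenizer = AutoTokenizer.from_pretrained(LEXICAL_CKPT)
lexical_model = BertForMaskedLM.from_pretrained(LEXICAL_CKPT)

# Transformer layers and the POS classification head come from the
# source-language fine-tuning run.
pos_model = BertForTokenClassification.from_pretrained(TASK_CKPT)

# Swap in the retrained target-language embeddings and update the vocab size.
new_embeddings = lexical_model.get_input_embeddings()
pos_model.set_input_embeddings(new_embeddings)
pos_model.config.vocab_size = new_embeddings.num_embeddings

# The combined model can now be applied to target-language text zero-shot.
inputs = target_tokenizer("an example sentence in the target language",
                          return_tensors="pt")
logits = pos_model(**inputs).logits
```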
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 |
| Editors | Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 4901–4907 |
| Number of pages | 7 |
| DOIs | |
| Publication status | Published - Aug-2021 |
Keywords
- cs.CL