Abstract
Cross-lingual transfer learning with large multilingual pre-trained models can be an effective approach for low-resource languages with no labeled training data. Existing evaluations of the zero-shot cross-lingual generalisability of large pre-trained models use datasets with English training data and test data in a selection of target languages. We explore a more extensive transfer learning setup with 65 different source languages and 105 target languages for part-of-speech tagging. Through our analysis, we show that pre-training on both the source and target language, as well as matching language families, writing systems, word order systems, and lexical-phonetic distance, significantly impacts cross-lingual performance. The findings described in this paper can be used as indicators of which factors are important for effective zero-shot cross-lingual transfer to zero- and low-resource languages.
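The zero-shot transfer setup evaluated in the paper can be illustrated with a minimal sketch: fine-tune a multilingual pre-trained model on POS-tagged data in a source language, then apply it directly to a target language without any target-language labels. The sketch below is illustrative only and assumes the Hugging Face transformers library, bert-base-multilingual-cased, a toy tag set, and toy sentences; the paper's actual models, treebanks, and training regime are not reproduced here.

```python
# Minimal sketch of zero-shot cross-lingual POS-tagging transfer (illustrative,
# not the paper's exact setup): fine-tune on a source language, predict on a target.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

POS_TAGS = ["NOUN", "VERB", "DET", "ADJ", "PUNCT"]  # toy tag inventory
tag2id = {t: i for i, t in enumerate(POS_TAGS)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(POS_TAGS)
)

# One toy source-language (English) training example: words and their POS tags.
words = ["The", "cat", "sleeps", "."]
tags = ["DET", "NOUN", "VERB", "PUNCT"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to subword tokens; label only each word's first subword.
labels, prev = [], None
for wid in enc.word_ids():
    if wid is None or wid == prev:
        labels.append(-100)  # ignored by the loss (special tokens, later subwords)
    else:
        labels.append(tag2id[tags[wid]])
    prev = wid
labels = torch.tensor([labels])

# A single fine-tuning step on the source language (a real run iterates over a full treebank).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**enc, labels=labels).loss
loss.backward()
optimizer.step()

# Zero-shot prediction on a target-language sentence (Dutch here, purely illustrative):
# no target-language labels are used at any point.
model.eval()
tgt = tokenizer(["De", "kat", "slaapt", "."], is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    pred_ids = model(**tgt).logits.argmax(-1)[0]
# Printed tags include positions for special tokens and non-initial subwords,
# which a real evaluation pipeline would filter out before scoring.
print([POS_TAGS[i] for i in pred_ids.tolist()])
```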
Original language | English |
---|---|
Title of host publication | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
Editors | Smaranda Muresan, Preslav Nakov, Aline Villavicencio |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 7676-7685 |
Number of pages | 10 |
Publication status | Published - 2022 |
Event | The 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. Duration: 22-May-2022 → 27-May-2022 |
Conference
Conference | The 60th Annual Meeting of the Association for Computational Linguistics |
---|---|
Country/Territory | Ireland |
City | Dublin |
Period | 22/05/2022 → 27/05/2022 |