Adapting Monolingual Models: Data can be Scarce when Language Similarity is High

Wietse de Vries*, Martijn Bartelds*, Malvina Nissim, Martijn Wieling

*Corresponding author for this work

Research output: Academic, peer-reviewed


Abstract

    For many (minority) languages, the resources needed to train large models are not available. We investigate the performance of zero-shot transfer learning with as little data as possible, and the influence of language similarity in this process. We retrain the lexical layers of four BERT-based models using data from two low-resource target language varieties, while the Transformer layers are independently fine-tuned on a POS-tagging task in the model's source language. By combining the new lexical layers and fine-tuned Transformer layers, we achieve high task performance for both target languages. With high language similarity, 10MB of data appears sufficient to achieve substantial monolingual transfer performance. Monolingual BERT-based models generally achieve higher downstream task performance after retraining the lexical layer than multilingual BERT, even when the target language is included in the multilingual model.
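The recombination described in the abstract can be illustrated with the Hugging Face Transformers API. The sketch below is not the authors' released implementation; the model names are placeholders, and it only shows the core idea: take the Transformer layers (and POS head) fine-tuned in the source language and swap in a lexical (word embedding) layer retrained on the target language.

```python
from transformers import AutoModel, AutoModelForTokenClassification, AutoTokenizer

# POS tagger: a monolingual BERT model whose Transformer layers and
# classification head were fine-tuned on source-language POS tagging.
# (Placeholder model name.)
pos_tagger = AutoModelForTokenClassification.from_pretrained("source-bert-pos-finetuned")

# Same BERT architecture whose lexical (word embedding) layer was retrained
# on a small amount of target-language text. (Placeholder model name.)
target_lm = AutoModel.from_pretrained("target-bert-relexicalized")
target_tokenizer = AutoTokenizer.from_pretrained("target-bert-relexicalized")

# Swap in the target-language lexical layer; the fine-tuned Transformer
# layers and the POS head remain unchanged.
pos_tagger.base_model.set_input_embeddings(target_lm.get_input_embeddings())

# Zero-shot POS tagging of target-language text with the combined model.
inputs = target_tokenizer("a target-language sentence", return_tensors="pt")
logits = pos_tagger(**inputs).logits
```

Note that the tokenizer must be the one that belongs to the retrained lexical layer, since the new embedding matrix is indexed by the target-language vocabulary.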
Original language: English
Title: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Editors: Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Publisher: Association for Computational Linguistics (ACL)
Pages: 4901–4907
Number of pages: 7
DOIs:
Status: Published - August 2021
