Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?

Prajit Dhar, Arianna Bisazza

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-reviewed

    4 Citations (Scopus)
    102 Downloads (Pure)


    Recent work has shown that neural models can be successfully trained on multiple languages simultaneously. We investigate whether such models learn to share and exploit common syntactic knowledge among the languages on which they are trained. This extended abstract presents our preliminary results.
    Original language: English
    Title of host publication: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
    Place of publication: Brussels, Belgium
    Publisher: Association for Computational Linguistics (ACL)
    Number of pages: 4
    Publication status: Published - November 2018