Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


    Abstract

    Recent work has shown that neural models can be successfully trained on multiple languages simultaneously. We investigate whether such models learn to share and exploit common syntactic knowledge among the languages on which they are trained. This extended abstract presents our preliminary results.
    Original language: English
    Title of host publication: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
    Place of publication: Brussels, Belgium
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 374-377
    Number of pages: 4
    DOIs
    Publication status: Published - Nov 2018
