What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    44 Citations (Scopus)
    235 Downloads (Pure)

    Abstract

    Peeking into the inner workings of BERT has shown that its layers resemble the classical NLP pipeline, with progressively more complex tasks concentrated in later layers. To investigate to what extent these results also hold for a language other than English, we probe a Dutch BERT-based model and the multilingual BERT model on Dutch NLP tasks. In addition, through a deeper analysis of part-of-speech tagging, we show that, even within a given task, information is spread over different parts of the network and the pipeline might not be as neat as it seems. Each layer has different specialisations, so it may be more useful to combine information from several layers than to select a single one on the basis of the best overall performance.
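
    The layer-wise probing setup described in the abstract, training a simple classifier on the frozen representations produced by each BERT layer, can be sketched as follows. This is a minimal illustration, not the authors' code: the checkpoint name GroNLP/bert-base-dutch-cased, the toy Dutch sentence with its POS tags, and the logistic-regression probe are assumptions made for the example.

    ```python
    # Minimal sketch of layer-wise probing (illustrative; not the paper's code).
    # Assumptions: the "GroNLP/bert-base-dutch-cased" checkpoint, a toy POS-tagged
    # sentence, and a logistic-regression probe fitted per layer.
    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.linear_model import LogisticRegression

    model_name = "GroNLP/bert-base-dutch-cased"  # assumed Dutch BERT checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
    model.eval()

    # Toy data: one Dutch sentence with word-level POS tags (illustrative only).
    words = ["De", "kat", "zit", "op", "de", "mat"]
    tags = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]

    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        # Tuple of tensors: embedding output plus one hidden state per layer.
        hidden_states = model(**enc).hidden_states

    # Map each word to the position of its first subword piece.
    word_ids = enc.word_ids()
    first_piece = [word_ids.index(i) for i in range(len(words))]

    for layer, states in enumerate(hidden_states):
        X = states[0, first_piece].numpy()            # one vector per word at this layer
        probe = LogisticRegression(max_iter=1000).fit(X, tags)
        acc = probe.score(X, tags)                    # training accuracy on the toy data
        print(f"layer {layer:2d}: probe accuracy {acc:.2f}")
    ```

    In a real probing experiment the probe would be trained and evaluated on a proper POS-annotated corpus with held-out data; comparing the per-layer scores is what reveals where in the network the task-relevant information is concentrated.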
    Original language: English
    Title of host publication: Findings of the Association for Computational Linguistics
    Subtitle of host publication: EMNLP 2020
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 4339-4350
    Number of pages: 12
    Publication status: Published - 2020
    Event: Conference on Empirical Methods in Natural Language Processing - Online
    Duration: 7-Nov-2020 → …

    Conference

    Conference: Conference on Empirical Methods in Natural Language Processing
    Abbreviated title: EMNLP
    Period: 07/11/2020 → …
