Character-level Representations Improve DRS-based Semantic Parsing Even in the Age of BERT

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    9 Citations (Scopus)
    63 Downloads (Pure)

    Abstract

    We combine character-level and contextual language model representations to improve performance on Discourse Representation Structure parsing. Character representations can easily be added to a sequence-to-sequence model, either in one encoder or as a fully separate encoder, with improvements that are robust to different language models, languages and data sets. For English, these improvements are larger than those from adding individual sources of linguistic information or adding non-contextual embeddings. A new method of analysis based on semantic tags demonstrates that the character-level representations improve performance across a set of selected semantic phenomena.
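
    The two-encoder setup mentioned in the abstract lends itself to a short sketch. The following is a minimal PyTorch illustration, not the authors' implementation: a character-level BiLSTM produces one vector per token, which is concatenated with the precomputed contextual (e.g. BERT) token vector and projected back down. CharEncoder, DualEncoderCombiner and all dimensions are hypothetical names chosen for illustration.

    import torch
    import torch.nn as nn

    class CharEncoder(nn.Module):
        """Encodes each token's character sequence into one fixed vector."""
        def __init__(self, n_chars, char_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
            self.lstm = nn.LSTM(char_dim, hidden_dim, batch_first=True,
                                bidirectional=True)

        def forward(self, char_ids):
            # char_ids: (batch, seq_len, max_chars_per_token)
            b, s, c = char_ids.shape
            emb = self.embed(char_ids.view(b * s, c))
            _, (h, _) = self.lstm(emb)           # h: (2, b*s, hidden_dim)
            h = torch.cat([h[0], h[1]], dim=-1)  # join both LSTM directions
            return h.view(b, s, -1)              # (batch, seq_len, 2*hidden_dim)

    class DualEncoderCombiner(nn.Module):
        """Concatenates contextual and character features per token."""
        def __init__(self, n_chars, bert_dim=768, char_hidden=64):
            super().__init__()
            self.char_enc = CharEncoder(n_chars, hidden_dim=char_hidden)
            self.proj = nn.Linear(bert_dim + 2 * char_hidden, bert_dim)

        def forward(self, bert_feats, char_ids):
            # bert_feats: (batch, seq_len, bert_dim), precomputed elsewhere
            char_feats = self.char_enc(char_ids)
            return self.proj(torch.cat([bert_feats, char_feats], dim=-1))

    # Toy usage: 2 sentences, 5 tokens each, up to 10 characters per token.
    model = DualEncoderCombiner(n_chars=100)
    bert_feats = torch.randn(2, 5, 768)
    char_ids = torch.randint(1, 100, (2, 5, 10))
    print(model(bert_feats, char_ids).shape)  # torch.Size([2, 5, 768])

    The combined per-token vectors would then feed the decoder of a sequence-to-sequence DRS parser; the "fully separate encoder" variant described in the abstract could instead keep the character stream as its own encoder that the decoder attends to directly.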
    Original language: English
    Title of host publication: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 4587-4603
    Number of pages: 17
    DOIs
    Publication status: Published - 16-Nov-2020

    Keywords

    • Semantic parsing
    • Discourse Representation Structures
    • Character-level models
