Dealing with Co-reference in Neural Semantic Parsing

    Research output: Conference contribution in Book/Report/Conference proceeding (Academic, peer-reviewed)


    Abstract

    Linguistic phenomena like pronouns, control constructions, or co-reference give rise to co-indexed variables in meaning representations. We review three different methods for dealing with co-indexed variables in the output of neural semantic parsing of abstract meaning representations: (a) copying concepts during training and restoring co-indexation in a post-processing step; (b) explicit indexing of co-indexation; and (c) using absolute paths to designate co-indexing. The second method gives the best results and outperforms the baseline by 2.9 F-score points.
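To make method (b) concrete, the following is a minimal sketch of how explicit co-indexing can be restored in a linearized meaning representation. The token format (PENMAN-style variable introductions like `(b/boy` and re-mention markers like `*2*`) is illustrative and not necessarily the paper's exact scheme: a re-mentioned variable is emitted as an index token pointing back to the k-th variable introduced, and a post-processing pass replaces the token with that variable.

```python
# Hedged sketch, not the paper's exact scheme: method (b) replaces a
# re-mentioned variable with an explicit index token "*k*", where k is
# the position (1-based) of the variable's first mention. Post-processing
# restores the shared variable name from the index.

def restore_coindexation(tokens):
    """Replace index tokens '*k*' with the k-th introduced variable."""
    variables = []   # variables in order of first mention
    restored = []
    for tok in tokens:
        if tok.startswith("(") and "/" in tok:
            # a token like "(b/boy" introduces variable 'b'
            variables.append(tok[1:].split("/")[0].strip())
            restored.append(tok)
        elif tok.startswith("*") and tok.endswith("*") and len(tok) > 2:
            k = int(tok.strip("*"))          # 1-based index token
            restored.append(variables[k - 1])
        else:
            restored.append(tok)
    return restored

# "The boy wants to go": 'b' (boy) is re-mentioned as the ARG0 of go-01.
tokens = ["(w/want-01", ":ARG0", "(b/boy)", ":ARG1",
          "(g/go-01", ":ARG0", "*2*", ")", ")"]
print(" ".join(restore_coindexation(tokens)))
# → (w/want-01 :ARG0 (b/boy) :ARG1 (g/go-01 :ARG0 b ) )
```

During training, the decoder only has to predict the index token rather than reproduce a variable name, which sidesteps the sparsity of variable symbols in sequence-to-sequence output.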
    Original language: English
    Title of host publication: Proceedings of the 2nd Workshop on Semantic Deep Learning (SemDeep-2)
    Editors: Dagmar Gromann, Thierry Declerck, Georg Heigl
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 49-57
    Number of pages: 9
    Volume: 2
    Publication status: Published - 19 Oct 2017

    Keywords

    • semantic parsing
    • abstract meaning representations
    • co-reference
    • neural networks
    • sequence-to-sequence models
