Universal Discourse Representation Structure Parsing

J. Liu, S.B. Cohen, M. Lapata, Johan Bos

    Research output: Contribution to journal › Article › Academic › peer review


    Abstract

    We consider the task of cross-lingual semantic parsing in the style of Discourse Representation Theory (DRT), where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and use it to obtain semantic resources in multiple languages following two learning schemes. The many-to-one approach translates non-English text into English and then runs a relatively accurate English parser on the translated text, while the one-to-many approach translates gold-standard English into non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.
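
The following is a minimal sketch, not the authors' code, illustrating the two learning schemes described in the abstract. The callables `translate_to_en`, `translate_from_en`, `english_parser`, and `train_parser` are hypothetical placeholders standing in for a machine-translation system, the English DRT parser, and a Transformer-based training routine.

```python
# Hedged sketch of the two cross-lingual learning schemes (assumed interfaces,
# not the implementation described in the paper).
from typing import Callable, Dict, List, Tuple


def many_to_one(
    texts_by_lang: Dict[str, List[str]],
    translate_to_en: Callable[[str, str], str],   # hypothetical MT into English
    english_parser: Callable[[str], str],         # hypothetical English DRT parser
) -> Dict[str, List[Tuple[str, str]]]:
    """Translate non-English text into English, run the English parser on the
    translations, and pair the resulting meaning representations with the
    original sentences (silver annotations)."""
    silver: Dict[str, List[Tuple[str, str]]] = {}
    for lang, sentences in texts_by_lang.items():
        pairs = []
        for sent in sentences:
            en = translate_to_en(sent, lang)      # translate source sentence
            drs = english_parser(en)              # parse the English translation
            pairs.append((sent, drs))             # keep original text + meaning
        silver[lang] = pairs
    return silver


def one_to_many(
    english_gold: List[Tuple[str, str]],          # (English text, gold meaning)
    target_langs: List[str],
    translate_from_en: Callable[[str, str], str], # hypothetical MT out of English
    train_parser: Callable[[List[Tuple[str, str]]], Callable[[str], str]],
) -> Dict[str, Callable[[str], str]]:
    """Translate the gold-standard English text into each target language and
    train one parser per language on the (translation, gold meaning) pairs."""
    parsers: Dict[str, Callable[[str], str]] = {}
    for lang in target_langs:
        projected = [(translate_from_en(text, lang), drs) for text, drs in english_gold]
        parsers[lang] = train_parser(projected)
    return parsers
```

In this reading, many-to-one needs only a single (English) parser at test time, whereas one-to-many produces a dedicated parser per target language trained on projected data.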
    Original language: English
    Pages (from-to): 445–476
    Number of pages: 32
    Journal: Computational Linguistics
    Volume: 47
    Issue number: 2
    DOIs
    Publication status: Published - 2021
