Abstract
Linguistic phenomena such as pronouns, control constructions, and co-reference give rise to co-indexed variables in meaning representations. We review three methods for dealing with co-indexed variables in the output of neural semantic parsers for Abstract Meaning Representations: (a) copying concepts during training and restoring co-indexation in a post-processing step; (b) explicitly indexing co-indexed concepts; and (c) designating co-indexation with absolute paths. The second method gives the best results, outperforming the baseline by 2.9 F-score points.
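To make the three strategies concrete, the sketch below linearizes a small AMR with one re-entrant variable and implements the post-processing step of method (a) naively. The example sentence, the variable names, the `*1` index notation for (b), and the `{:ARG0}` path notation for (c) are illustrative assumptions, not the notation or code used in the paper.

```python
import re

# AMR for "The boy wants to go." (hypothetical example); the variable b
# is re-entrant: it is both the wanter and the goer.
original = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))"

# (a) variable-free linearization: copy the concept at every mention;
#     co-indexation is restored in a post-processing step (see below).
copied = "(want-01 :ARG0 (boy) :ARG1 (go-01 :ARG0 (boy)))"

# (b) explicit indexing: co-indexed concepts share an index (assumed notation).
indexed = "(want-01 :ARG0 (boy *1) :ARG1 (go-01 :ARG0 *1))"

# (c) absolute path: the re-entrant mention points back to its antecedent
#     via the path from the root (assumed notation).
pathed = "(want-01 :ARG0 (boy) :ARG1 (go-01 :ARG0 {:ARG0}))"

def restore_coindexation(amr: str) -> str:
    """Naive post-processing for method (a): assign a fresh variable to
    each concept and turn repeated concepts into references to the
    variable of their first mention. Works for this toy example only."""
    seen = {}            # concept -> variable of its first mention
    counter = {"n": 0}

    def repl(match):
        concept = match.group(1)
        if concept in seen:            # repeated concept: emit a reference
            return seen[concept]
        var = f"v{counter['n']}"       # fresh variable for a new concept
        counter["n"] += 1
        seen[concept] = var
        return f"({var} / {concept}"

    # "(concept" opens a node; rewrite "(boy)" -> "(v1 / boy)" etc.
    out = re.sub(r"\(([\w-]+)", repl, amr)
    # a bare reference replaced a bracketed copy, so drop its stray ")"
    out = re.sub(r"\b(v\d+)\)", r"\1", out)
    return out

print(restore_coindexation(copied))
# → (v0 / want-01 :ARG0 (v1 / boy) :ARG1 (v2 / go-01 :ARG0 v1))
```

Restoring variables this way merges all identical concepts, which is exactly where method (a) can over- or under-generate co-indexation; methods (b) and (c) instead encode the link in the sequence itself, so the parser must predict it.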
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings of the 2nd Workshop on Semantic Deep Learning (SemDeep-2) |
| Editors | Dagmar Gromann, Thierry Declerck, Georg Heigl |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 49-57 |
| Number of pages | 9 |
| Volume | 2 |
| Publication status | Published - 19-Oct-2017 |
Keywords
- semantic parsing
- abstract meaning representations
- co-reference
- neural networks
- sequence-to-sequence models