Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation

Research output: Academic, peer-reviewed


Abstract

Pre-trained language models (PLMs) have achieved great success in NLP and have recently been used for tasks in computational semantics. However, these tasks do not fully benefit from PLMs since meaning representations are not explicitly included. We introduce multilingual pre-trained language-meaning models based on Discourse Representation Structures (DRSs), which include meaning representations alongside natural language texts in the same model, and design a new strategy to reduce the gap between the pre-training and fine-tuning objectives. Since DRSs are language neutral, cross-lingual transfer learning is adopted to further improve the performance of non-English tasks. Automatic evaluation results show that our approach achieves the best performance on both the multilingual DRS parsing and DRS-to-text generation tasks. Correlation analysis between automatic metrics and human judgements on the generation task further validates the effectiveness of our model. Human inspection reveals that out-of-vocabulary tokens are the main cause of erroneous results.
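The abstract frames DRS parsing (text to meaning) and DRS-to-text generation (meaning to text) as two directions handled by one multilingual sequence-to-sequence model. As a rough illustration only, and not the authors' released code, the sketch below shows how such bidirectional fine-tuned inference could be set up with a generic multilingual seq2seq checkpoint; the checkpoint name, task prefixes, and DRS serialization are assumptions.

```python
# Illustrative sketch: DRS parsing and DRS-to-text generation as two directions
# of a single multilingual seq2seq model. The checkpoint, task prefixes, and
# DRS clause serialization are assumptions, not the paper's released artifacts.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "google/mt5-small"  # stand-in for a language-meaning pre-trained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def run(task_prefix: str, source: str) -> str:
    """Encode the source with a task prefix and decode the model's prediction."""
    inputs = tokenizer(task_prefix + source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]

# DRS parsing: natural-language text in, serialized DRS clauses out.
drs = run("parse: ", "Tom is not afraid of dogs.")

# DRS-to-text generation: serialized DRS clauses in, text out.
text = run("generate: ", drs)
```

Because DRSs are language neutral, the same meaning-side representation can in principle be paired with source or target sentences in any of the covered languages, which is what makes the cross-lingual transfer described in the abstract possible.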
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle: ACL 2023
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Publisher: Association for Computational Linguistics (ACL)
Pages: 5586–5600
Number of pages: 15
DOIs:
Status: Published - 2023
