TY - GEN
T1 - Multilingual Multi-Figurative Language Detection
AU - Lai, Huiyuan
AU - Toral, Antonio
AU - Nissim, Malvina
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - Figures of speech help people express abstract concepts and evoke stronger emotions than literal expressions, thereby making texts more creative and engaging. Due to its pervasive and fundamental character, figurative language understanding has been addressed in Natural Language Processing, but it is highly understudied in a multilingual setting and when considering more than one figure of speech at the same time. To bridge this gap, we introduce multilingual multi-figurative language modelling, and provide a benchmark for sentence-level figurative language detection, covering three common figures of speech and seven languages. Specifically, we develop a framework for figurative language detection based on template-based prompt learning. In so doing, we unify multiple detection tasks that are interrelated across multiple figures of speech and languages, without requiring task- or language-specific modules. Experimental results show that our framework outperforms several strong baselines and may serve as a blueprint for the joint modelling of other interrelated tasks.
AB - Figures of speech help people express abstract concepts and evoke stronger emotions than literal expressions, thereby making texts more creative and engaging. Due to its pervasive and fundamental character, figurative language understanding has been addressed in Natural Language Processing, but it is highly understudied in a multilingual setting and when considering more than one figure of speech at the same time. To bridge this gap, we introduce multilingual multi-figurative language modelling, and provide a benchmark for sentence-level figurative language detection, covering three common figures of speech and seven languages. Specifically, we develop a framework for figurative language detection based on template-based prompt learning. In so doing, we unify multiple detection tasks that are interrelated across multiple figures of speech and languages, without requiring task- or language-specific modules. Experimental results show that our framework outperforms several strong baselines and may serve as a blueprint for the joint modelling of other interrelated tasks.
UR - http://www.scopus.com/inward/record.url?scp=85175460919&partnerID=8YFLogxK
U2 - 10.18653/v1/2023.findings-acl.589
DO - 10.18653/v1/2023.findings-acl.589
M3 - Conference contribution
AN - SCOPUS:85175460919
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 9254
EP - 9267
BT - Findings of the Association for Computational Linguistics, ACL 2023
A2 - Rogers, Anna
A2 - Boyd-Graber, Jordan
A2 - Okazaki, Naoaki
PB - Association for Computational Linguistics, ACL Anthology
T2 - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Y2 - 9 July 2023 through 14 July 2023
ER -