Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Although text style transfer has witnessed rapid development in recent years, there is as yet no established standard for evaluation, which is performed using several automatic metrics, since resorting to human judgement is not always possible. We focus on the task of formality transfer, and on the three aspects that are usually evaluated: style strength, content preservation, and fluency. To cast light on how such aspects are assessed by common and new metrics, we run a human-based evaluation and perform a rich correlation analysis. We are then able to offer some recommendations on the use of such metrics in formality transfer, also with an eye to their generalisability (or not) to related tasks.
Original language: English
Title of host publication: Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)
Editors: Anya Belz, Maja Popović, Ehud Reiter, Anastasia Shimorina
Publisher: Association for Computational Linguistics (ACL)
Pages: 102-115
Number of pages: 14
DOIs
Publication status: Published - 27-May-2022
Event: 2nd Workshop on Human Evaluation of NLP Systems (HumEval) - Dublin, Ireland
Duration: 27-May-2022 to 27-May-2022

Workshop

Workshop: 2nd Workshop on Human Evaluation of NLP Systems (HumEval)
Abbreviated title: HumEval
Country/Territory: Ireland
City: Dublin
Period: 27/05/2022 to 27/05/2022
