ReproHum #0892-01: The painful route to consistent results: A reproduction study of human evaluation in NLG

Irene Mondella*, Huiyuan Lai, Malvina Nissim

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

In spite of the core role human judgement plays in evaluating the performance of NLP systems, the way human assessments are elicited in NLP experiments, and to some extent the nature of human judgement itself, pose challenges to the reliability and validity of human evaluation. In the context of the larger ReproHum project, aimed at running large-scale multi-lab reproductions of human judgement, we replicated the human understandability assessment of several simplified-text outputs generated in the study described in the paper “Neural Text Simplification of Clinical Letters with a Domain Specific Phrase Table” by Shardlow and Nawaz, which appeared in the Proceedings of ACL 2019. Although we had to make a series of modifications to the original setup, which were necessary to run our human evaluation on exactly the same data, we were able to collect assessments and compare our results with those of the original study. We obtained results consistent with those of the reference study, confirming their findings. The paper includes as much information as possible to foster and facilitate future reproductions.

Original language: English
Title of host publication: 4th Workshop on Human Evaluation of NLP Systems, HumEval 2024 at LREC-COLING 2024 - Workshop Proceedings
Editors: Simone Balloccu, Anya Belz, Rudali Huidrom, Ehud Reiter, Joao Sedoc, Craig Thomson
Publisher: European Language Resources Association (ELRA)
Pages: 261-268
Number of pages: 8
ISBN (Electronic): 978-249381441-8
Publication status: Published - 2024
Event: 4th Workshop on Human Evaluation of NLP Systems, HumEval 2024 - Torino, Italy
Duration: 21-May-2024 – 21-May-2024

Publication series

Name: 4th Workshop on Human Evaluation of NLP Systems, HumEval 2024 at LREC-COLING 2024 - Workshop Proceedings

Conference

Conference: 4th Workshop on Human Evaluation of NLP Systems, HumEval 2024
Country/Territory: Italy
City: Torino
Period: 21/05/2024 – 21/05/2024

Keywords

  • human evaluation
  • reproducibility
  • ReproHum
