Abstract
We reproduce the human evaluation of the continuation-of-narratives task presented by Chakrabarty et al. (2022). This experiment was performed as part of the ReproNLP Shared Task on Reproducibility of Evaluations in NLP. Our main goal is to reproduce the original study under conditions that are as similar as possible. Specifically, we follow the original experimental design and perform human evaluations on the data from the original study, while describing the differences between the two studies. We then present the results of the two studies together with an analysis of the similarities between them. Inter-annotator agreement (Krippendorff's alpha) is lower in the reproduction study than in the original study, but the human evaluation results of both studies show the same trends; that is, our results support the findings of the original study.
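As a rough illustration of the agreement statistic mentioned above, the sketch below computes Krippendorff's alpha for a small, made-up matrix of ordinal ratings using the open-source `krippendorff` Python package; the rating data and the choice of measurement level are illustrative assumptions, not values from either study. A value of 1.0 indicates perfect agreement, while values near 0 indicate agreement at chance level.

```python
# A minimal sketch of computing inter-annotator agreement with
# Krippendorff's alpha. The ratings below are made-up illustration
# data, not values from the original or the reproduction study.
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = annotators, columns = evaluated items; np.nan marks
# items an annotator did not rate.
ratings = np.array([
    [1,      2, 3, 3, 2, np.nan],
    [1,      2, 3, 3, 2, 2],
    [np.nan, 3, 3, 3, 2, 2],
])

# Likert-style scores are ordinal, so we use the ordinal metric
# (an assumption about the rating scale, made for illustration).
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```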
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems |
| Editors | Anya Belz, Maja Popović, Ehud Reiter, Craig Thomson, João Sedoc |
| Publisher | INCOMA Ltd. |
| Pages | 190-203 |
| Number of pages | 14 |
| ISBN (Electronic) | 9789544520885 |
| Publication status | Published - 2023 |
| Event | 3rd Workshop on Human Evaluation of NLP Systems, HumEval 2023 - Varna, Bulgaria. Duration: 7-Sept-2023 → … |
Conference

| Conference | 3rd Workshop on Human Evaluation of NLP Systems, HumEval 2023 |
|---|---|
| Country/Territory | Bulgaria |
| City | Varna |
| Period | 07/09/2023 → … |