Same Trends, Different Answers: Insights from a Replication Study of Human Plausibility Judgments on Narrative Continuations

Research output: Academic, peer reviewed

4 Citations (Scopus)
9 Downloads (Pure)

Abstract

We reproduce the human evaluation of the narrative continuation task presented by Chakrabarty et al. (2022). This experiment was performed as part of the ReproNLP Shared Task on Reproducibility of Evaluations in NLP. Our main goal is to reproduce the original study under conditions as similar as possible. Specifically, we follow the original experimental design and perform human evaluations of the data from the original study, while documenting the differences between the two studies. We then present the results of both studies, together with an analysis of their similarities. Inter-annotator agreement (Krippendorff's alpha) is lower in the reproduction study than in the original, but the human evaluation results of both studies show the same trends; that is, our results support the findings of the original study.

Original language: English
Title: Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems
Editors: Anya Belz, Maja Popovic, Ehud Reiter, Craig Thomson, Joao Sedoc
Publisher: INCOMA Ltd.
Pages: 190-203
Number of pages: 14
Electronic ISBN: 9789544520885
Status: Published - 2023
Event: 3rd Workshop on Human Evaluation of NLP Systems, HumEval 2023 - Varna, Bulgaria
Duration: 7 Sep 2023 → …

Conference

Conference: 3rd Workshop on Human Evaluation of NLP Systems, HumEval 2023
Country/Region: Bulgaria
City: Varna
Period: 07/09/2023 → …

