Same Trends, Different Answers: Insights from a Replication Study of Human Plausibility Judgments on Narrative Continuations

Research output: Conference contribution in conference proceedings (Academic, peer-reviewed)


Abstract

We reproduce the human evaluation of the narrative continuation task presented by Chakrabarty et al. (2022). The experiment was performed as part of the ReproNLP Shared Task on Reproducibility of Evaluations in NLP. Our main goal is to reproduce the original study under conditions as similar as possible to the original. Specifically, we follow the original experimental design and collect human evaluations of the data from the original study, documenting the differences between the two studies. We then present the results of both studies together with an analysis of their similarities. Inter-annotator agreement (Krippendorff's alpha) is lower in the reproduction study than in the original study, but the human evaluation results of both studies show the same trends; that is, our results support the findings of the original study.
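Since the agreement analysis relies on Krippendorff's alpha, the following is a minimal sketch of how such a coefficient can be computed, assuming the third-party `krippendorff` Python package; the rating matrix and the choice of an ordinal measurement level are illustrative assumptions, not the study's actual data or setup.

```python
# Minimal sketch: computing Krippendorff's alpha for plausibility ratings.
# Assumes the third-party `krippendorff` package (pip install krippendorff);
# the ratings below are illustrative, not the data from the study.
import numpy as np
import krippendorff

# Rows = annotators, columns = narrative continuations;
# np.nan marks items an annotator did not rate.
ratings = np.array([
    [1, 2, 3, 3, 2, np.nan],
    [1, 2, 3, 3, np.nan, 2],
    [np.nan, 3, 3, 3, 2, 2],
])

# Plausibility judgments on a rating scale are treated as ordinal here
# (an assumption; nominal or interval may suit other annotation schemes).
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```

Alpha ranges from 1.0 (perfect agreement) down through 0.0 (agreement at chance level), so a lower alpha in the reproduction study indicates noisier annotator behavior even when aggregate trends match.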

Original language: English
Title of host publication: Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems
Editors: Anya Belz, Maja Popovic, Ehud Reiter, Craig Thomson, Joao Sedoc
Publisher: INCOMA Ltd.
Pages: 190-203
Number of pages: 14
ISBN (Electronic): 9789544520885
Publication status: Published - 2023
Event: 3rd Workshop on Human Evaluation of NLP Systems, HumEval 2023 - Varna, Bulgaria
Duration: 7 Sept 2023 → …

Conference

Conference: 3rd Workshop on Human Evaluation of NLP Systems, HumEval 2023
Country/Territory: Bulgaria
City: Varna
Period: 07/09/2023 → …
