Human Perception in Natural Language Generation

Lorenzo de Mattei, Huiyuan Lai, Felice Dell'Orletta, Malvina Nissim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

4 Citations (Scopus)
168 Downloads (Pure)

Abstract

We ask subjects whether they perceive a set of texts as human-produced; some of the texts are actually human-written, while others are automatically generated. We use this data to fine-tune a GPT-2 model to push it to generate more human-like texts, and observe that the fine-tuned model produces texts that are indeed perceived as more human-like than those of the original model. At the same time, we show that our automatic evaluation strategy correlates well with human judgements. We also run a linguistic analysis to unveil the characteristics of human- vs machine-perceived language.
Original language: English
Title of host publication: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics
Editors: Antoine Bosselut, Esin Durmus, Varun Prashant Gangal, Sebastian Gehrmann, Yacine Jernite, Laura Perez-Beltrachini, Samira Shaikh, Wei Xu
Place of publication: Bangkok, Thailand
Publisher: Association for Computational Linguistics, ACL Anthology
Pages: 15-23
Number of pages: 9
DOIs
Publication status: Published - 2021
