What’s the Meaning of Superhuman Performance in Today’s NLU?

Simone Tedeschi, Johan Bos, Thierry Declerck, Jan Hajic, Daniel Hershcovich, Eduard Hovy, Alexander Koller, Simon Krek, Steven Schockaert, Rico Sennrich, Ekaterina Shutova, Roberto Navigli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

In the last five years, there has been a significant focus in Natural Language Processing (NLP) on developing larger Pretrained Language Models (PLMs) and introducing benchmarks such as SuperGLUE and SQuAD to measure their abilities in language understanding, reasoning, and reading comprehension. These PLMs have achieved impressive results on these benchmarks, even surpassing human performance in some cases. This has led to claims of superhuman capabilities and the provocative idea that certain tasks have been solved. In this position paper, we take a critical look at these claims and ask whether PLMs truly have superhuman abilities and what the current benchmarks are really evaluating. We show that these benchmarks have serious limitations affecting the comparison between humans and PLMs and provide recommendations for fairer and more transparent benchmarks.
Original language: English
Title of host publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Publisher: Association for Computational Linguistics (ACL)
Pages: 12471-12491
Number of pages: 21
DOIs
Publication status: Published - 2023
