AGREE: a new benchmark for the evaluation of distributional semantic models of ancient Greek

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Recent years have seen the application of Natural Language Processing, in particular language models, to the study of the semantics of ancient Greek, but little work has been done to create gold data for evaluating such models. In this contribution we introduce AGREE, the first benchmark for the intrinsic evaluation of semantic models of ancient Greek created from expert judgements. In the absence of native speakers, eliciting expert judgements to create a gold standard is a way to leverage a competence that comes closest to that of native speakers. Moreover, this method allows data to be collected in a uniform way and precise instructions to be given to participants. Human judgements of word relatedness were collected via two questionnaires: in the first, experts provided lemmas related to a set of proposed seeds; in the second, they assigned relatedness judgements to pairs of lemmas. AGREE was built from a selection of the collected data.
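Since the keywords mention word2vec and the abstract describes intrinsic evaluation against human relatedness judgements, the sketch below illustrates how such an evaluation is commonly performed: cosine similarities from a trained model are correlated (Spearman) with the human scores for each lemma pair. This is not the authors' code; the model file, the pair format, and the gensim-based setup are illustrative assumptions.

from gensim.models import KeyedVectors
from scipy.stats import spearmanr

def evaluate_relatedness(vectors, pairs):
    """Correlate model similarities with human relatedness judgements.

    pairs: iterable of (lemma1, lemma2, human_score) tuples.
    Returns Spearman's rho, its p-value, and the number of pairs covered.
    """
    model_scores, human_scores = [], []
    for lemma1, lemma2, gold in pairs:
        # Skip pairs whose lemmas are missing from the model's vocabulary.
        if lemma1 in vectors and lemma2 in vectors:
            model_scores.append(vectors.similarity(lemma1, lemma2))
            human_scores.append(gold)
    rho, p_value = spearmanr(model_scores, human_scores)
    return rho, p_value, len(model_scores)

# Hypothetical usage with a word2vec model trained on an ancient Greek corpus:
# vectors = KeyedVectors.load("ancient_greek_word2vec.kv")
# pairs = [("λόγος", "μῦθος", 3.2), ...]  # lemma pair with a mean human judgement
# rho, p, n = evaluate_relatedness(vectors, pairs)
# print(f"Spearman rho = {rho:.3f} over {n} pairs (p = {p:.3g})")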
Original language: English
Journal: Digital Scholarship in the Humanities
DOIs
Publication status: E-pub ahead of print, 15 Jan 2024

Keywords

  • ancient Greek
  • semantics
  • relatedness
  • benchmark
  • evaluation
  • ancient languages
  • language models
  • human judgements
  • gold standard
  • expert
  • word2vec
