Abstract
The use of automatic methods for the study of lexical semantic change (LSC) has led to the creation of evaluation benchmarks. Benchmark datasets, however, are intimately tied to the corpus used for their creation, which calls into question their reliability as well as the robustness of automatic methods. This contribution investigates these aspects, showing the impact of unforeseen social and cultural dimensions. We also identify a set of additional issues (OCR quality, named entities) that affect the performance of automatic methods, especially when they are used to discover LSC.
Original language | English |
---|---|
Title of host publication | Proceedings of the 2nd International Workshop on Computational Approaches to Historical Language Change |
Editors | Nina Tahmasebi, Adam Jatowt, Yang Xu, Simon Hengchen, Syrielle Montariol, Haim Dubossarsky |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 14-20 |
Number of pages | 7 |
Publication status | Published - 27-Jul-2021 |