Endowing Neural Language Learners with Human-like Biases: A Case Study on Dependency Length Minimization

Research output: Academic, peer-reviewed

1 Citation (Scopus)
76 Downloads (Pure)

Abstract

Natural languages show a tendency to minimize the linear distance between heads and their dependents in a sentence, known as dependency length minimization (DLM). Such a preference, however, has not been consistently replicated with neural agent simulations. Comparing the behavior of models with that of human learners can reveal which aspects affect the emergence of this phenomenon. In this work, we investigate the minimal conditions that may lead neural learners to develop a DLM preference. We add three factors to the standard neural-agent language learning and communication framework to make the simulation more realistic, namely: (i) the presence of noise during listening, (ii) context-sensitivity of word use through non-uniform conditional word distributions, and (iii) incremental sentence processing, or the extent to which an utterance's meaning can be guessed before hearing it entirely. While no preference appears in production, we show that the proposed factors can contribute to a small but significant learning advantage of DLM for listeners of verb-initial languages.
Original language: English
Title: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Publisher: ELRA and ICCL
Pages: 5819-5832
Number of pages: 14
Status: Published - May 2024
