Abstract
We propose a linguistically motivated version of the relative pronoun probing task for Dutch (where a model has to predict whether a masked token is either *die* or *dat*), collect realistic data for it using a parsed corpus, and probe the performance of four context-sensitive BERT-based neural language models. Whereas the original task, which simply masked all occurrences of the words *die* and *dat*, was relatively easy, the linguistically motivated task turns out to be much harder. Models differ considerably in their performance, but a monolingual model trained on a heterogeneous corpus appears to be the most robust.
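The probing task described above amounts to binary classification: for each masked position, the model must choose between *die* and *dat*, and accuracy is measured against the gold pronoun. A minimal sketch of such an evaluation loop, with a hypothetical majority-class baseline standing in for an actual BERT fill-mask call (the example sentences and the `baseline_predict` function are illustrative, not taken from the paper):

```python
from typing import Callable, List, Tuple

def baseline_predict(sentence: str) -> str:
    # Hypothetical stand-in for a BERT fill-mask prediction: always guess
    # "die", the more frequent of the two forms (a majority-class baseline).
    return "die"

def accuracy(examples: List[Tuple[str, str]],
             predict: Callable[[str], str]) -> float:
    # Compare each predicted pronoun with the gold label and return
    # the fraction of correct predictions.
    correct = sum(1 for sentence, gold in examples if predict(sentence) == gold)
    return correct / len(examples)

# Illustrative die/dat examples (common-gender antecedents take "die",
# neuter-gender antecedents take "dat").
examples = [
    ("de man [MASK] daar loopt", "die"),
    ("het huis [MASK] daar staat", "dat"),
    ("de vrouw [MASK] ik ken", "die"),
]

print(accuracy(examples, baseline_predict))  # → 0.6666666666666666
```

In an actual probing setup, `baseline_predict` would be replaced by a call that compares the language model's probabilities for *die* and *dat* at the masked position and returns the higher-scoring form.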
| Original language | English |
|---|---|
| Pages (from-to) | 59–70 |
| Number of pages | 12 |
| Journal | Computational Linguistics in the Netherlands Journal |
| Volume | 11 |
| Publication status | Published - 31-Dec-2021 |
| Event | Computational Linguistics in the Netherlands 31, Ghent, Belgium (virtual), 9-Jul-2021 → 9-Jul-2021, https://www.clin31.ugent.be/ |
Keywords
- PROBING ATTACHMENT LOSS
- language models
- Dutch