Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties

Alessio Miaschi*, Gabriele Sarti, Dominique Brunato, Felice Dell’Orletta, Giulia Venturi

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review



In this paper, we present an in-depth investigation of the linguistic knowledge encoded by the transformer models currently available for the Italian language. In particular, we investigate how the complexity of two different architectures of probing models affects the performance of the Transformers in encoding a wide spectrum of linguistic features. Moreover, we explore how this implicit knowledge varies according to different textual genres and language varieties.
Original language: English
Article number: 2
Pages (from-to): 25-44
Number of pages: 20
Journal: Italian Journal of Computational Linguistics
Issue number: 1
Publication status: Published - Jul-2022


  • Italian language
  • natural language processing
  • syntax
  • neural language models
  • language modeling
  • interpretability
  • interpretable machine learning
