Quantifying Language Variation Acoustically with Few Resources

Martijn Bartelds, Martijn Wieling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Deep acoustic models represent linguistic information based on massive amounts of data. Unfortunately, such resources are mostly unavailable for regional languages and dialects. However, deep acoustic models might have learned linguistic information that transfers to low-resource languages. In this study, we evaluate whether this is the case through the task of distinguishing low-resource (Dutch) regional varieties. By extracting embeddings from the hidden layers of various wav2vec 2.0 models (including new models pre-trained and/or fine-tuned on Dutch) and using dynamic time warping, we compute pairwise pronunciation differences averaged over 10 words for over 100 individual dialects from four (regional) languages. We then cluster the resulting difference matrix into four groups and compare these both to a gold standard and to a partitioning based on comparing phonetic transcriptions. Our results show that acoustic models outperform the (traditional) transcription-based approach without requiring phonetic transcriptions, with the best performance achieved by the multilingual XLSR-53 model fine-tuned on Dutch. On the basis of only six seconds of speech, the resulting clustering closely matches the gold standard.
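To make the pipeline concrete, below is a minimal sketch (not the authors' released code) of the approach the abstract describes: frame-level embeddings are taken from one hidden layer of a wav2vec 2.0 model, recordings of the same word from different dialects are aligned with dynamic time warping, and the resulting pairwise difference matrix is clustered into four groups. The checkpoint name, layer index, and file paths are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the abstract's pipeline: wav2vec 2.0 hidden-layer
# embeddings + dynamic time warping (DTW) + clustering. Checkpoint,
# layer index, and file paths are assumptions for illustration.
import numpy as np
import soundfile as sf
import torch
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

MODEL = "facebook/wav2vec2-large-xlsr-53"  # assumed checkpoint; the paper fine-tunes XLSR-53 on Dutch
LAYER = 10                                 # assumed hidden layer to probe

extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL)
model = Wav2Vec2Model.from_pretrained(MODEL, output_hidden_states=True).eval()

def embed(path):
    """Frame-level embeddings (time x dim) from one hidden layer; audio assumed 16 kHz mono."""
    audio, sr = sf.read(path)
    inputs = extractor(audio, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        # hidden_states is a tuple: input embeddings plus one tensor per layer
        hidden_states = model(**inputs).hidden_states
    return hidden_states[LAYER].squeeze(0).numpy()

def dtw_distance(a, b):
    """Plain DTW over frame-wise Euclidean distances, normalized by the summed lengths."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

# Pairwise pronunciation differences for recordings of the same word from
# several dialects (placeholder paths), then agglomerative clustering into
# four groups; in the paper, differences are averaged over 10 words.
paths = ["dialect_a.wav", "dialect_b.wav", "dialect_c.wav", "dialect_d.wav"]
embs = [embed(p) for p in paths]
k = len(embs)
dist = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        dist[i, j] = dist[j, i] = dtw_distance(embs[i], embs[j])
labels = fcluster(linkage(squareform(dist), method="average"), t=4, criterion="maxclust")
print(labels)  # one group label per dialect recording
```

The length normalization in dtw_distance keeps differences comparable across words of different durations; the paper's exact distance measure and layer choice may differ, so this only illustrates how hidden-layer embeddings and DTW combine into a dialect-distance matrix.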
Original language: English
Title of host publication: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Editors: Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Publisher: Association for Computational Linguistics (ACL)
Pages: 3735-3741
Number of pages: 7
Publication status: Published - July 2022
