Abstract
Pre-trained speech representations like wav2vec 2.0 are a powerful tool for automatic speech recognition (ASR). Yet many endangered languages lack sufficient data for pre-training such models, or are predominantly oral vernaculars without a standardised writing system, precluding fine-tuning. Query-by-example spoken term detection (QbE-STD) offers an alternative for iteratively indexing untranscribed speech corpora by locating spoken query terms. Using data from 7 Australian Aboriginal languages and a regional variety of Dutch, all of which are endangered or vulnerable, we show that QbE-STD can be improved by leveraging representations developed for ASR (wav2vec 2.0: the monolingual English model and the multilingual XLSR53 model). Surprisingly, the English model outperformed the multilingual model on 4 Australian language datasets, raising questions about how to optimally leverage self-supervised speech representations for QbE-STD. Nevertheless, we find that wav2vec 2.0 representations (either English or XLSR53) offer large improvements (56-86% relative) over state-of-the-art approaches on our endangered language datasets.
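For readers unfamiliar with QbE-STD, the sketch below shows one common way to apply wav2vec 2.0 representations to the task: extract frame-level hidden states for a spoken query and for a longer utterance, then score the match with subsequence dynamic time warping. The checkpoint name and the DTW scoring here are illustrative assumptions, not a reproduction of the authors' pipeline; see https://github.com/fauxneticien/qbe-std_feats_eval for the actual feature-extraction and evaluation code.

```python
# Minimal QbE-STD sketch: score how well a spoken query matches a longer
# utterance using wav2vec 2.0 frame-level features and subsequence DTW.
# The model name and DTW-based scoring are assumptions for illustration,
# not the paper's exact pipeline.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

MODEL = "facebook/wav2vec2-large-xlsr-53"  # or an English monolingual checkpoint

extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL)
model = Wav2Vec2Model.from_pretrained(MODEL).eval()

def features(path: str) -> torch.Tensor:
    """Return a (frames, dim) matrix of wav2vec 2.0 hidden states for a wav file."""
    wav, sr = torchaudio.load(path)
    wav = torchaudio.functional.resample(wav, sr, 16_000).mean(dim=0)  # 16 kHz mono
    inputs = extractor(wav.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        return model(inputs.input_values).last_hidden_state.squeeze(0)

def dtw_score(query: torch.Tensor, doc: torch.Tensor) -> float:
    """Subsequence DTW over cosine distances; lower score = better match."""
    q = torch.nn.functional.normalize(query, dim=1)
    d = torch.nn.functional.normalize(doc, dim=1)
    dist = 1.0 - q @ d.T                       # (Q, D) cosine distance matrix
    Q, D = dist.shape
    acc = torch.full((Q + 1, D + 1), float("inf"))
    acc[0, :] = 0.0                            # query may start anywhere in doc
    # O(Q*D) reference implementation of the accumulated-cost recursion
    for i in range(1, Q + 1):
        for j in range(1, D + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j - 1],
                                                 acc[i - 1, j],
                                                 acc[i, j - 1])
    return (acc[Q, 1:].min() / Q).item()       # best end point, length-normalised

# Usage: rank utterances in a collection by how likely they contain the query.
# scores = {utt: dtw_score(features("query.wav"), features(utt)) for utt in corpus}
```

In practice these scores are computed for every query-utterance pair and thresholded or ranked, letting speakers iteratively index an untranscribed corpus without any transcription or fine-tuning step.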
| Original language | English |
|---|---|
| Publisher | IEEE |
| DOIs | |
| Publication status | Published - 3-Feb-2022 |
Datasets
- Dataset: Gronings
  Bartelds, M. (Contributor) & San, N. (Contributor), Zenodo, 23-Mar-2021
  DOI: 10.5281/zenodo.4634878, https://github.com/fauxneticien/qbe-std_feats_eval