Leveraging pre-trained representations to improve access to untranscribed speech from endangered languages

Nay San*, Martijn Bartelds*, Mitchell Browne, Lily Clifford, Fiona Gibson, John Mansfield, David Nash, Jane Simpson, Myfany Turpin, Maria Vollmer, Sasha Wilmoth, Dan Jurafsky

*Corresponding author for this work

    Research output: Working paper, Academic


    Abstract

    Pre-trained speech representations like wav2vec 2.0 are a powerful tool for automatic speech recognition (ASR). Yet many endangered languages lack sufficient data for pre-training such models, or are predominantly oral vernaculars without a standardised writing system, precluding fine-tuning. Query-by-example spoken term detection (QbE-STD) offers an alternative for iteratively indexing untranscribed speech corpora by locating spoken query terms. Using data from 7 Australian Aboriginal languages and a regional variety of Dutch, all of which are endangered or vulnerable, we show that QbE-STD can be improved by leveraging representations developed for ASR (wav2vec 2.0: the English monolingual model and XLSR53 multilingual model). Surprisingly, the English model outperformed the multilingual model on 4 Australian language datasets, raising questions around how to optimally leverage self-supervised speech representations for QbE-STD. Nevertheless, we find that wav2vec 2.0 representations (either English or XLSR53) offer large improvements (56-86% relative) over state-of-the-art approaches on our endangered language datasets.
    Original language: English
    Publisher: IEEE
    DOIs
    Publication status: Published - 3-Feb-2022
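
    To make the approach in the abstract concrete, the sketch below shows one way a QbE-STD pipeline over wav2vec 2.0 representations could look: frame-level features are extracted from a spoken query and an untranscribed utterance, and the utterance is scored by how cheaply the query aligns to its best-matching subsequence under dynamic time warping over cosine distances. This is a minimal illustrative example, not the authors' released code; the checkpoint name, layer index, and scoring function are assumptions.

        # Illustrative QbE-STD sketch (assumed pipeline, not the paper's implementation).
        import numpy as np
        import torch
        from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

        MODEL_ID = "facebook/wav2vec2-large-xlsr-53"  # multilingual XLSR53; an English checkpoint could be swapped in
        extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_ID)
        model = Wav2Vec2Model.from_pretrained(MODEL_ID).eval()

        def wav2vec2_features(waveform: np.ndarray, layer: int = 11) -> np.ndarray:
            """Return (frames, dim) hidden states from one transformer layer (layer choice is illustrative)."""
            inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
            with torch.no_grad():
                out = model(**inputs, output_hidden_states=True)
            return out.hidden_states[layer].squeeze(0).numpy()

        def subsequence_dtw_score(query: np.ndarray, doc: np.ndarray) -> float:
            """Cost of aligning the query to its best-matching region of the document; lower is better."""
            q = query / np.linalg.norm(query, axis=1, keepdims=True)
            d = doc / np.linalg.norm(doc, axis=1, keepdims=True)
            dist = 1.0 - q @ d.T                      # (Q, D) frame-wise cosine distances
            Q, D = dist.shape
            acc = np.full((Q + 1, D + 1), np.inf)
            acc[0, :] = 0.0                           # the query may start anywhere in the document
            for i in range(1, Q + 1):
                for j in range(1, D + 1):
                    acc[i, j] = dist[i - 1, j - 1] + min(
                        acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]
                    )
            return float(acc[Q, 1:].min() / Q)        # best end point, normalised by query length

        # Usage: rank untranscribed utterances by this score to flag likely occurrences of the query term.
        # query_feats = wav2vec2_features(query_audio)
        # doc_feats = wav2vec2_features(utterance_audio)
        # print(subsequence_dtw_score(query_feats, doc_feats))
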
