Resource-Efficient Fine-Tuning Strategies for Automatic MOS Prediction in Text-to-Speech for Low-Resource Languages

Phat Do, Matt Coler*, Jelske Dijkstra, Esther Klabbers

*Corresponding author for this work

Research output: Academic, peer-reviewed


Abstract

We train a MOS prediction model based on wav2vec 2.0 using the open-access datasets BVCC and SOMOS. Our evaluation on neural TTS data in the low-resource language (LRL) West Frisian shows that pre-training on BVCC before fine-tuning on SOMOS yields the best accuracy for both fine-tuned and zero-shot prediction. Further fine-tuning experiments show that using more than 30% of the total data does not lead to significant improvements. In addition, fine-tuning with data from a single listener shows promising system-level accuracy, supporting the viability of one-participant pilot tests. Together, these findings can assist the resource-conscious development of TTS for LRLs by progressing towards better zero-shot MOS prediction and by informing the design of listening tests, especially in early-stage evaluation.
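The abstract describes the general recipe (a wav2vec 2.0 encoder fine-tuned as a MOS regressor, pre-trained on BVCC and then fine-tuned on SOMOS) without code. Below is a minimal PyTorch sketch of such a predictor; the checkpoint name, mean pooling, and L1 loss are illustrative assumptions following common SSL-based MOS-prediction practice, not the authors' released implementation.

```python
# A minimal sketch (assumptions noted above), not the authors' code:
# a wav2vec 2.0 encoder with mean pooling and a linear regression head
# that maps an utterance to a single predicted MOS value.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class MOSPredictor(nn.Module):
    def __init__(self, ssl_name: str = "facebook/wav2vec2-base"):  # checkpoint is an assumption
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(ssl_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, waveforms: torch.Tensor) -> torch.Tensor:
        # waveforms: (batch, samples) of 16 kHz mono audio
        hidden = self.encoder(waveforms).last_hidden_state   # (batch, frames, hidden)
        pooled = hidden.mean(dim=1)                          # utterance-level embedding
        return self.head(pooled).squeeze(-1)                 # predicted MOS per utterance

model = MOSPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.L1Loss()  # L1 loss on MOS labels; a common choice, assumed here

# One training step on placeholder data. The two-stage strategy from the
# abstract amounts to running this loop over BVCC first, then continuing
# with the same weights on SOMOS (and optionally on target-language data).
audio = torch.randn(2, 16000)          # two one-second dummy clips
target = torch.tensor([3.5, 4.0])      # ratings on the 1-5 MOS scale
optimizer.zero_grad()
loss = loss_fn(model(audio), target)
loss.backward()
optimizer.step()
```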

Original language: English
Title: Proceedings of Interspeech 2023
Publisher: ISCA
Pages: 5466-5470
Number of pages: 5
DOIs:
Status: Published - 20 Aug 2023
Event: Interspeech 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023

Conference

Conference: Interspeech 2023
Country/Region: Ireland
City: Dublin
Period: 20/08/2023 - 24/08/2023
