Comparison of Speech Representations for Automatic Quality Estimation in Multi-Speaker Text-to-Speech Synthesis

Authors: Jennifer Williams, Joanna Rownicka, Pilar Oplustil, Simon King

Published: 2020-02-28 10:44:32+00:00

AI Summary

This paper investigates automatic quality estimation for multi-speaker Text-to-Speech (TTS) synthesis by comparing different speech representations as inputs for predicting human mean opinion scores (MOS). A neural network trained on human MOS ratings is evaluated across a range of TTS and voice conversion systems; its predictions correlate strongly with human judgments and show that certain speakers yield consistent output quality regardless of the system.

Abstract

We aim to characterize how different speakers contribute to the perceived output quality of multi-speaker Text-to-Speech (TTS) synthesis. We automatically rate the quality of TTS using a neural network (NN) trained on human mean opinion score (MOS) ratings. First, we train and evaluate our NN model on 13 different TTS and voice conversion (VC) systems from the ASVSpoof 2019 Logical Access (LA) Dataset. Since it is not known how best to represent speech for this task, we compare 8 different representations alongside MOSNet frame-based features. Our representations include image-based spectrogram features and x-vector embeddings that explicitly model different types of noise such as T60 reverberation time. Our NN predicts MOS with a high correlation to human judgments. We report prediction correlation and error. A key finding is that the quality achieved for certain speakers seems consistent, regardless of the TTS or VC system. It is widely accepted that some speakers give higher quality than others for building a TTS system: our method provides an automatic way to identify such speakers. Finally, to see if our quality prediction models generalize, we predict quality scores for synthetic speech using a separate multi-speaker TTS system that was trained on LibriTTS data, and conduct our own MOS listening test to compare human ratings with our NN predictions.


Key findings
The study finds that certain speakers consistently yield higher- or lower-quality synthetic speech regardless of the TTS or VC system used. The xvec5 representation, which models replay device quality, performed best at predicting MOS and ranking speakers. The model's ability to generalize to a new TTS system (Ophelia, trained on LibriTTS) was limited, highlighting the need for further work on cross-system generalization.
Approach
The authors train a neural network (NN) to predict human mean opinion scores (MOS) for the output quality of TTS and voice conversion (VC) systems. They compare the NN's performance using eight different speech representations (including spectrograms and x-vectors) as input features, and select the best-performing model by the Spearman rank correlation coefficient (SRCC) computed at the speaker level, as sketched below.
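A minimal sketch of that speaker-level selection criterion: average human and predicted MOS per speaker, then compute the Spearman rank correlation between the two speaker-level rankings. The data layout and function name below are illustrative assumptions, not taken from the paper.

    # Speaker-level SRCC: average MOS per speaker, then rank-correlate.
    # Hypothetical data layout; illustrative only.
    import numpy as np
    from scipy.stats import spearmanr

    def speaker_level_srcc(speakers, human_mos, predicted_mos):
        """Mean MOS per speaker, then Spearman correlation of the two rankings."""
        speakers = np.asarray(speakers)
        human_mos = np.asarray(human_mos, dtype=float)
        predicted_mos = np.asarray(predicted_mos, dtype=float)
        unique = np.unique(speakers)
        human_means = np.array([human_mos[speakers == s].mean() for s in unique])
        pred_means = np.array([predicted_mos[speakers == s].mean() for s in unique])
        return spearmanr(human_means, pred_means)  # (rho, p-value)

    # Toy usage: three speakers, two utterances each.
    rho, p = speaker_level_srcc(
        ["spk1", "spk1", "spk2", "spk2", "spk3", "spk3"],
        [3.8, 4.0, 2.1, 2.4, 3.0, 3.2],   # human MOS
        [3.6, 3.9, 2.3, 2.2, 3.1, 3.0],   # predicted MOS
    )
    print(f"speaker-level SRCC = {rho:.3f} (p = {p:.3f})")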
Datasets
ASVSpoof 2019 Logical Access (LA) Dataset, ASVSpoof 2019 Physical Access (PA) Dataset, LibriTTS Dataset
Model(s)
MOSNet variants (BLSTM, CNN, CNN-BLSTM) and a low-capacity CNN architecture
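For orientation, a minimal MOSNet-style CNN-BLSTM regressor is sketched below in PyTorch; the layer sizes and frequency pooling are assumptions for this sketch, not the paper's exact architecture. Following MOSNet's frame-plus-utterance design, it predicts a per-frame MOS sequence and averages it into an utterance-level score.

    # Illustrative MOSNet-style CNN-BLSTM MOS regressor (assumed sizes).
    import torch
    import torch.nn as nn

    class CnnBlstmMosPredictor(nn.Module):
        """Predicts per-frame MOS and averages it into an utterance score."""
        def __init__(self):
            super().__init__()
            # Small CNN front end over (batch, 1, time, freq) spectrograms.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.blstm = nn.LSTM(input_size=32, hidden_size=128,
                                 batch_first=True, bidirectional=True)
            self.frame_head = nn.Linear(2 * 128, 1)

        def forward(self, spec):                # spec: (batch, time, freq)
            x = self.cnn(spec.unsqueeze(1))     # (B, 32, T, F)
            x = x.mean(dim=3).transpose(1, 2)   # pool over freq -> (B, T, 32)
            x, _ = self.blstm(x)                # (B, T, 256)
            frame_mos = self.frame_head(x).squeeze(-1)   # (B, T)
            return frame_mos.mean(dim=1), frame_mos      # utterance, frame MOS

    # Toy usage: 2 utterances, 100 frames, 257 frequency bins.
    utt_mos, frame_mos = CnnBlstmMosPredictor()(torch.randn(2, 100, 257))
    print(utt_mos.shape, frame_mos.shape)       # [2], [2, 100]

Pooling over frequency before the BLSTM keeps the recurrent input small, consistent with the low-capacity design the paper describes; both frame-level and utterance-level outputs can be supervised against the MOS targets.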
Author countries
United Kingdom