Combining Automatic Speaker Verification and Prosody Analysis for Synthetic Speech Detection

Authors: Luigi Attorresi, Davide Salvi, Clara Borrelli, Paolo Bestagini, Stefano Tubaro

Published: 2022-10-31 11:03:03+00:00

AI Summary

This paper proposes a synthetic speech detection approach that combines automatic speaker verification and prosody analysis. Speaker embeddings extracted by a speaker verification network and prosody embeddings produced by a specialized encoder are concatenated and fed to a binary classifier to detect deepfake speech generated with both Text-to-Speech (TTS) and Voice Conversion (VC) techniques.

Abstract

The rapid spread of media content synthesis technology and the potentially damaging impact of audio and video deepfakes on people's lives have raised the need to implement systems able to detect these forgeries automatically. In this work we present a novel approach for synthetic speech detection, exploiting the combination of two high-level semantic properties of the human voice. On one side, we focus on speaker identity cues and represent them as speaker embeddings extracted using a state-of-the-art method for the automatic speaker verification task. On the other side, voice prosody, intended as variations in rhythm, pitch or accent in speech, is extracted through a specialized encoder. We show that the combination of these two embeddings fed to a supervised binary classifier allows the detection of deepfake speech generated with both Text-to-Speech and Voice Conversion techniques. Our results show improvements over the considered baselines, good generalization properties over multiple datasets and robustness to audio compression.


Key findings
The proposed method outperforms baselines (RawNet2, Spec-ResNet, MFCC-ResNet) on ASVspoof 2019. It shows good generalization across multiple datasets and robustness to audio compression, although performance degrades with increasing compression levels. The combination of speaker and prosody embeddings improves detection accuracy compared to using either modality alone.
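The robustness finding implies re-encoding test audio at different compression strengths. The sketch below shows one way to produce such variants with ffmpeg; the MP3 codec and the bitrate ladder are illustrative assumptions, since the summary does not specify the paper's exact compression setup.

```python
# Sketch: producing compressed copies of an evaluation file to probe a
# detector's robustness. Codec and bitrates are illustrative assumptions.
import subprocess

for bitrate in ["128k", "64k", "32k"]:  # progressively stronger compression
    subprocess.run(
        ["ffmpeg", "-y", "-i", "utterance.wav",  # hypothetical input file
         "-codec:a", "libmp3lame", "-b:a", bitrate,
         f"utterance_{bitrate}.mp3"],
        check=True,
    )
```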
Approach
Speaker identity cues are represented as embeddings from a state-of-the-art automatic speaker verification model, while prosody is represented as embeddings from the prosody encoder of a speech synthesis model. The two embeddings are concatenated and fed to a supervised binary classifier that distinguishes real from synthetic speech.
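The fusion step is simple to prototype. Below is a minimal sketch assuming both embeddings have already been extracted as fixed-length vectors; the embedding dimensions, the random stand-in data, the standardization step, and the RBF kernel are illustrative assumptions, not details confirmed by the paper.

```python
# Minimal sketch of the embedding-fusion classifier: concatenate speaker
# and prosody embeddings, then train a binary SVM (real vs. synthetic).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Random stand-ins for precomputed per-utterance embeddings: 192-dim
# speaker vectors (ECAPA-TDNN's usual output size) and 128-dim prosody
# vectors (assumed size). In practice these come from the two encoders.
n_utterances = 200
speaker_emb = rng.normal(size=(n_utterances, 192))
prosody_emb = rng.normal(size=(n_utterances, 128))
labels = rng.integers(0, 2, size=n_utterances)  # 0 = real, 1 = synthetic

# Fusion: plain concatenation along the feature axis.
features = np.concatenate([speaker_emb, prosody_emb], axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(features, labels)
scores = clf.predict_proba(features)[:, 1]  # higher = more likely synthetic
```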
Datasets
ASVspoof 2019 (LA partition), LibriSpeech (train-clean-100), LJSpeech, Cloud2019, IEMOCAP, VoxCeleb 1, VoxCeleb 2, Blizzard 2013
Model(s)
ECAPA-TDNN (Emphasized Channel Attention, Propagation and Aggregation Time Delay Neural Network) for speaker embedding extraction; a prosody encoder taken from a speech synthesis model for prosody embedding extraction; a Support Vector Machine (SVM) as the binary classifier
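As an illustration of the speaker branch, the snippet below extracts an ECAPA-TDNN embedding with SpeechBrain's pretrained VoxCeleb model. The paper does not name a specific implementation, so the `speechbrain/spkrec-ecapa-voxceleb` checkpoint and the input file are assumptions.

```python
# Sketch: per-utterance speaker embedding from a pretrained ECAPA-TDNN.
# The SpeechBrain checkpoint (trained on VoxCeleb1+2, expects 16 kHz
# mono audio) is an assumed stand-in, not the paper's exact model.
import torchaudio
from speechbrain.pretrained import EncoderClassifier

encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_ecapa",
)

signal, sample_rate = torchaudio.load("utterance.wav")  # hypothetical file
embedding = encoder.encode_batch(signal)
print(embedding.squeeze().shape)  # torch.Size([192]) speaker embedding
```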
Author countries
Italy