Adaptive re-calibration of channel-wise features for Adversarial Audio Classification

Authors: Vardhan Dongre, Abhinav Thimma Reddy, Nikhitha Reddeddy

Published: 2022-10-21 04:21:56+00:00

AI Summary

This paper proposes an adaptive channel-wise recalibration of audio features using attentional feature fusion for synthetic speech detection. The approach improves upon existing methods by achieving higher accuracy and better generalization across various synthetic speech generation models, particularly using a ResNet architecture with squeeze-excitation blocks and a combination of LFCC and MFCC features.
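For intuition, below is a minimal PyTorch sketch of the channel-wise recalibration idea the summary refers to, i.e. a standard squeeze-excitation block; the layer sizes and reduction ratio are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-Excitation: pool each channel globally, then learn per-channel
    weights that adaptively recalibrate the feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, freq, time)
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))            # squeeze: per-channel global average
        w = self.fc(s).view(b, c, 1, 1)   # excitation: per-channel weights in (0, 1)
        return x * w                      # recalibrate channels
```

In the paper this kind of block is inserted into the ResNet backbone, so each residual stage can emphasize or suppress individual feature channels.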

Abstract

DeepFake audio, unlike DeepFake images and videos, has been relatively less explored from a detection perspective, and the solutions that exist for synthetic speech classification either use complex networks or don't generalize to different varieties of synthetic speech obtained using different generative and optimization-based methods. Through this work, we propose a channel-wise recalibration of features using attentional feature fusion for synthetic speech detection and compare its performance against different detection methods, including End2End models and ResNet-based models, on synthetic speech generated using Text to Speech and Vocoder systems like WaveNet, WaveRNN, Tacotron, and WaveGlow. We also experiment with Squeeze Excitation (SE) blocks in our ResNet models and find that the combination achieves better performance. In addition to this analysis, we demonstrate that combining Linear frequency cepstral coefficients (LFCC) and Mel frequency cepstral coefficients (MFCC) using the attentional feature fusion technique creates better input feature representations, which can help even simpler models generalize well on synthetic speech classification tasks. Our models (ResNet-based, using feature fusion), trained on the Fake or Real (FoR) dataset, achieve 95% test accuracy on the FoR data and an average of 90% accuracy on samples we generated using different generative models after adapting this framework.
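As an illustration of the cepstral front-end the abstract describes, the sketch below extracts LFCC and MFCC features with torchaudio; the sample rate, coefficient count, and file name are assumptions for the example, not settings reported by the authors.

```python
import torchaudio

SAMPLE_RATE = 16000   # assumed; the summary does not state the sampling rate
N_COEFFS = 40         # illustrative number of cepstral coefficients

mfcc_tf = torchaudio.transforms.MFCC(sample_rate=SAMPLE_RATE, n_mfcc=N_COEFFS)
lfcc_tf = torchaudio.transforms.LFCC(sample_rate=SAMPLE_RATE, n_lfcc=N_COEFFS)

waveform, sr = torchaudio.load("utterance.wav")   # (channels, samples)
mfcc = mfcc_tf(waveform)                          # (channels, n_mfcc, frames)
lfcc = lfcc_tf(waveform)                          # (channels, n_lfcc, frames)
# The two cepstral views are then fused (see the fusion sketch under "Approach")
# and passed to the ResNet/SE classifier.
```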


Key findings
The proposed approach, using ResNet with attentional feature fusion of LFCC and MFCC, achieved 95% accuracy on the FoR dataset and an average of 90% accuracy on various synthetic speech samples. The inclusion of Squeeze Excitation blocks further improved performance, and the model demonstrated better generalizability compared to baselines including an end-to-end model.
Approach
The authors address the problem of synthetic speech detection by proposing a method that combines Linear Frequency Cepstral Coefficients (LFCC) and Mel Frequency Cepstral Coefficients (MFCC) using attentional feature fusion. This fused feature representation is then fed into a ResNet model, optionally incorporating Squeeze Excitation (SE) blocks, to perform the classification.
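A minimal sketch of one way to fuse the two cepstral feature maps with channel attention, in the spirit of attentional feature fusion: a learned attention map decides, per channel, how much of each branch (LFCC vs. MFCC) enters the fused representation. The module structure and reduction ratio here are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AttentionalFusion(nn.Module):
    """Fuse two feature maps with a soft, channel-wise attention weight."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x, y):           # x, y: (batch, channels, freq, time)
        w = self.attn(x + y)           # channel weights from the combined input
        return w * x + (1.0 - w) * y   # channel-wise blend of the two branches

# Usage (hypothetical shapes): fuse LFCC and MFCC maps before the ResNet backbone.
# lfcc_feat, mfcc_feat = ...          # both (batch, C, freq_bins, frames)
# fused = AttentionalFusion(channels=C)(lfcc_feat, mfcc_feat)
```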
Datasets
Fake or Real (FoR) dataset, ASVSpoof 2019 Logical Access (LA) dataset, and synthetic speech samples generated using WaveNet, WaveRNN, FastSpeech, and Tacotron & WaveGlow.
Model(s)
ResNet34, ResNet50, with and without Squeeze Excitation (SE) blocks; Random Forest; Multi-layer Perceptron; Time-domain Synthetic Speech Detection Net (TSSDNet).
Author countries
USA