CtrSVDD: A Benchmark Dataset and Baseline Analysis for Controlled Singing Voice Deepfake Detection

Authors: Yongyi Zang, Jiatong Shi, You Zhang, Ryuichi Yamamoto, Jionghao Han, Yuxun Tang, Shengyuan Xu, Wenxiao Zhao, Jing Guo, Tomoki Toda, Zhiyao Duan

Published: 2024-06-04 16:00:18+00:00

AI Summary

The paper introduces CtrSVDD, a large-scale dataset for singing voice deepfake detection that addresses the limited controllability, diversity, and openness of existing datasets. It also presents a baseline system for evaluating different audio front-end features for deepfake detection.

Abstract

Recent singing voice synthesis and conversion advancements necessitate robust singing voice deepfake detection (SVDD) models. Current SVDD datasets face challenges due to limited controllability, diversity in deepfake methods, and licensing restrictions. Addressing these gaps, we introduce CtrSVDD, a large-scale, diverse collection of bonafide and deepfake singing vocals. These vocals are synthesized using state-of-the-art methods from publicly accessible singing voice datasets. CtrSVDD includes 47.64 hours of bonafide and 260.34 hours of deepfake singing vocals, spanning 14 deepfake methods and involving 164 singer identities. We also present a baseline system with flexible front-end features, evaluated against a structured train/dev/eval split. The experiments show the importance of feature selection and highlight a need for generalization towards deepfake methods that deviate further from the training distribution. The CtrSVDD dataset and baselines are publicly accessible.


Key findings
The raw waveform and LFCC front-ends achieved the lowest EERs, indicating their robustness for this task. However, performance dropped on unseen deepfake methods, highlighting the need for more generalizable models. EERs also varied significantly across deepfake methods, and in t-SNE visualizations the embeddings of some methods overlapped considerably with bonafide singing.
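As a hedged illustration of that last observation, the sketch below projects detector embeddings for bonafide and deepfake clips into two dimensions with scikit-learn's t-SNE. The embeddings here are random placeholders and all names and dimensions are assumptions for demonstration; in the paper, the embeddings come from the trained detector itself.

```python
# Sketch: t-SNE projection of detector embeddings (placeholder data only).
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
bonafide = rng.normal(0.0, 1.0, size=(200, 128))  # placeholder embeddings
deepfake = rng.normal(0.5, 1.0, size=(200, 128))  # partially overlapping cloud

emb = np.vstack([bonafide, deepfake])
proj = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(emb)

plt.scatter(proj[:200, 0], proj[:200, 1], s=8, label="bonafide")
plt.scatter(proj[200:, 0], proj[200:, 1], s=8, label="deepfake")
plt.legend()
plt.title("t-SNE of detector embeddings (placeholder data)")
plt.show()
```

Overlap between the two clouds in such a plot is what the key findings describe: some deepfake methods produce embeddings the detector cannot cleanly separate from bonafide singing.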
Approach
The authors propose a baseline system with interchangeable front-end feature extraction modules (raw waveform, spectrogram, mel-spectrogram, MFCC, LFCC) and a fixed back-end based on graph attention networks. The system's performance is evaluated using the Equal Error Rate (EER) metric.
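Since EER is the central metric here, the following is a minimal sketch of how it is typically computed from detector scores; the function name `compute_eer` and the random example data are illustrative assumptions, not the paper's released evaluation code.

```python
# Sketch: Equal Error Rate (EER), the operating point where the false
# acceptance rate equals the false rejection rate on the ROC curve.
# Assumes higher scores indicate bonafide; labels are 1 = bonafide, 0 = fake.
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels: np.ndarray, scores: np.ndarray) -> float:
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    # Pick the threshold where |FPR - FNR| is smallest and average the two.
    idx = np.nanargmin(np.abs(fpr - fnr))
    return float((fpr[idx] + fnr[idx]) / 2.0)

# Random scores should give an EER near 0.5 (chance-level detector).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = rng.random(1000)
print(f"EER: {compute_eer(labels, scores):.3f}")
```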
Datasets
CtrSVDD dataset (bonafide and deepfake singing vocals synthesized with 14 different methods from publicly accessible singing voice datasets, including Opencpop, M4Singer, KiSing, the ACE-Studio release, Ofuton-P, Oniku Kurumi, Kiritan, and JVS-MuSiC), partially based on the FSD dataset.
Model(s)
The same baseline detector as in Approach: interchangeable front-end feature extractors (raw waveform, spectrogram, mel-spectrogram, MFCC, LFCC) paired with a fixed graph attention network back-end.
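To make the interchangeable-front-end design concrete, the sketch below builds each front-end with torchaudio transforms. The factory function `make_frontend` and the parameter values (sample rate, n_fft, n_mels, coefficient counts) are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: swappable front-end feature extractors feeding a fixed back-end.
import torch
import torchaudio.transforms as T

SAMPLE_RATE = 16000  # assumed sample rate for this sketch

def make_frontend(name: str):
    """Return a callable mapping a waveform tensor to front-end features."""
    if name == "raw":
        return lambda wav: wav  # raw waveform passed straight to the back-end
    if name == "spectrogram":
        return T.Spectrogram(n_fft=512)
    if name == "mel":
        return T.MelSpectrogram(sample_rate=SAMPLE_RATE, n_fft=512, n_mels=80)
    if name == "mfcc":
        return T.MFCC(sample_rate=SAMPLE_RATE, n_mfcc=20)
    if name == "lfcc":
        return T.LFCC(sample_rate=SAMPLE_RATE, n_lfcc=20)
    raise ValueError(f"unknown front-end: {name}")

wav = torch.randn(1, SAMPLE_RATE * 4)  # 4 seconds of dummy audio
for name in ["raw", "spectrogram", "mel", "mfcc", "lfcc"]:
    feats = make_frontend(name)(wav)
    print(name, tuple(feats.shape))
```

Keeping the back-end fixed while swapping only the front-end, as the baseline does, isolates the effect of feature choice on detection EER.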
Author countries
USA, Japan, China