Singing Voice Graph Modeling for SingFake Detection

Authors: Xuanjun Chen, Haibin Wu, Jyh-Shing Roger Jang, Hung-yi Lee

Published: 2024-06-05 10:02:56+00:00

AI Summary

This paper introduces SingGraph, a novel model for singing voice deepfake (SingFake) detection. SingGraph combines MERT and wav2vec2.0 models for pitch/rhythm and lyric analysis, respectively, and uses RawBoost and beat matching for data augmentation, achieving state-of-the-art results on the SingFake dataset.

Abstract

Detecting singing voice deepfakes, or SingFake, involves determining the authenticity and copyright of a singing voice. Existing models for speech deepfake detection have struggled to adapt to unseen attacks in this unique singing voice domain of human vocalization. To bridge the gap, we present a groundbreaking SingGraph model. The model synergizes the capabilities of the MERT acoustic music understanding model for pitch and rhythm analysis with the wav2vec2.0 model for linguistic analysis of lyrics. Additionally, we advocate for using RawBoost and beat matching techniques grounded in music domain knowledge for singing voice augmentation, thereby enhancing SingFake detection performance. Our proposed method achieves new state-of-the-art (SOTA) results on the SingFake dataset, surpassing the previous SOTA model across three distinct scenarios: it relatively improves EER by 13.2% for seen singers, by 24.3% for unseen singers, and by 37.1% for unseen singers with different codecs.


Key findings
SingGraph significantly outperforms previous state-of-the-art models on the SingFake dataset, reducing the Equal Error Rate (EER) relatively by 13.2% for seen singers, 24.3% for unseen singers, and 37.1% for unseen singers using different codecs. The results highlight the effectiveness of combining music and speech understanding models with data augmentation techniques for SingFake detection.
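The EER metric and the relative-improvement figures above follow standard definitions; a minimal sketch in plain Python (the threshold sweep is a simplified EER estimator, and the numbers in the example call are hypothetical placeholders, not values from the paper):

```python
def eer(bonafide_scores, spoof_scores):
    """Equal Error Rate: the operating point where the false-acceptance
    rate (spoofs scored as real) equals the false-rejection rate
    (bonafide scored as fake), found by sweeping score thresholds."""
    thresholds = sorted(set(bonafide_scores) | set(spoof_scores))
    best_gap, best_eer = 2.0, None
    for t in thresholds:
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        frr = sum(s < t for s in bonafide_scores) / len(bonafide_scores)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

def relative_improvement(baseline_eer, new_eer):
    """Relative EER reduction, the comparison style used in the summary."""
    return (baseline_eer - new_eer) / baseline_eer

# Perfectly separated toy scores give EER = 0:
print(eer([0.9, 0.8, 0.7], [0.2, 0.3, 0.4]))
# Hypothetical EERs: dropping from 10% to 7.5% is a 25% relative reduction.
print(relative_improvement(0.10, 0.075))
```

In other words, a "13.2% improvement" is a reduction of the baseline EER by 13.2% of its value, not a 13.2-point absolute drop.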
Approach
SingGraph applies MERT to the separated instrumental track for pitch and rhythm analysis, and wav2vec2.0 to the separated vocal track for lyric analysis. RawBoost and beat-matching augmentations enhance the model's ability to distinguish real from fake singing voices. A graph neural network then aggregates and models the extracted features.
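The two-branch design described above can be sketched in NumPy. This is an illustration only: random vectors stand in for real MERT/wav2vec2.0 frame embeddings (the actual models are large pretrained networks), and the single-head self-attention pooling below is a toy stand-in for the paper's graph modeling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frame-level embeddings from the two separated tracks:
# MERT on the instrumental branch, wav2vec2.0 on the vocal branch.
mert_frames = rng.normal(size=(50, 768))   # instrumental: pitch/rhythm
w2v_frames = rng.normal(size=(50, 768))    # vocal: lyrics

def graph_attention_pool(frames):
    """Treat frames as nodes of a fully connected graph, weight each
    node by scaled dot-product attention over its neighbors, then mean
    pool to one utterance-level vector (a toy graph-attention layer)."""
    scores = frames @ frames.T / np.sqrt(frames.shape[1])
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return (weights @ frames).mean(axis=0)

# Fuse the two branch representations, then score with a toy linear head.
fused = np.concatenate([graph_attention_pool(mert_frames),
                        graph_attention_pool(w2v_frames)])
w = rng.normal(size=fused.shape[0])
logit = float(fused @ w)   # sign of the logit -> bonafide vs. fake (toy rule)
print(fused.shape)
```

Keeping the instrumental and vocal branches separate until fusion lets each encoder specialize, which is the intuition behind pairing a music model with a speech model.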
Datasets
SingFake dataset
Model(s)
MERT, wav2vec2.0, RawNet2, Graph Attention Networks (GATs), and a custom SingGraph architecture.
Author countries
Taiwan