Towards Source Attribution of Singing Voice Deepfake with Multimodal Foundation Models

Authors: Orchid Chetia Phukan, Girish, Mohd Mujtaba Akhtar, Swarup Ranjan Behera, Priyabrata Mallick, Pailla Balakrishna Reddy, Arun Balaji Buduru, Rajesh Sharma

Published: 2025-06-03 20:16:41+00:00

AI Summary

This paper introduces the task of singing voice deepfake source attribution (SVDSA) and proposes COFFE, a novel framework for this task. COFFE uses multimodal foundation models (MMFMs) and a Chernoff Distance loss function for effective fusion of different foundation models, achieving state-of-the-art performance.

Abstract

In this work, we introduce the task of singing voice deepfake source attribution (SVDSA). We hypothesize that multimodal foundation models (MMFMs) such as ImageBind and LanguageBind will be most effective for SVDSA, as their cross-modality pre-training better equips them to capture subtle source-specific characteristics, such as the unique timbre, pitch manipulation, or synthesis artifacts of each singing voice deepfake source. Our experiments with MMFMs, speech foundation models, and music foundation models verify the hypothesis that MMFMs are the most effective for SVDSA. Furthermore, inspired by related research, we also explore fusion of foundation models (FMs) for improved SVDSA. To this end, we propose COFFE, a novel framework that employs Chernoff Distance as a novel loss function for effective fusion of FMs. Through COFFE with a combination of MMFMs, we attain the top performance in comparison to all individual FMs and baseline fusion methods.


Key findings
Multimodal foundation models (MMFMs) significantly outperform unimodal models for SVDSA. The COFFE framework, utilizing Chernoff Distance for model fusion, achieves superior performance compared to individual models and baseline fusion methods, setting a new state-of-the-art. The results highlight the importance of multimodal information for this task.
Approach
The authors propose COFFE, a framework that fuses multiple foundation models (speech foundation models, music foundation models, and MMFMs) for singing voice deepfake source attribution. A novel Chernoff Distance loss function aligns the feature spaces of the different models before their embeddings are concatenated and classified.
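The paper's exact formulation is not reproduced on this page, so the following is only a minimal PyTorch sketch of the pipeline as described: two foundation-model embeddings are projected to a common dimension, a Chernoff Distance penalty (computed here under a diagonal-Gaussian, per-batch assumption) aligns the two feature spaces, and the aligned features are concatenated for source classification. The names and hyperparameters (chernoff_distance, CoffeFusion, proj_dim, lam, alpha) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def chernoff_distance(x, y, alpha=0.5, eps=1e-6):
    """Chernoff distance between diagonal-Gaussian fits of two embedding
    batches x, y of shape (batch, dim). alpha=0.5 recovers the
    Bhattacharyya distance as a special case."""
    mu_x, var_x = x.mean(0), x.var(0) + eps
    mu_y, var_y = y.mean(0), y.var(0) + eps
    var_mix = (1 - alpha) * var_x + alpha * var_y
    # Mean (quadratic) term of the closed-form Gaussian Chernoff distance.
    quad = 0.5 * alpha * (1 - alpha) * ((mu_x - mu_y) ** 2 / var_mix).sum()
    # Log-determinant (variance) term.
    logdet = 0.5 * (var_mix.log()
                    - (1 - alpha) * var_x.log()
                    - alpha * var_y.log()).sum()
    return quad + logdet


class CoffeFusion(nn.Module):
    """Sketch of the described pipeline: project, align, concatenate, classify."""

    def __init__(self, dim_a, dim_b, proj_dim=256, num_sources=14, lam=0.1):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, proj_dim)   # e.g. an MMFM embedding
        self.proj_b = nn.Linear(dim_b, proj_dim)   # e.g. a speech FM embedding
        self.classifier = nn.Linear(2 * proj_dim, num_sources)
        self.lam = lam  # weight of the alignment penalty (hypothetical)

    def forward(self, emb_a, emb_b, labels=None):
        za = F.relu(self.proj_a(emb_a))
        zb = F.relu(self.proj_b(emb_b))
        logits = self.classifier(torch.cat([za, zb], dim=-1))
        if labels is None:
            return logits
        loss = F.cross_entropy(logits, labels) \
               + self.lam * chernoff_distance(za, zb)
        return logits, loss


# Example: fuse 1024-d ImageBind and 768-d WavLM clip embeddings over the
# 14 CtrSVDD generation methods (random tensors stand in for real features).
model = CoffeFusion(dim_a=1024, dim_b=768)
logits, loss = model(torch.randn(8, 1024), torch.randn(8, 768),
                     labels=torch.randint(0, 14, (8,)))
```

With alpha fixed at 0.5 the Chernoff term reduces to the Bhattacharyya distance; COFFE's actual choice of alpha, projection size, and loss weighting may differ from this sketch.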
Datasets
CtrSVDD dataset, containing 188,486 synthetic singing voice deepfake clips generated by 14 different methods.
Model(s)
WavLM, UniSpeech-SAT, Wav2Vec2, XLS-R, Whisper, MMS, x-vector, MERT (various versions), music2vec-v1, ImageBind, LanguageBind. These models are used individually and in various combinations within the COFFE framework.
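As a minimal sketch of how frozen clip-level embeddings can be extracted from one of these models (here WavLM via Hugging Face transformers), assuming a publicly available checkpoint and mean-pooling of frame-level states, neither of which is necessarily the paper's exact setup:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WavLMModel

# Load a pretrained speech foundation model as a frozen feature extractor.
extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus").eval()

# Placeholder for a 1-second singing-voice clip sampled at 16 kHz.
waveform = np.random.randn(16000).astype(np.float32)

inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    frames = model(**inputs).last_hidden_state  # (1, num_frames, 768)
clip_embedding = frames.mean(dim=1)             # mean-pool -> (1, 768)
```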
Author countries
India, Estonia