CLAD: Robust Audio Deepfake Detection Against Manipulation Attacks with Contrastive Learning

Authors: Haolin Wu, Jing Chen, Ruiying Du, Cong Wu, Kun He, Xingcan Shang, Hao Ren, Guowen Xu

Published: 2024-04-24 13:10:35+00:00

AI Summary

This paper presents CLAD, a contrastive learning-based audio deepfake detector robust to manipulation attacks. CLAD incorporates contrastive learning to minimize variations caused by manipulations and a length loss to improve clustering of real audios, significantly enhancing detection robustness against various attacks.

Abstract

The increasing prevalence of audio deepfakes poses significant security threats, necessitating robust detection methods. While existing detection systems exhibit promise, their robustness against malicious audio manipulations remains underexplored. To bridge the gap, we undertake the first comprehensive study of the susceptibility of the most widely adopted audio deepfake detectors to manipulation attacks. Surprisingly, even manipulations like volume control can significantly bypass detection without affecting human perception. To address this, we propose CLAD (Contrastive Learning-based Audio deepfake Detector) to enhance the robustness against manipulation attacks. The key idea is to incorporate contrastive learning to minimize the variations introduced by manipulations, therefore enhancing detection robustness. Additionally, we incorporate a length loss, aiming to improve the detection accuracy by clustering real audios more closely in the feature space. We comprehensively evaluated the most widely adopted audio deepfake detection models and our proposed CLAD against various manipulation attacks. The detection models exhibited vulnerabilities, with FAR rising to 36.69%, 31.23%, and 51.28% under volume control, fading, and noise injection, respectively. CLAD enhanced robustness, reducing the FAR to 0.81% under noise injection and consistently maintaining an FAR below 1.63% across all tests. Our source code and documentation are available in the artifact repository (https://github.com/CLAD23/CLAD).
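For concreteness, the sketch below illustrates the kind of waveform-level manipulations studied in the paper (volume control, fading, noise injection). The function names and parameter values (gain, fade length, target SNR) are illustrative assumptions, not the authors' exact attack configurations.

```python
import numpy as np

# Illustrative waveform manipulations; parameter defaults are assumptions,
# not the attack settings used in the paper.

def volume_control(wav: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Scale the waveform amplitude (barely perceptible at moderate gains)."""
    return np.clip(wav * gain, -1.0, 1.0)

def fade(wav: np.ndarray, fade_len: int = 16000) -> np.ndarray:
    """Apply a linear fade-in and fade-out over `fade_len` samples."""
    out = wav.copy()
    ramp = np.linspace(0.0, 1.0, min(fade_len, len(out)))
    out[: len(ramp)] *= ramp
    out[-len(ramp):] *= ramp[::-1]
    return out

def noise_injection(wav: np.ndarray, snr_db: float = 30.0) -> np.ndarray:
    """Add white Gaussian noise at a target signal-to-noise ratio."""
    signal_power = np.mean(wav ** 2) + 1e-12
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.randn(len(wav)) * np.sqrt(noise_power)
    return np.clip(wav + noise, -1.0, 1.0)
```

Such manipulations can also be chained, e.g. `noise_injection(volume_control(wav))`, which preserves intelligibility for a human listener while perturbing the detector's input.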


Key findings
Existing audio deepfake detectors are highly vulnerable to simple manipulations such as volume control, fading, and noise injection. CLAD significantly improves robustness against these attacks, consistently maintaining a False Acceptance Rate (FAR) below 1.63% across various manipulations. The choice of encoder architecture influences CLAD's performance, with the AASIST encoder yielding the best results.
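The robustness numbers above are reported as False Acceptance Rate (FAR), the fraction of fake audio the detector accepts as real. A minimal sketch of how FAR can be computed from detector scores, assuming higher scores mean "accepted as real" and a fixed decision threshold (both assumptions):

```python
import numpy as np

def false_acceptance_rate(scores: np.ndarray, labels: np.ndarray, threshold: float) -> float:
    """FAR: fraction of fake samples (label 0) whose score meets the
    acceptance threshold, i.e. fakes wrongly accepted as real."""
    fake = labels == 0
    return float(np.mean(scores[fake] >= threshold))
```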
Approach
CLAD uses contrastive learning to train a robust audio encoder that generates similar feature representations for the same audio under different manipulations and dissimilar representations for different audios. It also incorporates a length loss that clusters real audios more closely in the feature space; a sketch of this objective is given below.
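A minimal PyTorch sketch of such a training objective, assuming a SimCLR-style NT-Xent contrastive loss over two manipulated views of each audio and an illustrative length loss that shrinks the feature norms of real (bonafide) samples. The exact loss forms, the weighting `lam`, and the helper names (`nt_xent_loss`, `length_loss`, `clad_step`) are assumptions for illustration, not the authors' implementation; see the linked repository for that.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss: two views of the same audio attract, all other
    samples in the batch repel (SimCLR-style NT-Xent, an assumed choice)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d) unit vectors
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def length_loss(z_real: torch.Tensor) -> torch.Tensor:
    """Illustrative length loss (assumption): shrink the feature norms of
    real audio so bonafide samples cluster tightly in the embedding space."""
    return z_real.norm(dim=1).mean()

def clad_step(encoder, wav_view1, wav_view2, is_real, lam: float = 0.1) -> torch.Tensor:
    """One training step: encode two manipulated views and combine the losses."""
    z1, z2 = encoder(wav_view1), encoder(wav_view2)
    loss = nt_xent_loss(z1, z2)
    if is_real.any():
        loss = loss + lam * length_loss(z1[is_real])
    return loss
```

In this sketch the two "views" would be produced by applying random manipulations (e.g. volume control, fading, noise injection) to the same utterance, so the encoder learns manipulation-invariant features.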
Datasets
Logical Access (LA) part of the ASVspoof 2019 dataset
Model(s)
AASIST, RawNet2, Res-TSSDNet, SAMO (as baselines); CLAD (proposed model) using AASIST, RawNet2, and Res-TSSDNet as encoders
Author countries
China, Singapore