DFADD: The Diffusion and Flow-Matching Based Audio Deepfake Dataset

Authors: Jiawei Du, I-Ming Lin, I-Hsiang Chiu, Xuanjun Chen, Haibin Wu, Wenze Ren, Yu Tsao, Hung-yi Lee, Jyh-Shing Roger Jang

Published: 2024-09-13 11:33:34+00:00

AI Summary

The paper introduces DFADD, a new dataset of audio deepfakes generated by advanced diffusion- and flow-matching-based TTS models. DFADD addresses the lack of anti-spoofing models that are robust to such high-quality synthetic speech and serves as a valuable resource for developing more resilient detection methods.

Abstract

Mainstream zero-shot TTS production systems like Voicebox and Seed-TTS achieve human-parity speech by leveraging flow-matching and diffusion models, respectively. Unfortunately, human-level audio synthesis leads to identity misuse and information-security issues. Many anti-spoofing models have been developed against deepfake audio, but the efficacy of current state-of-the-art anti-spoofing models in countering audio synthesized by diffusion- and flow-matching-based TTS systems remains unknown. In this paper, we propose the Diffusion and Flow-matching based Audio Deepfake (DFADD) dataset, which collects deepfake audio generated by advanced diffusion and flow-matching TTS models. Additionally, we reveal that current anti-spoofing models lack sufficient robustness against the highly human-like audio these systems produce. The proposed DFADD dataset addresses this gap and provides a valuable resource for developing more resilient anti-spoofing models.


Key findings
Current anti-spoofing models, trained on datasets such as ASVspoof, perform poorly on audio generated by diffusion- and flow-matching-based TTS systems. Training on the DFADD dataset significantly improves anti-spoofing performance, reducing the average equal error rate (EER) by over 47%. Models trained on the flow-matching-based subsets generalized better than those trained on the diffusion-based subsets.
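For context, EER is the operating point at which a detector's false-acceptance and false-rejection rates coincide; lower is better. The snippet below is a minimal sketch of how EER is conventionally computed from detector scores; it is standard practice, not code from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """Equal error rate: the point where the false-acceptance
    rate (FPR) equals the false-rejection rate (FNR).

    labels: 1 for bonafide, 0 for spoof.
    scores: higher means "more likely bonafide".
    """
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    # The EER lies where the FPR and FNR curves cross;
    # take the closest point on the ROC grid.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0
```

Because EER is threshold-free, it allows fair comparison of detectors trained on different datasets, which is why it is the standard metric in anti-spoofing evaluations.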
Approach
The authors built the DFADD dataset by using five diffusion- and flow-matching-based TTS models to generate deepfake audio from the VCTK corpus. They then evaluated state-of-the-art anti-spoofing models on this dataset, demonstrating the need for more robust detectors.
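As a rough illustration of this collection pipeline, the sketch below generates one spoofed subset per TTS system from VCTK utterances. The loop structure follows the description above, but the `synthesize` function is a hypothetical placeholder: each of the five systems has its own inference API, and the stub here only keeps the example runnable.

```python
from pathlib import Path

import numpy as np
import soundfile as sf

# One spoofed subset per TTS system used in DFADD.
TTS_SUBSETS = ["grad-tts", "naturalspeech2", "styletts2", "matcha-tts", "pflow-tts"]

def synthesize(model_name: str, text: str, speaker_ref: Path) -> np.ndarray:
    """Hypothetical stand-in for a model-specific inference call.
    Returning one second of silence keeps the sketch runnable."""
    return np.zeros(22050, dtype=np.float32)

def build_subset(model_name: str, transcripts: list, out_dir: Path) -> None:
    """Generate one spoofed subset: one synthetic clip per VCTK utterance.
    `transcripts` holds (utterance_id, text, reference_wav_path) tuples."""
    for utt_id, text, ref_wav in transcripts:
        wav = synthesize(model_name, text, speaker_ref=ref_wav)
        out_path = out_dir / model_name / f"{utt_id}.wav"
        out_path.parent.mkdir(parents=True, exist_ok=True)
        sf.write(out_path, wav, 22050)
```

Pairing each spoofed clip with its bonafide VCTK source lets an anti-spoofing model be trained and scored per subset, which is how the paper compares diffusion-based and flow-matching-based generalization.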
Datasets
VCTK (for bonafide audio), LJSpeech (for text prompts), and ASVspoof (for comparison). The main contribution is the DFADD dataset itself.
Model(s)
AASIST-L (for anti-spoofing detection); Grad-TTS, NaturalSpeech 2, StyleTTS 2, Matcha-TTS, and PFlow-TTS (for generating deepfakes).
Author countries
Taiwan