Codecfake: An Initial Dataset for Detecting LLM-based Deepfake Audio

Authors: Yi Lu, Yuankun Xie, Ruibo Fu, Zhengqi Wen, Jianhua Tao, Zhiyong Wang, Xin Qi, Xuefei Liu, Yongwei Li, Yukun Liu, Xiaopeng Wang, Shuchen Shi

Published: 2024-06-12 11:47:23+00:00

AI Summary

This paper introduces Codecfake, a dataset for detecting LLM-based deepfake audio generated with neural codecs. Audio deepfake detection (ADD) models trained on Codecfake significantly outperform those trained on vocoder-based datasets, achieving a 41.406% reduction in average equal error rate on the Codecfake test set.

Abstract

With the proliferation of Large Language Model (LLM) based deepfake audio, there is an urgent need for effective detection methods. Previous deepfake audio generation methods typically involve a multi-step generation process, with the final step using a vocoder to predict the waveform from handcrafted features. However, LLM-based audio is directly generated from discrete neural codecs in an end-to-end generation process, skipping the final step of vocoder processing. This poses a significant challenge for current audio deepfake detection (ADD) models based on vocoder artifacts. To effectively detect LLM-based deepfake audio, we focus on the core of the generation process, the conversion from neural codec to waveform. We propose the Codecfake dataset, which is generated by seven representative neural codec methods. Experimental results show that codec-trained ADD models exhibit a 41.406% reduction in average equal error rate compared to vocoder-trained ADD models on the Codecfake test set.


Key findings
ADD models trained on Codecfake achieve significantly lower equal error rates (EER) than models trained on vocoder-based datasets, with a 41.406% reduction in average EER on the Codecfake test set. While in-distribution performance is strong, out-of-distribution generalization to unseen codecs still leaves room for improvement.
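EER is the operating point at which the false positive rate equals the false negative rate. A minimal sketch of how it can be computed from detection scores is shown below; the scoring convention (higher score means fake) and all variable names are illustrative assumptions, not taken from the authors' evaluation code.

```python
# Minimal sketch of equal error rate (EER) computation, the metric reported in the paper.
# Assumes higher scores indicate the fake class; names are illustrative only.
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(scores: np.ndarray, labels: np.ndarray) -> float:
    """labels: 1 for fake (codec/vocoder re-synthesis), 0 for bona fide speech."""
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    # EER is where the false positive rate and false negative rate cross.
    idx = np.nanargmin(np.abs(fpr - fnr))
    return float((fpr[idx] + fnr[idx]) / 2.0)

# Toy usage with synthetic scores (illustrative, not the paper's results):
rng = np.random.default_rng(0)
labels = np.concatenate([np.zeros(100), np.ones(100)]).astype(int)
scores = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
print(f"EER: {compute_eer(scores, labels):.3%}")
```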
Approach
The authors target the core of LLM-based audio generation, the conversion from neural codec to waveform, by creating the Codecfake dataset with seven representative neural codecs. They then train and evaluate audio deepfake detection (ADD) models on this dataset and compare their performance to models trained on vocoder-based datasets.
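The fake samples in such a setup come from codec re-synthesis: real speech is encoded to discrete tokens and decoded back to a waveform, producing audio with codec rather than vocoder artifacts. The sketch below illustrates this with EnCodec, one of the seven codecs listed under Datasets; the file paths and 6 kbps target bandwidth are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of codec re-synthesis with EnCodec (facebookresearch/encodec).
# Paths and the 6 kbps bandwidth are illustrative assumptions.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)  # assumed bitrate; other settings are possible

wav, sr = torchaudio.load("real_speech.wav")  # hypothetical input file
wav = convert_audio(wav, sr, model.sample_rate, model.channels)

with torch.no_grad():
    encoded_frames = model.encode(wav.unsqueeze(0))  # discrete codec tokens
    resynth = model.decode(encoded_frames)           # codec-to-waveform conversion

torchaudio.save("fake_codec_resynth.wav", resynth.squeeze(0).cpu(), model.sample_rate)
```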
Datasets
Codecfake (generated using seven neural codec methods: SoundStream, SpeechTokenizer, FunCodec, EnCodec, AudioDec, AcademiCodec, DAC), LibriTTS, VCTK, AISHELL3, ASVspoof2019LA
Model(s)
AASIST, LCNN
Author countries
China