FakeAVCeleb: A Novel Audio-Video Multimodal Deepfake Dataset

Authors: Hasam Khalid, Shahroz Tariq, Minha Kim, Simon S. Woo

Published: 2021-08-11 07:49:36+00:00

AI Summary

The paper introduces FakeAVCeleb, a novel audio-video multimodal deepfake dataset containing deepfake videos and corresponding lip-synced fake audios. This dataset addresses racial bias and the lack of multimodal deepfake data, facilitating the development of more robust deepfake detectors.

Abstract

While significant advancements have been made in the generation of deepfakes using deep learning technologies, their misuse is now a well-known issue. Deepfakes can cause severe security and privacy problems, as they can be used to impersonate a person's identity in a video by replacing his/her face with another person's face. Recently, a new problem has emerged: AI-based deep learning models can synthesize any person's voice from just a few seconds of audio. With the emerging threat of impersonation attacks using deepfake audios and videos, a new generation of deepfake detectors is needed that focuses on both video and audio collectively. To develop a competent deepfake detector, a large amount of high-quality data is typically required to capture real-world (or practical) scenarios. Existing deepfake datasets contain either deepfake videos or deepfake audios, and they are racially biased as well. As a result, it is critical to develop a high-quality video and audio deepfake dataset that can be used to detect both audio and video deepfakes simultaneously. To fill this gap, we propose a novel Audio-Video Deepfake dataset, FakeAVCeleb, which contains not only deepfake videos but also corresponding synthesized lip-synced fake audios. We generate this dataset using the most popular deepfake generation methods. We selected real YouTube videos of celebrities from four ethnic backgrounds to develop a more realistic multimodal dataset that addresses racial bias and further helps the development of multimodal deepfake detectors. We performed several experiments using state-of-the-art detection methods to evaluate our deepfake dataset and demonstrate the challenges and usefulness of our multimodal Audio-Video deepfake dataset.


Key findings
Experiments using state-of-the-art deepfake detection methods showed that deepfakes in FakeAVCeleb are harder to detect than those in existing datasets, highlighting the need for advanced multimodal detectors. The average AUC across the evaluated methods was around 65% on the video modality alone, indicating the realism of the generated deepfakes (a toy AUC computation is sketched below).
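
To make the evaluation metric concrete, here is a minimal sketch of AUC-based scoring using scikit-learn's roc_auc_score; the labels and scores are made-up placeholders for illustration, not values from the paper.

    # Minimal sketch of the AUC evaluation used to benchmark detectors.
    # Labels and scores below are illustrative placeholders, not paper results.
    from sklearn.metrics import roc_auc_score

    labels = [0, 0, 1, 1, 1, 0, 1, 0]                  # 1 = fake, 0 = real, one per video
    scores = [0.2, 0.4, 0.7, 0.3, 0.9, 0.1, 0.6, 0.5]  # detector's estimated fake probability

    print(f"AUC: {roc_auc_score(labels, scores):.3f}")  # prints 0.875 for this toy data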
Approach
FakeAVCeleb is created by applying popular deepfake generation methods (Faceswap, FSGAN, Wav2Lip, SV2TTS) to real YouTube videos of celebrities from diverse ethnic backgrounds. The resulting dataset covers all four combinations of real and fake audio and video, enabling comprehensive evaluation of multimodal deepfake detection; a sketch of this category structure follows.
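
Below is a minimal sketch of how the four real/fake audio-video combinations can be indexed for training or evaluation. The category names mirror the four combinations described above; the root path, folder layout, and .mp4 extension are assumptions made for illustration, not the dataset's documented structure.

    # Sketch: enumerating the four real/fake audio-video combinations.
    # Folder names, root path, and file extension are illustrative assumptions.
    from pathlib import Path

    # (video_is_fake, audio_is_fake) for each category.
    CATEGORIES = {
        "RealVideo-RealAudio": (False, False),  # pristine clips
        "RealVideo-FakeAudio": (False, True),   # e.g. SV2TTS-cloned voice
        "FakeVideo-RealAudio": (True, False),   # e.g. Faceswap/FSGAN face swap
        "FakeVideo-FakeAudio": (True, True),    # e.g. Wav2Lip synced to fake audio
    }

    def index_dataset(root):
        """Yield (clip_path, video_fake, audio_fake) for every clip under root."""
        root = Path(root)
        for category, (video_fake, audio_fake) in CATEGORIES.items():
            folder = root / category
            if not folder.is_dir():
                continue  # skip categories missing from this copy of the data
            for clip in sorted(folder.rglob("*.mp4")):
                yield clip, video_fake, audio_fake

    if __name__ == "__main__":
        for clip, vf, af in index_dataset("FakeAVCeleb"):
            print(clip, "video_fake:", vf, "audio_fake:", af)

Separating the two per-modality labels this way lets a single index drive both video-only and audio-only detectors as well as multimodal ones.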
Datasets
VoxCeleb2, YouTube videos
Model(s)
Capsule, HeadPose, VA-MLP/LogReg, Xception, Meso4, MesoInception4, Face X-ray, F3Net, LipForensics, Multimodal-1, Multimodal-2, CDCN
Author countries
South Korea