Evaluation of an Audio-Video Multimodal Deepfake Dataset using Unimodal and Multimodal Detectors

Authors: Hasam Khalid, Minha Kim, Shahroz Tariq, Simon S. Woo

Published: 2021-09-07 11:00:20+00:00

AI Summary

This research evaluates the Audio-Video Multimodal Deepfake Detection Dataset (FakeAVCeleb) using unimodal, ensemble, and multimodal deepfake detection methods. The key contribution is a comprehensive evaluation demonstrating that ensemble methods outperform unimodal and purely multimodal approaches for detecting audio-video deepfakes.

Abstract

Significant advancements in deepfake generation have caused security and privacy issues. Attackers can easily impersonate a person's identity in an image by replacing their face with the target person's face. Moreover, a new domain of cloning human voices using deep-learning technologies is also emerging: an attacker can now generate a realistic cloned voice using only a few seconds of audio of the target person. Given the emerging threat of the potential harm deepfakes can cause, researchers have proposed deepfake detection methods. However, these focus on detecting a single modality, i.e., either video or audio. To develop a good deepfake detector that can cope with recent advancements in deepfake generation, we need a detector that can handle deepfakes across multiple modalities, i.e., both video and audio. Building such a detector requires a dataset that contains video deepfakes together with their corresponding audio deepfakes. We found a recent deepfake dataset, the Audio-Video Multimodal Deepfake Detection Dataset (FakeAVCeleb), that contains not only deepfake videos but synthesized fake audio as well. We used this multimodal deepfake dataset to perform detailed baseline experiments with state-of-the-art unimodal, ensemble-based, and multimodal detection methods. We conclude through detailed experimentation that unimodal methods, which address only a single modality (video or audio), do not perform well compared to ensemble-based methods, while purely multimodal baselines perform worst.


Key findings
Unimodal methods showed poor performance compared to ensemble methods. Purely multimodal methods performed worst. Ensemble-based methods achieved the best results, but accuracy remained below 85%, highlighting the difficulty of multimodal deepfake detection.
Approach
The research uses the FakeAVCeleb dataset to evaluate three types of deepfake detection approaches: unimodal (using only audio or video), ensemble (combining audio and video models), and multimodal (a single model using both audio and video). Performance is measured using standard metrics like precision, recall, F1-score, and accuracy.
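The soft- and hard-voting ensembles referenced above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the 0.5 decision threshold, and the tie-breaking rule (ties count as fake) are assumptions for illustration. Each model is represented only by its per-clip fake-probabilities:

```python
def soft_voting(model_probs, threshold=0.5):
    """Soft voting: average each clip's fake-probability across models,
    then apply the decision threshold to the averaged score."""
    n_models = len(model_probs)
    n_clips = len(model_probs[0])
    preds = []
    for i in range(n_clips):
        avg = sum(probs[i] for probs in model_probs) / n_models
        preds.append(1 if avg >= threshold else 0)  # 1 = fake, 0 = real
    return preds

def hard_voting(model_probs, threshold=0.5):
    """Hard voting: each model casts a binary vote per clip; the majority
    decides. Ties are broken toward the fake class (an assumption here)."""
    n_models = len(model_probs)
    n_clips = len(model_probs[0])
    preds = []
    for i in range(n_clips):
        votes = sum(1 for probs in model_probs if probs[i] >= threshold)
        preds.append(1 if 2 * votes >= n_models else 0)
    return preds

# Hypothetical fake-probabilities from a video model and an audio model
# for three clips (illustrative values, not taken from the paper).
video_p = [0.9, 0.4, 0.6]
audio_p = [0.2, 0.7, 0.3]
print(soft_voting([video_p, audio_p]))
print(hard_voting([video_p, audio_p]))
```

With two models the schemes can disagree: soft voting rejects the third clip (average 0.45), while hard voting's split vote on every clip resolves to fake under the tie rule, which is one reason soft voting is often preferred for two-model ensembles.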
Datasets
FakeAVCeleb (Audio-Video Multimodal Deepfake Detection Dataset), VoxCeleb2 (source of the real videos)
Model(s)
Meso-4, MesoInception-4, Xception, EfficientNet-B0, VGG16 (unimodal); Ensemble methods using soft and hard voting of unimodal models; Multimodal-1, Multimodal-2, CDCN (modified for audio-video input)
Author countries
South Korea