Voice-Face Homogeneity Tells Deepfake

Authors: Harry Cheng, Yangyang Guo, Tianyi Wang, Qi Li, Xiaojun Chang, Liqiang Nie

Published: 2022-03-04 09:08:50+00:00

AI Summary

This paper proposes a novel deepfake detection method, Voice-Face matching Detection (VFD), that leverages the inherent correlation between voices and faces. VFD pre-trains on a generic audio-visual dataset and then fine-tunes on deepfake datasets, significantly improving generalization and performance compared to state-of-the-art methods.
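
To make the matching view concrete, here is a minimal PyTorch sketch of a dual-stream voice-face matcher. It is an illustration only: the feature dimensions (80 mel bins for audio, 512-d per-frame face features), encoder depths, mean pooling, and the cosine-similarity matching score are all assumptions, not the paper's exact architecture.

# Hypothetical sketch of voice-face matching for deepfake detection.
# Feature sizes, depths, and the cosine-similarity score are illustrative
# assumptions, not the paper's specification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoiceFaceMatcher(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Voice stream: transformer over audio frames (e.g. log-mel features).
        self.voice_proj = nn.Linear(80, embed_dim)
        self.voice_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True),
            num_layers=2)
        # Face stream: transformer over per-frame face features.
        self.face_proj = nn.Linear(512, embed_dim)
        self.face_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True),
            num_layers=2)

    def forward(self, voice, face):
        # voice: (B, T_audio, 80); face: (B, T_video, 512)
        v = self.voice_enc(self.voice_proj(voice)).mean(dim=1)  # (B, D)
        f = self.face_enc(self.face_proj(face)).mean(dim=1)     # (B, D)
        # Matching degree in [-1, 1]; a low score flags a likely
        # voice-face identity mismatch, i.e. a probable deepfake.
        return F.cosine_similarity(v, f, dim=-1)                # (B,)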

Abstract

Detecting forged videos is highly desirable due to the abuse of deepfakes. Existing detection approaches focus on exploring specific artifacts in deepfake videos and fit certain data well. However, forgery techniques keep improving on these artifacts, continually challenging the robustness of traditional deepfake detectors, and progress on the generalizability of these approaches has stalled. To address this issue, given the empirical observations that the identities behind voices and faces are often mismatched in deepfake videos, and that voices and faces are homogeneous to some extent, in this paper we propose to perform deepfake detection from an unexplored voice-face matching view. To this end, a voice-face matching method is devised to measure the matching degree of the two. Nevertheless, training on specific deepfake datasets makes the model overfit the traits of particular deepfake algorithms. We instead advocate a method that quickly adapts to unseen forgeries, with a pre-training then fine-tuning paradigm. Specifically, we first pre-train the model on a generic audio-visual dataset, followed by fine-tuning on downstream deepfake data. We conduct extensive experiments over three widely used deepfake datasets: DFDC, FakeAVCeleb, and DeepfakeTIMIT. Our method obtains significant performance gains compared to state-of-the-art competitors. It is also worth noting that our method achieves competitive results when fine-tuned on only limited deepfake data.


Key findings
VFD achieves state-of-the-art performance on multiple deepfake datasets and generalizes well even when fine-tuned on limited deepfake data. Heatmap visualizations show that VFD attends to global face features rather than the specific artifacts of individual forgery methods.
Approach
VFD uses a dual-stream network to extract voice and face features, employing a pre-training phase on a generic audio-visual dataset followed by fine-tuning on deepfake datasets. The model learns to measure the matching degree between voices and faces, using this as a proxy for deepfake detection.
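
A compact sketch of that pre-training then fine-tuning schedule follows, reusing the hypothetical VoiceFaceMatcher above. The optimizer, learning rates, epoch counts, and the way negative pairs are formed during pre-training are placeholders rather than the paper's settings.

# Hypothetical two-stage training schedule: pre-train matching on generic
# audio-visual pairs, then fine-tune on deepfake data. Loaders are assumed
# to yield (voice_feats, face_feats, label) with label 1 = matched/real
# pair and 0 = mismatched/fake pair.
import torch

def run_stage(model, loader, loss_fn, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for voice, face, label in loader:
            opt.zero_grad()
            loss = loss_fn(model(voice, face), label.float())
            loss.backward()
            opt.step()

# Stage 1: pre-train on VoxCeleb2-style data, where genuine clips give
# positives; negatives are assumed to come from cross-pairing voices and
# faces of different identities within a batch.
# run_stage(model, voxceleb2_loader, loss_fn, epochs=10, lr=1e-4)
# Stage 2: fine-tune on a deepfake dataset (e.g. DFDC), where fake clips
# supply mismatched pairs (label 0).
# run_stage(model, dfdc_loader, loss_fn, epochs=3, lr=1e-5)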
Datasets
VoxCeleb2 (pre-training); DFDC, FakeAVCeleb, and DeepfakeTIMIT (fine-tuning and evaluation)
Model(s)
A dual-stream network with transformer-based feature extractors and a contrastive loss function (RFC) for fine-tuning.
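
The summary names the fine-tuning objective RFC but does not give its formulation, so the sketch below substitutes a generic margin-based contrastive loss over matching scores as a stand-in; the margin value and the exact penalty terms are assumptions.

# Stand-in contrastive loss over matching scores; NOT the paper's RFC,
# whose exact formulation is not given in this summary.
import torch
import torch.nn.functional as F

def contrastive_matching_loss(scores, labels, margin: float = 0.5):
    # scores: (B,) cosine matching degrees from the dual-stream model.
    # labels: (B,) floats, 1 = matched (real) pair, 0 = mismatched (fake).
    pos = labels * (1.0 - scores)                    # pull real pairs toward 1
    neg = (1.0 - labels) * F.relu(scores - margin)   # push fakes below margin
    return (pos + neg).mean()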
Author countries
China, Singapore, Hong Kong, Australia