GLCF: A Global-Local Multimodal Coherence Analysis Framework for Talking Face Generation Detection

Authors: Xiaocan Chen, Qilin Yin, Jiarui Liu, Wei Lu, Xiangyang Luo, Jiantao Zhou

Published: 2024-12-18 09:34:59+00:00

AI Summary

This paper introduces MSTF, the first large-scale multi-scenario talking face dataset for deepfake detection, covering 22 forgery techniques and 11 generation scenarios. It also proposes GLCF, a deepfake detection framework that analyzes global-local multimodal coherence in talking face videos and outperforms state-of-the-art methods.

Abstract

Talking face generation (TFG) allows for producing lifelike talking videos of any character using only facial images and accompanying text. Abuse of this technology could pose significant risks to society, creating an urgent need for research into corresponding detection methods. However, research in this field has been hindered by the lack of public datasets. In this paper, we construct the first large-scale multi-scenario talking face dataset (MSTF), which contains 22 audio and video forgery techniques, filling the dataset gap in this field. The dataset covers 11 generation scenarios and more than 20 semantic scenarios, bringing it closer to the practical application scenarios of TFG. In addition, we propose a TFG detection framework that analyzes both global and local coherence in the multimodal content of TFG videos. To this end, a region-focused smoothness detection module (RSFDM) and a discrepancy capture-time frame aggregation module (DCTAM) are introduced to evaluate the global temporal coherence of TFG videos while aggregating multi-grained spatial information. Additionally, a visual-audio fusion module (V-AFM) is designed to evaluate audiovisual coherence within a localized temporal perspective. Comprehensive experiments demonstrate the soundness and difficulty of our dataset, while also indicating the superiority of our proposed method over state-of-the-art deepfake detection approaches.


Key findings
The MSTF dataset proved more challenging than existing datasets, highlighting the need for specialized detection methods. The proposed GLCF framework outperformed state-of-the-art methods on the MSTF, FakeAVCeleb, and DFDC datasets, demonstrating its effectiveness and generalizability. Ablation studies confirmed the contribution of each module within GLCF.
Approach
GLCF analyzes both global and local coherence in talking face videos. It uses a region-focused smoothness detection module (RSFDM) and a discrepancy capture-time frame aggregation module (DCTAM) to detect inconsistencies across frames, and a visual-audio fusion module (V-AFM) to detect inconsistencies between the audio and video modalities.
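
The paper does not provide pseudocode here, but the described composition (global temporal coherence via RSFDM and DCTAM, local audiovisual coherence via V-AFM) can be sketched structurally. Below is a minimal PyTorch sketch; all internals (layer shapes, mean-pooled frame differences as the discrepancy signal, cross-attention as the fusion mechanism, a two-class head) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GLCF(nn.Module):
    """Structural sketch of the GLCF framework (assumed internals)."""
    def __init__(self, v_feat=64, a_dim=768, d=128):
        super().__init__()
        # RSFDM (assumed realization): 3D convs over the frame stack to
        # score global temporal smoothness of facial regions.
        self.rsfdm = nn.Sequential(
            nn.Conv3d(3, v_feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # DCTAM (assumed realization): per-frame 2D CNN features,
        # differenced and aggregated over time to capture discrepancies.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, v_feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # V-AFM (assumed realization): cross-attention from visual
        # queries to audio keys/values over a local temporal window.
        self.v_proj = nn.Linear(v_feat, d)
        self.a_proj = nn.Linear(a_dim, d)
        self.vafm = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * v_feat + d, 2)  # real vs. fake logits

    def forward(self, clips, audio_feat):
        # clips: (B, 3, T, H, W); audio_feat: (B, Ta, a_dim), e.g. wav2vec
        b, c, t, h, w = clips.shape
        g = self.rsfdm(clips)                               # (B, v_feat)
        f = self.frame_cnn(clips.transpose(1, 2).reshape(b * t, c, h, w))
        f = f.view(b, t, -1)                                # (B, T, v_feat)
        disc = (f[:, 1:] - f[:, :-1]).abs().mean(dim=1)     # (B, v_feat)
        q = self.v_proj(f)
        a = self.a_proj(audio_feat)
        av, _ = self.vafm(q, a, a)                          # (B, T, d)
        return self.head(torch.cat([g, disc, av.mean(1)], dim=-1))

model = GLCF()
clips = torch.randn(2, 3, 16, 112, 112)   # dummy 16-frame face clips
audio = torch.randn(2, 49, 768)           # dummy wav2vec hidden states
print(model(clips, audio).shape)          # torch.Size([2, 2])
```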
Datasets
MSTF (created by the authors), FaceForensics++, FakeAVCeleb, DFDC
Model(s)
The paper does not specify a single backbone model; instead, it describes a framework (GLCF) composed of several modules (RSFDM, DCTAM, and V-AFM) and uses a pre-trained wav2vec model for audio features. 2D and 3D CNNs are employed within these modules.
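
Since the framework relies on pre-trained wav2vec audio features, a minimal extraction sketch follows. The specific checkpoint and the use of the Hugging Face transformers API are assumptions; the paper only states that pre-trained wav2vec is used.

```python
# Hedged sketch: extracting audio features with a pre-trained wav2vec 2.0
# model via Hugging Face transformers. The checkpoint below is an
# assumption; the paper does not name one.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

ckpt = "facebook/wav2vec2-base-960h"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(ckpt)
wav2vec = Wav2Vec2Model.from_pretrained(ckpt).eval()

waveform = torch.randn(16000)  # dummy 1 s of 16 kHz mono audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    audio_feat = wav2vec(**inputs).last_hidden_state  # (1, ~49, 768)
```

These (Ta, 768) hidden states match the `audio_feat` shape assumed in the GLCF sketch above.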
Author countries
China