Audio-Visual Deepfake Detection With Local Temporal Inconsistencies

Authors: Marcella Astrid, Enjie Ghorbel, Djamila Aouada

Published: 2025-01-14 14:15:10+00:00

AI Summary

This paper presents a novel audio-visual deepfake detection method focusing on fine-grained temporal inconsistencies between audio and video. It leverages a temporal distance map with an attention mechanism to identify these inconsistencies and uses novel pseudo-fake generation techniques to augment training data, improving detection accuracy.

Abstract

This paper proposes an audio-visual deepfake detection approach that aims to capture fine-grained temporal inconsistencies between audio and visual modalities. To achieve this, both architectural and data synthesis strategies are introduced. From an architectural perspective, a temporal distance map, coupled with an attention mechanism, is designed to capture these inconsistencies while minimizing the impact of irrelevant temporal subsequences. Moreover, we explore novel pseudo-fake generation techniques to synthesize local inconsistencies. Our approach is evaluated against state-of-the-art methods using the DFDC and FakeAVCeleb datasets, demonstrating its effectiveness in detecting audio-visual deepfakes.


Key findings
The proposed method outperforms state-of-the-art techniques in both in-dataset (DFDC) and cross-dataset (FakeAVCeleb) evaluations. Modeling local temporal inconsistencies proves effective, and the attention mechanism further improves performance. Among the pseudo-fake generation techniques compared, clip replacement yields the best results.
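
A minimal sketch of what clip-replacement pseudo-fake generation might look like, assuming aligned per-frame audio features stored as NumPy arrays; the segment length, donor clip, and function name are illustrative assumptions rather than the paper's exact procedure.

import random
import numpy as np

def clip_replacement(audio, donor_audio, seg_len=8):
    # audio, donor_audio: (T, F) audio feature sequences of equal length.
    # A short temporal segment of `audio` is overwritten with the donor's
    # segment, so the sample becomes locally inconsistent with its video
    # while the rest of the sequence stays untouched.
    assert audio.shape == donor_audio.shape and audio.shape[0] > seg_len
    start = random.randint(0, audio.shape[0] - seg_len)
    fake = audio.copy()
    fake[start:start + seg_len] = donor_audio[start:start + seg_len]
    return fake  # paired with the original video and labeled as fake

# Example usage with random features standing in for real ones:
real = np.random.randn(32, 80).astype(np.float32)
donor = np.random.randn(32, 80).astype(np.float32)
pseudo_fake = clip_replacement(real, donor)
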
Approach
The approach uses a deep learning model that computes a temporal distance map between audio and visual features extracted from input videos. An attention mechanism is incorporated to focus on relevant temporal subsequences, and pseudo-fake data with subtle local temporal inconsistencies are generated to augment training.
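
To make the architectural idea concrete, the following PyTorch sketch (not the authors' code) builds a pairwise L2 distance map between per-frame audio and visual embeddings, learns an attention mask over the map to down-weight irrelevant temporal subsequences, and classifies the weighted map; the layer sizes and feature dimension are assumptions.

import torch
import torch.nn as nn

class DistanceMapClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Attention branch: predicts a per-entry weight for the T_a x T_v map.
        self.attn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
        # Classification head over the attention-weighted distance map.
        self.head = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (B, T_a, D), visual_feats: (B, T_v, D)
        dist = torch.cdist(audio_feats, visual_feats)  # (B, T_a, T_v) L2 distances
        dist = dist.unsqueeze(1)                       # (B, 1, T_a, T_v)
        weights = self.attn(dist)                      # suppress irrelevant subsequences
        return self.head(dist * weights)               # (B, 1) real/fake logit

In this reading, a genuinely synchronized pair yields small distances along the temporal diagonal, while a locally manipulated clip produces a high-distance patch that the attention mask helps the classifier focus on.
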
Datasets
DFDC and FakeAVCeleb datasets
Model(s)
A ResNet-based 3D convolutional encoder for the visual stream, a 1D convolutional encoder for the audio stream, and a classifier that processes the attention-weighted temporal distance map.
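
Since the model description above only names the components, the skeleton below is a guess at how the two encoders could be wired, assuming torchvision's r3d_18 as the ResNet-based 3D backbone and 80-bin log-mel spectrograms as audio input; neither choice is stated in this summary.

import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class Encoders(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        backbone = r3d_18(weights=None)
        # Drop the average pool and fc layers to keep the temporal axis.
        self.visual = nn.Sequential(*list(backbone.children())[:-2])
        self.v_proj = nn.Conv3d(512, feat_dim, kernel_size=1)
        # 1D convolutions over the time axis of the spectrogram.
        self.audio = nn.Sequential(
            nn.Conv1d(80, 128, 3, padding=1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 3, padding=1), nn.ReLU())

    def forward(self, frames, mel):
        # frames: (B, 3, T, H, W) video clip, mel: (B, 80, T) log-mel audio.
        v = self.v_proj(self.visual(frames))        # (B, D, T', H', W')
        v = v.mean(dim=(-1, -2)).transpose(1, 2)    # (B, T', D) visual features
        a = self.audio(mel).transpose(1, 2)         # (B, T, D) audio features
        # T and T' may differ; the distance map can still be built as T x T',
        # or both streams can be pooled to a common temporal length first.
        return a, v

The resulting per-frame features would then feed a distance-map classifier of the kind sketched under Approach.
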
Author countries
Luxembourg, Tunisia