Beyond Flicker: Detecting Kinematic Inconsistencies for Generalizable Deepfake Video Detection
Authors: Alejandro Cobo, Roberto Valle, José Miguel Buenaposada, Luis Baumela
Published: 2025-12-03 19:00:07+00:00
AI Summary
This paper introduces the Kinematic Model for facial motion Inconsistencies (KiMoI), a novel synthetic video generation method that creates training data with subtle kinematic inconsistencies for generalizable deepfake video detection. It leverages a Landmark Perturbation Network (LPN) to decompose facial landmark configurations into motion bases, which are then manipulated to break natural motion correlations. These sophisticated biomechanical flaws are introduced into pristine videos via face morphing, and a network trained on this data achieves state-of-the-art generalization across popular benchmarks.
Abstract
Generalizing deepfake detection to unseen manipulations remains a key challenge. A recent line of work tackles this issue by training a network on pristine face images that have been manipulated with hand-crafted artifacts, so that it learns more generalizable cues. While effective for static images, extending this idea to the video domain remains an open problem. Existing methods model temporal artifacts as frame-to-frame instabilities, overlooking a key vulnerability: the violation of natural motion dependencies between different facial regions. In this paper, we propose a synthetic video generation method that creates training data with subtle kinematic inconsistencies. We train an autoencoder to decompose facial landmark configurations into motion bases. By manipulating these bases, we selectively break the natural correlations in facial movements and introduce these artifacts into pristine videos via face morphing. A network trained on our data learns to spot these sophisticated biomechanical flaws, achieving state-of-the-art generalization results on several popular benchmarks.
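The core idea of the pipeline (decompose landmark motion into bases, perturb a basis to break cross-region correlations, reconstruct) can be sketched in miniature. This is a hypothetical illustration only: it substitutes PCA for the paper's learned autoencoder, uses toy synthetic landmark trajectories rather than real face data, and omits the face-morphing step entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: T frames of L 2-D facial landmarks, flattened to (T, 2L).
# Two correlated motion components stand in for natural facial kinematics.
T, L = 60, 68
t = np.linspace(0.0, 2.0 * np.pi, T)
mean_face = rng.normal(size=2 * L)
b1, b2 = rng.normal(size=2 * L), rng.normal(size=2 * L)
X = mean_face + np.outer(np.sin(t), b1) + np.outer(0.5 * np.sin(t), b2)
X += 0.01 * rng.normal(size=X.shape)  # small observation noise

# "Motion bases" via SVD/PCA (a stand-in for the learned decomposition).
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
coeffs = U * S  # per-frame coefficients over the bases, shape (T, k)

# Break kinematic consistency: temporally shuffle the coefficients of one
# basis so its motion no longer co-varies with the remaining bases.
k = 0
perturbed = coeffs.copy()
perturbed[:, k] = rng.permutation(perturbed[:, k])

# Reconstruct a landmark sequence that is plausible per-frame but carries
# a subtle biomechanical flaw along basis k.
X_fake = perturbed @ Vt + mu
print(X_fake.shape)
```

In the actual method these perturbed landmark configurations would drive face morphing of a pristine video, yielding training clips whose only anomaly is the broken motion correlation.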