Exposing Lip-syncing Deepfakes from Mouth Inconsistencies
Authors: Soumyya Kanti Datta, Shan Jia, Siwei Lyu
Published: 2024-01-18 16:35:37+00:00
AI Summary
This paper introduces LIPINC, a novel approach for lip-syncing deepfake detection that identifies temporal inconsistencies in the mouth region of videos. By exploiting inconsistencies both between adjacent frames and across globally similar mouth poses, LIPINC outperforms state-of-the-art methods on benchmark deepfake datasets.
Abstract
A lip-syncing deepfake is a digitally manipulated video in which a person's lip movements are convincingly generated by AI models to match altered or entirely new audio. Lip-syncing deepfakes are a particularly dangerous type of deepfake because the artifacts are confined to the lip region and are therefore harder to discern. In this paper, we describe a novel approach, LIP-syncing detection based on mouth INConsistency (LIPINC), for lip-syncing deepfake detection that identifies temporal inconsistencies in the mouth region. These inconsistencies appear both between adjacent frames and across the video as a whole. Our model successfully captures these irregularities and outperforms state-of-the-art methods on several benchmark deepfake datasets. Code is available at https://github.com/skrantidatta/LIPINC
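The abstract's core intuition (real mouths vary smoothly frame to frame, and similar mouth poses elsewhere in the clip should look alike) can be illustrated with a toy inconsistency score. This is only a hand-crafted sketch of the idea, not the authors' LIPINC model, which is a learned architecture; the function name and the use of plain pixel MSE are assumptions for illustration.

```python
import numpy as np

def mouth_inconsistency(mouth_crops):
    """Toy per-frame inconsistency scores for a clip of mouth crops.

    mouth_crops: float array of shape (T, H, W), grayscale mouth regions.
    Returns (local, global_) arrays of shape (T,):
      - local: MSE between each frame and the next (adjacent-frame check);
      - global_: MSE residual against the most similar *other* frame in
        the clip (globally similar mouth poses should match closely).
    Both are illustrative stand-ins for the learned features in LIPINC.
    """
    T = mouth_crops.shape[0]
    flat = mouth_crops.reshape(T, -1)

    # Adjacent-frame inconsistency; repeat the last score to keep length T.
    local = np.empty(T)
    local[:-1] = np.mean((flat[1:] - flat[:-1]) ** 2, axis=1)
    local[-1] = local[-2]

    # Pairwise MSE between all frames; exclude self-matches, then take
    # each frame's distance to its best match anywhere in the clip.
    d = np.mean((flat[:, None, :] - flat[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(d, np.inf)
    global_ = d.min(axis=1)
    return local, global_
```

A tampered frame inflates both scores: it differs from its neighbors and has no close match elsewhere in the clip, which is the kind of signal the paper's detector is trained to pick up.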