AV-Lip-Sync+: Leveraging AV-HuBERT to Exploit Multimodal Inconsistency for Video Deepfake Detection

Authors: Sahibzada Adil Shahzad, Ammarah Hashmi, Yan-Tsung Peng, Yu Tsao, Hsin-Min Wang

Published: 2023-11-05 18:35:03+00:00

AI Summary

This research introduces AV-Lip-Sync+, a novel multimodal deepfake detection method leveraging the AV-HuBERT model for audio-visual feature extraction and a multi-scale temporal convolutional network to capture temporal correlations. It achieves state-of-the-art performance by exploiting inconsistencies between audio and visual modalities, particularly in lip synchronization.

Abstract

Multimodal manipulations (also known as audio-visual deepfakes) make it difficult for unimodal deepfake detectors to detect forgeries in multimedia content. To prevent the spread of false propaganda and fake news, timely detection is crucial. Manipulation of either modality (i.e., visual or audio) can only be discovered through multimodal models that exploit both sources of information simultaneously. Previous methods mainly adopt unimodal video forensics and use supervised pre-training for forgery detection. This study proposes a new method based on a multimodal self-supervised-learning (SSL) feature extractor to exploit inconsistency between the audio and visual modalities for multimodal video forgery detection. We use the transformer-based SSL pre-trained Audio-Visual HuBERT (AV-HuBERT) model as a visual and acoustic feature extractor and a multi-scale temporal convolutional neural network to capture the temporal correlation between the audio and visual modalities. Since AV-HuBERT only extracts visual features from the lip region, we also adopt another transformer-based video model to exploit facial features and capture spatial and temporal artifacts introduced during the deepfake generation process. Experimental results show that our model outperforms all existing models and achieves new state-of-the-art performance on the FakeAVCeleb and DeepfakeTIMIT datasets.
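
The sketch below illustrates what the multi-scale temporal convolution mentioned in the abstract could look like in PyTorch, assuming the common multi-branch design in which parallel 1D convolutions with different kernel sizes are applied over the feature sequence and their outputs are concatenated. The kernel sizes and feature dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class MultiScaleTemporalBlock(nn.Module):
    """Parallel temporal convolutions with different kernel sizes (assumed design)."""

    def __init__(self, in_dim=1024, out_dim=768, kernel_sizes=(3, 5, 7)):
        super().__init__()
        branch_dim = out_dim // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(in_dim, branch_dim, k, padding=k // 2),
                nn.BatchNorm1d(branch_dim),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )

    def forward(self, x):
        # x: (batch, time, in_dim) feature sequence from the SSL front-end
        x = x.transpose(1, 2)                    # -> (batch, in_dim, time)
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return y.transpose(1, 2)                 # -> (batch, time, out_dim)


# Toy usage: 2 clips x 50 frames x 1024-dim features.
feats = torch.randn(2, 50, 1024)
print(MultiScaleTemporalBlock()(feats).shape)    # torch.Size([2, 50, 768])
```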


Key findings
AV-Lip-Sync+ outperforms existing models on the FakeAVCeleb and DeepfakeTIMIT datasets, achieving new state-of-the-art accuracy on both. Integrating full-face features (via ViViT) further improves performance, especially for deepfakes manipulated outside the lip region.
Approach
AV-Lip-Sync+ uses AV-HuBERT to extract audio and lip-region visual features, and a separate transformer-based video model (ViViT) to extract full-face features. A multi-scale temporal convolutional network processes these features to capture temporal correlations, and the resulting audio-visual inconsistencies are used for deepfake detection (see the sketch below).
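
The following is one plausible way to wire the described components together in PyTorch: pre-extracted per-frame AV-HuBERT audio and lip features and ViViT face features are concatenated, passed through a temporal model (a plain 1D-convolutional stand-in here; the paper's temporal model is the multi-scale block sketched after the abstract), pooled over time, and classified as real or fake. The feature dimensions, fusion by concatenation, and mean pooling are assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn


class FusionClassifierSketch(nn.Module):
    """Fuses per-frame audio, lip, and face features and predicts real/fake."""

    def __init__(self, audio_dim=1024, lip_dim=1024, face_dim=768, hidden=512):
        super().__init__()
        fused_dim = audio_dim + lip_dim + face_dim
        # Stand-in temporal model; the paper uses a multi-scale TCN instead.
        self.temporal = nn.Sequential(
            nn.Conv1d(fused_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(hidden, 2)       # real vs. fake logits

    def forward(self, audio_feats, lip_feats, face_feats):
        # Each input: (batch, time, dim), assumed aligned to the video frame rate.
        fused = torch.cat([audio_feats, lip_feats, face_feats], dim=-1)
        h = self.temporal(fused.transpose(1, 2))     # (batch, hidden, time)
        return self.classifier(h.mean(dim=-1))       # pool over time -> (batch, 2)


# Toy usage with random tensors standing in for AV-HuBERT / ViViT outputs.
model = FusionClassifierSketch()
logits = model(torch.randn(2, 50, 1024), torch.randn(2, 50, 1024),
               torch.randn(2, 50, 768))
print(logits.shape)                                  # torch.Size([2, 2])
```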
Datasets
FakeAVCeleb and DeepfakeTIMIT datasets
Model(s)
AV-HuBERT, ViViT, Multi-scale Temporal Convolutional Network (MS-TCN)
Author countries
Taiwan