Glitch in the Matrix: A Large Scale Benchmark for Content Driven Audio-Visual Forgery Detection and Localization

Authors: Zhixi Cai, Shreya Ghosh, Abhinav Dhall, Tom Gedeon, Kalin Stefanov, Munawar Hayat

Published: 2023-05-03 08:48:45+00:00

AI Summary

This paper introduces LAV-DF, a large-scale benchmark dataset for content-driven audio-visual forgery detection and localization, addressing the limitation of existing datasets that contain mostly visual-only manipulations applied to entire videos. The accompanying baseline, BA-TFD, is a 3D CNN-based multimodal method; its improved version, BA-TFD+, replaces the backbone with a Multiscale Vision Transformer and trains with contrastive, frame classification, boundary matching, and multimodal boundary matching losses, significantly improving temporal forgery localization and deepfake detection.

Abstract

Most deepfake detection methods focus on detecting spatial and/or spatio-temporal changes in facial attributes and are centered around the binary classification task of detecting whether a video is real or fake. This is because available benchmark datasets contain mostly visual-only modifications present in the entirety of the video. However, a sophisticated deepfake may include small segments of audio or audio-visual manipulations that can completely change the meaning of the video content. To address this gap, we propose and benchmark a new dataset, Localized Audio Visual DeepFake (LAV-DF), consisting of strategic content-driven audio, visual and audio-visual manipulations. The proposed baseline method, Boundary Aware Temporal Forgery Detection (BA-TFD), is a 3D Convolutional Neural Network-based architecture which effectively captures multimodal manipulations. We further improve (i.e. BA-TFD+) the baseline method by replacing the backbone with a Multiscale Vision Transformer and guide the training process with contrastive, frame classification, boundary matching and multimodal boundary matching loss functions. The quantitative analysis demonstrates the superiority of BA-TFD+ on temporal forgery localization and deepfake detection tasks using several benchmark datasets including our newly proposed dataset. The dataset, models and code are available at https://github.com/ControlNet/LAV-DF.
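The contrastive objective mentioned in the abstract pulls audio and visual embeddings together for genuine frames and pushes them apart for manipulated ones. Below is a minimal PyTorch sketch of such a margin-based term; the function name, margin value, and tensor layout are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_av_loss(v_feat, a_feat, is_real, margin=0.99):
    """Margin-based audio-visual contrastive loss (illustrative sketch).

    v_feat, a_feat: (N, D) per-frame visual and audio embeddings.
    is_real: (N,) float tensor, 1.0 for genuine frames, 0.0 for manipulated.
    """
    d = F.pairwise_distance(v_feat, a_feat)  # Euclidean distance per frame
    # Pull matched (genuine) pairs together; push manipulated pairs at
    # least `margin` apart.
    loss = is_real * d.pow(2) + (1.0 - is_real) * torch.clamp(margin - d, min=0).pow(2)
    return loss.mean()
```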


Key findings
BA-TFD+ outperforms state-of-the-art methods on temporal forgery localization tasks across multiple datasets, including the newly proposed LAV-DF dataset. The multimodal approach significantly improves performance compared to visual-only methods. The method also achieves high accuracy in deepfake detection on the DFDC dataset.
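Temporal forgery localization is typically scored with average precision and average recall at temporal IoU thresholds. A small helper illustrating the underlying IoU computation between a predicted and a ground-truth fake segment (plain Python; segments as (start, end) pairs in seconds):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) segments."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A 1 s prediction covering half of a 2 s fake segment scores IoU 0.5.
print(temporal_iou((1.0, 2.0), (1.0, 3.0)))  # 0.5
```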
Approach
The authors propose BA-TFD+, a multimodal method that extracts per-frame features from both the audio and visual streams using a Multiscale Vision Transformer backbone (a 3D CNN in the original BA-TFD). Training is guided by contrastive, frame classification, boundary matching, and multimodal boundary matching loss functions to localize and detect manipulated segments; a sketch of how these terms might be combined appears below.
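As a rough illustration, the four loss terms could be combined as a weighted sum, reusing the contrastive_av_loss sketch above. All output field names, the BCE/MSE choices, and the unit weights are assumptions for illustration; the actual boundary matching objective in the paper may differ.

```python
import torch.nn.functional as F

def total_loss(out, is_real, boundary_map, w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four objectives named above (illustrative sketch).

    out: dict with per-frame embeddings ('v_feat', 'a_feat'), per-frame
         probability of being genuine ('frame_prob'), and boundary matching
         confidence maps from the visual branch ('bm_visual') and the fused
         audio-visual branch ('bm_fused'); all names are assumptions.
    is_real: (N,) float tensor, 1.0 for genuine frames.
    boundary_map: ground-truth boundary matching confidence map.
    """
    l_con = contrastive_av_loss(out["v_feat"], out["a_feat"], is_real)
    l_cls = F.binary_cross_entropy(out["frame_prob"], is_real)  # frame classification
    l_bm = F.mse_loss(out["bm_visual"], boundary_map)           # boundary matching
    l_mbm = F.mse_loss(out["bm_fused"], boundary_map)           # multimodal boundary matching
    return w[0] * l_con + w[1] * l_cls + w[2] * l_bm + w[3] * l_mbm
```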
Datasets
Localized Audio Visual DeepFake (LAV-DF), VoxCeleb2, ForgeryNet, DFDC, DF-TIMIT, UADFV, FaceForensics++, Google DFD, DeeperForensics, Celeb-DF, WildDeepfake, FFIW10K, KoDF, DF-Platter, FakeAVCeleb
Model(s)
Boundary Aware Temporal Forgery Detection (BA-TFD, 3D CNN backbone) and its improved version BA-TFD+ (Multiscale Vision Transformer/MViTv2 backbone); compared methods include BMN, AGT, MDS, AVFusion, BSN++, TadTR, ActionFormer, and TriDet
Author countries
Australia, India