Tex-ViT: A Generalizable, Robust, Texture-based dual-branch cross-attention deepfake detector

Authors: Deepak Dagar, Dinesh Kumar Vishwakarma

Published: 2024-08-29 20:26:27+00:00

AI Summary

Tex-ViT, a novel deepfake detector, combines ResNet features with a parallel texture module, feeding both into a dual-branch cross-attention vision transformer. This approach improves generalization and robustness against post-processing, achieving over 98% accuracy in cross-domain scenarios.

Abstract

Deepfakes, which employ GANs to produce highly realistic facial modifications, are widely regarded as the prevailing method of facial manipulation. Traditional CNNs have been able to identify bogus media, but they struggle to perform well across different datasets and are vulnerable to adversarial attacks due to their lack of robustness. Vision transformers have demonstrated potential in image classification problems, but they require sufficient training data. Motivated by these limitations, this publication introduces Tex-ViT (Texture-Vision Transformer), which enhances CNN features by combining ResNet with a vision transformer. The model combines traditional ResNet features with a texture module that operates in parallel on sections of ResNet before each down-sampling operation. The texture module then serves as an input to the dual branch of the cross-attention vision transformer. It specifically focuses on improving the global texture module, which extracts feature-map correlations. Empirical analysis reveals that fake images exhibit smooth textures that do not remain consistent over long distances in manipulations. Experiments were performed on different categories of FF++, such as DF, f2f, FS, and NT, together with other types of GAN datasets in cross-domain scenarios. Further experiments on the FF++, DFDCPreview, and Celeb-DF datasets covered several post-processing situations, such as blurring, compression, and noise. The model surpassed the most advanced models in terms of generalization, achieving 98% accuracy in cross-domain scenarios, which demonstrates its ability to learn the shared distinguishing textural characteristics of manipulated samples. These experiments provide evidence that the proposed model can be applied to various situations and is resistant to many post-processing procedures.
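
The abstract's reference to extracting feature-map correlations suggests a Gram-matrix style texture descriptor. The following minimal sketch (PyTorch-style, with illustrative names not taken from the paper) shows how such channel-wise correlations can be computed from an intermediate CNN feature map:

    import torch

    def gram_texture(feat: torch.Tensor) -> torch.Tensor:
        # feat: intermediate feature map of shape (B, C, H, W)
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)              # flatten spatial dimensions
        gram = torch.bmm(f, f.transpose(1, 2))  # (B, C, C) channel correlations
        return gram / (c * h * w)               # normalize by feature-map size

Hedged reading: overly smooth, uniform textures in generated faces tend to yield correlation matrices with weaker long-range structure, which aligns with the abstract's observation that fake textures do not remain consistent over long distances.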


Key findings
Tex-ViT significantly outperforms state-of-the-art models in cross-domain generalization, achieving over 98% accuracy. It also demonstrates robustness against post-processing techniques like blurring, compression, and noise addition. The ablation study confirms the effectiveness of both the texture module and the cross-attention transformer.
Approach
Tex-ViT enhances CNN features by incorporating a parallel texture module (using Gram matrices) before each ResNet downsampling. These features, along with the ResNet features, are then fed into a dual-branch cross-attention vision transformer to leverage both local and global texture information for deepfake detection.
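As a rough illustration of the dual-branch cross-attention described above, the sketch below lets tokens from one branch (e.g. ResNet features) attend to tokens from the other (texture features); the class name, dimensions, and single-block structure are assumptions for illustration, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class CrossAttentionBlock(nn.Module):
        # one branch queries the other; a full dual-branch model applies this in both directions
        def __init__(self, dim: int = 256, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm_q = nn.LayerNorm(dim)
            self.norm_kv = nn.LayerNorm(dim)

        def forward(self, q_tokens, kv_tokens):
            q = self.norm_q(q_tokens)       # queries from one branch
            kv = self.norm_kv(kv_tokens)    # keys/values from the other branch
            out, _ = self.attn(q, kv, kv)
            return q_tokens + out           # residual connection

    # usage sketch: fuse ResNet tokens (B, N, dim) with texture tokens (B, M, dim)
    # fused = CrossAttentionBlock()(resnet_tokens, texture_tokens)

Cross-attending between the two token streams is what lets the transformer relate local CNN features to global texture correlations instead of processing each branch in isolation.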
Datasets
FaceForensics++, DFDCPreview, Celeb-DF, CelebA-HQ, CelebA, FFHQ, GAN-generated images (ProGAN, StyleGAN, StarGAN, STGAN)
Model(s)
ResNet-18, Vision Transformer with cross-attention mechanism, texture module using Gram matrices
Author countries
India