Generalized Deepfake Attribution

Authors: Sowdagar Mahammad Shahid, Sudev Kumar Padhi, Umesh Kashyap, Sk. Subidh Ali

Published: 2024-06-26 12:04:09+00:00

AI Summary

This paper introduces a Generalized Deepfake Attribution Network (GDA-Net) that attributes fake images to their underlying GAN architectures, even if the models were retrained with different seeds or fine-tuned. It addresses the limitation of existing methods which struggle with variations in GAN model instances.

Abstract

The landscape of fake media creation changed with the introduction of Generative Adversarial Networks (GANs). Fake media creation has been on the rise with rapid advances in generation technology, leading to new challenges in detecting fake media. A fundamental characteristic of GANs is their sensitivity to parameter initialization, known as seeds. Each distinct seed used during training produces a unique model instance, resulting in divergent image outputs despite the same architecture. This means that a single GAN architecture can yield countless GAN model variants depending on the seed used. Existing methods for attributing deepfakes work well only if they have seen the specific GAN model during training; if a GAN architecture is retrained with a different seed, these methods struggle to attribute the fakes. This seed dependency makes it difficult to attribute deepfakes with existing methods. We propose a Generalized Deepfake Attribution Network (GDA-Net) to attribute fake images to their respective GAN architectures, even if they are generated by a version of the architecture retrained with a different seed (cross-seed) or by a fine-tuned version of an existing GAN model. Extensive experiments on cross-seed and fine-tuned data from GAN models show that our method is highly effective compared to existing methods. We have provided the source code to validate our results.


Key findings
GDA-Net significantly outperforms existing methods in attributing deepfakes across different GAN models, even with cross-seed and fine-tuned data. The use of supervised contrastive learning and a denoising autoencoder enhances the model's robustness and generalization capability.
Approach
GDA-Net uses a Feature Extraction Network (FEN) trained with supervised contrastive learning to extract architecture-specific features from images, regardless of seed or fine-tuning. A denoising autoencoder preprocesses images to reduce content dependency before feature extraction, improving robustness.
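The supervised contrastive objective used to train the FEN can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic NumPy version of the supervised contrastive loss (Khosla et al., 2020), where labels would be GAN architecture identities, so embeddings of images from the same architecture are pulled together regardless of seed:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    features: (N, D) array of embeddings (e.g., FEN outputs).
    labels: (N,) int array; here, the source GAN architecture of each image.
    Same-label pairs act as positives, all other pairs as negatives.
    """
    # L2-normalize embeddings so similarities are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature

    n = len(labels)
    logits_mask = 1.0 - np.eye(n)          # exclude self-comparisons
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))

    # positives: same architecture label, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1) / np.maximum(
        pos_mask.sum(axis=1), 1.0
    )
    return -mean_log_prob_pos.mean()
```

As a sanity check, a batch whose same-architecture embeddings are already clustered yields a lower loss than one where classes are intermixed, which is the behavior the FEN training relies on.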
Datasets
Real images from the CelebA dataset; fake images generated by DCGAN, WGAN, ProGAN, and SNGAN architectures trained on CelebA.
Model(s)
Feature Extraction Network (FEN), Denoising Autoencoder, Multi-class classification network.
Author countries
India