Unraveling Hidden Representations: A Multi-Modal Layer Analysis for Better Synthetic Content Forensics
Authors: Tom Or, Omri Azencot
Published: 2025-08-01 17:07:00+00:00
AI Summary
This paper proposes using latent representations from intermediate layers of large pre-trained multi-modal models for deepfake detection. It demonstrates that linear classifiers trained on these features achieve state-of-the-art results across audio and image modalities, while being computationally efficient and effective in few-shot settings.
Abstract
Generative models achieve remarkable results in multiple data domains, including images and text, among others. Unfortunately, malicious users exploit synthetic media to spread misinformation and disseminate deepfakes. Consequently, the need for robust and stable fake detectors is pressing, especially as new generative models appear every day. While most existing work trains classifiers that discriminate between real and fake information, such tools typically generalize only within the same family of generators and data modalities, yielding poor results on other generative classes and data domains. Towards a universal classifier, we propose the use of large pre-trained multi-modal models for the detection of generative content. Effectively, we show that the latent code of these models naturally captures information that discriminates real from fake. Building on this observation, we demonstrate that linear classifiers trained on these features can achieve state-of-the-art results across various modalities, while remaining computationally efficient, fast to train, and effective even in few-shot settings. Our work primarily focuses on fake detection in audio and images, achieving performance that surpasses or matches that of strong baseline methods.
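The core recipe the abstract describes (freeze a pre-trained encoder, take features from an intermediate layer, and fit a linear classifier on them) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian "features" with a small mean shift are a stand-in for activations that would, in the actual setting, come from an intermediate layer of a frozen multi-modal encoder, and the dimensions, sample counts, and learning rate are arbitrary choices for the demo.

```python
import numpy as np

# Synthetic stand-in for intermediate-layer features of a frozen encoder:
# "real" and "fake" samples are drawn from two slightly shifted Gaussians.
rng = np.random.default_rng(0)
d, n = 64, 500                          # feature dim and samples per class (assumed)
real = rng.normal(0.0, 1.0, (n, d))
fake = rng.normal(0.5, 1.0, (n, d))     # the shift stands in for generator artifacts
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Shuffle and split into train / held-out test sets.
idx = rng.permutation(2 * n)
X, y = X[idx], y[idx]
X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]

# The "linear probe": plain logistic regression fit by gradient descent.
w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))   # sigmoid predictions
    w -= lr * (X_tr.T @ (p - y_tr)) / len(y_tr)  # gradient step on weights
    b -= lr * np.mean(p - y_tr)                  # gradient step on bias

# Accuracy of the linear decision boundary on held-out features.
acc = np.mean(((X_te @ w + b) > 0).astype(float) == y_te)
print(f"held-out accuracy: {acc:.2f}")
```

When the frozen features already separate real from fake, as the paper argues they do, such a probe is cheap to train and needs few labeled examples, which is what makes the few-shot claim plausible.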