DistilDIRE: A Small, Fast, Cheap and Lightweight Diffusion Synthesized Deepfake Detection
Authors: Yewon Lim, Changyeon Lee, Aerin Kim, Oren Etzioni
Published: 2024-06-02 20:22:38+00:00
AI Summary
DistilDIRE is a fast, lightweight deepfake detection model that uses knowledge distillation from a pre-trained diffusion model to approximate the DIRE (Diffusion Reconstruction Error) approach at a fraction of its computational cost. It achieves 3.2 times faster inference than DIRE while maintaining robust detection performance.
Abstract
A dramatic influx of diffusion-generated images has marked recent years, posing unique challenges to current detection technologies. While identifying these images is nominally a binary classification task, the computational load is significant when employing the reconstruct-then-compare technique. This approach, known as DIRE (Diffusion Reconstruction Error), detects not only diffusion-generated images but also those produced by GANs, highlighting the technique's broad applicability. To address these computational challenges and improve efficiency, we propose distilling the knowledge embedded in diffusion models to develop rapid deepfake detection models. Our approach, aimed at creating a small, fast, cheap, and lightweight detector of diffusion-synthesized deepfakes, maintains robust performance while significantly reducing operational demands. Our experimental results indicate an inference speed 3.2 times faster than the existing DIRE framework at comparable detection performance. This advance not only enhances the practicality of deploying these systems in real-world settings but also paves the way for future research that seeks to leverage diffusion-model knowledge.
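The reconstruct-then-compare idea behind DIRE can be illustrated with a minimal sketch. In the real method, an image is inverted and reconstructed with a pre-trained diffusion model (e.g. via DDIM inversion), and the pixel-wise reconstruction error map is fed to a trained classifier; diffusion-generated images tend to reconstruct with lower error than real photographs. The `reconstruct` function and the mean-error threshold below are hypothetical stand-ins for the diffusion model and the learned classifier, used only to show the data flow:

```python
import numpy as np

def reconstruct(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for DDIM inversion + reconstruction with a
    # pre-trained diffusion model; here simulated by per-row smoothing.
    kernel = np.ones(3) / 3.0
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image
    )

def dire_map(image: np.ndarray) -> np.ndarray:
    """Pixel-wise Diffusion Reconstruction Error: |x - R(x)|."""
    return np.abs(image - reconstruct(image))

def is_diffusion_generated(image: np.ndarray, threshold: float = 0.05) -> bool:
    # The real DIRE pipeline feeds the error map to a trained CNN classifier;
    # a simple mean-error threshold stands in for it in this sketch.
    return float(dire_map(image).mean()) < threshold
```

Smooth, model-like images yield low reconstruction error (flagged as diffusion-generated), while high-frequency natural images yield high error. DistilDIRE's contribution is to distill this expensive reconstruction step into a single lightweight forward pass.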