Spectrum Translation for Refinement of Image Generation (STIG) Based on Contrastive Learning and Spectral Filter Profile

Authors: Seokjun Lee, Seung-Won Jung, Hyunseok Seo

Published: 2024-03-08 06:39:24+00:00

AI Summary

This paper proposes STIG, a framework for refining generated images by mitigating frequency-domain discrepancies. STIG uses contrastive learning and spectrum translation to improve the realism of images generated by GANs and diffusion models, making them harder for frequency-based deepfake detectors to flag.

Abstract

Image generation and synthesis have progressed remarkably with generative models. Despite photo-realistic results, intrinsic discrepancies are still observed in the frequency domain, appearing not only in generative adversarial networks (GANs) but also in diffusion models. In this study, we propose a framework that effectively mitigates the frequency-domain disparity of generated images to improve the generative performance of both GAN and diffusion models. This is realized by spectrum translation for the refinement of image generation (STIG) based on contrastive learning. We draw on the theoretical behavior of frequency components in various generative networks. The key idea is to refine the spectrum of the generated image through image-to-image translation and contrastive learning, viewed from a digital signal processing perspective. We evaluate our framework across eight fake image datasets and various cutting-edge models to demonstrate the effectiveness of STIG. Our framework outperforms state-of-the-art methods, showing significant decreases in FID and in the log frequency distance of the spectrum. We further emphasize that STIG improves image quality by reducing spectral anomalies. Additionally, validation results show that frequency-based deepfake detectors are more easily confused when fake spectra are manipulated by STIG.


Key findings
STIG outperforms state-of-the-art methods in reducing frequency-domain discrepancies and improving image quality, as measured by Fréchet inception distance (FID) and log frequency distance (LFD). It also significantly reduces the accuracy of both CNN- and ViT-based frequency-domain deepfake detectors, demonstrating its effectiveness in making generated images harder to detect.
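To make the LFD metric concrete, below is a minimal PyTorch sketch of the log frequency distance as it is commonly defined (the log of one plus the mean squared error between two spectra); the exact normalization used in the paper may differ.

```python
# Minimal sketch of log frequency distance (LFD) between two images.
# Assumes the common definition log(1 + mean squared spectral error);
# the paper's exact normalization may differ.
import torch

def log_frequency_distance(real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """real, fake: (C, H, W) images on the same scale, e.g. [0, 1]."""
    f_real = torch.fft.fft2(real)          # complex 2-D spectrum of the real image
    f_fake = torch.fft.fft2(fake)          # complex 2-D spectrum of the generated image
    sq_err = (f_real - f_fake).abs() ** 2  # per-frequency squared magnitude error
    return torch.log(sq_err.mean() + 1.0)  # log keeps the value numerically tame
```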
Approach
STIG translates the magnitude spectrum of a generated image toward the spectrum of real images using an adversarial loss and contrastive learning. This process refines the frequency components while preserving the image's original spatial content, improving image quality and hindering frequency-based deepfake detection.
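A minimal sketch of this refinement step, assuming the common magnitude/phase decomposition: the magnitude spectrum is translated by a learned network (the hypothetical `translator` below stands in for the nested U-Net generator) and recombined with the original phase before inverting the FFT. The adversarial and contrastive objectives apply at training time and are omitted here.

```python
# Minimal sketch of spectrum-translation-based refinement (inference only).
# `translator` is a hypothetical placeholder for the trained generator.
import torch
import torch.nn as nn

def refine_image(image: torch.Tensor, translator: nn.Module) -> torch.Tensor:
    """image: (N, C, H, W). Returns the spectrally refined image."""
    spectrum = torch.fft.fft2(image)                     # complex 2-D spectrum
    magnitude, phase = spectrum.abs(), spectrum.angle()  # split into magnitude / phase
    log_mag = torch.log1p(magnitude)                     # compress dynamic range
    refined_log_mag = translator(log_mag)                # translate magnitude toward real spectra
    refined_mag = torch.expm1(refined_log_mag)           # undo the log compression
    refined = refined_mag * torch.exp(1j * phase)        # recombine with the original phase
    return torch.fft.ifft2(refined).real                 # back to the spatial domain
```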
Datasets
Eight fake image datasets generated by CycleGAN, StarGAN, StarGAN2, StyleGAN, DDPM, and DDIM; the FFHQ and LSUN Church datasets were used to train the diffusion models.
Model(s)
Nested U-Net (generator), PatchGAN (discriminator), and a fully connected layer with sigmoid activation (spectral discriminator). A shallow CNN and ViT-B16 were used for deepfake detection.
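For illustration, a minimal sketch of the spectral discriminator as described (a fully connected layer with sigmoid activation applied to a flattened magnitude spectrum); the input resolution and flattening scheme are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a spectral discriminator: a single fully connected
# layer with sigmoid activation over a flattened (log-)magnitude spectrum.
# The 256x256 input size is an assumption for illustration.
import torch
import torch.nn as nn

class SpectralDiscriminator(nn.Module):
    def __init__(self, spectrum_size: int = 256 * 256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Flatten(),                 # (N, H, W) -> (N, H*W)
            nn.Linear(spectrum_size, 1),  # single fully connected layer
            nn.Sigmoid(),                 # probability that the spectrum is real
        )

    def forward(self, log_magnitude: torch.Tensor) -> torch.Tensor:
        return self.classifier(log_magnitude)
```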
Author countries
South Korea