Detecting GAN-generated Imagery using Color Cues
Authors: Scott McCloskey, Michael Albright
Published: 2018-12-19 21:12:00+00:00
AI Summary
This paper proposes two methods for detecting GAN-generated images by analyzing color cues. The first method leverages the frequency of saturated pixels, while the second uses color channel correlations. Both methods are based on an analysis of the GAN generator's architecture and its handling of color information.
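The two cues described above can be illustrated with a short sketch. Note this is an assumption-laden illustration, not the authors' implementation: the exact feature definitions, color spaces, and thresholds used in the paper are not specified here, so the code simply computes (1) the fraction of clipped pixel values and (2) the mean pairwise correlation between color channels.

```python
# Illustrative sketch of two color-cue features for flagging
# GAN-generated images. The precise features and thresholds are
# assumptions; the paper's actual method may differ.
import numpy as np

def saturated_pixel_frequency(img):
    """Fraction of 8-bit values clipped at the extremes (0 or 255).

    Camera images often contain clipped highlights/shadows, which
    GAN generators are reported to reproduce differently.
    """
    img = np.asarray(img)
    return float(np.mean((img == 0) | (img == 255)))

def color_channel_correlation(img):
    """Mean pairwise Pearson correlation between the R, G, B channels."""
    img = np.asarray(img, dtype=np.float64)
    r, g, b = (img[..., c].ravel() for c in range(3))
    pairs = [(r, g), (r, b), (g, b)]
    corrs = [np.corrcoef(x, y)[0, 1] for x, y in pairs]
    return float(np.mean(corrs))
```

In a hypothetical detector, these two scalar features would feed a simple classifier (e.g., a threshold or an SVM) trained to separate GAN imagery from camera imagery.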
Abstract
Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g., "deepfakes". Leveraging large training sets and extensive computing resources, recent work has shown that GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation, and show that the network's treatment of color is markedly different from a real camera in two ways. We further show that these two cues can be used to distinguish GAN-generated imagery from camera imagery, demonstrating effective discrimination between GAN imagery and real camera images used to train the GAN.