Amplifying The Uncanny

Authors: Terence Broad, Frederic Fol Leymarie, Mick Grierson

Published: 2020-02-17 11:12:39+00:00

AI Summary

This paper explores the aesthetic consequences of inverting the objective function of a StyleGAN, optimizing it to generate images predicted as fake rather than real. This process amplifies the uncanny quality of the generated images and makes visible what the model has learned to treat as fake.

Abstract

Deep neural networks have become remarkably good at producing realistic deepfakes, images of people that (to the untrained eye) are indistinguishable from real images. Deepfakes are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples that the system deems realistic. This paper, and the resulting series of artworks Being Foiled, explores the aesthetic outcome of inverting this process, instead optimising the system to generate images that it predicts as being fake. This maximises the unlikelihood of the data and, in turn, amplifies the uncanny nature of these machine hallucinations.
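One way to make the inversion concrete, as a sketch in conventional GAN notation (the paper's exact loss formulation is not given in this summary): with generator G, frozen discriminator D, and latent prior p(z), the usual non-saturating generator objective and its inverted counterpart can be written as:

```latex
% Standard non-saturating generator loss: push D toward predicting "real"
\mathcal{L}_G = -\,\mathbb{E}_{z \sim p(z)}\big[\log D(G(z))\big]

% Inverted objective: push D toward predicting "fake",
% i.e. maximise the unlikelihood of the generated samples
\mathcal{L}_G^{\mathrm{inv}} = -\,\mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z))\big)\big]
```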


Key findings
As the inverted optimization proceeds, the generated images progress from realistic to increasingly uncanny and ultimately abstract. The resulting images exhibit exaggerated features and artifacts, highlighting the machine's understanding of what constitutes a 'fake' image. These artifacts are analyzed to understand how the model identifies deepfakes.
Approach
The authors fine-tune a pre-trained StyleGAN, freezing the discriminator weights and inverting the objective function. The generator is then optimized to produce images the discriminator classifies as fake, maximizing the unlikelihood of the data. The resulting progression is documented by sampling images at successive fine-tuning iterations (see the sketch below).
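A minimal PyTorch sketch of this fine-tuning loop follows, assuming the softplus-based non-saturating logistic loss that StyleGAN trains with, here applied with its sign flipped so the generator is rewarded when the discriminator outputs "fake". The Generator and Discriminator classes are tiny stand-ins for the pre-trained StyleGAN networks; their real architectures, weight loading, and the paper's actual hyperparameters are not given in this summary:

```python
# Sketch of the inverted fine-tuning loop: freeze D, train G to be judged fake.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 512

class Generator(nn.Module):  # stand-in for the pre-trained StyleGAN generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 1024), nn.LeakyReLU(0.2),
                                 nn.Linear(1024, 3 * 64 * 64), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

class Discriminator(nn.Module):  # stand-in for the pre-trained discriminator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1024),
                                 nn.LeakyReLU(0.2), nn.Linear(1024, 1))
    def forward(self, x):
        return self.net(x)  # raw logit: positive ~ "real", negative ~ "fake"

G, D = Generator(), Discriminator()

# Freeze the discriminator weights: only the generator is fine-tuned.
for p in D.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(G.parameters(), lr=1e-4)

for step in range(1000):
    z = torch.randn(8, LATENT_DIM)          # sample latents from the prior
    logits = D(G(z))
    # Inverted non-saturating logistic loss: minimised when D says "fake".
    # (The standard generator loss would be F.softplus(-logits).mean().)
    loss = F.softplus(logits).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the discriminator is frozen, gradient descent drives the generator toward whatever image statistics the fixed classifier already associates with "fake", which is why the artifacts that emerge can be read as a picture of its learned decision criteria.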
Datasets
Flickr-Faces-HQ (FFHQ) dataset
Model(s)
StyleGAN
Author countries
United Kingdom