Amplifying The Uncanny
Authors: Terence Broad, Frederic Fol Leymarie, Mick Grierson
Published: 2020-02-17 11:12:39+00:00
AI Summary
This paper explores the aesthetic consequences of inverting the objective function of a StyleGAN, optimizing it to generate images predicted as fake rather than real. This process amplifies the uncanny nature of the generated images, creating a visual representation of the machine's predictive capacity.
Abstract
Deep neural networks have become remarkably good at producing realistic deepfakes, images of people that (to the untrained eye) are indistinguishable from real images. Deepfakes are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples that the system deems realistic. This paper, and the resulting series of artworks Being Foiled, explores the aesthetic outcome of inverting this process, instead optimising the system to generate images that it predicts as being fake. This maximises the unlikelihood of the data and, in turn, amplifies the uncanny nature of these machine hallucinations.
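To make the inverted objective concrete, below is a minimal sketch of the idea in PyTorch. It is not the authors' implementation: the paper works with a pre-trained StyleGAN, whereas this example uses tiny stand-in networks and hypothetical dimensions purely to show how the generator's target can be flipped from "real" to "fake" while the discriminator stays fixed.

```python
# Illustrative sketch only: stand-in networks, not StyleGAN.
import torch
import torch.nn as nn

LATENT_DIM = 64          # hypothetical latent size
IMG_DIM = 32 * 32        # hypothetical flattened image size

# Toy generator and discriminator standing in for a pre-trained GAN.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),   # raw logit: higher means "more real"
)

# Keep the discriminator fixed; only the generator is fine-tuned.
for p in discriminator.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    z = torch.randn(16, LATENT_DIM)
    fake_images = generator(z)
    logits = discriminator(fake_images)

    # A standard generator loss would push the discriminator's prediction
    # towards "real" (target = 1). Flipping the target to 0 optimises the
    # generator to produce images the discriminator predicts as fake,
    # i.e. it maximises the unlikelihood of the generated data.
    loss = bce(logits, torch.zeros_like(logits))

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The only change from an ordinary GAN fine-tuning step is the flipped target in the loss, which is the inversion the paper describes; everything else (the specific networks, dimensions, and optimiser settings) is assumed for illustration.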