Vulnerability of Face Recognition to Deep Morphing

Authors: Pavel Korshunov, Sébastien Marcel

Published: 2019-10-03 12:34:08+00:00

AI Summary

This paper introduces a publicly available dataset of deepfake videos created using GANs, demonstrating the vulnerability of state-of-the-art face recognition systems (VGG and Facenet) to these manipulated videos. It also explores several baseline approaches for detecting these deepfakes, finding that visual quality metrics achieve the best performance.

Abstract

It is increasingly easy to automatically swap faces in images and video, or to morph two faces into one, using generative adversarial networks (GANs). The high quality of the resulting deep morphs raises the question of how vulnerable current face recognition systems are to such fake images and videos. It also calls for automated ways to detect these GAN-generated faces. In this paper, we present a publicly available dataset of deepfake videos with faces morphed by a GAN-based algorithm. To generate these videos, we used open-source software based on GANs, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. We show that state-of-the-art face recognition systems based on the VGG and Facenet neural networks are vulnerable to the deep-morph videos, with 85.62% and 95.00% false acceptance rates, respectively, which means methods for detecting these videos are necessary. We consider several baseline approaches for detecting deep morphs and find that a method based on visual quality metrics (often used in the presentation attack detection domain) leads to the best performance, with an 8.97% equal error rate. Our experiments demonstrate that GAN-generated deep-morph videos are challenging for both face recognition systems and existing detection methods, and that further development of deep-morphing technologies will make detection even more challenging.


Key findings
State-of-the-art face recognition systems showed high false acceptance rates (up to 95%) when presented with deepfake videos. A detection method based on visual quality metrics and SVM achieved an equal error rate of 8.97%, suggesting that while detection is possible, advancements in deepfake creation will pose increasing challenges.
Approach
The authors generated a deepfake video dataset using open-source GAN-based software. They evaluated the vulnerability of the VGG and Facenet face recognition systems to these videos and tested several baseline detection methods, including one based on image quality measures (IQM) combined with PCA-LDA or SVM classifiers.
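The evaluation above rests on two standard metrics: the false acceptance rate (FAR), the fraction of fake/impostor samples a system accepts, and the equal error rate (EER), the operating point where FAR equals the false rejection rate (FRR). As a minimal sketch (not the paper's actual pipeline, and using made-up toy scores), these metrics can be computed from a detector's genuine and impostor scores like this:

```python
# Hedged sketch: computing FAR/FRR and the equal error rate (EER)
# from raw match scores. Scores and the simple threshold sweep are
# illustrative assumptions, not the authors' implementation.

def far_frr(genuine, impostor, threshold):
    """FAR = fraction of impostor scores accepted (>= threshold);
    FRR = fraction of genuine scores rejected (< threshold)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the operating point
    where |FAR - FRR| is smallest, approximating the EER."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far, frr = far_frr(genuine, impostor, t)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2, t)
    return best[1], best[2]  # (EER, threshold)

# Toy scores: genuine pairs score high; deep-morph impostors overlap,
# which is exactly what drives the high FARs reported in the paper.
genuine = [0.9, 0.8, 0.85, 0.7, 0.95]
impostor = [0.6, 0.75, 0.4, 0.3, 0.5]

eer, thr = equal_error_rate(genuine, impostor)
print(round(eer, 3), thr)  # prints: 0.2 0.75
```

A high FAR at a fixed threshold corresponds to the 85.62%/95.00% vulnerability figures, while the 8.97% EER for the IQM-SVM detector is the threshold-free summary of how separable real and deep-morph videos are.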
Datasets
DeepfakeTIMIT dataset (created by the authors), VidTIMIT database
Model(s)
VGG, Facenet, PCA, LDA, SVM
Author countries
Switzerland