In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking

Authors: Yuezun Li, Ming-Ching Chang, Siwei Lyu

Published: 2018-06-07 19:36:09+00:00

AI Summary

This paper proposes a novel method for detecting AI-generated fake face videos by analyzing eye-blinking patterns. The method leverages a Long-term Recurrent Convolutional Network (LRCN) to capture the temporal dependencies in eye blinks, which are often absent or poorly represented in fake videos.

Abstract

Recent developments in deep generative networks have significantly improved the quality and efficiency of generating realistic-looking fake face videos. In this work, we describe a new method to expose fake face videos generated with neural networks. Our method is based on detection of eye blinking in the videos, which is a physiological signal that is not well presented in the synthesized fake videos. Our method is tested over benchmarks of eye-blinking detection datasets and also shows promising performance on detecting videos generated with DeepFake.


Key findings
The LRCN model outperforms both a CNN-only approach and an EAR-based method in detecting fake videos, achieving a higher area under the ROC curve (0.99, vs. 0.98 for the CNN and 0.79 for EAR). By incorporating temporal context across frames, the LRCN improves accuracy especially on single frames whose eye state is ambiguous.
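Once a model produces per-frame eye-state predictions, the absence of blinking over a long clip is what flags a fake. The following is a minimal, hypothetical sketch (not the paper's code) of turning a per-frame open/closed sequence into a blink count and rate; the clip data and frame rate are illustrative assumptions:

```python
# Hypothetical sketch: count blinks from a per-frame eye-state sequence
# (1 = open, 0 = closed), such as frame-level predictions from a detector.
# A blink is an open -> closed -> open transition; a long clip with zero
# blinks is suspicious, since people normally blink every few seconds.

def count_blinks(states):
    blinks = 0
    prev = 1  # assume eyes start open
    for s in states:
        if prev == 0 and s == 1:  # closed -> open completes a blink
            blinks += 1
        prev = s
    return blinks

def blink_rate(states, fps=30):
    """Blinks per minute for a clip of per-frame states (fps is assumed)."""
    minutes = len(states) / fps / 60.0
    return count_blinks(states) / minutes if minutes > 0 else 0.0

real_clip = [1] * 50 + [0] * 4 + [1] * 60 + [0] * 3 + [1] * 50  # two blinks
fake_clip = [1] * 167                                           # no blinks
print(count_blinks(real_clip))  # 2
print(count_blinks(fake_clip))  # 0
```

A real detector would threshold model probabilities rather than consume clean 0/1 labels, but the transition-counting logic is the same.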
Approach
The approach uses a three-part LRCN model: a CNN for feature extraction from eye regions, an LSTM-RNN for sequence learning that captures temporal dependencies in blinking patterns, and a fully connected layer that predicts the eye state (open or closed). The model is trained on a combination of existing eye datasets and a new Eye Blinking Video (EBV) dataset created by the authors.
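The sequence-learning stage can be illustrated with a single LSTM cell stepped over a sequence of frame features. This is a toy NumPy sketch under assumed dimensions; random vectors stand in for the CNN (VGG16) features, and the weights are untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: x is the CNN feature for the current frame;
    (h_prev, c_prev) carry temporal context from earlier frames.
    W, U, b pack the input, forget, cell, and output gate parameters."""
    z = W @ x + U @ h_prev + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)   # updated cell memory
    h = o * np.tanh(c)                # hidden state fed to the classifier
    return h, c

feat_dim, hidden = 8, 4  # toy sizes; the paper uses VGG16 features
W = rng.standard_normal((4 * hidden, feat_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
for _ in range(5):                     # five "frames" of a clip
    x = rng.standard_normal(feat_dim)  # stand-in CNN feature
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

In the full model, each hidden state h would pass through the fully connected layer to yield the per-frame open/closed prediction; the recurrence is what lets ambiguous frames borrow evidence from their neighbors.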
Datasets
CEW Dataset, Eye Blinking Video (EBV) dataset (created by authors)
Model(s)
Long-term Recurrent Convolutional Network (LRCN) combining a VGG16 CNN with an LSTM-RNN; compared against a VGG16 CNN alone and the Eye Aspect Ratio (EAR) method.
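The EAR baseline is a geometric measure from six eye landmarks: the ratio of the two vertical lid distances to the horizontal eye width, which drops sharply when the eye closes. A self-contained sketch (the landmark coordinates below are made up for illustration):

```python
import math

def ear(landmarks):
    """Eye Aspect Ratio from 6 eye landmarks (p1..p6) in the common
    68-point facial landmark ordering: p1, p4 are the eye corners;
    p2, p3 lie on the upper lid; p6, p5 on the lower lid.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye = [(0, 0), (1, -1), (2, -1), (3, 0), (2, 1), (1, 1)]
closed_eye = [(0, 0), (1, -0.1), (2, -0.1), (3, 0), (2, 0.1), (1, 0.1)]
print(round(ear(open_eye), 3))    # 0.667 (lids apart)
print(round(ear(closed_eye), 3))  # 0.067 (lids nearly shut)
```

Thresholding the EAR per frame gives a blink signal with no learning involved, which is why it struggles on ambiguous frames where the learned LRCN does better.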
Author countries
USA