DeepFake Detection: Current Challenges and Next Steps

Authors: Siwei Lyu

Published: 2020-03-11 13:20:42+00:00

AI Summary

This paper analyzes the challenges and future directions in deepfake detection. It highlights limitations of current methods, such as reliance on low-quality datasets and a lack of explainability, and proposes future research avenues, including the detection of head-puppetry and lip-syncing deepfakes and the development of audio deepfake detection techniques.

Abstract

High-quality fake videos and audios generated by AI algorithms (the deep fakes) have started to challenge the status of videos and audios as definitive evidence of events. In this paper, we highlight a few of these challenges and discuss the research opportunities in this direction.


Key findings

Current deepfake detection methods suffer from limitations such as reliance on low-quality datasets and a lack of explainability. Future research should focus on new forms of deepfakes (head puppetry, lip-syncing, and audio) and on developing more robust and explainable detection methods. The paper also highlights the ongoing arms race between deepfake creation and detection techniques.

Approach

The paper is a review and analysis of existing deepfake detection methods, which it categorizes by the features they rely on: inconsistencies, signal-level artifacts, and data-driven features. It does not propose a novel method itself; instead, it discusses the limitations of current approaches and future research directions.
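
The paper does not include code, but the data-driven category it describes can be illustrated with a brief sketch: a small convolutional network trained to classify face crops as real or fake. The architecture, input size, and hyperparameters below are assumptions chosen for illustration, not anything prescribed by the paper.

```python
# Minimal sketch of a "data-driven" frame-level deepfake detector:
# a small CNN that labels face crops as real (0) or fake (1).
# Architecture and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # global average pooling
        )
        self.classifier = nn.Linear(64, 2)  # logits: [real, fake]

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One training step on a placeholder batch of face crops.
model = FrameClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

faces = torch.randn(8, 3, 224, 224)   # batch of cropped face images
labels = torch.randint(0, 2, (8,))    # 0 = real, 1 = fake
optimizer.zero_grad()
loss = criterion(model(faces), labels)
loss.backward()
optimizer.step()
```

In practice, such frame-level scores are typically aggregated (e.g., averaged) across a video to produce a video-level real/fake decision.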

Datasets

FaceForensics++, Celeb-DF, and the datasets from the DARPA MFC18 Synthetic Data Detection Challenge and the Facebook DeepFake Detection Challenge are mentioned.
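
These video datasets are typically preprocessed by sampling frames and cropping the face region before training a detector. The sketch below shows one way to do this with OpenCV; the sampling rate, crop size, and example path are assumptions, not part of the paper.

```python
# Hypothetical preprocessing step: sample frames from a video and
# crop detected faces, as is commonly done for datasets such as
# FaceForensics++ or Celeb-DF. Paths and parameters are placeholders.
import cv2

def extract_face_crops(video_path, every_n_frames=30, size=224):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    crops, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        if idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                crop = cv2.resize(frame[y:y + h, x:x + w], (size, size))
                crops.append(crop)
        idx += 1
    cap.release()
    return crops

# Example (hypothetical path): crops = extract_face_crops("videos/fake_0000.mp4")
```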

Model(s)

Various deep neural networks (DNNs) are mentioned as being used in existing deepfake detection methods, but specific architectures aren't detailed.

Author countries

USA