Why Do Facial Deepfake Detectors Fail?
Authors: Binh Le, Shahroz Tariq, Alsharif Abuadbba, Kristen Moore, Simon Woo
Published: 2023-02-25 20:54:02+00:00
AI Summary
This research paper investigates why facial deepfake detectors often fail. The authors identify two key challenges: (1) inconsistencies in pre-processing pipelines (e.g., resizing vs. cropping) that can erase or distort the artifacts detectors rely on, and (2) a lack of diversity in training datasets, leading to poor generalization to unseen deepfakes produced by different generation methods.
Abstract
Recent rapid advancements in deepfake technology have enabled the creation of highly realistic fake media, such as video, image, and audio. These materials pose significant threats, including impersonation, misinformation, and even risks to national security. To keep pace with these advancements, several deepfake detection algorithms have been proposed, leading to an ongoing arms race between deepfake creators and deepfake detectors. Nevertheless, these detectors are often unreliable and frequently fail to detect deepfakes. This study highlights the challenges they face, including (1) pre-processing pipelines that affect the artifacts detectors depend on, and (2) the fact that generators of new, unseen deepfake samples were not considered when building the defense models. Our work sheds light on the need for further research and development in this field to create more robust and reliable detectors.
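The pre-processing point can be illustrated with a small sketch. The function names and the synthetic image below are illustrative assumptions, not the paper's code: many deepfake artifacts live in high-frequency pixel statistics, so a crop preserves them while a resize (which low-pass filters the image) averages them away. Here a naive block-mean downscale stands in for bilinear resizing, and neighboring-pixel differences serve as a crude proxy for high-frequency content:

```python
import numpy as np

def crop_center(img, size):
    """Crop a size x size patch from the image center (pixels untouched)."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def resize_by_block_mean(img, size):
    """Naive downscale by averaging non-overlapping blocks -- a stand-in
    for bilinear resizing; both act as low-pass filters."""
    h, w = img.shape
    fh, fw = h // size, w // size
    return img[:size * fh, :size * fw].reshape(size, fh, size, fw).mean(axis=(1, 3))

def high_freq_energy(img):
    """Mean squared difference between neighboring pixels -- a crude
    proxy for the high-frequency content where generation artifacts live."""
    return float(np.mean(np.diff(img, axis=0) ** 2)
                 + np.mean(np.diff(img, axis=1) ** 2))

rng = np.random.default_rng(0)
# Synthetic 256x256 "face": smooth content plus pixel-level artifact noise.
base = np.linspace(0, 1, 256)[None, :] * np.linspace(0, 1, 256)[:, None]
img = base + 0.05 * rng.standard_normal((256, 256))

cropped = crop_center(img, 128)
resized = resize_by_block_mean(img, 128)

# Cropping keeps the high-frequency artifact signal; resizing averages it away.
print(high_freq_energy(cropped) > high_freq_energy(resized))  # True
```

Two detectors trained on the same data but deployed behind different pipelines (one cropping faces, one resizing frames) would therefore see very different artifact signals, which is one way the inconsistency the authors describe can arise.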