Deepfake CAPTCHA: A Method for Preventing Fake Calls

Authors: Lior Yasur, Guy Frankovits, Fred M. Grabovski, Yisroel Mirsky

Published: 2023-01-08 15:34:19+00:00

AI Summary

This paper proposes D-CAPTCHA, an active defense against real-time deepfakes, which challenges the deepfake model to generate content exceeding its capabilities, thereby making passive detection easier. The system outperforms state-of-the-art audio deepfake detectors, achieving 91-100% accuracy depending on the challenge.

Abstract

Deep learning technology has made it possible to generate realistic content of specific individuals. These 'deepfakes' can now be generated in real-time which enables attackers to impersonate people over audio and video calls. Moreover, some methods only need a few images or seconds of audio to steal an identity. Existing defenses perform passive analysis to detect fake content. However, with the rapid progress of deepfake quality, this may be a losing game. In this paper, we propose D-CAPTCHA: an active defense against real-time deepfakes. The approach is to force the adversary into the spotlight by challenging the deepfake model to generate content which exceeds its capabilities. By doing so, passive detection becomes easier since the content will be distorted. In contrast to existing CAPTCHAs, we challenge the AI's ability to create content as opposed to its ability to classify content. In this work we focus on real-time audio deepfakes and present preliminary results on video. In our evaluation we found that D-CAPTCHA outperforms state-of-the-art audio deepfake detectors with an accuracy of 91-100% depending on the challenge (compared to 71% without challenges). We also performed a study on 41 volunteers to understand how threatening current real-time deepfake attacks are. We found that the majority of the volunteers could not tell the difference between real and fake audio.


Key findings
D-CAPTCHA significantly improves on state-of-the-art audio deepfake detectors, reaching 91-100% accuracy (vs. 71% without challenges), with accuracy varying by challenge task. A user study of 41 volunteers showed that most could not distinguish real from fake audio, underscoring the threat posed by real-time deepfakes.
Approach
D-CAPTCHA actively challenges the caller with tasks that are difficult for deepfake models to reproduce but easy for humans to perform. The response is then checked for realism, identity consistency, task completion, and response time; failing any check flags the call as fake. This active approach increases detection accuracy by inducing distortions in the deepfake's response, as sketched in the example below.
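The following is a minimal sketch of that four-part decision flow, assuming the realism detector, speaker verifier, and task classifier are supplied as callables; the function names, signatures, and the time limit are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the D-CAPTCHA verification pipeline described above.
# All component models are passed in as callables; thresholds are assumed.
from dataclasses import dataclass


@dataclass
class ChallengeResponse:
    audio: bytes            # caller's recorded response to the challenge
    elapsed_seconds: float  # time taken to respond


def verify_response(resp: ChallengeResponse,
                    realism_detector,    # deepfake/anomaly detector on audio
                    speaker_verifier,    # compares response to enrolled voice
                    task_classifier,     # checks the requested task was done
                    reference_audio: bytes,
                    time_limit: float = 6.0) -> bool:
    """Return True if the caller is judged real, False if flagged as fake.

    Mirrors the four checks named in the paper summary: response time,
    task completion, identity consistency, and realism.
    """
    if resp.elapsed_seconds > time_limit:                   # too slow
        return False
    if not task_classifier(resp.audio):                     # task not done
        return False
    if not speaker_verifier(resp.audio, reference_audio):   # identity drift
        return False
    # A real-time deepfake model strained by an unexpected task tends to
    # produce artifacts that the passive realism detector can then catch.
    return realism_detector(resp.audio)
```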
Datasets
Custom datasets of real and deepfake audio (including challenge responses) from 20 English speakers, plus the ASVspoof-DF and RITW datasets.
Model(s)
SpecRNet, One-Class, GMM-ASVspoof, PC-DARTS, and a Local Outlier Factor (LOF) model for realism verification; a GMM classifier for task verification; and a pre-trained ECAPA-TDNN-based voice recognition model for identity verification. StarGANv2-VC, AdaIN-VC, MediumVC, FragmentVC, and ASSEM-VC were used to generate the deepfakes in the threat analysis.
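As an illustration of the identity-verification component, the snippet below uses a publicly available ECAPA-TDNN speaker-verification checkpoint from SpeechBrain. The paper only states that a pre-trained ECAPA-TDNN-based model is used; the specific library, checkpoint, file names, and decision threshold here are assumptions for demonstration.

```python
# Identity-consistency check with a pre-trained ECAPA-TDNN speaker verifier.
# Checkpoint and file paths are illustrative, not from the paper.
from speechbrain.pretrained import SpeakerRecognition

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)

# Compare the caller's enrolled reference audio with their challenge response.
score, same_speaker = verifier.verify_files("reference.wav",
                                            "challenge_response.wav")
print(f"cosine score = {float(score):.3f}, same speaker = {bool(same_speaker)}")
```

A mismatch between the enrolled voice and the challenge response would fail the identity-consistency check in the pipeline sketched under Approach.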
Author countries
Israel