DF-Captcha: A Deepfake Captcha for Preventing Fake Calls

Authors: Yisroel Mirsky

Published: 2022-08-17 20:40:54+00:00

AI Summary

This paper proposes DF-Captcha, a lightweight application to protect against deepfake social engineering attacks. It leverages the technical limitations of deepfake technology through a challenge-response approach, posing challenges that are difficult for deepfakes to render convincingly but easy for humans to perform.

Abstract

Social engineering (SE) is a form of deception that aims to trick people into giving access to data, information, networks and even money. For decades SE has been a key method for attackers to gain access to an organization, virtually skipping all lines of defense. Attackers also regularly use SE to scam innocent people by making threatening phone calls which impersonate an authority or by sending infected emails which look like they have been sent from a loved one. SE attacks will likely remain a top attack vector for criminals because humans are the weakest link in cyber security. Unfortunately, the threat will only get worse now that a new technology called deepfakes has arrived. A deepfake is believable media (e.g., videos) created by an AI. Although the technology has mostly been used to swap the faces of celebrities, it can also be used to `puppet' different personas. Recently, researchers have shown how this technology can be deployed in real-time to clone someone's voice in a phone call or reenact a face in a video call. Given that any novice user can download this technology to use it, it is no surprise that criminals have already begun to monetize it to perpetrate their SE attacks. In this paper, we propose a lightweight application which can protect organizations and individuals from deepfake SE attacks. Through a challenge and response approach, we leverage the technical and theoretical limitations of deepfake technologies to expose the attacker. Existing defense solutions are too heavy to serve as end-point solutions and can be evaded by a dynamic attacker. In contrast, our approach is lightweight and breaks the reactive arms race, putting the attacker at a disadvantage.


Key findings
Preliminary results show that real-time deepfakes struggle with the proposed challenges, producing easily detectable artifacts. The approach offers a proactive defense, shifting the advantage to the defender by focusing on attacker limitations rather than a reactive arms race against evolving deepfake technologies.
Approach
DF-Captcha employs a challenge-response mechanism, similar to a Turing test. It presents challenges exploiting known limitations of deepfake generation (e.g., head movements, specific vocalizations). Lightweight anomaly detection models then analyze the responses to identify deepfakes based on generated artifacts.
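The paper does not publish the detector's architecture or feature set, but the idea of training a lightweight one-class model only on genuine responses (so no deepfake examples are needed) can be sketched as follows. The feature vectors here are synthetic stand-ins for response statistics such as landmark jitter or blending-boundary energy; the thresholds and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

# Hypothetical features summarizing a caller's response to a challenge
# (e.g., head-movement smoothness, rendering-artifact energy). Synthetic
# data stands in for a real feature extractor, which the paper omits.
rng = np.random.default_rng(0)
genuine = rng.normal(loc=0.0, scale=1.0, size=(200, 8))   # real callers
deepfake = rng.normal(loc=4.0, scale=1.5, size=(20, 8))   # artifact-heavy responses

# Train only on genuine responses: a one-class model needs no attack data,
# which is what lets the defense stay ahead of evolving deepfake generators.
scaler = StandardScaler().fit(genuine)
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(scaler.transform(genuine))

def is_deepfake(features: np.ndarray) -> bool:
    """Flag a response whose artifacts fall outside the genuine distribution."""
    return detector.predict(scaler.transform(features.reshape(1, -1)))[0] == -1

flagged = sum(is_deepfake(x) for x in deepfake)
print(f"flagged {flagged}/20 deepfake responses")
```

Because the model only learns what genuine responses look like, any generator whose output deviates from that distribution is caught, regardless of how the deepfake was produced.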
Datasets
FaceForensics++, a custom dataset from the implementation of the First Order Motion Model for image animation, and DeepfakeTIMIT are mentioned in related work.
Model(s)
The paper mentions experimenting with various anomaly detection models, including lightweight neural networks, one-class SVMs, and statistical models. Specific architectures are not detailed.
Author countries
Israel