Can deepfakes be created by novice users?

Authors: Pulak Mehta, Gauri Jagatap, Kevin Gallagher, Brian Timmerman, Progga Deb, Siddharth Garg, Rachel Greenstadt, Brendan Dolan-Gavitt

Published: 2023-04-28 00:32:24+00:00

AI Summary

This paper investigates the ability of novice users to create deepfakes. Two user studies were conducted, one with open-ended tool selection and another with pre-specified deep learning tools. The results show that while a significant portion of participants could create deepfakes, the generated videos were readily identified as fake by detection software (80%) and by human examiners (100%).

Abstract

Recent advancements in machine learning and computer vision have led to the proliferation of Deepfakes. As technology democratizes over time, there is an increasing fear that novice users can create Deepfakes to discredit others and undermine public discourse. In this paper, we conduct user studies to understand whether participants with advanced computer skills and varying levels of computer science expertise can create Deepfakes of a person saying a target statement using limited media files. We conduct two studies; in the first study (n = 39) participants try creating a target Deepfake in a constrained time frame using any tool they desire. In the second study (n = 29) participants use pre-specified deep learning-based tools to create the same Deepfake. We find that for the first study, 23.1% of the participants successfully created complete Deepfakes with audio and video, whereas, for the second user study, 58.6% of the participants were successful in stitching target speech to the target video. We further use Deepfake detection software tools as well as human examiner-based analysis to classify the successfully generated Deepfake outputs as fake, suspicious, or real. The software detector classified 80% of the Deepfakes as fake, whereas the human examiners classified 100% of the videos as fake. We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and are unable to completely fool detection software as well as human examiners.


Key findings
In the open-ended study, 23.1% of participants successfully created deepfakes; in the pre-specified tool study, this increased to 58.6%. Deepfake detection software correctly identified 80% of the generated deepfakes as fake, while human examiners identified 100% as fake. The study demonstrates that creating deepfakes is relatively easy for novices with sufficient tools and time, but the resulting quality is often insufficient to deceive detection systems or human observers.
Approach
The researchers conducted two user studies. The first study allowed participants to use any tool to create a deepfake, while the second study provided pre-specified deep learning tools. Success was measured by the ability to create a video of a target person saying a specific phrase. Deepfake detection was assessed using software and human examiners.
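The reported success rates can be sanity-checked against the study sizes. A minimal sketch, assuming the underlying success counts are 9 of 39 (open-ended study) and 17 of 29 (pre-specified tool study), since those are the only counts consistent with the rounded percentages:

```python
# Back-of-the-envelope check of the reported success rates.
# The exact success counts are inferred from the percentages, not
# stated in this summary; they are an assumption here.
study1_success, study1_n = 9, 39   # open-ended tool study
study2_success, study2_n = 17, 29  # pre-specified tool study

rate1 = round(100 * study1_success / study1_n, 1)
rate2 = round(100 * study2_success / study2_n, 1)

print(rate1)  # 23.1, matching the first study's reported rate
print(rate2)  # 58.6, matching the second study's reported rate
```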
Datasets
UNKNOWN
Model(s)
Wav2Lip and Real-Time Voice Cloning (generation); Seferbekov's method, Avatarify, and Deepware (used for detection)
Author countries
USA, Portugal