How Deep Are the Fakes? Focusing on Audio Deepfake: A Survey

Authors: Zahra Khanjani, Gabrielle Watson, Vandana P. Janeja

Published: 2021-11-28 18:28:30+00:00

AI Summary

This survey paper focuses on audio deepfakes, a topic often overlooked in existing surveys. It critically analyzes audio deepfake generation and detection methods from 2016 to 2020, providing a unique resource for researchers in this field.

Abstract

A deepfake is content or material that is synthetically generated or manipulated using artificial intelligence (AI) methods, intended to be passed off as real, and can include audio, video, image, and text synthesis. This survey has been conducted from a different perspective than existing survey papers, which mostly focus on video and image deepfakes. This survey not only evaluates generation and detection methods across the different deepfake categories, but mainly focuses on audio deepfakes, which are overlooked in most existing surveys. This paper critically analyzes and provides a unique source of audio deepfake research, mostly ranging from 2016 to 2020. To the best of our knowledge, this is the first survey focusing on audio deepfakes in English. This survey provides readers with a summary of 1) the different deepfake categories, 2) how they can be created and detected, 3) the most recent trends in this domain and the shortcomings of detection methods, and 4) audio deepfakes and how they are created and detected in more detail, which is the main focus of this paper. We found that Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), and Deep Neural Networks (DNNs) are common ways of creating and detecting deepfakes. In our evaluation of over 140 methods, we found that the majority of the focus is on video deepfakes, and in particular on the generation of video deepfakes. For text deepfakes, there are more generation methods but very few robust detection methods, including fake news detection, which has become a controversial area of research because of the potential for heavy overlap with human-generated fake content. This paper is an abbreviated version of the full survey and reveals a clear need for research on audio deepfakes, and particularly on the detection of audio deepfakes.


Key findings
The survey reveals a significant lack of research on audio deepfake detection compared to video deepfakes. Common generation methods include GANs, CNNs, and DNNs, while detection often employs DNNs like ResNet. The challenge of generalization in deepfake detection is highlighted, along with a need for more research in prevention and mitigation strategies.
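To make the detection side concrete, below is a minimal sketch of the kind of spectrogram-plus-CNN detector the survey describes (DNNs such as ResNet operating on audio features). It is an illustrative assumption, not the paper's own implementation: the model size, sample rate, and two-class (bona fide vs. spoof) setup are placeholders, and a real system would be trained on a corpus such as ASVspoof.

```python
# Hypothetical sketch of a spectrogram-based audio spoof detector,
# in the spirit of the CNN/ResNet detectors reviewed in the survey.
# All hyperparameters and the dataset setup are illustrative assumptions.
import torch
import torch.nn as nn
import torchaudio

class SpectrogramSpoofDetector(nn.Module):
    """Small CNN that classifies a log-mel spectrogram as bona fide vs. spoofed."""
    def __init__(self, n_mels: int = 64):
        super().__init__()
        # Waveform -> log-mel spectrogram front end (assumed 16 kHz mono input).
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=16000, n_fft=512, hop_length=160, n_mels=n_mels
        )
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        # Tiny convolutional backbone standing in for a deeper ResNet-style network.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, 2)  # two classes: bona fide / spoof

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) of mono audio
        spec = self.to_db(self.melspec(waveform)).unsqueeze(1)  # (batch, 1, mels, frames)
        feats = self.cnn(spec).flatten(1)
        return self.classifier(feats)  # logits over {bona fide, spoof}

if __name__ == "__main__":
    model = SpectrogramSpoofDetector()
    dummy = torch.randn(2, 16000)   # two one-second random clips as a shape check
    print(model(dummy).shape)       # torch.Size([2, 2])
```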
Approach
The paper conducts a systematic review of existing literature on audio deepfakes, categorizing methods into replay attacks, speech synthesis, and voice conversion. It analyzes generation and detection techniques for each category and discusses trends, shortcomings, and future research directions.
Datasets
ASVspoof 2017, ASVspoof 2019, VCTK, LibriSpeech, VoxCeleb2, Blizzard, MAESTRO, TED-LIUM 3, MagnaTagATune, YouTube piano datasets, DIMEX-100, LJSpeech, ALAGIN Japanese Speech Database, NUS Sung and Spoken Lyrics Corpus (NUS-48E corpus), TIDIGITS, COCO Image Captions, EMNLP2017 WMT News, Chinese Poems, FFHQ, LSUN CAR, various internal datasets
Model(s)
Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), ResNet, WaveNet, WaveGlow, Tacotron, Tacotron 2, MelNet, CycleGAN, StarGAN, U-Net, various other architectures mentioned in reviewed papers
Author countries
USA