Vulnerability of Automatic Identity Recognition to Audio-Visual Deepfakes

Authors: Pavel Korshunov, Haolin Chen, Philip N. Garner, Sebastien Marcel

Published: 2023-11-29 14:18:04+00:00

AI Summary

This paper introduces SWAN-DF, the first realistic audio-visual deepfake database with high-quality synchronized lips and speech. It also presents LibriTTS-DF, a database of audio-only deepfakes. The authors demonstrate the vulnerability of state-of-the-art speaker and face recognition systems to these deepfakes.

Abstract

The task of deepfake detection is far from solved by speech or vision researchers. Several publicly available databases of fake synthetic video and speech have been built to aid the development of detection methods. However, existing databases typically focus on visual or voice modalities and provide no proof that their deepfakes can in fact impersonate any real person. In this paper, we present the first realistic audio-visual deepfake database, SWAN-DF, in which lips and speech are well synchronized and the videos have high visual and audio quality. We built the audio-visual deepfakes from the publicly available SWAN dataset of real videos of different identities, using several DeepFaceLab models and blending techniques for face swapping, and the HiFiVC, DiffVC, YourTTS, and FreeVC models for voice conversion. From the publicly available LibriTTS speech dataset, we also created LibriTTS-DF, a separate database of audio-only deepfakes, using several recent text-to-speech methods: YourTTS, Adaspeech, and TorToiSe. We demonstrate the vulnerability of a state-of-the-art speaker recognition system, the ECAPA-TDNN-based model from SpeechBrain, to the synthetic voices. Similarly, we tested a face recognition system based on the MobileFaceNet architecture against several variants of our visual deepfakes. The vulnerability assessment shows that, by tuning existing pretrained deepfake models to specific identities, one can successfully spoof the face and speaker recognition systems more than 90% of the time and produce a very realistic-looking and realistic-sounding fake video of a given person.
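
To make the speaker-recognition attack surface concrete, here is a minimal sketch of verifying a (possibly spoofed) utterance against a genuine enrollment sample with SpeechBrain's pretrained ECAPA-TDNN verifier, the same family of model the paper evaluates. The file paths are placeholders, not files from SWAN-DF or LibriTTS-DF, and the acceptance threshold is the pretrained model's default rather than a calibrated operating point.

```python
# Minimal sketch: score a (possibly spoofed) utterance against a genuine
# enrollment recording with SpeechBrain's pretrained ECAPA-TDNN verifier.
# File paths below are placeholders, not files from SWAN-DF/LibriTTS-DF.
from speechbrain.pretrained import SpeakerRecognition

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)

# Returns the cosine similarity between the two embeddings and a thresholded
# accept/reject decision (the threshold comes from the model's hyperparameters).
score, decision = verifier.verify_files("enrolled_speaker.wav", "probe_utterance.wav")
print(f"similarity={score.item():.3f}, accepted={bool(decision)}")
```

In this setting, an attack counts as successful when the verifier accepts the converted or synthesized voice as the enrolled identity.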


Key findings
The results show that deepfake models tuned to specific target identities can spoof face and speaker recognition systems in over 90% of cases. Vulnerability is high for both the audio and the visual deepfakes, particularly for tuned voice conversion models. The quality of the deepfakes varies across generation methods, with some posing a greater threat than others.
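
The "over 90% of cases" figure is the fraction of deepfake verification scores that clear the system's acceptance threshold, a quantity often reported as IAPMR in presentation-attack evaluations. A minimal sketch of that computation, assuming the threshold has already been fixed on bona fide data (e.g., at the equal error rate or a target false match rate); the scores below are made up for illustration:

```python
import numpy as np

def spoof_success_rate(attack_scores: np.ndarray, threshold: float) -> float:
    """Fraction of attack presentations the system accepts as genuine."""
    return float(np.mean(attack_scores >= threshold))

# Toy scores for illustration only; real scores would come from comparing
# deepfake probes against the enrolled identities they impersonate.
attack_scores = np.array([0.41, 0.55, 0.38, 0.62, 0.47])
print(spoof_success_rate(attack_scores, threshold=0.40))  # -> 0.8
```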
Approach
The authors created two deepfake datasets: SWAN-DF (audio-visual) using DeepFaceLab for face swapping and various voice conversion models (HiFiVC, DiffVC, YourTTS, FreeVC), and LibriTTS-DF (audio-only) using YourTTS, Adaspeech, and TorToiSe TTS models. They then evaluated the vulnerability of existing speaker and face recognition systems (ECAPA-TDNN and MobileFaceNet) to these deepfakes.
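
Face verification in this kind of assessment follows the same embedding-and-threshold pattern as speaker verification. A sketch under stated assumptions: `embed_face` is a hypothetical stand-in for a MobileFaceNet-style feature extractor over an aligned face crop, since this summary does not specify the paper's exact evaluation pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def accepts(enrolled: np.ndarray, probe: np.ndarray, threshold: float) -> bool:
    """Accept the probe face if its embedding is close enough to enrollment."""
    return cosine_similarity(enrolled, probe) >= threshold

# embed_face(...) is hypothetical: any MobileFaceNet-style model mapping an
# aligned face crop to a fixed-length embedding would fill this role.
# enrolled = embed_face("genuine_frame.png")
# probe = embed_face("deepfake_frame.png")
# print(accepts(enrolled, probe, threshold=0.4))
```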
Datasets
SWAN dataset (for audio-visual deepfakes), LibriTTS dataset (for audio-only deepfakes)
Model(s)
ECAPA-TDNN (speaker recognition), MobileFaceNet (face recognition), DeepFaceLab (face swapping), HiFiVC, DiffVC, YourTTS, FreeVC (voice conversion), Adaspeech, TorToiSe TTS (text-to-speech)
Author countries
Switzerland