Emotions Don't Lie: An Audio-Visual Deepfake Detection Method Using Affective Cues

Authors: Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha

Published: 2020-03-14 22:07:26+00:00

AI Summary

This paper introduces an audio-visual deepfake detection method that leverages both the audio and visual modalities of a video, along with perceived emotion cues extracted from each, to identify fake videos. The approach uses a Siamese-inspired network architecture trained with a triplet loss to learn the similarity between modalities in real videos and their dissimilarity in fake videos, achieving per-video AUCs of 84.4% on DFDC and 96.6% on DF-TIMIT.
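
The core idea lends itself to a short illustration: at inference time, a video whose audio and visual embeddings (and their perceived-emotion embeddings) disagree strongly is flagged as fake. The sketch below is a minimal assumption-laden illustration of that decision rule, not the paper's implementation; the embedding inputs, the Euclidean distance, and the threshold value are all placeholders.

```python
# Minimal sketch: flag a video as fake when its modality embeddings and
# emotion embeddings disagree too much. Distance choice and threshold are
# illustrative assumptions, not the paper's exact implementation.
import numpy as np

def dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two embedding vectors."""
    return float(np.linalg.norm(a - b))

def predict_fake(face_emb, speech_emb, face_emotion_emb, speech_emotion_emb,
                 threshold: float = 1.0) -> bool:
    """Label a video as fake when the combined modality and emotion
    dissimilarity exceeds a (hypothetical) threshold."""
    d_modality = dissimilarity(face_emb, speech_emb)
    d_emotion = dissimilarity(face_emotion_emb, speech_emotion_emb)
    return (d_modality + d_emotion) > threshold

# Toy usage with random unit-normalized embeddings.
rng = np.random.default_rng(0)
embeddings = [rng.normal(size=128) for _ in range(4)]
embeddings = [e / np.linalg.norm(e) for e in embeddings]
print(predict_fake(*embeddings))
```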

Abstract

We present a learning-based method for detecting real and fake deepfake multimedia content. To maximize information for learning, we extract and analyze the similarity between the two audio and visual modalities from within the same video. Additionally, we extract and compare affective cues corresponding to perceived emotion from the two modalities within a video to infer whether the input video is real or fake. We propose a deep learning network, inspired by the Siamese network architecture and the triplet loss. To validate our model, we report the AUC metric on two large-scale deepfake detection datasets, DeepFake-TIMIT Dataset and DFDC. We compare our approach with several SOTA deepfake detection methods and report per-video AUC of 84.4% on the DFDC and 96.6% on the DF-TIMIT datasets, respectively. To the best of our knowledge, ours is the first approach that simultaneously exploits audio and video modalities and also perceived emotions from the two modalities for deepfake detection.


Key findings
The proposed method achieved a per-video AUC of 84.4% on the DFDC dataset and 96.6% on the DF-TIMIT dataset. The results show improvement over state-of-the-art methods on DFDC and comparable performance on DF-TIMIT. The integration of audio, visual, and emotion cues proved effective in deepfake detection.
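
The reported metric is per-video AUC, i.e. one fakeness score and one ground-truth label per video. The snippet below is a minimal sketch of how such a score could be computed with scikit-learn; the label and score values are synthetic placeholders, not results from the paper.

```python
# Minimal sketch of computing a per-video AUC, assuming one scalar
# "fakeness" score and one ground-truth label per video (synthetic data).
from sklearn.metrics import roc_auc_score

# 1 = fake, 0 = real (hypothetical per-video labels and scores).
labels = [0, 0, 1, 1, 1, 0]
scores = [0.12, 0.40, 0.91, 0.66, 0.83, 0.25]

print(f"per-video AUC: {roc_auc_score(labels, scores):.3f}")
```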
Approach
The method extracts audio and visual features from each video, then uses separate networks to generate modality embeddings and perceived-emotion embeddings for both modalities. During training, a triplet loss maximizes the similarity between the two modalities in real videos and minimizes it in fake videos, as sketched below.
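
The following is a minimal sketch of such a triplet objective using PyTorch's built-in TripletMarginLoss. The specific anchor/positive/negative assignment (visual embedding of a real video, audio embedding of the same real video, audio embedding of a fake video), the margin, and the random inputs are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a triplet objective over cross-modal embeddings.
import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)

batch, dim = 8, 128
anchor   = torch.randn(batch, dim, requires_grad=True)  # visual embedding, real video
positive = torch.randn(batch, dim)                      # audio embedding, same real video
negative = torch.randn(batch, dim)                      # audio embedding, fake video

loss = triplet_loss(anchor, positive, negative)
loss.backward()
print(loss.item())
```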
Datasets
The DeepFake-TIMIT (DF-TIMIT) dataset and the DeepFake Detection Challenge (DFDC) dataset.
Model(s)
A model inspired by the Siamese network architecture, trained with a triplet loss. It uses OpenFace for facial feature extraction and pyAudioAnalysis for speech feature extraction; the perceived-emotion embedding networks are based on the Memory Fusion Network (MFN). A structural sketch follows below.
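
The sketch below illustrates the overall structure: per modality, one branch produces a modality embedding and another a perceived-emotion embedding. Simple MLPs stand in for the paper's actual networks (including the MFN-based emotion networks), and the input dimensionalities are placeholders rather than the true OpenFace or pyAudioAnalysis feature sizes.

```python
# Sketch of the four-branch embedding structure (assumed MLP stand-ins).
import torch
import torch.nn as nn

class EmbeddingBranch(nn.Module):
    """Small MLP mapping per-video features to a fixed-size embedding."""
    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so distances between embeddings are comparable.
        return nn.functional.normalize(self.net(x), dim=-1)

class AudioVisualEmbedder(nn.Module):
    """Produces four embeddings per video: face, speech, and a perceived-
    emotion embedding for each modality."""
    def __init__(self, face_dim: int = 512, speech_dim: int = 128):
        # Placeholder feature sizes, not the paper's actual dimensions.
        super().__init__()
        self.face_net = EmbeddingBranch(face_dim)
        self.speech_net = EmbeddingBranch(speech_dim)
        self.face_emotion_net = EmbeddingBranch(face_dim)
        self.speech_emotion_net = EmbeddingBranch(speech_dim)

    def forward(self, face_feats, speech_feats):
        return (self.face_net(face_feats),
                self.speech_net(speech_feats),
                self.face_emotion_net(face_feats),
                self.speech_emotion_net(speech_feats))

# Toy usage with random feature vectors.
model = AudioVisualEmbedder()
f, s, fe, se = model(torch.randn(2, 512), torch.randn(2, 128))
print(f.shape, s.shape, fe.shape, se.shape)
```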
Author countries
USA