Human Brain Exhibits Distinct Patterns When Listening to Fake Versus Real Audio: Preliminary Evidence

Authors: Mahsa Salehi, Kalin Stefanov, Ehsan Shareghi

Published: 2024-02-22 21:44:58+00:00

AI Summary

This research compares how a state-of-the-art deepfake audio detection algorithm and the human brain respond to deepfake audio. While the algorithm's learned representations struggle to separate real from fake audio, EEG recordings of human listeners show distinct patterns for each, suggesting a promising avenue for improved deepfake detection.

Abstract

In this paper we study the variations in human brain activity when listening to real and fake audio. Our preliminary results suggest that the representations learned by a state-of-the-art deepfake audio detection algorithm do not exhibit clearly distinct patterns between real and fake audio. In contrast, human brain activity, as measured by EEG, displays distinct patterns when individuals are exposed to fake versus real audio. This preliminary evidence enables future research directions in areas such as deepfake audio detection.


Key findings
EEG data reveals distinct patterns differentiating real and fake audio, unlike the representations of the tested deepfake detection algorithm. A ConvTran model achieved high accuracy in classifying real versus fake audio from EEG data, suggesting that human brain responses may offer a more effective detection signal.
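ConvTran is a convolution-plus-transformer architecture for multivariate time series classification. The sketch below is not the authors' configuration; it is a minimal PyTorch stand-in from the same family, where the channel count, window length, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvTransformerEEG(nn.Module):
    """Minimal conv + transformer classifier for EEG windows.

    Illustrative only: shapes, depths, and hyperparameters are
    assumptions, not the ConvTran setup used in the paper.
    """

    def __init__(self, n_channels=32, d_model=64, n_classes=2):
        super().__init__()
        # Temporal convolution embeds each multichannel EEG window
        # into a sequence of d_model-dimensional feature vectors.
        self.embed = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3),
            nn.BatchNorm1d(d_model),
            nn.GELU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128,
            batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)  # real vs. fake logits

    def forward(self, x):
        # x: (batch, n_channels, n_timesteps)
        z = self.embed(x).transpose(1, 2)  # (batch, time, d_model)
        z = self.encoder(z).mean(dim=1)    # pool over time
        return self.head(z)

# Example: a batch of 4 windows, 32 channels x 256 samples each.
model = ConvTransformerEEG()
logits = model(torch.randn(4, 32, 256))   # -> shape (4, 2)
```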
Approach
The study compares the representations learned by a state-of-the-art deepfake audio detection algorithm with human brain activity measured via EEG. The authors generated deepfake audio using VITS and YourTTS, then analyzed EEG recordings from participants listening to both real and fake audio to identify distinguishing neural patterns.
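The summary names the generators (VITS and YourTTS) but not the tooling used to run them. One common route is the Coqui TTS library, which ships pretrained checkpoints for both models; the sketch below assumes that library, and the input text and file paths are placeholders rather than details from the paper.

```python
# Sketch of generating fake audio with pretrained VITS and YourTTS
# checkpoints via the Coqui TTS library (pip install TTS). The model
# identifiers are Coqui's public checkpoint names; whether the authors
# used this tooling is an assumption.
from TTS.api import TTS

text = "The quick brown fox jumps over the lazy dog."

# Single-speaker VITS trained on LJSpeech.
vits = TTS("tts_models/en/ljspeech/vits")
vits.tts_to_file(text=text, file_path="fake_vits.wav")

# YourTTS supports zero-shot voice cloning from a short reference clip
# ("reference.wav" is a placeholder for a real speaker recording).
your_tts = TTS("tts_models/multilingual/multi-dataset/your_tts")
your_tts.tts_to_file(
    text=text,
    speaker_wav="reference.wav",
    language="en",
    file_path="fake_yourtts.wav",
)
```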
Datasets
A custom dataset of real and deepfake audio created using VITS and YourTTS, paired with corresponding EEG data from two participants listening to the audio.
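Pairing audio clips with EEG typically means segmenting the continuous recording into one epoch per clip. The MNE-Python sketch below illustrates that step under assumed details: the recording file, trigger channel, event codes, filter band, and window length are illustrative, not taken from the paper.

```python
# Sketch of segmenting a continuous EEG recording into per-clip epochs
# with MNE-Python. File name, event codes, and window are assumptions.
import mne

raw = mne.io.read_raw_edf("session.edf", preload=True)
raw.filter(l_freq=0.5, h_freq=40.0)  # typical band-pass for EEG

# Stimulus triggers marking clip onsets, e.g. 1 = real, 2 = fake.
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"real": 1, "fake": 2}

# One epoch per audio clip: -0.2 s to 2.0 s around onset,
# baseline-corrected on the pre-stimulus interval.
epochs = mne.Epochs(
    raw, events, event_id=event_id,
    tmin=-0.2, tmax=2.0, baseline=(None, 0.0),
    preload=True,
)
X = epochs.get_data()     # (n_epochs, n_channels, n_times)
y = epochs.events[:, -1]  # labels: 1 = real, 2 = fake
```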
Model(s)
VITS and YourTTS for deepfake audio generation; ConvTran for EEG time series classification.
Author countries
Australia