EmoFake: An Initial Dataset for Emotion Fake Audio Detection

Authors: Yan Zhao, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Xiaohui Zhang, Yongfeng Dong

Published: 2022-11-10 06:09:51+00:00

AI Summary

This paper introduces EmoFake, a new dataset for emotion fake audio detection, focusing on audio where the emotion has been altered while other aspects remain the same. A new detection method, Graph Attention networks using Deep Emotion embedding (GADE), is proposed and evaluated on this dataset, showing promising results.

Abstract

Many datasets have been designed to further the development of fake audio detection, such as the datasets of the ASVspoof and ADD challenges. However, these datasets do not consider the situation in which the emotion of the audio has been changed from one state to another while other information (e.g. speaker identity and content) remains the same. Changing the emotion of audio can lead to semantic changes, and speech with tampered semantics may pose threats to people's lives. Therefore, this paper reports our progress in developing such an emotion fake audio detection dataset, named EmoFake, which involves changing the emotion state of the original audio. The fake audio in EmoFake is generated by open-source emotion voice conversion models. Furthermore, we propose a method named Graph Attention networks using Deep Emotion embedding (GADE) for the detection of emotion fake audio. Benchmark experiments are conducted on this dataset. The results show that our dataset poses a challenge to fake audio detection models trained on the LA dataset of ASVspoof 2019, while the proposed GADE performs well in the face of emotion fake audio.


Key findings
The EmoFake dataset poses a significant challenge to existing fake audio detection models trained on other datasets such as ASVspoof 2019 LA. The proposed GADE model performs well on EmoFake, highlighting the need for detection methods that are robust to emotion manipulation. Detection performance varies significantly depending on which emotion voice conversion model was used to generate the fake audio.
Approach
The authors created the EmoFake dataset by using open-source emotion voice conversion models to generate fake audio with altered emotions. They then proposed Graph Attention networks using Deep Emotion embedding (GADE) for emotion fake audio detection and evaluated its performance on the new dataset.
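The summary above does not spell out GADE's internals, but its name suggests fusing a deep emotion embedding (e.g. from an emotion recognizer such as HGFM) with a graph attention network over frame-level features. The sketch below is a hypothetical, minimal illustration of that idea in numpy: every function, dimension, and the fully connected frame graph are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class GraphAttentionLayer:
    """Single-head graph attention over frame nodes (GAT-style).

    Hypothetical stand-in for the graph attention component of GADE.
    """
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (in_dim, out_dim))   # node projection
        self.a = rng.normal(0.0, 0.1, (2 * out_dim,))      # attention vector

    def __call__(self, nodes, adj):
        h = nodes @ self.W                       # (N, out_dim)
        d = h.shape[1]
        # attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
        e = (h @ self.a[:d])[:, None] + (h @ self.a[d:])[None, :]
        e = np.where(e > 0, e, 0.2 * e)          # LeakyReLU
        e = np.where(adj > 0, e, -1e9)           # mask non-edges
        alpha = softmax(e, axis=1)               # rows sum to 1
        return np.tanh(alpha @ h)

def detect_emotion_fake(frame_feats, emo_emb, layer):
    """Tile a deep emotion embedding onto every frame node, run graph
    attention over a fully connected frame graph (an assumption), and
    mean-pool to a single real/fake score."""
    n = frame_feats.shape[0]
    nodes = np.concatenate([frame_feats, np.tile(emo_emb, (n, 1))], axis=1)
    adj = np.ones((n, n))
    return float(layer(nodes, adj).mean())
```

A real detector would learn these weights end to end and threshold the pooled score; this sketch only shows how an utterance-level emotion embedding can condition a graph of frame features.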
Datasets
EmoFake dataset (created by the authors), ASVspoof 2019 LA dataset, ADD 2022 track 3.2 dataset, ESD (Emotional Speech Database)
Model(s)
LCNN, RawNet2, SENet, ResNet34, AASIST, HGFM (for emotion recognition), and several open-source emotion voice conversion models (VAW-GAN-CWT, DeepEST, Seq2Seq-EVC, CycleGAN-EVC, CycleTransGAN, EmoCycleGAN, StarGAN-EVC) for data generation.
Author countries
China