SceneFake: An Initial Dataset and Benchmarks for Scene Fake Audio Detection

Authors: Jiangyan Yi, Chenglong Wang, Jianhua Tao, Chu Yuan Zhang, Cunhang Fan, Zhengkun Tian, Haoxin Ma, Ruibo Fu

Published: 2022-11-11 09:05:50+00:00

AI Summary

This paper introduces SceneFake, a novel dataset for scene fake audio detection, addressing a gap in existing datasets by focusing on manipulations of the acoustic scene in audio recordings. Benchmark results demonstrate that existing models trained on other datasets perform poorly on SceneFake, highlighting the challenge of detecting this specific type of audio manipulation.

Abstract

Many datasets have been designed to further the development of fake audio detection. However, fake utterances in previous datasets are mostly generated by altering the timbre, prosody, linguistic content or channel noise of original audio. These datasets leave out a scenario in which the acoustic scene of an original audio is manipulated with a forged one. It would pose a major threat to our society if people misused such manipulated audio for malicious purposes. This motivates us to fill this gap. This paper proposes such a dataset for scene fake audio detection, named SceneFake, where a manipulated audio is generated by tampering only with the acoustic scene of a real utterance using speech enhancement technologies. Some scene fake audio detection benchmark results on the SceneFake dataset are reported in this paper. In addition, an analysis of fake attacks with different speech enhancement technologies and signal-to-noise ratios is presented. The results indicate that scene fake utterances cannot be reliably detected by baseline models trained on the ASVspoof 2019 dataset. Although these models perform well on the SceneFake training set and the seen test set, their performance is poor on the unseen test set. The dataset (https://zenodo.org/record/7663324#.Y_XKMuPYuUk) and benchmark source codes (https://github.com/ADDchallenge/SceneFake) are publicly available.


Key findings
Baseline models trained on ASVspoof 2019 performed poorly on the SceneFake dataset, indicating that detecting scene-manipulated audio is challenging. Models performed well on the seen test set of SceneFake but poorly on the unseen test set, showing a lack of generalization. Detection performance also varied with signal-to-noise ratio, degrading most at both very low and very high SNRs.
Approach
The authors created the SceneFake dataset by manipulating the acoustic scenes of real audio utterances using various speech enhancement techniques (e.g., FullSubNet, Wave-U-Net, GCRN). They then evaluated baseline models (GMM, LCNN, RawNet2) on this dataset, comparing their performance across different signal-to-noise ratios and unseen acoustic scenes.
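The pipeline described above (suppress the original scene with an enhancement model, then mix in a different scene at a controlled SNR) can be sketched as follows. This is a minimal illustration, not the authors' code: `mix_at_snr` and `fake_scene` are hypothetical helper names, and the identity `enhance` stand-in takes the place of a real enhancement model such as FullSubNet.

```python
import numpy as np

def mix_at_snr(speech, scene_noise, snr_db):
    """Scale `scene_noise` so the mixture has the requested speech-to-scene
    SNR in dB, then add it to the speech."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(scene_noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * scene_noise

def fake_scene(utterance, new_scene_noise, snr_db, enhance):
    """SceneFake-style manipulation: remove the original acoustic scene
    with a speech enhancement front end, then overlay a forged scene."""
    speech_only = enhance(utterance)  # a real system would use e.g. FullSubNet
    return mix_at_snr(speech_only, new_scene_noise, snr_db)

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for 1 s of real speech at 16 kHz
scene = rng.standard_normal(16000)    # stand-in for a target scene recording
faked = fake_scene(speech, scene, snr_db=5.0, enhance=lambda x: x)
```

Varying `snr_db` here corresponds to the paper's analysis of attacks at different signal-to-noise ratios.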
Datasets
SceneFake dataset (created by the authors), ASVspoof 2019 dataset, DCASE 2022 dataset
Model(s)
GMM, LCNN, RawNet2
Author countries
China