A Novel Feature via Color Quantisation for Fake Audio Detection

Authors: Zhiyong Wang, Xiaopeng Wang, Yuankun Xie, Ruibo Fu, Zhengqi Wen, Jianhua Tao, Yukun Liu, Guanjun Li, Xin Qi, Yi Lu, Xuefei Liu, Yongwei Li

Published: 2024-08-20 13:43:20+00:00

AI Summary

This paper introduces a novel fake audio detection method that extracts features from spectrograms via color quantization. By constraining the reconstruction to a limited color palette, the method makes real and fake audio easier to distinguish, improving classification performance over using the original spectrogram as input.

Abstract

In the field of deepfake detection, previous studies have focused on reconstruction or mask-and-predict objectives to train pre-trained models, which are then transferred to fake audio detection, where the encoder is used to extract features; examples include wav2vec 2.0 and the Masked Autoencoder (MAE). These methods have shown that reconstruction pre-training on real audio helps the model distinguish fake audio. Their disadvantage, however, is poor interpretability: it is hard to intuitively present the differences between deepfake and real audio. This paper proposes a novel feature extraction method via color quantisation, which constrains the reconstruction to use a limited number of colors for the spectral, image-like input. The proposed method ensures that the reconstructed input differs from the original, allowing intuitive observation of the areas the spectral reconstruction focuses on. Experiments conducted on the ASVspoof2019 dataset demonstrate that the proposed method achieves better classification performance than using the original spectrogram as input, and that pre-training the recolor network also benefits fake audio detection.
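To make the constrained reconstruction concrete, here is a minimal sketch that approximates the paper's learned recolor network with a simple k-means quantiser over spectrogram intensities. The function name, palette size, and audio path are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of palette-constrained spectrogram recoloring. The paper's
# learned recolor network is approximated here by a k-means quantiser;
# n_colors and the file path are illustrative.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def quantised_spectrogram(wav_path, n_colors=8, sr=16000):
    """Return the original dB spectrogram and a copy snapped to n_colors levels."""
    y, _ = librosa.load(wav_path, sr=sr)
    spec_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
    # Treat each time-frequency bin as a 1-D "pixel" and cluster its intensity.
    pixels = spec_db.reshape(-1, 1)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    # Replace every bin with its cluster centre: the limited "palette".
    quantised = km.cluster_centers_[km.labels_].reshape(spec_db.shape)
    return spec_db, quantised

original, recolored = quantised_spectrogram("audio.flac", n_colors=8)
# The residual highlights where the constrained reconstruction deviates from
# the input, which is the intuition behind the paper's interpretability claim.
residual = original - recolored
```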


Key findings
The proposed color quantization method improves fake audio detection over the original spectrogram input across multiple classifiers. Pre-training the recolor network further improves performance, and quantizing to fewer colors generally yields better results.
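A quick, hypothetical way to probe the palette-size finding is to sweep the number of colors; the snippet below reuses `quantised_spectrogram` from the sketch above, and the printed residual statistic merely stands in for a real train/evaluate loop reporting EER.

```python
# Hypothetical palette-size sweep; "audio.flac" is a placeholder file and the
# mean absolute residual is a stand-in for an actual EER evaluation.
import numpy as np

for n_colors in (2, 4, 8, 16, 32):
    original, recolored = quantised_spectrogram("audio.flac", n_colors=n_colors)
    print(n_colors, float(np.abs(original - recolored).mean()))
```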
Approach
The approach applies color quantization when reconstructing spectral, image-like representations of audio. Because the reconstruction is constrained to a limited palette, it behaves differently on real and fake audio, yielding more discriminative features for classification. The method is evaluated with several classifiers on the ASVspoof2019 dataset.
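The paper's back-ends are LCNN, ResNet18, and a modified AASIST; the sketch below substitutes a deliberately generic small CNN (an assumption, not the paper's architecture), assuming PyTorch, just to show how a recolored spectrogram tensor would flow into a binary real/fake classifier.

```python
# Hedged sketch of the downstream classification stage, assuming PyTorch and a
# generic small CNN in place of the paper's LCNN/ResNet18/AASIST back-ends.
import torch
import torch.nn as nn

class SmallSpectroClassifier(nn.Module):
    """Binary real/fake classifier over a single-channel spectrogram image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.head = nn.Linear(32, 2)  # logits: [real, fake]

    def forward(self, x):  # x: (batch, 1, freq, time)
        h = self.features(x).flatten(1)
        return self.head(h)

model = SmallSpectroClassifier()
# A quantised spectrogram (e.g. from the earlier sketch) becomes the input;
# the shapes here are illustrative.
dummy = torch.randn(4, 1, 257, 400)
logits = model(dummy)  # (4, 2)
```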
Datasets
ASVspoof 2019 (Logical Access subset), VCTK
Model(s)
LCNN, ResNet18, AASIST (with modifications)
Author countries
China