Representation Loss Minimization with Randomized Selection Strategy for Efficient Environmental Fake Audio Detection

Authors: Orchid Chetia Phukan, Girish, Mohd Mujtaba Akhtar, Swarup Ranjan Behera, Nitin Choudhury, Arun Balaji Buduru, Rajesh Sharma, S. R Mahadeva Prasanna

Published: 2024-09-24 05:46:52+00:00

AI Summary

This paper proposes an approach to efficient environmental audio deepfake detection that randomly selects a subset (40-50%) of the representation values produced by foundation models. The method outperforms state-of-the-art dimensionality reduction techniques while significantly reducing downstream model parameters and inference time.

Abstract

The adaptation of foundation models has significantly advanced environmental audio deepfake detection (EADD), a rapidly growing area of research. These models are typically fine-tuned or used in their frozen state for downstream tasks. However, the high dimensionality of their representations inflates the parameter count of downstream models, leading to higher computational demands. A common remedy is to compress these representations with state-of-the-art (SOTA) unsupervised dimensionality reduction techniques (PCA, SVD, KPCA, GRP) for efficient EADD. However, applying such techniques causes a drop in performance. In this paper, we show that the representation vectors contain redundant information and that randomly selecting 40-50% of the representation values and building downstream models on them preserves, and sometimes even improves, performance. Such random selection preserves performance better than the SOTA dimensionality reduction techniques while reducing model parameters and inference time by roughly half.
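The abstract names four unsupervised dimensionality reduction baselines (PCA, SVD, KPCA, GRP). Below is a minimal sketch of how such baselines can be applied to frozen foundation-model embeddings using scikit-learn; the embedding size, sample count, component count, and kernel choice are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of the unsupervised dimensionality-reduction baselines (PCA, SVD, KPCA, GRP)
# applied to stand-in foundation-model embeddings. Settings are assumptions.
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD, KernelPCA
from sklearn.random_projection import GaussianRandomProjection

X = np.random.randn(512, 768)   # stand-in for frozen foundation-model embeddings
n_components = 384              # ~50% of the original dimensionality

reducers = {
    "PCA": PCA(n_components=n_components),
    "SVD": TruncatedSVD(n_components=n_components),
    "KPCA": KernelPCA(n_components=n_components, kernel="rbf"),
    "GRP": GaussianRandomProjection(n_components=n_components),
}
for name, reducer in reducers.items():
    X_reduced = reducer.fit_transform(X)
    print(name, X_reduced.shape)   # each yields (512, 384)
```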


Key findings
Randomly selecting 40-50% of representation values from foundation models maintains or improves deepfake detection performance compared to using full representations or applying state-of-the-art dimensionality reduction techniques. This approach reduces model parameters and inference time by approximately half, demonstrating its efficiency and broad applicability across different foundation models.
Approach
The authors address the computational cost of using high-dimensional representations from foundation models for audio deepfake detection. Instead of applying dimensionality reduction techniques, they randomly select a fixed subset of the representation dimensions (40-50% of the values) and train downstream models on it, showing that this preserves or improves performance while reducing computational demands.
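A minimal sketch of this random selection idea, under stated assumptions, is shown below: it keeps a fixed random 50% of the feature dimensions of frozen-encoder embeddings. The array shapes, keep ratio, and seed are illustrative and not taken from the paper.

```python
# Sketch (not the authors' released code): keep a fixed random subset of the
# dimensions of frozen foundation-model representations.
import numpy as np

def random_select(reps: np.ndarray, keep_ratio: float = 0.5, seed: int = 42):
    """Select a fixed random subset of feature dimensions.

    reps: (num_samples, feature_dim) representations from a frozen encoder.
    Returns the reduced features and the selected indices, which must be
    reused unchanged at inference time.
    """
    rng = np.random.default_rng(seed)
    feature_dim = reps.shape[1]
    n_keep = int(keep_ratio * feature_dim)
    idx = rng.choice(feature_dim, size=n_keep, replace=False)
    return reps[:, idx], idx

# Example: 768-d embeddings reduced to ~50% of their dimensions.
reps = np.random.randn(16, 768).astype(np.float32)   # stand-in for real embeddings
reduced, kept_idx = random_select(reps, keep_ratio=0.5)
print(reduced.shape)   # (16, 384)
```

The selected indices are drawn once and then reused for every training and test example, so the downstream model always sees the same subset of dimensions.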
Datasets
DCASE 2023 Challenge dataset, containing authentic and synthetic audio samples drawn from UrbanSound8K, FSD50K, and BBC Sound Effects.
Model(s)
Various audio foundation models (UniSpeech-SAT, WavLM, Wav2vec2, TRILLsson) and multimodal foundation models (LanguageBind, ImageBind, CLAP) are used as feature extractors. Downstream models include a Fully Connected Network (FCN) and a Convolutional Neural Network (CNN).
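For illustration, here is a minimal PyTorch sketch of a downstream FCN classifier over the reduced representations; the input dimension, hidden size, and dropout rate are assumptions rather than the paper's configuration.

```python
# Sketch of a downstream FCN over reduced embeddings; sizes are assumptions.
import torch
import torch.nn as nn

class DownstreamFCN(nn.Module):
    def __init__(self, in_dim: int = 384, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),   # real vs. fake logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = DownstreamFCN()
logits = model(torch.randn(8, 384))   # batch of 8 reduced embeddings
print(logits.shape)                   # torch.Size([8, 2])
```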
Author countries
India, Estonia