Exploring Green AI for Audio Deepfake Detection

Authors: Subhajit Saha, Md Sahidullah, Swagatam Das

Published: 2024-03-21 10:54:21+00:00

AI Summary

This research proposes a green AI framework for audio deepfake detection using pre-trained self-supervised learning (SSL) models and classical machine learning algorithms. Instead of fine-tuning large deep neural networks, it leverages embeddings from these pre-trained models with simpler classifiers, achieving competitive results with significantly reduced computational cost.

Abstract

The state-of-the-art audio deepfake detectors leveraging deep neural networks exhibit impressive recognition performance. Nonetheless, this advantage is accompanied by a significant carbon footprint, mainly due to the use of high-performance computing with accelerators and long training times. Studies show that an average deep NLP model produces around 626k lbs of CO₂, equivalent to roughly five times the lifetime emissions of an average US car. This is certainly a massive threat to the environment. To tackle this challenge, this study presents a novel framework for audio deepfake detection that can be seamlessly trained using standard CPU resources. Our proposed framework utilizes off-the-shelf self-supervised learning (SSL) based models which are pre-trained and available in public repositories. In contrast to existing methods that fine-tune SSL models and employ additional deep neural networks for downstream tasks, we exploit classical machine learning algorithms such as logistic regression and shallow neural networks using the SSL embeddings extracted with the pre-trained model. Our approach shows competitive results compared to the commonly used high-carbon-footprint approaches. In experiments with the ASVspoof 2019 LA dataset, we achieve a 0.90% equal error rate (EER) with fewer than 1k trainable model parameters. To encourage further research in this direction and support reproducible results, the Python code will be made publicly accessible following acceptance. GitHub: https://github.com/sahasubhajit/Speech-Spoofing-


Key findings
The proposed method achieves a 0.90% equal error rate (EER) on the ASVspoof 2019 LA dataset with fewer than 1k trainable parameters. This performance is competitive with state-of-the-art methods while consuming significantly less energy, since it relies on CPUs and classical ML algorithms rather than large deep networks trained on GPUs.
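
For reference, EER is the operating point at which the false-acceptance and false-rejection rates of a detector are equal. Below is a minimal sketch of the standard way to compute it from detection scores using scikit-learn; this is a generic illustration, not code from the paper's repository.

import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    # labels: 1 for the bonafide (target) class, 0 for spoof;
    # scores: higher values mean more likely bonafide.
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr  # false-rejection rate at each threshold
    # EER is where false-acceptance and false-rejection rates cross.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0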
Approach
The approach uses a frozen, pre-trained wav2vec 2.0 model to extract embeddings from its different transformer layers. These embeddings are then fed into classical machine learning classifiers (such as logistic regression and SVM) for deepfake detection, avoiding the high computational cost of training large deep networks; a sketch of the pipeline follows.
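
A minimal sketch of this pipeline, assuming a Hugging Face wav2vec 2.0 base checkpoint, mean-pooling of frame embeddings over time, and a scikit-learn logistic regression head. The checkpoint name, layer index, and pooling strategy are illustrative assumptions, not the authors' exact configuration.

import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()  # frozen SSL model: no fine-tuning, CPU inference only

def utterance_embedding(waveform, sr=16000, layer=6):
    # waveform: 1-D float array sampled at 16 kHz.
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the projected output of the convolutional
    # feature encoder; hidden_states[1:] are the transformer layers.
    # Mean-pool over time to get one 768-dim vector per utterance.
    return out.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

# X: (n_utterances, 768) pooled embeddings; y: 0 = bonafide, 1 = spoof.
# A logistic regression on 768-dim inputs has only 769 trainable
# parameters (weights + bias), consistent with the sub-1k budget.
# X = np.stack([utterance_embedding(w) for w in waveforms])
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# scores = clf.decision_function(X_eval)

Because the SSL model stays frozen, embeddings can be extracted once, cached to disk, and reused across all downstream classifiers, which is what keeps the whole pipeline trainable on a standard CPU.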
Datasets
ASVspoof 2019 LA dataset, LibriSpeech dataset (for pre-training the SSL model)
Model(s)
wav2vec 2.0 (pre-trained), logistic regression, SVM, KNN, Naive Bayes, decision tree, multi-layer perceptron (shallow)
Author countries
India