ALDAS: Audio-Linguistic Data Augmentation for Spoofed Audio Detection

Authors: Zahra Khanjani, Christine Mallinson, James Foulds, Vandana P Janeja

Published: 2024-10-21 01:54:55+00:00

AI Summary

The paper introduces ALDAS, an AI framework that automatically labels linguistic features in audio to improve spoofed audio detection. ALDAS uses a CNN trained on expert-labeled data to enhance existing detection models without the scalability bottleneck of manual annotation.

Abstract

Spoofed audio, i.e., manipulated or AI-generated deepfake audio, is difficult to detect using acoustic features alone. Recent work that augmented AI spoofed audio detection models with phonetic and phonological features of spoken English, manually annotated by experts, led to improved model performance. While this augmented model produced substantial improvements over traditional acoustic-feature-based models, its reliance on manual annotation poses a scalability challenge that motivates automatic labeling of these features. In this paper we propose an AI framework, Audio-Linguistic Data Augmentation for Spoofed audio detection (ALDAS), for auto-labeling linguistic features. ALDAS is trained on linguistic features selected and extracted by sociolinguistics experts; these expert labels are also used to evaluate the quality of ALDAS's predictions. Findings indicate that, while the detection enhancement is not as substantial as with the pure ground-truth linguistic features, performance still improves while the labeling is fully automated. Labels generated by ALDAS are also validated by the sociolinguistics experts.


Key findings

While ALDAS does not surpass models that use purely expert-labeled data, it significantly improves the performance of common spoofed audio detection baselines. Expert validation confirms the effectiveness of ALDAS's auto-labeling, particularly for breath detection.
Approach

ALDAS uses a pre-trained VGGish model to extract audio representations, which are then fed into a fine-tuned CNN classifier to predict linguistic features (breath, pitch, audio quality). These predicted features are integrated with existing baselines to improve spoofed audio detection.
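A minimal sketch of the auto-labeling step, assuming precomputed VGGish embeddings (VGGish emits one 128-dim vector per ~0.96 s frame); the layer sizes and pooling choice are illustrative assumptions, not the authors' released architecture:

```python
import torch
import torch.nn as nn

class LinguisticFeatureCNN(nn.Module):
    """CNN head over VGGish embeddings that predicts one linguistic
    feature label (e.g. breath present/absent); one such classifier
    per feature in this sketch."""
    def __init__(self, embed_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            # convolve across time over 128-dim VGGish frame embeddings
            nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, n_frames, 128) precomputed VGGish embeddings
        h = self.conv(emb.transpose(1, 2)).squeeze(-1)  # (batch, 64)
        return self.head(h)

# Auto-label a clip: logits -> probability that e.g. breath is present.
clf = LinguisticFeatureCNN()
emb = torch.randn(1, 10, 128)  # stand-in for ~10 s of VGGish output
breath_prob = clf(emb).softmax(dim=-1)[:, 1]
```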
Datasets

A primary dataset was created by combining samples from existing datasets (ASVspoof 2015, ASVspoof 2017, ASVspoof 2021) with newly generated spoofed audio samples (mimicry, TTS, voice conversion); the evaluation set of the ASVspoof 2021 Deepfake task was used for expert validation.
Model(s)

VGGish (pre-trained), a CNN classifier, a multi-layer perceptron (MLP) classifier, and ensemble models built on the ASVspoof 2021 baselines (LFCC-GMM, LFCC-LCNN, RawNet2).
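As a hedged illustration of the ensemble step, the sketch below fuses a baseline countermeasure score with the three ALDAS-predicted linguistic features through a small MLP; all dimensions, layer sizes, and input values are assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class FusionMLP(nn.Module):
    """MLP that combines a baseline countermeasure score (e.g. from
    LFCC-GMM, LFCC-LCNN, or RawNet2) with ALDAS-predicted features."""
    def __init__(self, n_linguistic: int = 3, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_linguistic, hidden),  # score + 3 features
            nn.ReLU(),
            nn.Linear(hidden, 2),  # bona fide vs. spoofed
        )

    def forward(self, score: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([score, feats], dim=-1))

fusion = FusionMLP()
score = torch.tensor([[0.42]])           # stand-in baseline spoofing score
feats = torch.tensor([[0.9, 0.2, 0.7]])  # e.g. breath, pitch, audio quality
logits = fusion(score, feats)            # bona fide vs. spoofed decision
```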
Author countries

USA