Detecting Musical Deepfakes

Authors: Nick Sunday

Published: 2025-05-03 21:45:13+00:00

AI Summary

This research explores the detection of AI-generated music (deepfakes) using the FakeMusicCaps dataset. A convolutional neural network (ResNet18) is trained on Mel spectrograms of audio clips that have been tempo-stretched and pitch-shifted to simulate adversarial conditions, and classifies each clip as either deepfake or human-generated.

Abstract

The proliferation of Text-to-Music (TTM) platforms has democratized music creation, enabling users to effortlessly generate high-quality compositions. However, this innovation also presents new challenges to musicians and the broader music industry. This study investigates the detection of AI-generated songs using the FakeMusicCaps dataset by classifying audio as either deepfake or human. To simulate real-world adversarial conditions, tempo stretching and pitch shifting were applied to the dataset. Mel spectrograms were generated from the modified audio, then used to train and evaluate a convolutional neural network. In addition to presenting technical results, this work explores the ethical and societal implications of TTM platforms, arguing that carefully designed detection systems are essential to both protecting artists and unlocking the positive potential of generative AI in music.
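The paper itself does not include code; the sketch below only illustrates the kind of augmentation and feature-extraction pipeline the abstract describes (tempo stretching, pitch shifting, Mel spectrograms) using librosa. The specific parameter values (stretch rate, semitone shift, number of Mel bands) are assumptions for illustration, not the authors' settings.

```python
import numpy as np
import librosa

def make_mel_spectrogram(path, stretch_rate=1.1, pitch_steps=2, n_mels=128):
    """Load a clip, apply tempo/pitch manipulations, and return a log-Mel spectrogram.

    stretch_rate, pitch_steps, and n_mels are illustrative values, not the
    configuration reported in the paper.
    """
    y, sr = librosa.load(path, sr=None, mono=True)

    # Adversarial-style manipulations described in the abstract.
    y = librosa.effects.time_stretch(y, rate=stretch_rate)          # tempo stretch
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=pitch_steps)  # pitch shift

    # Mel spectrogram on a decibel scale, suitable as CNN input.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)
```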


Key findings
The ResNet18 model achieved high accuracy in detecting deepfakes, even under adversarial manipulations. Performance was slightly better on pitch-shifted audio than on tempo-stretched audio. Continuous learning (training the model sequentially on different datasets) improved recall but degraded the other metrics.
Approach
The study uses the FakeMusicCaps dataset, applies tempo stretching and pitch shifting as adversarial manipulations, converts the audio to Mel spectrograms, and fine-tunes a pre-trained ResNet18 model for binary classification (deepfake or human), as sketched below.
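As a minimal sketch of the classification stage, assuming the Mel spectrograms are fed to the network as single-channel images (the paper may instead replicate them to three channels), an ImageNet-pretrained torchvision ResNet18 can be adapted to the two-class task like this; the hyperparameters shown are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_deepfake_detector(num_classes: int = 2) -> nn.Module:
    """Adapt a pre-trained ResNet18 for binary deepfake/human classification.

    Assumes single-channel Mel-spectrogram inputs; the first convolution is
    replaced accordingly. This mirrors the general approach described above,
    not the authors' exact setup.
    """
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Accept 1-channel spectrograms instead of 3-channel RGB images.
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

    # Replace the final fully connected layer with a 2-way classifier head.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Typical training setup (illustrative hyperparameters).
model = build_deepfake_detector()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```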
Datasets
FakeMusicCaps dataset
Model(s)
ResNet18
Author countries
USA