Detecting Musical Deepfakes
Author: Nick Sunday
Published: 2025-05-03 21:45:13+00:00
AI Summary
This research explores the detection of AI-generated music (deepfakes) using the FakeMusicCaps dataset. A convolutional neural network (ResNet18) is trained on Mel spectrograms of audio clips that have been tempo-stretched and pitch-shifted to simulate adversarial conditions, classifying each clip as either deepfake or human-generated.
Abstract
The proliferation of Text-to-Music (TTM) platforms has democratized music creation, enabling users to effortlessly generate high-quality compositions. However, this innovation also presents new challenges for musicians and the broader music industry. This study investigates the detection of AI-generated songs using the FakeMusicCaps dataset by classifying audio as either deepfake or human-generated. To simulate real-world adversarial conditions, tempo stretching and pitch shifting were applied to the dataset. Mel spectrograms were generated from the modified audio, then used to train and evaluate a convolutional neural network. In addition to presenting technical results, this work explores the ethical and societal implications of TTM platforms, arguing that carefully designed detection systems are essential to both protecting artists and unlocking the positive potential of generative AI in music.
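The abstract outlines a concrete pipeline: augment each clip with tempo stretching and pitch shifting, convert it to a Mel spectrogram, and feed the result to a ResNet18 binary classifier. Below is a minimal sketch of what that pipeline could look like, using librosa for the audio transforms and torchvision's ResNet18; the sample rate, augmentation amounts, file name, and label convention are illustrative assumptions rather than the paper's actual configuration.

```python
# Hypothetical sketch of the pipeline described in the abstract; parameter
# values and the label convention are assumptions, not the authors' settings.
import numpy as np
import librosa
import torch
import torch.nn as nn
from torchvision.models import resnet18

def augmented_mel_spectrogram(path, sr=16000, n_mels=128, rate=1.1, n_steps=2):
    """Load a clip, apply tempo stretching and pitch shifting,
    and return a log-scaled Mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    y = librosa.effects.time_stretch(y, rate=rate)               # tempo stretch
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)   # pitch shift
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# ResNet18 adapted to single-channel spectrogram input and two output
# classes (deepfake vs. human-generated).
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

# Example forward pass on one spectrogram ("clip.wav" is a placeholder path).
spec = augmented_mel_spectrogram("clip.wav")
x = torch.from_numpy(spec).float().unsqueeze(0).unsqueeze(0)  # (1, 1, mels, frames)
logits = model(x)
pred = logits.argmax(dim=1)  # assumed convention: 0 = human, 1 = deepfake
```

Swapping out the first convolution and the final fully connected layer is the standard way to adapt ResNet18, which expects 3-channel images and 1000 ImageNet classes, to single-channel spectrograms with a two-class output.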