SONICS: Synthetic Or Not -- Identifying Counterfeit Songs

Authors: Md Awsafur Rahman, Zaber Ibn Abdul Hakim, Najibul Haque Sarker, Bishmoy Paul, Shaikh Anowarul Fattah

Published: 2024-08-26 08:02:57+00:00

AI Summary

This paper introduces SONICS, a large-scale dataset for end-to-end synthetic song detection that addresses the limitations of existing datasets: a narrow focus on singing voice deepfakes, little music-lyrics diversity, short durations, and closed-access fake songs. It also proposes SpecTTTra, a novel architecture that efficiently models long-range temporal dependencies in songs, outperforming existing methods in F1 score, speed, and memory usage.

Abstract

The recent surge in AI-generated songs presents exciting possibilities and challenges. These innovations necessitate the ability to distinguish between human-composed and synthetic songs to safeguard artistic integrity and protect human musical artistry. Existing research and datasets in fake song detection only focus on singing voice deepfake detection (SVDD), where the vocals are AI-generated but the instrumental music is sourced from real songs. However, these approaches are inadequate for detecting contemporary end-to-end artificial songs where all components (vocals, music, lyrics, and style) could be AI-generated. Additionally, existing datasets lack music-lyrics diversity, long-duration songs, and open-access fake songs. To address these gaps, we introduce SONICS, a novel dataset for end-to-end Synthetic Song Detection (SSD), comprising over 97k songs (4,751 hours) with over 49k synthetic songs from popular platforms like Suno and Udio. Furthermore, we highlight the importance of modeling long-range temporal dependencies in songs for effective authenticity detection, an aspect entirely overlooked in existing methods. To utilize long-range patterns, we introduce SpecTTTra, a novel architecture that significantly improves time and memory efficiency over conventional CNN and Transformer-based models. For long songs, our top-performing variant outperforms ViT by 8% in F1 score, is 38% faster, and uses 26% less memory, while also surpassing ConvNeXt with a 1% F1 score gain, 20% speed boost, and 67% memory reduction.


Key findings
On long songs, SpecTTTra outperforms ViT by 8% in F1 score while running 38% faster and using 26% less memory, and surpasses ConvNeXt by 1% in F1 score with a 20% speed boost and 67% memory reduction. Experiments on SONICS demonstrate that modeling long-range temporal dependencies is essential for accurate synthetic song detection. AI models consistently outperform human evaluators at distinguishing real from fake songs.
Approach
The authors address the problem by creating SONICS, a new dataset of real and synthetic songs with diverse styles, lyrics, and lengths. They then propose SpecTTTra, an architecture whose spectro-temporal tokenizer slices the input spectrogram into separate temporal and spectral tokens, shortening the Transformer's input sequence and making long-audio classification faster and more memory-efficient (a sketch of the idea follows below).
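
To make the tokenization idea concrete, here is a minimal PyTorch sketch of a spectro-temporal tokenizer in the spirit of SpecTTTra, based only on the high-level description above. The module name, clip sizes (f, t), input shape, and embedding dimension are illustrative assumptions, not the paper's exact design or hyperparameters.

```python
import torch
import torch.nn as nn

class SpectroTemporalTokenizer(nn.Module):
    """Slice a mel spectrogram into spectral and temporal strip tokens.

    A (n_mels x n_frames) spectrogram is cut into n_mels/f spectral strips
    (each f mel bins wide, spanning all frames) and n_frames/t temporal
    strips (each t frames long, spanning all mel bins). Each strip is
    flattened and linearly projected, giving n_mels/f + n_frames/t tokens
    instead of the (n_mels*n_frames)/(f*t) patch tokens a ViT-style
    tokenizer would produce; that shorter sequence is the assumed source
    of the efficiency gain on long songs.
    """

    def __init__(self, n_mels=128, n_frames=3000, f=8, t=20, dim=384):
        super().__init__()
        assert n_mels % f == 0 and n_frames % t == 0
        self.f, self.t = f, t
        self.spec_proj = nn.Linear(f * n_frames, dim)  # one token per spectral strip
        self.temp_proj = nn.Linear(n_mels * t, dim)    # one token per temporal strip

    def forward(self, x):
        # x: (batch, n_mels, n_frames)
        B, F, T = x.shape
        spec_tokens = self.spec_proj(x.reshape(B, F // self.f, self.f * T))
        temp_tokens = self.temp_proj(x.transpose(1, 2).reshape(B, T // self.t, self.t * F))
        # (batch, F/f + T/t, dim), ready for a standard Transformer encoder
        return torch.cat([spec_tokens, temp_tokens], dim=1)


# Example: 128 mel bins x 3000 frames -> 128/8 + 3000/20 = 16 + 150 = 166 tokens,
# versus 128*3000/(8*20) = 2400 patch tokens under equivalent ViT-style patching.
tokens = SpectroTemporalTokenizer()(torch.randn(2, 128, 3000))
print(tokens.shape)  # torch.Size([2, 166, 384])
```

The design choice to show here is the sequence-length arithmetic: token count grows additively (F/f + T/t) with input length rather than multiplicatively, which is consistent with the reported speed and memory advantages over ViT on long songs.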
Datasets
SONICS dataset (over 97k songs, 4,751 hours; synthetic songs from the Suno and Udio platforms, real songs collected from YouTube), SingFake dataset
Model(s)
SpecTTTra, ConvNeXt, ViT, EfficientViT
Author countries
USA, Bangladesh