RTCFake: Speech Deepfake Detection in Real-Time Communication

Authors: Jun Xue, Zhuolin Yi, Yihuan Huang, Yanzhen Ren, Yujie Chen, Cunhang Fan, Zicheng Su, Yonghong Zhang, Bo Cai

Published: 2026-04-26 14:42:50+00:00

Comment: Accepted by ACL 2026

AI Summary

This paper addresses the challenge of detecting speech deepfakes in real-time communication (RTC) scenarios, where existing methods struggle with complex distortions such as noise suppression and codec compression. It introduces RTCFake, the first large-scale (600-hour) speech deepfake dataset tailored specifically for RTC, constructed by transmitting speech through mainstream platforms. The authors also propose a phoneme-guided consistency learning (PCL) strategy that compels models to learn platform-invariant semantic structural representations, significantly improving cross-platform generalization and noise robustness.

Abstract

With the rapid advancement of speech generation technologies, the threat posed by speech deepfakes in real-time communication (RTC) scenarios has intensified. However, existing detection studies mainly focus on offline simulations and struggle to cope with the complex distortions introduced during RTC transmission, including unknown speech enhancement processes (e.g., noise suppression) and codec compression. To address this challenge, we present the first large-scale speech deepfake dataset tailored for RTC scenarios, termed RTCFake, totaling approximately 600 hours. The dataset is constructed by transmitting speech through multiple mainstream social media and conferencing platforms (e.g., Zoom), enabling precise pairing between offline and online speech. In addition, we propose a phoneme-guided consistency learning (PCL) strategy that compels models to learn platform-invariant semantic structural representations. In this paper, the RTCFake dataset is divided into training, development, and evaluation sets. The evaluation set further includes both unseen RTC platforms and unseen complex noise conditions, thereby providing a more realistic and challenging benchmark for speech deepfake detection. Furthermore, the proposed PCL strategy achieves significant improvements in both cross-platform generalization and noise robustness, offering an effective and generalizable modeling paradigm. The RTCFake dataset is available at https://huggingface.co/datasets/JunXueTech/RTCFake.


Key findings
Existing open-source datasets and models trained on them show very limited generalization capability and high error rates in realistic RTC scenarios due to severe domain mismatch. The proposed RTCFake dataset and phoneme-guided consistency learning (PCL) strategy achieve significant improvements in detection robustness and generalization. PCL consistently reduces EER across both seen and unseen communication platforms and diverse noise conditions by effectively leveraging platform-invariant phoneme-level representations.
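For context, the equal error rate (EER) cited above is the operating point where the false-acceptance rate (a spoof accepted as bona fide) equals the false-rejection rate (a bona fide utterance rejected). Below is a minimal NumPy sketch of the standard computation, not the authors' evaluation code; the score convention (higher = more likely bona fide) is an assumption:

```python
import numpy as np

def compute_eer(bonafide_scores: np.ndarray, spoof_scores: np.ndarray) -> float:
    """EER: the threshold where false-acceptance and false-rejection rates meet.
    Assumed convention: higher score means more likely bona fide."""
    thresholds = np.sort(np.concatenate([bonafide_scores, spoof_scores]))
    far = np.array([(spoof_scores >= t).mean() for t in thresholds])   # spoofs passed
    frr = np.array([(bonafide_scores < t).mean() for t in thresholds])  # bona fide blocked
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2)

# Toy usage with synthetic, well-separated score distributions
rng = np.random.default_rng(0)
print(compute_eer(rng.normal(1.0, 1.0, 500), rng.normal(-1.0, 1.0, 500)))
```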
Approach
The authors construct a new large-scale dataset, RTCFake, by sending both real and synthetic speech through various mainstream RTC platforms to simulate real-world distortions. They then propose a phoneme-guided consistency learning (PCL) strategy, which leverages the stable nature of phoneme-level representations under RTC transmission. PCL enforces consistency between offline and online speech representations at the semantic structural level during training, enabling the model to learn platform-invariant features.
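The summary does not spell out the PCL objective, so the PyTorch sketch below shows one plausible reading: frame features of an offline utterance and its RTC-transmitted counterpart are mean-pooled over matched phoneme segments, and the paired segment embeddings are pulled together with a cosine consistency term. The segment boundaries, mean pooling, and cosine loss are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def phoneme_pool(frames: torch.Tensor, boundaries: list[tuple[int, int]]) -> torch.Tensor:
    """Mean-pool frame features (T, D) into one embedding per phoneme segment."""
    return torch.stack([frames[s:e].mean(dim=0) for s, e in boundaries])

def pcl_loss(offline_frames: torch.Tensor, online_frames: torch.Tensor,
             boundaries: list[tuple[int, int]]) -> torch.Tensor:
    """Consistency term: matched phoneme segments of the offline and the
    RTC-transmitted utterance should yield the same representation."""
    z_off = F.normalize(phoneme_pool(offline_frames, boundaries), dim=-1)
    z_on = F.normalize(phoneme_pool(online_frames, boundaries), dim=-1)
    return (1 - (z_off * z_on).sum(dim=-1)).mean()  # 1 - cosine similarity

# Toy usage: 100 frames of 256-dim features, three phoneme segments
off, on = torch.randn(100, 256), torch.randn(100, 256)
print(pcl_loss(off, on, [(0, 30), (30, 55), (55, 100)]))
```

In practice, a term like this would be added to the usual spoof-detection classification loss, so the encoder is rewarded for producing representations that survive platform transmission.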
Datasets
RTCFake, LibriHeavy, Chinese-Lips
Model(s)
XLSR (front-end feature extractor), AASIST (back-end classifier using a heterogeneous stacked graph attention network), Wav2Vec2-Large-XLSR-53 (for phoneme boundary identification), RawBoost (data augmentation)
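As a rough sketch of how the named front end could be wired up, the snippet below loads Wav2Vec2-Large-XLSR-53 from Hugging Face and pools its frame features into an utterance-level score. The linear head is a placeholder standing in for the AASIST graph-attention back end, which is not reproduced here:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Front end named in the summary; the linear head below is only a stand-in
# for the AASIST heterogeneous stacked graph attention network.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
xlsr = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53").eval()
head = torch.nn.Linear(xlsr.config.hidden_size, 2)  # bona fide vs. spoof

waveform = torch.randn(16000 * 4)  # 4 s of dummy 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    frames = xlsr(inputs.input_values).last_hidden_state  # (1, T, 1024)
logits = head(frames.mean(dim=1))  # mean-pooled utterance-level logits
```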
Author countries
China