AuViRe: Audio-visual Speech Representation Reconstruction for Deepfake Temporal Localization

Authors: Christos Koutlis, Symeon Papadopoulos

Published: 2025-11-24 11:19:21+00:00

Comment: WACV 2026

AI Summary

This work introduces AuViRe, a novel approach to temporal localization of deepfakes that leverages Audio-Visual Speech Representation Reconstruction. The method reconstructs speech representations of one modality (e.g., lip movements) from the other (e.g., the audio waveform); since cross-modal reconstruction is harder on manipulated segments, the amplified reconstruction discrepancies serve as localization cues. AuViRe achieves state-of-the-art performance on established benchmarks and demonstrates strong robustness and real-world applicability.

Abstract

With the rapid advancement of sophisticated synthetic audio-visual content, used, e.g., for subtle malicious manipulations, ensuring the integrity of digital media has become paramount. This work presents a novel approach to temporal localization of deepfakes by leveraging Audio-Visual Speech Representation Reconstruction (AuViRe). Specifically, our approach reconstructs speech representations from one modality (e.g., lip movements) based on the other (e.g., audio waveform). Cross-modal reconstruction is significantly more challenging in manipulated video segments, leading to amplified discrepancies, thereby providing robust discriminative cues for precise temporal forgery localization. AuViRe outperforms the state of the art by +8.9 AP@0.95 on LAV-DF, +9.6 AP@0.5 on AV-Deepfake1M, and +5.1 AUC on an in-the-wild experiment. Code available at https://github.com/mever-team/auvire.


Key findings
AuViRe outperforms state-of-the-art methods in temporal forgery localization, with significant gains (e.g., +8.9 AP@0.95 on LAV-DF, +9.6 AP@0.5 on AV-Deepfake1M). It also achieves near-perfect video-level deepfake detection (99.94 AUC on LAV-DF, 99.8 AUC on AV-Deepfake1M) as an emergent property. The approach demonstrates strong robustness against distortions and superior performance in challenging 'in-the-wild' scenarios.
Approach
AuViRe extracts speech-related features independently from the visual (lip movements) and audio (waveform) modalities using a self-supervised backbone. It then reconstructs representations both across modalities (e.g., visual from audio) and within a modality (e.g., audio from audio). Discrepancies between the original and reconstructed representations are computed and processed by a reconstruction-discrepancy encoder to identify forgeries and localize them in time.
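
To make the pipeline concrete, the following is a minimal sketch of the reconstruction-and-discrepancy idea, not the released implementation: AV-HuBERT feature extraction is replaced by placeholder tensors, and the module names (Reconstructor, R_a2v, R_v2a), layer sizes, and shapes are illustrative assumptions.

```python
# Minimal sketch of the discrepancy-based localization idea (not the authors'
# implementation). AV-HuBERT feature extraction is replaced by random tensors;
# module names (R_a2v, R_v2a) and shapes are assumptions.
import torch
import torch.nn as nn

T, D = 250, 768          # assumed: T frames, D-dimensional speech representations

class Reconstructor(nn.Module):
    """1D conv/deconv module mapping one representation sequence to another."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.ConvTranspose1d(dim, dim, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):                      # x: (B, T, D)
        return self.net(x.transpose(1, 2)).transpose(1, 2)

# Placeholders for backbone features of the visual (lip) and audio streams.
feat_v = torch.randn(1, T, D)                  # visual speech representation
feat_a = torch.randn(1, T, D)                  # audio speech representation

# Cross-modal reconstruction (e.g., visual from audio, audio from visual).
R_a2v, R_v2a = Reconstructor(D), Reconstructor(D)
rec_v = R_a2v(feat_a)                          # reconstruct visual from audio
rec_a = R_v2a(feat_v)                          # reconstruct audio from visual

# Per-frame reconstruction discrepancies; manipulated segments are expected
# to yield larger errors than pristine ones.
disc_v = (feat_v - rec_v).pow(2).mean(dim=-1)  # (B, T)
disc_a = (feat_a - rec_a).pow(2).mean(dim=-1)  # (B, T)
disc = torch.stack([disc_v, disc_a], dim=-1)   # (B, T, 2)

# A small encoder (simplified here to an MLP) turns the discrepancy sequence
# into per-frame fake scores, which downstream become temporal segments.
encoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
frame_scores = torch.sigmoid(encoder(disc)).squeeze(-1)   # (B, T)
print(frame_scores.shape)
```

In manipulated intervals the cross-modal reconstruction is expected to fail more severely, so the per-frame discrepancy sequence carries the signal that the encoder converts into frame-level fake scores for localization.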
Datasets
LAV-DF, AV-Deepfake1M, and a collection of 371 real-world videos curated with fact-checkers
Model(s)
AuViRe (proposed architecture); AV-HuBERT Base pre-trained on LRS3 without fine-tuning as the backbone B; 1D convolutional and deconvolutional layers for the reconstruction (R) and encoding (E) modules; overall a CNN-type model.
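
As a rough illustration of how an encoding module built from 1D convolutions could turn the discrepancy sequence into temporal forgery segments, here is a hedged sketch; the layer sizes, threshold, and frame-grouping rule are hypothetical and not taken from the paper.

```python
# Hedged sketch of a reconstruction-discrepancy encoder built from 1D
# convolutions, plus a simple way to turn frame scores into temporal segments.
# Layer sizes, threshold, and grouping rule are assumptions, not the paper's
# exact configuration.
import torch
import torch.nn as nn

class DiscrepancyEncoder(nn.Module):
    """Maps a per-frame discrepancy sequence (B, T, C) to per-frame fake scores."""
    def __init__(self, in_channels=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),
        )

    def forward(self, disc):                    # disc: (B, T, C)
        return self.net(disc.transpose(1, 2)).squeeze(1)   # (B, T)

def scores_to_segments(scores, fps=25.0, thr=0.5):
    """Group consecutive frames above `thr` into (start_sec, end_sec) segments."""
    mask = (scores > thr).tolist()
    segments, start = [], None
    for i, fake in enumerate(mask + [False]):   # sentinel closes a trailing run
        if fake and start is None:
            start = i
        elif not fake and start is not None:
            segments.append((start / fps, i / fps))
            start = None
    return segments

disc = torch.rand(1, 250, 2)                    # placeholder discrepancy features
frame_scores = torch.sigmoid(DiscrepancyEncoder()(disc))[0]
print(scores_to_segments(frame_scores))
```

A video-level score could then be obtained by pooling the frame scores (e.g., taking their maximum), in line with the emergent video-level detection noted in the key findings.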
Author countries
Greece