TalkingHeadBench: A Multi-Modal Benchmark & Analysis of Talking-Head DeepFake Detection

Authors: Xinqi Xiong, Prakrut Patel, Qingyuan Fan, Amisha Wadhwa, Sarathy Selvam, Xiao Guo, Luchao Qi, Xiaoming Liu, Roni Sengupta

Published: 2025-05-30 17:59:08+00:00

Comment: WACV2026

AI Summary

This paper introduces TalkingHeadBench, a comprehensive multi-modal, multi-generator benchmark and meticulously curated dataset designed to evaluate state-of-the-art deepfake detectors against advanced talking-head generative models. It features deepfakes synthesized by leading academic and commercial models, employing structured protocols to assess detector robustness and generalization under distribution shifts in identity and generator characteristics. The benchmark reveals significant performance degradation of existing detectors on high-quality deepfakes, underscoring the urgent need for more robust and generalizable detection solutions.

Abstract

The rapid advancement of talking-head deepfake generation fueled by advanced generative models has elevated the realism of synthetic videos to a level that poses substantial risks in domains such as media, politics, and finance. However, current benchmarks for deepfake talking-head detection fail to reflect this progress, relying on outdated generators and offering limited insight into model robustness and generalization. We introduce TalkingHeadBench, a comprehensive multi-model multi-generator benchmark and curated dataset designed to evaluate the performance of state-of-the-art detectors on the most advanced generators. Our dataset includes deepfakes synthesized by leading academic and commercial models and features carefully constructed protocols to assess generalization under distribution shifts in identity and generator characteristics. We benchmark a diverse set of existing detection methods, including CNNs, vision transformers, and temporal models, and analyze their robustness and generalization capabilities. In addition, we provide error analysis using Grad-CAM visualizations to expose common failure modes and detector biases. TalkingHeadBench is hosted on https://huggingface.co/datasets/luchaoqi/TalkingHeadBench with open access to all data splits and protocols. Our benchmark aims to accelerate research towards more robust and generalizable detection models in the face of rapidly evolving generative techniques.


Key findings

State-of-the-art detectors, despite high accuracy on older benchmarks, suffer substantial performance drops on TalkingHeadBench, especially at strict false-positive rates. Generalization across generator shifts proves harder than across identity shifts, with EMOPortraits emerging as a particularly difficult generator. Error analysis shows that as generator quality improves, detectors often shift attention from facial features to background artifacts, indicating reliance on non-facial cues and poor generalization.
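Evaluating detectors "at strict false positive rates" means fixing a low tolerance for misclassifying real videos and measuring how many fakes are still caught at the resulting score threshold. The sketch below illustrates this metric on synthetic scores; the function, toy data, and 1% operating point are illustrative assumptions, not the paper's exact evaluation code.

```python
import numpy as np

def tpr_at_fpr(labels, scores, target_fpr=0.01):
    """True-positive rate at a fixed false-positive rate.

    Picks the strictest score threshold that lets through at most
    target_fpr of the real (label 0) samples, then reports the fraction
    of fake (label 1) samples scoring above it. Illustrative metric.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    neg = np.sort(scores[labels == 0])[::-1]  # negative scores, high to low
    k = int(np.floor(target_fpr * len(neg)))  # how many negatives may pass
    thresh = neg[k] if k < len(neg) else -np.inf
    return float(np.mean(scores[labels == 1] > thresh))

# Toy detector: fakes score higher on average than reals
rng = np.random.default_rng(0)
reals = rng.normal(0.3, 0.1, 1000)   # label 0
fakes = rng.normal(0.7, 0.1, 1000)   # label 1
labels = np.concatenate([np.zeros(1000), np.ones(1000)])
scores = np.concatenate([reals, fakes])
print(tpr_at_fpr(labels, scores, target_fpr=0.01))
```

A detector can look strong on overall accuracy or AUC yet lose most of its true positives once the threshold is tightened this way, which is the regime that matters for deployment.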
Approach

The authors developed TalkingHeadBench, a dataset of 2,994 high-quality talking-head deepfakes from eight modern generators, manually curated to remove obvious artifacts. They designed three evaluation protocols to measure detector generalization across identity, generator, and combined shifts. Seven state-of-the-art detectors were benchmarked, and their failure modes were analyzed using Grad-CAM visualizations.
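The three protocols can be pictured as train/test partitions over (identity, generator) pairs: hold out identities, hold out generators, or hold out both at once. The sketch below constructs such splits over a toy sample grid; the function name, split proportions, and partitioning rules are illustrative stand-ins, not the benchmark's actual specification.

```python
import random

def protocol_splits(samples, seed=0):
    """Sketch of train/test splits over (identity, generator) pairs.

    `samples` is a list of dicts with 'identity' and 'generator' keys.
    Protocol names and rules here are illustrative, not the paper's.
    """
    rng = random.Random(seed)
    identities = sorted({s["identity"] for s in samples})
    generators = sorted({s["generator"] for s in samples})
    rng.shuffle(identities)
    rng.shuffle(generators)
    held_ids = set(identities[: len(identities) // 2])   # unseen at train time
    held_gens = set(generators[: len(generators) // 2])  # unseen at train time

    return {
        # Protocol 1: test identities never appear in training
        "identity_shift": (
            [s for s in samples if s["identity"] not in held_ids],
            [s for s in samples if s["identity"] in held_ids],
        ),
        # Protocol 2: test generators never appear in training
        "generator_shift": (
            [s for s in samples if s["generator"] not in held_gens],
            [s for s in samples if s["generator"] in held_gens],
        ),
        # Protocol 3: both identity and generator unseen; samples mixing
        # held and unheld attributes are dropped to avoid leakage
        "combined_shift": (
            [s for s in samples
             if s["identity"] not in held_ids and s["generator"] not in held_gens],
            [s for s in samples
             if s["identity"] in held_ids and s["generator"] in held_gens],
        ),
    }

# Toy grid: 10 identities x 4 generators
samples = [{"identity": f"id{i}", "generator": f"gen{g}"}
           for i in range(10) for g in range(4)]
splits = protocol_splits(samples)
```

Under this framing, the combined protocol is the hardest because a detector gets no training signal for either the test identities or the test generators, matching the finding that generator shifts degrade performance most.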
Datasets

TalkingHeadBench, FFHQ, CelebV-HQ, FaceForensics++
Model(s)

CADDM, HiFi-Net, AltFreezing, TALL, LipFD, DeepFake-Adapter, MM-Det
Author countries

USA