TalkingHeadBench: A Multi-Modal Benchmark & Analysis of Talking-Head DeepFake Detection

Authors: Xinqi Xiong, Prakrut Patel, Qingyuan Fan, Amisha Wadhwa, Sarathy Selvam, Xiao Guo, Luchao Qi, Xiaoming Liu, Roni Sengupta

Published: 2025-05-30 17:59:08+00:00

AI Summary

This paper introduces TalkingHeadBench, a multi-modal benchmark dataset for evaluating talking-head deepfake detection models. The dataset features high-quality deepfakes generated by multiple state-of-the-art academic and commercial generators and is designed to assess detector robustness and generalization under identity and generator distribution shifts.

Abstract

The rapid advancement of talking-head deepfake generation fueled by advanced generative models has elevated the realism of synthetic videos to a level that poses substantial risks in domains such as media, politics, and finance. However, current benchmarks for deepfake talking-head detection fail to reflect this progress, relying on outdated generators and offering limited insight into model robustness and generalization. We introduce TalkingHeadBench, a comprehensive multi-model multi-generator benchmark and curated dataset designed to evaluate the performance of state-of-the-art detectors on the most advanced generators. Our dataset includes deepfakes synthesized by leading academic and commercial models and features carefully constructed protocols to assess generalization under distribution shifts in identity and generator characteristics. We benchmark a diverse set of existing detection methods, including CNNs, vision transformers, and temporal models, and analyze their robustness and generalization capabilities. In addition, we provide error analysis using Grad-CAM visualizations to expose common failure modes and detector biases. TalkingHeadBench is hosted on https://huggingface.co/datasets/luchaoqi/TalkingHeadBench with open access to all data splits and protocols. Our benchmark aims to accelerate research towards more robust and generalizable detection models in the face of rapidly evolving generative techniques.
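
Since the dataset is hosted as a Hugging Face dataset repository, a minimal sketch of pulling it locally follows, using the standard huggingface_hub download API. The repository id comes from the abstract; the directory layout inside the repo is not specified here and should be inspected after download.

from huggingface_hub import snapshot_download

# Download the full TalkingHeadBench dataset repository to the local cache.
# The repo id is taken from the abstract; everything below the returned
# directory (split folders, file naming) is an assumption to verify.
local_dir = snapshot_download(
    repo_id="luchaoqi/TalkingHeadBench",
    repo_type="dataset",
)
print(local_dir)  # inspect the actual data splits and protocol files here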


Key findings
State-of-the-art deepfake detectors struggle to generalize across identity and generator shifts. Deepfakes from certain generators, such as EMOPortraits and Hallo2, prove particularly difficult to detect. Grad-CAM analysis reveals that some detectors rely on background cues rather than facial features, limiting their robustness.
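
The paper uses Grad-CAM to expose where detectors attend; below is a minimal PyTorch sketch of that kind of probe. The resnet18 backbone, the choice of layer4 as the target layer, and the random input frame are stand-ins for illustration, not the paper's actual detectors or data.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # stand-in for a trained frame-level detector
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0]

target_layer = model.layer4  # last conv block; this choice is an assumption
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

frame = torch.randn(1, 3, 224, 224)  # one preprocessed video frame (dummy)
logits = model(frame)
logits[0, logits.argmax()].backward()  # gradient of the predicted class

# Grad-CAM: channel weights = global-average-pooled gradients, then a
# weighted sum of activations followed by ReLU.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=frame.shape[-2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0,1]
# High values off the face region would suggest reliance on background cues.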
Approach
The authors create a benchmark dataset of high-quality talking-head deepfakes generated using multiple state-of-the-art generators. They then evaluate existing deepfake detection models on this dataset using three protocols designed to assess generalization under identity and generator distribution shifts.
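
The exact protocol definitions are not reproduced on this page; the sketch below illustrates, under stated assumptions, how identity-shift, generator-shift, and combined-shift splits could be constructed over a list of clips. The Clip fields and the three protocol shapes in the comments are hypothetical, not the paper's specification.

from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    identity: str   # subject identity label
    generator: str  # e.g. "EMOPortraits", "Hallo2", or "real"
    path: str       # path to the video file

def make_split(clips, train_gens, test_gens, train_ids, test_ids):
    # Keep a clip only if both its generator and identity belong to the split.
    train = [c for c in clips
             if c.generator in train_gens and c.identity in train_ids]
    test = [c for c in clips
            if c.generator in test_gens and c.identity in test_ids]
    return train, test

# Hypothetical protocol shapes (the paper's three protocols may differ):
#   P1: shared generators, disjoint identities -> identity shift
#   P2: disjoint generators, shared identities -> generator shift
#   P3: disjoint generators and identities     -> combined shift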
Datasets
TalkingHeadBench (created by the authors), FFHQ, CelebV-HQ, FaceForensics++
Model(s)
CADDM, TALL, LipFD, DeepFake-Adapter
Author countries
USA