Multi-spectral Class Center Network for Face Manipulation Detection and Localization

Authors: Changtao Miao, Qi Chu, Zhentao Tan, Zhenchao Jin, Tao Gong, Wanyi Zhuang, Yue Wu, Bin Liu, Honggang Hu, Nenghai Yu

Published: 2023-05-18 08:09:20+00:00

AI Summary

The paper introduces MSCCNet, a novel network for face manipulation detection and localization that leverages multi-frequency spectrum information. MSCCNet uses a Multi-Spectral Class Center module to learn generalizable features and a Multi-level Features Aggregation module to incorporate low-level forgery artifacts, achieving superior performance on benchmark datasets.

Abstract

As deepfake content proliferates online, advancing face manipulation forensics has become crucial. To combat this emerging threat, previous methods mainly focus on studying how to distinguish authentic and manipulated face images. Although impressive, image-level classification lacks explainability and is limited to specific application scenarios, spurring recent research on pixel-level prediction for face manipulation forensics. However, existing forgery localization methods fail to exploit frequency-based forgery traces in the localization network. In this paper, we observe that multi-frequency spectrum information is effective for identifying tampered regions. To this end, a novel Multi-Spectral Class Center Network (MSCCNet) is proposed for face manipulation detection and localization. Specifically, we design a Multi-Spectral Class Center (MSCC) module to learn more generalizable and multi-frequency features. Based on the features of different frequency bands, the MSCC module collects multi-spectral class centers and computes pixel-to-class relations. Applying these multi-spectral class-level representations suppresses semantic information of visual concepts, which is insensitive to the manipulated regions of forged images. Furthermore, we propose a Multi-level Features Aggregation (MFA) module to exploit more low-level forgery artifacts and structural textures. Meanwhile, we construct a comprehensive localization benchmark based on the pixel-level FF++ and Dolos datasets. Experimental results quantitatively and qualitatively demonstrate the effectiveness and superiority of the proposed MSCCNet. We expect this work to inspire more studies on pixel-level face manipulation localization. The code is available at https://github.com/miaoct/MSCCNet.


Key findings
MSCCNet outperforms existing methods in face manipulation localization and detection on both P-FF++ and Dolos datasets. The MSCC module effectively suppresses semantic information and enhances frequency-aware feature learning. The model shows strong generalization abilities to unseen datasets and manipulation techniques.
Approach
MSCCNet uses a Multi-Spectral Class Center (MSCC) module to learn multi-frequency features from different frequency bands, computing pixel-to-class relations and suppressing irrelevant semantic information. A Multi-level Features Aggregation (MFA) module combines low-level forgery artifacts and structural textures. The network performs both image-level classification and pixel-level localization.
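The core of the MSCC idea, per-band class centers computed as probability-weighted means of pixel features, followed by a pixel-to-class attention step, can be sketched as follows. This is a minimal NumPy illustration of the mechanism for a single frequency band, not the paper's implementation: function names, the scaled dot-product affinity, and the coarse class-probability input are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def class_centers(feats, class_probs):
    """Collect per-class centers as probability-weighted means of pixel features.

    feats: (N, C) pixel features for one frequency band.
    class_probs: (N, K) coarse per-pixel class probabilities (e.g. real/fake).
    Returns (K, C) class centers.
    """
    weights = class_probs / (class_probs.sum(axis=0, keepdims=True) + 1e-8)
    return weights.T @ feats

def pixel_to_class(feats, centers):
    """Compute pixel-to-class relations and class-level pixel representations."""
    attn = softmax(feats @ centers.T / np.sqrt(feats.shape[1]), axis=-1)  # (N, K)
    return attn @ centers  # (N, C)

# toy example: 16 pixels, 8-dim features, 2 classes
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))
probs = softmax(rng.standard_normal((16, 2)), axis=-1)
centers = class_centers(feats, probs)     # shape (2, 8)
refined = pixel_to_class(feats, centers)  # shape (16, 8)
```

In the paper, this computation is repeated on features from different frequency bands, so the class-level representations capture frequency-aware rather than purely semantic structure.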
Datasets
P-FF++ (a reconstructed version of FaceForensics++ with pixel-level annotations) and Dolos datasets.
Model(s)
Dilated ResNet-50 as the backbone, with a Multi-Spectral Class Center (MSCC) module and a Multi-level Features Aggregation (MFA) module for the localization branch. A simple MLP layer is used for the classification branch.
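The MFA module's role, pulling low-level texture and artifact cues from early backbone stages into the localization branch, amounts to resolution-aligned multi-level feature fusion. A minimal sketch under stated assumptions: nearest-neighbor upsampling and plain channel concatenation stand in for whatever interpolation and learned fusion layers the paper actually uses.

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor 2x upsampling for a (C, H, W) feature map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def mfa_aggregate(low, mid, high):
    """Fuse multi-level backbone features at the resolution of the earliest level.

    low:  (C1, H, W)     early-stage features rich in textures/artifacts.
    mid:  (C2, H/2, W/2) intermediate features.
    high: (C3, H/4, W/4) deep, more semantic features.
    Returns (C1+C2+C3, H, W); a learned conv would normally mix channels after this.
    """
    return np.concatenate(
        [low, upsample2x(mid), upsample2x(upsample2x(high))], axis=0
    )

# toy feature maps from three backbone stages
low = np.zeros((4, 16, 16))
mid = np.ones((8, 8, 8))
high = np.full((16, 4, 4), 2.0)
fused = mfa_aggregate(low, mid, high)  # shape (28, 16, 16)
```

Keeping the fused map at the early-stage resolution is what lets the localization head see fine-grained structural textures that deep, downsampled features alone would lose.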
Author countries
China