DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues

Authors: Kun Pan, Yifang Yin, Yao Wei, Feng Lin, Zhongjie Ba, Zhenguang Liu, Zhibo Wang, Lorenzo Cavallaro, Kui Ren

Published: 2023-09-18 07:02:26+00:00

AI Summary

This paper introduces DFIL, a novel incremental learning framework for deepfake detection that addresses the challenge of model accuracy degradation when encountering new deepfake methods. DFIL achieves this by learning domain-invariant representations using supervised contrastive learning and mitigating catastrophic forgetting through multi-perspective knowledge distillation and a novel replay strategy.
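The domain-invariant representation is learned with supervised contrastive learning. As a concrete reference point, here is a minimal PyTorch sketch of the standard supervised contrastive (SupCon) loss that this family of methods builds on; the temperature value, the L2-normalized projection features, and the function name are illustrative assumptions, not details taken from the paper.

```python
import torch

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss (Khosla et al., 2020) sketch.

    features: (N, D) L2-normalized projection embeddings for a batch.
    labels:   (N,) integer labels (e.g., real/fake).
    Samples sharing a label are pulled together; others are pushed apart.
    """
    device = features.device
    n = features.size(0)

    # Pairwise cosine similarities, scaled by temperature.
    logits = features @ features.T / temperature
    # Exclude self-comparisons from the softmax.
    self_mask = torch.eye(n, dtype=torch.bool, device=device)
    logits = logits.masked_fill(self_mask, float('-inf'))

    # Positive pairs: same label, excluding self.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Row-wise log-softmax; average log-probability over positives.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0).sum(dim=1) / pos_count)
    return loss.mean()
```

In an incremental round, this loss would be computed on batches that mix replayed old-domain samples with the few new-domain samples, so that real and fake features align across domains rather than overfitting to the new forgery method.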

Abstract

The malicious use and widespread dissemination of deepfakes pose a significant crisis of trust. Current deepfake detection models can generally recognize forged images when trained on a large dataset. However, the accuracy of detection models degrades significantly on images generated by new deepfake methods due to the difference in data distribution. To tackle this issue, we present a novel incremental learning framework that improves the generalization of deepfake detection models by continually learning from a small number of new samples. To cope with different data distributions, we propose to learn a domain-invariant representation based on supervised contrastive learning, preventing overfitting to the insufficient new data. To mitigate catastrophic forgetting, we regularize our model at both the feature level and the label level using a multi-perspective knowledge distillation approach. Finally, we propose to select both central and hard representative samples to update the replay set, which is beneficial for both domain-invariant representation learning and rehearsal-based knowledge preservation. We conduct extensive experiments on four benchmark datasets, obtaining a new state-of-the-art average forgetting rate of 7.01 and average accuracy of 85.49 on FF++, DFDC-P, DFD, and CDF2. Our code is released at https://github.com/DeepFakeIL/DFIL.
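The multi-perspective distillation regularizes the new model against the old one at both the feature level and the label level. A minimal sketch of that idea, assuming MSE for feature matching and temperature-scaled KL divergence on soft labels (common distillation choices; the paper's exact loss forms and weighting may differ):

```python
import torch.nn.functional as F

def multi_perspective_kd(student_feat, teacher_feat,
                         student_logits, teacher_logits,
                         temp=2.0, alpha=1.0, beta=1.0):
    """Distill the old (teacher) model into the new (student) model at two
    levels, so adapting to new forgeries does not erase old knowledge.

    Feature level: keep the new feature space aligned with the old one.
    Label level:   match the frozen teacher's softened class posteriors.
    """
    # Feature-level distillation on intermediate representations.
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())

    # Label-level distillation with temperature-scaled soft targets.
    soft_t = F.softmax(teacher_logits.detach() / temp, dim=1)
    log_s = F.log_softmax(student_logits / temp, dim=1)
    label_loss = F.kl_div(log_s, soft_t, reduction='batchmean') * temp ** 2

    return alpha * feat_loss + beta * label_loss
```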


Key findings
DFIL achieves state-of-the-art results with an average forgetting rate of 7.01 and average accuracy of 85.49 across the four benchmark datasets. The ablation study validates the effectiveness of each component of the DFIL framework. The visualization results demonstrate that DFIL learns more consistent and informative features compared to existing methods.
Approach
DFIL uses supervised contrastive learning to learn domain-invariant representations, preventing overfitting to the limited new data. It mitigates catastrophic forgetting through multi-perspective knowledge distillation and through a novel replay strategy that selects both central and hard samples to update the replay set (see the sketch below).
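A plausible reading of the replay selection: "central" samples lie closest to their class's feature centroid, while "hard" samples have the smallest prediction margin. The sketch below illustrates that interpretation; the centroid-distance and margin criteria, the exemplar counts, and the function name are assumptions, not the paper's exact rule.

```python
import torch

def select_replay(features, logits, labels, n_central=10, n_hard=10):
    """Pick replay exemplars per class: 'central' samples nearest the class
    feature centroid and 'hard' samples with the smallest prediction margin.

    features: (N, D) embeddings; logits: (N, C); labels: (N,).
    Returns indices into the dataset to keep in the replay buffer.
    """
    probs = torch.softmax(logits, dim=1)
    # Margin between the top-2 class probabilities; small margin = hard.
    top2 = probs.topk(2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]

    keep = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        feats_c = features[idx]
        centroid = feats_c.mean(dim=0, keepdim=True)

        # Central: smallest distance to the class centroid.
        dist = (feats_c - centroid).norm(dim=1)
        central = idx[dist.argsort()[:n_central]]

        # Hard: smallest confidence margin within the class.
        hard = idx[margin[idx].argsort()[:n_hard]]

        keep.append(torch.cat([central, hard]).unique())
    return torch.cat(keep)
```

Keeping both kinds of exemplars serves both goals named in the abstract: central samples anchor the class structure needed for domain-invariant representation learning, while hard samples preserve the decision boundary for rehearsal.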
Datasets
FF++ (FaceForensics++), DFDC-P (DeepFake Detection Challenge Preview), DFD (DeepFakeDetection), CDF2 (Celeb-DF v2)
Model(s)
Xception (primary backbone); EfficientNet-B4 and ResNet34 are also evaluated for backbone comparison.
Author countries
China, Singapore, United Kingdom