Explaining Deepfake Detection by Analysing Image Matching

Authors: Shichao Dong, Jin Wang, Jiajun Liang, Haoqiang Fan, Renhe Ji

Published: 2022-07-20 06:23:11+00:00

AI Summary

This paper investigates how deepfake detection models learn artifact features from binary labels, proposing three hypotheses related to image matching. It introduces the FST-Matching Deepfake Detection Model to improve detection performance, especially on compressed videos.

Abstract

This paper aims to interpret how deepfake detection models learn artifact features of images when supervised only by binary labels. To this end, three hypotheses are proposed from the perspective of image matching: (1) deepfake detection models distinguish real from fake images based on visual concepts that are neither source-relevant nor target-relevant, i.e., such visual concepts are artifact-relevant; (2) beyond the supervision of binary labels, deepfake detection models implicitly learn artifact-relevant visual concepts through FST-Matching (i.e., the matching of fake, source, and target images) in the training set; (3) artifact-relevant visual concepts implicitly learned through FST-Matching on the raw (uncompressed) training set are vulnerable to video compression. In experiments, the above hypotheses are verified across various DNNs. Furthermore, based on this understanding, we propose the FST-Matching Deepfake Detection Model to boost the performance of forgery detection on compressed videos. Experimental results show that our method achieves strong performance, especially on highly compressed (e.g., c40) videos.
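The core architectural idea, namely predicting real/fake only from features that are irrelevant to both the source and the target identity, can be pictured with a small PyTorch sketch. This is not the authors' implementation: the backbone choice, head dimensions, and auxiliary identity classifiers are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): split backbone features into
# source-, target-, and artifact-relevant branches, and classify real/fake
# only from the artifact branch.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DisentangledDetector(nn.Module):
    def __init__(self, num_identities: int, feat_dim: int = 512):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d pooled features
        self.backbone = backbone
        # Projection heads for the three feature groups (dimensions assumed).
        self.source_head = nn.Linear(feat_dim, 128)
        self.target_head = nn.Linear(feat_dim, 128)
        self.artifact_head = nn.Linear(feat_dim, 128)
        # Auxiliary identity classifiers push the source/target branches toward
        # identity information; the real/fake classifier sees only the artifact branch.
        self.source_id = nn.Linear(128, num_identities)
        self.target_id = nn.Linear(128, num_identities)
        self.real_fake = nn.Linear(128, 2)

    def forward(self, x):
        feat = self.backbone(x)
        return {
            "source_logits": self.source_id(self.source_head(feat)),
            "target_logits": self.target_id(self.target_head(feat)),
            "fake_logits": self.real_fake(self.artifact_head(feat)),
        }
```

In training, matched (fake, source, target) triplets would supervise the source and target branches with identity labels and the artifact branch with the binary real/fake label; the exact losses and disentanglement constraints used in the paper may differ.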


Key findings
All three hypotheses were verified: detectors rely on artifact-relevant visual concepts that are neither source- nor target-relevant, these concepts are learned implicitly through FST-Matching, and they are vulnerable to video compression. The FST-Matching Deepfake Detection Model showed improved performance on compressed videos, particularly highly compressed ones (e.g., c40), compared to state-of-the-art methods.
Approach
The authors propose an explanation method based on Shapley values to interpret deepfake detection models' predictions. Based on their findings, they develop the FST-Matching Deepfake Detection Model, which disentangles artifact-relevant features (those irrelevant to both source and target) from source- and target-relevant ones to improve detection on compressed videos.
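As one way to picture the explanation step, below is a rough Monte Carlo estimator of region-level Shapley values for a detector's "fake" score. The grid partition, zero-baseline masking, sample count, and class ordering are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch: permutation-sampling estimate of Shapley values over image regions.
import torch

@torch.no_grad()
def shapley_regions(model, image, grid=4, n_samples=200, baseline=0.0):
    """image: (1, 3, H, W) tensor; returns a (grid*grid,) tensor of region values."""
    _, _, H, W = image.shape
    n = grid * grid
    values = torch.zeros(n)
    hs, ws = H // grid, W // grid

    def mask_image(player_set):
        # Regions outside player_set are replaced by the baseline value.
        out = torch.full_like(image, baseline)
        for p in player_set:
            r, c = divmod(p, grid)
            out[..., r*hs:(r+1)*hs, c*ws:(c+1)*ws] = image[..., r*hs:(r+1)*hs, c*ws:(c+1)*ws]
        return out

    def fake_score(x):
        return torch.softmax(model(x), dim=1)[0, 1].item()  # assumes class 1 = fake

    for _ in range(n_samples):
        order = torch.randperm(n).tolist()
        included = []
        prev = fake_score(mask_image(included))
        for p in order:
            included.append(p)
            cur = fake_score(mask_image(included))
            values[p] += cur - prev   # marginal contribution of region p
            prev = cur
    return values / n_samples
```

Hypothesis 1 could then be probed by checking whether the high-value regions coincide with regions that are informative about neither the source nor the target identity.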
Datasets
FaceForensics++ (FF++), including its compressed versions (e.g., c40).
Model(s)
ResNet-18, ResNet-34, EfficientNet-b3, pre-trained models from [36] and [53], and the proposed FST-Matching Deepfake Detection Model.
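A minimal sketch of instantiating the listed backbones as binary real/fake classifiers follows; the pretraining choice and head replacement are assumptions, and the pre-trained models from [36] and [53] are not reproduced here.

```python
# Sketch: build the evaluated backbones as two-class detectors.
import torch.nn as nn
from torchvision.models import resnet18, resnet34
import timm

def build_detector(name: str, num_classes: int = 2) -> nn.Module:
    if name == "resnet18":
        model = resnet18(weights="IMAGENET1K_V1")
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    elif name == "resnet34":
        model = resnet34(weights="IMAGENET1K_V1")
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    elif name == "efficientnet_b3":
        model = timm.create_model("efficientnet_b3", pretrained=True, num_classes=num_classes)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return model
```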
Author countries
UNKNOWN