Semantics-Oriented Multitask Learning for DeepFake Detection: A Joint Embedding Approach

Authors: Mian Zou, Baosheng Yu, Yibing Zhan, Siwei Lyu, Kede Ma

Published: 2024-08-29 07:11:50+00:00

AI Summary

This paper proposes a semantics-oriented multitask learning approach for DeepFake detection based on the joint embedding of face images and textual descriptions of face attributes. It introduces an automated dataset expansion technique that enriches existing datasets with hierarchical semantic labels, and employs bi-level optimization to automate training.

Abstract

In recent years, the multimedia forensics and security community has seen remarkable progress in multitask learning for DeepFake (i.e., face forgery) detection. The prevailing approach has been to frame DeepFake detection as a binary classification problem augmented by manipulation-oriented auxiliary tasks. This scheme focuses on learning features specific to face manipulations with limited generalizability. In this paper, we delve deeper into semantics-oriented multitask learning for DeepFake detection, capturing the relationships among face semantics via joint embedding. We first propose an automated dataset expansion technique that broadens current face forgery datasets to support semantics-oriented DeepFake detection tasks at both the global face attribute and local face region levels. Furthermore, we resort to the joint embedding of face images and labels (depicted by text descriptions) for prediction. This approach eliminates the need for manually setting task-agnostic and task-specific parameters, which is typically required when predicting multiple labels directly from images. In addition, we employ bi-level optimization to dynamically balance the fidelity loss weightings of various tasks, making the training process fully automated. Extensive experiments on six DeepFake datasets show that our method improves the generalizability of DeepFake detection and renders some degree of model interpretation by providing human-understandable explanations.


Key findings
Extensive experiments on six DeepFake datasets demonstrate improved cross-dataset, cross-manipulation, and cross-attribute detection performance compared to state-of-the-art methods. The joint embedding approach enhances model interpretability by providing human-understandable explanations. The proposed automated dataset expansion technique significantly improves the generalizability of the model.
Approach
The approach uses joint embedding of face images and textual labels describing face attributes to learn a shared feature space for DeepFake detection. It employs bi-level optimization to dynamically balance the loss weights of different tasks, automating the training process and improving generalizability.
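The joint-embedding prediction step described above can be sketched as CLIP-style zero-shot scoring: an image embedding is compared against text-label embeddings by cosine similarity, and a softmax over the similarities yields label probabilities. This is a minimal illustration with random toy embeddings, not the paper's actual encoders (which are ViT-B/32 and a CLIP text Transformer); the temperature value is an assumption borrowed from CLIP's typical setting.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products become cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def joint_embedding_predict(image_emb, label_embs, temperature=0.07):
    """Score one image embedding against a bank of text-label embeddings.

    Returns softmax probabilities over labels (CLIP-style zero-shot
    classification). `temperature` sharpens the similarity logits and is
    an assumed hyperparameter here.
    """
    img = l2_normalize(image_emb)
    txt = l2_normalize(label_embs)
    logits = txt @ img / temperature          # cosine similarities, scaled
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()

# Toy example: two hypothetical labels ("real face", "manipulated face"),
# 4-d embeddings in place of real encoder outputs.
rng = np.random.default_rng(0)
label_embs = rng.normal(size=(2, 4))
image_emb = label_embs[1] + 0.05 * rng.normal(size=4)  # near label 1
probs = joint_embedding_predict(image_emb, label_embs)
```

Because prediction reduces to image-text similarity in a shared space, adding a new semantic task only requires new text descriptions, with no task-specific classification heads.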
Datasets
FaceForensics++ (FF++), FFSC, Celeb-DF (CDF), FaceShifter (FSh), DeeperForensics-1.0 (DF-1.0), DeepFake Detection Challenge (DFDC), DiffusionFace, DiFF
Model(s)
ViT-B/32 (from CLIP) for image encoding, the CLIP Transformer text encoder (GPT-2-style architecture) for text encoding
Author countries
Hong Kong, Singapore, China, USA