Do Not DeepFake Me: Privacy-Preserving Neural 3D Head Reconstruction Without Sensitive Images

Authors: Jiayi Kong, Xurui Song, Shuo Huai, Baixin Xu, Jun Luo, Ying He

Published: 2023-12-07 07:41:10+00:00

AI Summary

This paper introduces a novel two-stage 3D facial reconstruction method that prioritizes privacy by avoiding the use of sensitive facial images. It uses non-sensitive rear-head images for initial geometry and refines it using processed, privacy-removed gradient images, achieving comparable accuracy to methods using full images while resisting deepfake applications and facial recognition systems.

Abstract

While 3D head reconstruction is widely used for modeling, existing neural reconstruction approaches rely on high-resolution multi-view images, posing notable privacy issues. Individuals are particularly sensitive to facial features, and facial image leakage can enable many malicious activities, such as unauthorized tracking and deepfakes. In contrast, geometric data is less susceptible to misuse due to its complex processing requirements and the absence of facial texture features. In this paper, we propose a novel two-stage 3D facial reconstruction method aimed at avoiding exposure to sensitive facial information while preserving detailed geometric accuracy. Our approach first uses non-sensitive rear-head images to establish initial geometry and then refines this geometry using processed, privacy-removed gradient images. Extensive experiments show that the resulting geometry is comparable to that of methods using full images, while the process is resistant to DeepFake applications and facial recognition (FR) systems, thereby proving its effectiveness in privacy protection.


Key findings
The proposed method achieves comparable geometric accuracy to methods using full facial images, while significantly reducing the effectiveness of facial recognition systems and deepfake generation. The two-stage approach effectively leverages non-sensitive data for initial geometry and privacy-protected gradient information for refinement.
Approach
The approach uses a two-stage training process. The first stage uses non-sensitive rear-head images to establish basic geometry. The second stage refines the geometry using processed gradient images from frontal views, which have been modified to remove sensitive facial information, focusing only on color variations.
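The paper does not publish its exact preprocessing code, but the idea of a "gradient image" that keeps local color variations while discarding identifying texture can be sketched with a simple gradient-magnitude transform. The function name and the central-difference formulation below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def gradient_image(rgb):
    """Turn an RGB frontal view into a normalized gradient-magnitude map.

    Finite differences preserve local intensity variations (edges, shading
    changes useful for geometry refinement) while dropping absolute color
    and texture that facial recognition relies on. Illustrative sketch only.
    """
    # Luminance via standard Rec. 601 weights
    gray = rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    # Central differences along image columns and rows
    gx[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0
    gy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0
    mag = np.sqrt(gx ** 2 + gy ** 2)
    peak = mag.max()
    return mag / peak if peak > 0 else mag
```

A flat region maps to zero everywhere, so uniform skin tone carries no signal; only edges and shading transitions survive, which is the property the refinement stage exploits.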
Datasets
FaceScape and High-Fidelity 3D Head (H3DS)
Model(s)
VolSDF
Author countries
Singapore