Towards Trustworthy AI: Secure Deepfake Detection using CNNs and Zero-Knowledge Proofs
Authors: H M Mohaimanul Islam, Huynh Q. N. Vo, Aditya Rane
Published: 2025-07-22 20:47:46+00:00
AI Summary
TrustDefender is a two-stage deepfake detection framework using a lightweight CNN for real-time detection in XR streams and a zero-knowledge proof (ZKP) protocol to validate results without revealing user data. It addresses computational constraints and privacy concerns of XR platforms, achieving 95.3% detection accuracy.
Abstract
In the era of synthetic media, deepfake manipulations pose a significant threat to information integrity. To address this challenge, we propose TrustDefender, a two-stage framework comprising (i) a lightweight convolutional neural network (CNN) that detects deepfake imagery in real-time extended reality (XR) streams, and (ii) an integrated succinct zero-knowledge proof (ZKP) protocol that validates detection results without disclosing raw user data. Our design addresses both the computational constraints of XR platforms and the stringent privacy requirements of sensitive settings. Experimental evaluations on multiple benchmark deepfake datasets demonstrate that TrustDefender achieves 95.3% detection accuracy, coupled with efficient proof generation underpinned by rigorous cryptography, ensuring seamless integration with high-performance artificial intelligence (AI) systems. By fusing advanced computer vision models with provable security mechanisms, our work establishes a foundation for reliable AI in immersive and privacy-sensitive applications.
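The two-stage flow described in the abstract can be sketched in miniature. This is not the paper's implementation: the CNN is replaced by a toy gradient-based score, and the succinct ZKP is replaced by a simple hash commitment that only illustrates the interface (attest to a verdict without shipping raw pixels); all function names and the threshold are hypothetical.

```python
# Illustrative sketch of a TrustDefender-style pipeline (assumptions:
# the paper's CNN is stood in for by a crude gradient score, and the
# ZKP by a SHA-256 commitment; names and threshold are hypothetical).
import hashlib

def detect_deepfake(frame):
    """Stage 1 stand-in: score a frame by mean absolute horizontal
    gradient, a crude proxy for a learned artifact detector."""
    total, count = 0.0, 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return (total / max(count, 1)) > 0.5  # hypothetical threshold

def commit_result(frame, is_fake, nonce=b"demo-nonce"):
    """Stage 2 stand-in: bind the verdict to the frame contents so a
    verifier can later check consistency without receiving raw pixels.
    A real succinct ZKP would instead prove the CNN ran correctly."""
    h = hashlib.sha256()
    h.update(nonce)
    h.update(repr(frame).encode())
    h.update(b"fake" if is_fake else b"real")
    return h.hexdigest()

frame = [[0.1, 0.9, 0.1], [0.9, 0.1, 0.9]]
verdict = detect_deepfake(frame)
proof = commit_result(frame, verdict)
```

In the actual framework the commitment is replaced by a succinct proof, so the verifier learns only that the detection circuit was evaluated honestly, not the frame itself.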