Encryption and Authentication with a Lensless Camera Based on a Programmable Mask

Authors: Eric Bezzam, Martin Vetterli

Published: 2025-07-12 10:05:55+00:00

AI Summary

This paper proposes using the programmable mask of a lensless camera for encryption and authentication. Dynamically varying the mask pattern pushes encryption strength beyond AES-256, and each pattern's unique fingerprint enables robust image authentication, helping to combat deepfakes.

Abstract

Lensless cameras replace traditional optics with thin masks, leading to highly multiplexed measurements akin to encryption. However, static masks in conventional designs leave systems vulnerable to simple attacks. This work explores the use of programmable masks to enhance security by dynamically varying the mask patterns. We perform our experiments with a low-cost system (around 100 USD) based on a liquid crystal display. Experimental results demonstrate that variable masks successfully block a variety of attacks while enabling high-quality recovery for legitimate users. The system's encryption strength exceeds AES-256, achieving effective key lengths over 2'500 bits. Additionally, we demonstrate how a programmable mask enables robust authentication and verification, as each mask pattern leaves a unique fingerprint on the image. When combined with a lensed system, lensless measurements can serve as analog certificates, providing a novel solution for verifying image authenticity and combating deepfakes.
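To make the "multiplexing as encryption" idea concrete, below is a minimal simulation sketch (not the authors' code): the scene is convolved with the point spread function (PSF) set by the mask, so the raw sensor image is heavily scrambled, and a programmable mask amounts to changing the PSF, i.e. the key, between captures. The sparse-binary PSF model, the circular convolution, and the function names (random_psf, lensless_measure) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psf(shape, density=0.05, seed=None):
    """Stand-in PSF for one programmable-mask pattern (sparse binary model)."""
    r = np.random.default_rng(seed)
    psf = (r.random(shape) < density).astype(float)
    return psf / psf.sum()

def lensless_measure(scene, psf, snr_db=40.0):
    """Simulate a capture: circular convolution with the PSF plus sensor noise."""
    meas = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
    noise_std = np.sqrt(meas.var() / 10 ** (snr_db / 10))
    return meas + rng.normal(scale=noise_std, size=meas.shape)

scene = rng.random((128, 128))                  # placeholder grayscale scene
psf_key = random_psf(scene.shape, seed=42)      # the mask pattern acts as the key
measurement = lensless_measure(scene, psf_key)  # looks like noise without the key
```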


Key findings
The programmable mask significantly improves encryption strength, exceeding AES-256. The system effectively resists a variety of attacks, including chosen-plaintext and known-plaintext attacks. The unique mask fingerprints enable high-accuracy image authentication, reaching near-perfect accuracy when the LPIPS score of the learned model is used together with lensed images.
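As a rough illustration of the fingerprint idea (building on the simulator above, not the authors' pipeline): decode the lensless measurement with the claimed mask pattern and compare the result to the claimed lensed image. A plain regularized inverse filter and a normalized-correlation score stand in for the paper's learned decoder and LPIPS metric.

```python
def wiener_decode(measurement, psf, reg=1e-3):
    """Regularized inverse filter (stand-in for the paper's decoders)."""
    H = np.fft.fft2(psf)
    X = np.conj(H) * np.fft.fft2(measurement) / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))

def similarity(a, b):
    """Normalized correlation in [-1, 1]; the paper uses LPIPS instead."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Decoding with the claimed mask pattern matches the claimed (lensed) image
# only if that pattern was actually used for the capture.
score_true = similarity(wiener_decode(measurement, psf_key), scene)
score_fake = similarity(wiener_decode(measurement, random_psf(scene.shape, seed=7)), scene)
print(score_true, score_fake)  # the correct pattern scores far higher
```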
Approach
The approach uses a programmable mask in a lensless camera to encrypt images: varying the mask pattern hardens the system against attacks (including chosen-plaintext and known-plaintext attacks), while the unique fingerprint each pattern leaves on the measurement allows for image authentication. A learned decoder model is trained to handle the variations in the point spread function (PSF) caused by the changing mask.
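As a hedged illustration of why a varying mask blocks such attacks (again using the toy simulator above, not the authors' experiments): with a static mask, a chosen-plaintext attack can probe the PSF by imaging a point source and then decode any later capture; with a programmable mask, the probed PSF no longer matches the next capture.

```python
# Chosen-plaintext probe: imaging a point source reveals the current PSF.
probe = np.zeros_like(scene)
probe[64, 64] = 1.0
psf_estimate = np.roll(lensless_measure(probe, psf_key), (-64, -64), axis=(0, 1))

# Static mask: the probed PSF decodes any later capture with the same pattern.
static_capture = lensless_measure(scene, psf_key)
print(similarity(wiener_decode(static_capture, psf_estimate), scene))   # high

# Programmable mask: the next capture uses a fresh pattern, so the probed
# PSF no longer applies and the attacker's reconstruction fails.
fresh_key = random_psf(scene.shape, seed=123)
dynamic_capture = lensless_measure(scene, fresh_key)
print(similarity(wiener_decode(dynamic_capture, psf_estimate), scene))  # near zero
```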
Datasets
MirFlickr-S and MirFlickr-M datasets (created by the authors using the DigiCam system and the MirFlickr dataset)
Model(s)
ADMM, and a modular learned reconstruction approach (a DRUNet architecture paired with five unrolled ADMM iterations for camera inversion)
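A minimal PyTorch sketch of such a modular decoder is given below under stated assumptions: grayscale images, circular convolution, and a small placeholder CNN in place of the DRUNet; the class and parameter names are illustrative, not the authors' implementation. Taking the PSF as an input to the camera-inversion stage is what lets one decoder cope with the changing mask patterns.

```python
import torch
import torch.nn as nn

class UnrolledADMM(nn.Module):
    """A few ADMM iterations for camera inversion, unrolled with a learnable
    penalty parameter per iteration (nonnegativity prior, circular convolution)."""

    def __init__(self, n_iter=5):
        super().__init__()
        self.log_rho = nn.Parameter(torch.zeros(n_iter))

    def forward(self, y, psf):
        # y, psf: (B, 1, H, W) real tensors.
        H = torch.fft.fft2(psf)
        HtY = torch.conj(H) * torch.fft.fft2(y)
        x = torch.zeros_like(y)
        z = torch.zeros_like(y)
        u = torch.zeros_like(y)
        for log_rho in self.log_rho:
            rho = torch.exp(log_rho)
            # x-update: quadratic subproblem solved exactly in the Fourier domain.
            rhs = HtY + rho * torch.fft.fft2(z - u)
            x = torch.fft.ifft2(rhs / (H.abs() ** 2 + rho)).real
            # z-update: projection onto the nonnegative orthant.
            z = torch.clamp(x + u, min=0.0)
            # Dual variable update.
            u = u + x - z
        return z

class LearnedDecoder(nn.Module):
    """Camera inversion conditioned on the current PSF, followed by a denoiser
    (a small placeholder CNN here; the paper uses a DRUNet)."""

    def __init__(self):
        super().__init__()
        self.inversion = UnrolledADMM(n_iter=5)
        self.denoiser = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, y, psf):
        return self.denoiser(self.inversion(y, psf))

# Usage (shapes only): recon = LearnedDecoder()(y, psf) with (B, 1, H, W) tensors.
```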
Author countries
Switzerland