Is it the model or the metric -- On robustness measures of deep learning models
Authors: Zhijin Lyu, Yutong Jin, Sneha Das
Published: 2024-12-13 02:26:58+00:00
AI Summary
This paper investigates the robustness of deepfake detection models by introducing a new metric, Robust Ratio (RR), which complements the existing Robust Accuracy (RA). The authors demonstrate that while models may exhibit similar RA, their RR values vary significantly under different perturbation levels, revealing disparities in their robustness.
Abstract
Determining the robustness of deep learning models is an established and ongoing challenge within automated decision-making systems. With the advent and success of techniques that enable advanced deep learning (DL), these models are being used in widespread applications, including high-stakes ones like healthcare, education, and border control. Therefore, it is critical to understand the limitations of these models and predict their regions of failure, in order to create the necessary guardrails for their successful and safe deployment. In this work, we revisit robustness, specifically investigating the sufficiency of robust accuracy (RA), within the context of deepfake detection. We present robust ratio (RR) as a complementary metric that can quantify the changes to the normalized or probability outcomes under input perturbation. We compare RA and RR and demonstrate that, despite similar RA across models, the models show varying RR under different tolerance (perturbation) levels.
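The abstract describes what RR measures (changes to probability outcomes under input perturbation) but not its exact formula. Below is a minimal Python sketch under one plausible reading: RA as accuracy on perturbed inputs, and RR as the fraction of samples whose probability output shifts by at most a tolerance `tol`. The function names, the toy data, and this RR formula are illustrative assumptions, not the paper's definition.

```python
# Hedged sketch: Robust Accuracy (RA) vs. one plausible reading of
# Robust Ratio (RR). The paper's exact RR definition may differ; this
# version (fraction of samples whose probability output moves by at
# most `tol`) is an assumption for illustration only.
import numpy as np

def robust_accuracy(labels, perturbed_probs):
    """RA: fraction of perturbed inputs still classified correctly."""
    preds = perturbed_probs.argmax(axis=1)
    return float((preds == labels).mean())

def robust_ratio(clean_probs, perturbed_probs, tol):
    """Assumed RR: fraction of samples whose probability output shifts
    by at most `tol` (max absolute change across classes)."""
    shift = np.abs(clean_probs - perturbed_probs).max(axis=1)
    return float((shift <= tol).mean())

# Toy usage: a binary classifier's output probabilities, perturbed by
# Gaussian noise and renormalized (clip floor avoids division by zero).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
clean = np.stack([1 - labels, labels], axis=1).astype(float) * 0.9 + 0.05
perturbed = np.clip(clean + rng.normal(0, 0.15, clean.shape), 1e-6, 1.0)
perturbed /= perturbed.sum(axis=1, keepdims=True)

print("RA:", robust_accuracy(labels, perturbed))
for tol in (0.05, 0.1, 0.2):
    print(f"RR(tol={tol}):", robust_ratio(clean, perturbed, tol))
```

Under this reading, two models can tie on RA (their argmax decisions survive the perturbation equally often) while differing in RR (their probability outputs shift by different amounts), which is the disparity the abstract highlights.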