Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning

Authors: Haibin Wu, Andy T. Liu, Hung-yi Lee

Published: 2020-06-05 03:03:06+00:00

AI Summary

This paper proposes using Mockingjay, a self-supervised learning model, to defend against black-box adversarial attacks on anti-spoofing models for automatic speaker verification. High-level representations extracted by Mockingjay prevent the transferability of adversarial examples and successfully counter these attacks.

Abstract

High-performance anti-spoofing models for automatic speaker verification (ASV) have been widely used to protect ASV systems by identifying and filtering spoofed audio that is deliberately generated by text-to-speech, voice conversion, audio replay, etc. However, it has been shown that high-performance anti-spoofing models are vulnerable to adversarial attacks. Adversarial examples, which are indistinguishable from the original data but result in incorrect predictions, are dangerous for anti-spoofing models, and it is beyond dispute that we should detect them at any cost. To explore this issue, we propose to employ Mockingjay, a self-supervised learning based model, to protect anti-spoofing models against adversarial attacks in the black-box scenario. Self-supervised learning models are effective in improving performance on downstream tasks such as phone classification and ASR, but their effect as a defense against adversarial attacks has not yet been explored. In this work, we examine the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks. A layerwise noise-to-signal ratio (LNSR) is proposed to quantify and measure the effectiveness of deep models in countering adversarial noise. Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples and successfully counter black-box attacks.


Key findings

Experimental results show that Mockingjay effectively prevents the transferability of adversarial examples, outperforming other passive defense methods. A proposed layerwise noise-to-signal ratio metric demonstrates that Mockingjay attenuates adversarial noise layer by layer. Pre-training is crucial for the effectiveness of this defense.
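The layerwise noise-to-signal ratio can be illustrated with a small sketch: at each layer, take the norm of the deviation that the adversarial perturbation causes in that layer's activations, divided by the norm of the clean activations. The toy tanh layers below are an assumption for illustration, not Mockingjay's actual architecture, and the exact normalization in the paper may differ.

```python
import numpy as np

def lnsr(layers, x_clean, x_adv):
    """Sketch of a layerwise noise-to-signal ratio (LNSR).

    For each layer, compute ||f_k(x_adv) - f_k(x_clean)|| / ||f_k(x_clean)||,
    i.e. how large the adversarially induced deviation is relative to the
    clean representation at that depth.
    """
    ratios = []
    h_clean, h_adv = x_clean, x_adv
    for layer in layers:
        h_clean = layer(h_clean)
        h_adv = layer(h_adv)
        noise = np.linalg.norm(h_adv - h_clean)
        signal = np.linalg.norm(h_clean)
        ratios.append(noise / signal)
    return ratios

# Toy stack of layers: saturating nonlinearities tend to attenuate small
# perturbations, illustrating how a deep model can shrink adversarial
# noise layer by layer.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 16)) * 0.2 for _ in range(3)]
layers = [lambda h, W=W: np.tanh(h @ W) for W in weights]

x = rng.standard_normal(16)
x_adv = x + 0.01 * rng.standard_normal(16)  # small adversarial-style perturbation
ratios = lnsr(layers, x, x_adv)
```

A decreasing LNSR across depth would indicate, as the paper argues for Mockingjay, that higher-level representations carry proportionally less of the adversarial noise.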
Approach

The authors propose a passive defense mechanism that uses Mockingjay, a self-supervised learning model, to extract high-level representations from audio spectrograms before feeding them to the anti-spoofing model. This approach mitigates the impact of adversarial noise added to the input.
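The defense pipeline can be sketched as an encode-then-classify chain. Everything below is a toy stand-in: `ssl_encode` plays the role of the pre-trained self-supervised model (Mockingjay in the paper) and `anti_spoof` the downstream anti-spoofing classifier; the shapes and scoring rule are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.standard_normal((80, 32)) * 0.1  # toy "encoder" weights
w_clf = rng.standard_normal(32) * 0.1        # toy "classifier" weights

def ssl_encode(spectrogram):
    # Map each frame to a high-level representation. A real self-supervised
    # model (e.g. Mockingjay) would be a deep transformer pre-trained on
    # unlabeled speech; here a single tanh layer stands in for it.
    return np.tanh(spectrogram @ W_enc)

def anti_spoof(features):
    # Illustrative scoring: pool over time, then a linear score
    # (higher = more likely genuine).
    return float(features.mean(axis=0) @ w_clf)

def defended_score(spectrogram):
    # Passive defense: encode first, then classify. Adversarial noise
    # crafted against a raw-feature model is attenuated by routing the
    # input through the high-level representation.
    return anti_spoof(ssl_encode(spectrogram))

spec = rng.standard_normal((100, 80))  # 100 frames x 80 spectral bins
score = defended_score(spec)
```

The key design point is that the encoder is trained without labels and independently of the attacked classifier, which is what prevents black-box adversarial examples from transferring through it.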
Datasets

ASVspoof 2019 (LA partition), LibriSpeech

Model(s)

Mockingjay (self-supervised learning model), LCNN, SENet (anti-spoofing models)

Author countries

Taiwan