Defense against adversarial attacks on spoofing countermeasures of ASV
Authors: Haibin Wu, Songxiang Liu, Helen Meng, Hung-yi Lee
Published: 2020-03-06 08:08:54+00:00
AI Summary
This paper proposes spatial smoothing (a passive defense) and adversarial training (a proactive defense) to harden Automatic Speaker Verification (ASV) spoofing countermeasure models against adversarial attacks. Experimental results demonstrate that both methods measurably improve the models' resilience to adversarial examples.
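The passive defense named above, spatial smoothing, is typically a local filter applied to the input features before the countermeasure model sees them, blunting the small high-frequency perturbations that adversarial attacks introduce. The sketch below assumes a median filter over log-spectrogram features; the filter size and feature shape are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of spatial smoothing as a passive defense.
# Assumptions: the countermeasure model consumes a 2-D time-frequency
# feature map; a 3x3 median filter is one plausible smoother among
# several (mean and Gaussian filters are common alternatives).
import numpy as np
from scipy.ndimage import median_filter

def spatially_smooth(spectrogram: np.ndarray, size: int = 3) -> np.ndarray:
    """Median-filter the time-frequency plane to suppress the
    fine-grained noise typical of adversarial perturbations."""
    return median_filter(spectrogram, size=size)

# Usage: smooth features before passing them to the countermeasure model.
features = np.random.randn(257, 400)   # (freq bins, frames), placeholder input
defended = spatially_smooth(features)  # feed `defended` to the model
```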
Abstract
Various forefront countermeasure methods for automatic speaker verification (ASV) with considerable anti-spoofing performance were proposed in the ASVspoof 2019 challenge. However, previous work has shown that these countermeasure models are vulnerable to adversarial examples that are indistinguishable from natural data. A good countermeasure model should not only be robust against spoofing audio, including synthetic, converted, and replayed audio, but also counteract examples deliberately generated by malicious adversaries. In this work, we introduce a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models to adversarial examples. This paper is among the first to use defense methods to improve the robustness of ASV spoofing countermeasure models under adversarial attack. The experimental results show that both defense methods help spoofing countermeasure models counter adversarial examples.
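For the proactive defense, adversarial training augments each training step with adversarial examples generated against the current model, so the countermeasure learns to classify them correctly. Below is a minimal sketch assuming a PyTorch bona-fide/spoof classifier and single-step FGSM perturbations; the toy model, epsilon, and equal mixing of clean and adversarial losses are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of adversarial (proactive) training with FGSM.
# Assumptions: `model` maps spectrogram batches to two logits
# (bona fide vs. spoof); epsilon and the clean/adversarial loss mix
# are placeholder choices.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.002):
    """Generate adversarial examples with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.002):
    """Train on a mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients left over from perturbation
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a toy stand-in classifier and placeholder data:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(257 * 400, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(4, 257, 400)      # batch of spectrograms (placeholder)
y = torch.randint(0, 2, (4,))     # 0 = bona fide, 1 = spoof (placeholder)
adversarial_training_step(model, optimizer, x, y)
```

Stronger iterative attacks such as PGD can be substituted for FGSM in `fgsm_perturb`; the training loop itself is unchanged.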