XAttnMark: Learning Robust Audio Watermarking with Cross-Attention

Authors: Yixin Liu, Lie Lu, Jihui Jin, Lichao Sun, Andrea Fanelli

Published: 2025-02-06 17:15:08+00:00

AI Summary

This paper introduces XAttnMark, a novel audio watermarking method that achieves state-of-the-art performance in both watermark detection and attribution. It improves upon existing methods through partial parameter sharing between the generator and the detector, a cross-attention mechanism for message retrieval, and a psychoacoustic-aligned masking loss.

Abstract

The rapid proliferation of generative audio synthesis and editing technologies has raised significant concerns about copyright infringement, data provenance, and the spread of misinformation through deepfake audio. Watermarking offers a proactive solution by embedding imperceptible, identifiable, and traceable marks into audio content. While recent neural network-based watermarking methods like WavMark and AudioSeal have improved robustness and quality, they struggle to achieve both robust detection and accurate attribution simultaneously. This paper introduces Cross-Attention Robust Audio Watermark (XAttnMark), which bridges this gap by leveraging partial parameter sharing between the generator and the detector, a cross-attention mechanism for efficient message retrieval, and a temporal conditioning module for improved message distribution. Additionally, we propose a psychoacoustic-aligned temporal-frequency masking loss that captures fine-grained auditory masking effects, enhancing watermark imperceptibility. Our approach achieves state-of-the-art performance in both detection and attribution, demonstrating superior robustness against a wide range of audio transformations, including challenging generative editing with strong editing strength. The project webpage is available at https://liuyixin-louis.github.io/xattnmark/.
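To make the cross-attention message-retrieval idea concrete, here is a minimal PyTorch sketch of a detector head in which learned per-bit query tokens attend over audio features. The module name, dimensions, and the one-query-per-bit design are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of cross-attention message retrieval (hypothetical names/shapes).
import torch
import torch.nn as nn

class MessageCrossAttention(nn.Module):
    """Learned per-bit query tokens attend over detector audio features."""
    def __init__(self, n_bits: int = 16, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # One learned query token per message bit (an illustrative choice).
        self.bit_queries = nn.Parameter(torch.randn(n_bits, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.to_logit = nn.Linear(d_model, 1)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, time, d_model) from the detector backbone.
        b = audio_feats.size(0)
        q = self.bit_queries.unsqueeze(0).expand(b, -1, -1)
        out, _ = self.attn(q, audio_feats, audio_feats)  # (batch, n_bits, d_model)
        return self.to_logit(out).squeeze(-1)            # per-bit logits

# Usage: feats = backbone(waveform); bit_logits = MessageCrossAttention()(feats)
```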


Key findings
XAttnMark achieves state-of-the-art performance in both detection and attribution, demonstrating superior robustness against a wide range of audio transformations, including generative editing. It outperforms existing methods, with the largest gains in attribution accuracy, while maintaining comparable perceptual quality.
Approach
XAttnMark uses a blended architecture with partial parameter sharing between the generator and detector, employing a cross-attention mechanism for efficient message retrieval and a temporal conditioning module for improved message distribution. A psychoacoustic-aligned temporal-frequency masking loss enhances watermark imperceptibility.
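The intuition behind the masking loss can be shown with a simplified sketch: penalize watermark energy in time-frequency bins where it would exceed what the host signal can perceptually mask. The threshold below is a crude local-energy stand-in; the paper uses a proper psychoacoustic masking model, so treat this as an assumption-laden illustration rather than the actual loss.

```python
# Simplified temporal-frequency masking loss (illustrative, not the paper's).
import torch
import torch.nn.functional as F

def tf_masking_loss(host: torch.Tensor, watermarked: torch.Tensor,
                    n_fft: int = 1024, hop: int = 256, eps: float = 1e-6):
    # host, watermarked: (batch, samples) waveforms.
    window = torch.hann_window(n_fft, device=host.device)
    H = torch.stft(host, n_fft, hop, window=window, return_complex=True).abs()
    W = torch.stft(watermarked, n_fft, hop, window=window, return_complex=True).abs()
    # Crude stand-in for a masking threshold: local average of host magnitude
    # over neighboring time-frequency bins (real psychoacoustic models use
    # Bark-scale spreading functions and absolute hearing thresholds).
    thresh = F.avg_pool2d(H.unsqueeze(1), kernel_size=5, stride=1, padding=2).squeeze(1)
    # Penalize only the watermark energy that exceeds the masking threshold.
    residual = (W - H).abs()
    return (F.relu(residual - thresh) / (thresh + eps)).mean()
```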
Datasets
A mixed training set of roughly 4,100 hours spanning speech (VoxPopuli, LibriSpeech), music (MusicCaps), and sound effects (AudioSet). A held-out MusicCaps test set was used for evaluation.
Model(s)
Convolutional encoder-decoder networks with LSTM layers and a cross-attention mechanism for message retrieval.
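For orientation, here is an illustrative PyTorch sketch of a watermark generator in that style: a convolutional encoder, an LSTM bottleneck with additive message conditioning, and a convolutional decoder emitting a residual added to the input audio. All layer sizes and the conditioning scheme are assumptions, not the paper's configuration.

```python
# Illustrative conv encoder -> LSTM -> conv decoder watermark generator.
import torch
import torch.nn as nn

class WatermarkGenerator(nn.Module):
    def __init__(self, n_bits: int = 16, ch: int = 32, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, ch, kernel_size=7, stride=2, padding=3), nn.ELU(),
            nn.Conv1d(ch, hidden, kernel_size=7, stride=2, padding=3), nn.ELU(),
        )
        self.msg_proj = nn.Linear(n_bits, hidden)  # message conditioning
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, ch, kernel_size=8, stride=2, padding=3), nn.ELU(),
            nn.ConvTranspose1d(ch, 1, kernel_size=8, stride=2, padding=3),
        )

    def forward(self, audio: torch.Tensor, message: torch.Tensor) -> torch.Tensor:
        # audio: (batch, 1, samples), length divisible by 4; message: (batch, n_bits).
        z = self.encoder(audio)                               # (batch, hidden, T)
        z = z + self.msg_proj(message.float()).unsqueeze(-1)  # broadcast over time
        z, _ = self.lstm(z.transpose(1, 2))                   # (batch, T, hidden)
        residual = self.decoder(z.transpose(1, 2))            # back to waveform rate
        return audio + residual                               # watermarked audio
```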
Author countries
USA