Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature
Authors: Tong Zhou, Xuandong Zhao, Xiaolin Xu, Shaolei Ren
Published: 2024-06-04 03:58:14+00:00
AI Summary
This paper introduces Bileve, a bi-level signature scheme for securing text provenance in large language models (LLMs) against spoofing attacks. Bileve embeds fine-grained signature bits for integrity checks together with a coarse-grained signal for source tracing, mitigating spoofing while enhancing detectability.
Abstract
Text watermarks for large language models (LLMs) have been commonly used to identify the origins of machine-generated content, which is promising for assessing liability when combating deepfake or harmful content. While existing watermarking techniques typically prioritize robustness against removal attacks, they are unfortunately vulnerable to spoofing attacks: malicious actors can subtly alter the meanings of LLM-generated responses or even forge harmful content, potentially misattributing blame to the LLM developer. To overcome this, we introduce a bi-level signature scheme, Bileve, which embeds fine-grained signature bits for integrity checks (mitigating spoofing attacks) as well as a coarse-grained signal to trace text sources when the signature is invalid (enhancing detectability), via a novel rank-based sampling strategy. Compared to conventional watermark detectors that only output binary results, Bileve can differentiate 5 scenarios during detection, reliably tracing text provenance and regulating LLMs. The experiments conducted on OPT-1.3B and LLaMA-7B demonstrate the effectiveness of Bileve in defeating spoofing attacks with enhanced detectability. Code is available at https://github.com/Tongzhou0101/Bileve-official.
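The abstract's two-level detection idea can be illustrated with a minimal sketch: first verify a fine-grained integrity signature, and only if that fails, fall back to a coarse-grained statistical signal to trace the source. Everything here is an assumption for illustration: the HMAC stands in for the embedded signature bits, the token-overlap score stands in for the rank-based sampling statistic, and the verdict labels are hypothetical, not the paper's exact five scenarios.

```python
import hmac
import hashlib

KEY = b"demo-secret"  # hypothetical signing key, not from the paper


def sign(text: str) -> str:
    """Fine-grained level: a MAC standing in for the embedded signature bits."""
    return hmac.new(KEY, text.encode(), hashlib.sha256).hexdigest()


def coarse_signal(text: str, marked: set) -> float:
    """Coarse-grained level: fraction of tokens carrying the sampling bias
    (a toy stand-in for the paper's rank-based statistic)."""
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(t in marked for t in tokens) / len(tokens)


def detect(text: str, tag: str, marked: set, threshold: float = 0.5) -> str:
    """Combine both levels into a multi-way verdict instead of a binary one."""
    if hmac.compare_digest(sign(text), tag):
        return "valid-signature"      # integrity intact: attributable, unmodified
    if coarse_signal(text, marked) >= threshold:
        return "traced-but-tampered"  # signature broken, but source signal survives
    return "no-evidence"              # likely human-written or fully stripped
```

A spoofed edit breaks the signature but usually leaves enough of the sampling signal to still trace the text back to the model, which is the distinction a binary detector cannot make.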