Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature

Authors: Tong Zhou, Xuandong Zhao, Xiaolin Xu, Shaolei Ren

Published: 2024-06-04 03:58:14+00:00

AI Summary

This paper introduces Bileve, a bi-level signature scheme for securing text provenance in large language models (LLMs) against spoofing attacks. Bileve embeds both fine-grained signature bits for integrity checks and coarse-grained signals for source tracing, effectively mitigating spoofing attacks while remaining detectable under text edits.

Abstract

Text watermarks for large language models (LLMs) have been commonly used to identify the origins of machine-generated content, which is promising for assessing liability when combating deepfake or harmful content. While existing watermarking techniques typically prioritize robustness against removal attacks, unfortunately, they are vulnerable to spoofing attacks: malicious actors can subtly alter the meanings of LLM-generated responses or even forge harmful content, potentially misattributing blame to the LLM developer. To overcome this, we introduce a bi-level signature scheme, Bileve, which embeds fine-grained signature bits for integrity checks (mitigating spoofing attacks) as well as a coarse-grained signal to trace text sources when the signature is invalid (enhancing detectability) via a novel rank-based sampling strategy. Compared to conventional watermark detectors that only output binary results, Bileve can differentiate 5 scenarios during detection, reliably tracing text provenance and regulating LLMs. The experiments conducted on OPT-1.3B and LLaMA-7B demonstrate the effectiveness of Bileve in defeating spoofing attacks with enhanced detectability. Code is available at https://github.com/Tongzhou0101/Bileve-official.


Key findings

Bileve effectively defeats spoofing attacks, particularly semantic manipulation, while maintaining high detectability even with text editing. Compared to single-level signature schemes, Bileve shows significantly improved robustness and the ability to distinguish five different scenarios during detection.
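
The bi-level evidence suggests a simple decision procedure. The sketch below is a hypothetical illustration of how signature validity, the fraction of recovered signature bits, and the coarse statistical test might be combined to separate five outcomes; the scenario labels, the `bit_match_rate` input, and the 0.8 threshold are placeholders for demonstration, not the paper's exact detection taxonomy.

```python
def classify(signature_valid, bit_match_rate, coarse_test_passed, high=0.8):
    """Map bi-level detection evidence to one of five outcomes.

    signature_valid   -- fine-grained signature verifies end to end
    bit_match_rate    -- fraction of signature bits still recoverable
    coarse_test_passed -- coarse-grained statistical test fires
    (All names/thresholds here are illustrative assumptions.)
    """
    if signature_valid:
        return "intact machine text"
    if coarse_test_passed and bit_match_rate >= high:
        return "lightly edited machine text"
    if coarse_test_passed:
        return "heavily edited machine text (possible spoofing)"
    if bit_match_rate >= high:
        return "machine text with a disrupted coarse signal"
    return "human-written or unrelated text"
```

The point of the multi-way output, as opposed to a binary watermark detector, is that a forged or tampered text can no longer be misattributed as an intact LLM response.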

Approach

Bileve uses a bi-level signature scheme embedding fine-grained signature bits for integrity checks and coarse-grained signals for source tracing. A novel rank-based sampling strategy is used for embedding, and detection differentiates five scenarios based on signature validity and statistical tests.
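
As a rough illustration of rank-based embedding, the sketch below encodes one signature bit per generation step by choosing between the two highest-ranked tokens, which simultaneously leaves a coarse statistical trace (sampled tokens stay high-rank under the model). The function names and the one-bit-per-token rule are assumptions for demonstration; Bileve's actual sampling strategy differs in detail.

```python
import numpy as np

def embed_bit(logits, bit):
    """Encode one signature bit: pick the rank-0 token for bit 1, the
    rank-1 token for bit 0 (illustrative rule, not the paper's exact one).
    Restricting choices to the top ranks also leaves a coarse-grained
    signal: generated tokens are always high-rank under the model."""
    order = np.argsort(logits)[::-1]  # token ids, highest logit first
    return int(order[0] if bit == 1 else order[1])

def recover_bit(logits, token_id):
    """Invert embed_bit at detection time; None for an off-scheme
    (edited or forged) token."""
    order = np.argsort(logits)[::-1]
    rank = int(np.where(order == token_id)[0][0])
    return {0: 1, 1: 0}.get(rank)

def mean_rank(logits_seq, token_ids):
    """Coarse-grained test statistic: average model rank of the observed
    tokens (low for machine-sampled text, higher for human text)."""
    ranks = [int(np.where(np.argsort(l)[::-1] == t)[0][0])
             for l, t in zip(logits_seq, token_ids)]
    return float(np.mean(ranks))

# Toy demo over a fixed 8-token vocabulary.
rng = np.random.default_rng(0)
bits = [1, 0, 1, 1, 0]
logits_seq = [rng.normal(size=8) for _ in bits]
tokens = [embed_bit(l, b) for l, b in zip(logits_seq, bits)]

assert [recover_bit(l, t) for l, t in zip(logits_seq, tokens)] == bits
assert mean_rank(logits_seq, tokens) <= 1.0  # sampled tokens stay high-rank
```

At detection time, a valid recovered bit sequence supports the integrity check, while the rank statistic alone can still flag machine origin when edits have invalidated the signature.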

Datasets

OpenGen (text completion), LFQA (long-form question answering)

Model(s)

OPT-1.3B, LLaMA-7B, LLaMA-13B (as oracle model for perplexity), GPT-4 Turbo (for zero-shot evaluation)

Author countries

USA