I Know Which LLM Wrote Your Code Last Summer: LLM generated Code Stylometry for Authorship Attribution

Authors: Tamas Bisztray, Bilel Cherif, Richard A. Dubniczky, Nils Gruschka, Bertalan Borsos, Mohamed Amine Ferrag, Attila Kovacs, Vasileios Mavroeidis, Norbert Tihanyi

Published: 2025-06-18 19:49:41+00:00

AI Summary

This paper introduces CodeT5-Authorship, a novel model for attributing C programs to specific Large Language Models (LLMs), and LLM-AuthorBench, a benchmark dataset of 32,000 C programs generated by eight LLMs. The CodeT5-Authorship model achieves high accuracy in both binary authorship attribution (97.56%) and multi-class attribution among five leading LLMs (95.40%).

Abstract

Detecting AI-generated code, deepfakes, and other synthetic content is an emerging research challenge. As code generated by Large Language Models (LLMs) becomes more common, identifying the specific model behind each sample is increasingly important. This paper presents the first systematic study of LLM authorship attribution for C programs. We release CodeT5-Authorship, a novel model that uses only the encoder layers from the original CodeT5 encoder-decoder architecture, discarding the decoder to focus on classification. Our model's encoder output (first token) is passed through a two-layer classification head with GELU activation and dropout, producing a probability distribution over possible authors. To evaluate our approach, we introduce LLM-AuthorBench, a benchmark of 32,000 compilable C programs generated by eight state-of-the-art LLMs across diverse tasks. We compare our model to seven traditional ML classifiers and eight fine-tuned transformer models, including BERT, RoBERTa, CodeBERT, ModernBERT, DistilBERT, DeBERTa-V3, Longformer, and LoRA-fine-tuned Qwen2-1.5B. In binary classification, our model achieves 97.56% accuracy in distinguishing C programs generated by closely related models such as GPT-4.1 and GPT-4o, and 95.40% accuracy for multi-class attribution among five leading LLMs (Gemini 2.5 Flash, Claude 3.5 Haiku, GPT-4.1, Llama 3.3, and DeepSeek-V3). To support open science, we release the CodeT5-Authorship architecture, the LLM-AuthorBench benchmark, and all relevant Google Colab scripts on GitHub: https://github.com/LLMauthorbench/.


Key findings
The CodeT5-Authorship model achieves 97.56% accuracy in binary classification (e.g., distinguishing between GPT-4.1 and GPT-4o) and 95.40% accuracy in multi-class attribution among five leading LLMs. The results demonstrate that LLM authorship attribution is feasible and accurate, even for closely related models.
Approach
The authors modify the CodeT5 encoder-decoder architecture, using only the encoder layers for classification. The encoder's output (first token) is fed into a two-layer classification head with GELU activation and dropout to predict the LLM author. The model is trained and evaluated on the LLM-AuthorBench dataset.
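The classification head described above (first-token pooling, a dense layer with GELU and dropout, then a dense layer projecting to author logits) can be sketched in NumPy. This is a minimal illustration, not the paper's actual implementation: the dimensions, weights, and dropout rate here are toy values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # Tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classification_head(encoder_out, W1, b1, W2, b2, p_drop=0.1, train=False):
    """Two-layer head over the encoder's first-token representation.

    encoder_out: (batch, seq_len, d_model) encoder hidden states.
    Returns a (batch, n_authors) probability distribution over authors.
    """
    h = encoder_out[:, 0, :]                 # first-token ("CLS"-style) pooling
    h = gelu(h @ W1 + b1)                    # dense layer + GELU
    if train:                                # inverted dropout, training only
        h = h * (rng.random(h.shape) > p_drop) / (1.0 - p_drop)
    return softmax(h @ W2 + b2)              # dense layer + softmax over authors

# Toy dimensions: d_model=8, hidden=16, 5 candidate LLM authors
d, hdim, n_authors = 8, 16, 5
W1, b1 = rng.normal(size=(d, hdim)), np.zeros(hdim)
W2, b2 = rng.normal(size=(hdim, n_authors)), np.zeros(n_authors)
enc = rng.normal(size=(2, 10, d))            # batch of 2 sequences, length 10
probs = classification_head(enc, W1, b1, W2, b2)
print(probs.shape)                           # (2, 5)
```

Each row of `probs` sums to 1, so the head yields a proper probability distribution over the candidate LLM authors; at inference time the predicted author is simply `probs.argmax(axis=1)`.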
Datasets
LLM-AuthorBench: a dataset of 32,000 compilable C programs generated by eight state-of-the-art LLMs (GPT-4.1, GPT-4o, GPT-4o-mini, DeepSeek-V3, Qwen2.5-72B, Llama 3.3-70B, Claude-3.5-Haiku, and Gemini-2.5-Flash) across diverse coding tasks.
Model(s)
CodeT5-Authorship (modified CodeT5 encoder), BERT, RoBERTa, CodeBERT, ModernBERT, DistilBERT, DeBERTa-V3, Longformer, LoRA-fine-tuned Qwen2-1.5B, Random Forest, XGBoost, k-nearest neighbors (KNN), Support Vector Machine (SVM), Decision Tree
Author countries
Norway, United Arab Emirates, Hungary, Algeria