TweepFake: About Detecting Deepfake Tweets

Authors: Tiziano Fagni, Fabrizio Falchi, Margherita Gambini, Antonio Martella, Maurizio Tesconi

Published: 2020-07-31 19:01:13+00:00

AI Summary

This paper introduces TweepFake, the first dataset of real deepfake tweets collected from Twitter, comprising human-written tweets and tweets generated by bots using various techniques (Markov Chains, RNN, RNN+Markov, LSTM, GPT-2). The authors evaluate 13 deepfake text detection methods on this dataset, establishing a baseline for future research in this area.

Abstract

The recent advances in language modeling significantly improved the generative capabilities of deep neural models: in 2019 OpenAI released GPT-2, a pre-trained language model that can autonomously generate coherent, non-trivial and human-like text samples. Since then, ever more powerful text generative models have been developed. Adversaries can exploit these tremendous generative capabilities to enhance social bots that will have the ability to write plausible deepfake messages, hoping to contaminate public debate. To prevent this, it is crucial to develop systems that detect deepfake social media messages. However, to the best of our knowledge, no one has ever addressed the detection of machine-generated texts on social networks like Twitter or Facebook. With the aim of helping research in this detection field, we collected the first dataset of real deepfake tweets, TweepFake. It is real in the sense that each deepfake tweet was actually posted on Twitter. We collected tweets from a total of 23 bots, imitating 17 human accounts. The bots are based on various generation techniques, i.e., Markov Chains, RNN, RNN+Markov, LSTM, GPT-2. We also randomly selected tweets from the humans imitated by the bots to obtain an overall balanced dataset of 25,572 tweets (half human-written and half bot-generated). The dataset is publicly available on Kaggle. Lastly, we evaluated 13 deepfake text detection methods (based on various state-of-the-art approaches) to both demonstrate the challenges that TweepFake poses and create a solid baseline of detection techniques. We hope that TweepFake can offer the opportunity to tackle deepfake detection on social media messages as well.


Key findings
Fine-tuned transformer-based models achieved the highest accuracy (around 90% for RoBERTa), significantly outperforming other methods. However, all methods struggled more with detecting GPT-2 generated tweets, suggesting that sophisticated generative models produce more human-like text. The character-level GRU model performed surprisingly well on GPT-2 tweets.
Approach
The authors created a dataset of real deepfake tweets from Twitter, balanced between human and bot-generated tweets using diverse generation methods. They then evaluated 13 deepfake text detection methods, including those using bag-of-words, BERT embeddings, character-level CNNs/GRUs, and fine-tuned transformer-based models.
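To illustrate the simplest family of detectors evaluated, a bag-of-words baseline can be sketched with scikit-learn. This is a minimal sketch, not the authors' exact setup: the example tweets and labels below are invented stand-ins for the real TweepFake data.

```python
# Hypothetical bag-of-words baseline for human-vs-bot tweet detection.
# The example tweets below are invented; real experiments would use TweepFake.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "just finished my morning run, feeling great!",      # human
    "excited to see everyone at the conference today",   # human
    "the the weather is is nice nice today today",       # bot-like repetition
    "follow follow me me for for news news",             # bot-like repetition
]
labels = [0, 0, 1, 1]  # 0 = human, 1 = bot

# Pipeline: unigram/bigram counts -> logistic regression classifier
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)

print(clf.predict(["nice nice day day the the weather"])[0])
```

The same pipeline shape accommodates the other shallow methods evaluated in the paper (e.g., swapping in an SVM or random forest classifier, or character n-grams in the vectorizer).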
Datasets
TweepFake dataset (25,572 tweets; half human-written, half bot-generated from 23 bots imitating 17 human accounts, using various generation techniques including Markov Chains, RNN, RNN+Markov, LSTM, GPT-2). The dataset is publicly available on Kaggle.
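A basic class-balance check on the dataset can be sketched with pandas. The column names ("text", "account.type") are assumptions about the Kaggle CSV layout; a tiny in-memory frame with invented rows stands in here for pd.read_csv("tweepfake.csv").

```python
# Sketch of inspecting the TweepFake class balance with pandas.
# Column names are assumed; the rows below are invented stand-ins.
import pandas as pd

df = pd.DataFrame({
    "text": [
        "off to the gym, wish me luck",
        "new blog post is up, link in bio",
        "sun sun the sun is sun today",
        "follow follow me me for for news",
    ],
    "account.type": ["human", "bot", "bot", "human"],
})

# The full dataset is balanced: 12,786 human and 12,786 bot-generated tweets.
counts = df["account.type"].value_counts()
print(counts.to_dict())
```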
Model(s)
Logistic Regression, Random Forest, SVM, CNN, GRU, BERT, DistilBERT, RoBERTa, XLNet (with fine-tuning in some cases)
Author countries
Italy