Large Language Models and Provenance Metadata for Determining the Relevance of Images and Videos in News Stories
Authors: Tomas Peterka, Matyas Bohacek
Published: 2025-02-13 16:48:27+00:00
AI Summary
This paper proposes a system built around a large language model (LLM) to combat multimodal misinformation by analyzing both news article text and the provenance metadata of included images and videos. The system determines whether the visual media are relevant to the news story, considering their origin (location and time) and any signs of tampering. A prototype implementation with an interactive web interface has been open-sourced.
Abstract
The most effective misinformation campaigns are multimodal, often combining text with images and videos taken out of context -- or fabricating them entirely -- to support a given narrative. Contemporary methods for detecting misinformation, whether in deepfakes or text articles, often miss the interplay between multiple modalities. Built around a large language model, the system proposed in this paper addresses these challenges. It analyzes both the article's text and the provenance metadata of included images and videos to determine whether the media are relevant to the story. We open-source the system prototype and interactive web interface.
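The core idea — pairing article text with media provenance (location and time) and asking an LLM to judge relevance — can be sketched as a prompt-construction step. This is a minimal illustration under assumed names: the `Provenance` schema, field names, and prompt wording are hypothetical and not taken from the paper's actual pipeline.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Provenance:
    # Hypothetical provenance record; the fields below are assumptions,
    # not the schema used in the paper.
    capture_date: date
    capture_location: str


def build_relevance_prompt(article_text: str, prov: Provenance) -> str:
    """Combine article text and media provenance into a single prompt
    that an LLM could be asked to judge for contextual relevance."""
    return (
        "Article:\n"
        f"{article_text}\n\n"
        "Media provenance:\n"
        f"- captured on: {prov.capture_date.isoformat()}\n"
        f"- captured in: {prov.capture_location}\n\n"
        "Question: Given its origin (location and time), is this media "
        "relevant to the article? Answer YES or NO with a reason."
    )


# Example: a 2013 photo attached to an article about a 2024 event —
# the kind of out-of-context pairing the system aims to flag.
prompt = build_relevance_prompt(
    "Flooding hit Prague on 2024-09-15, closing parts of the metro.",
    Provenance(capture_date=date(2013, 6, 3), capture_location="Prague, CZ"),
)
print(prompt)
```

The prompt would then be sent to an LLM; a mismatch between the article's stated event date and the media's capture date is exactly the signal the model is expected to surface.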