Enhancing Trust in News Media: A Multimodality Approach to Detecting Fake News with Social Constructs
Abstract
The widespread dissemination of misinformation and fake news on social media platforms poses a serious challenge to the integrity of public discourse, democratic stability, and societal trust. Although misinformation detection has been widely studied, one critical but underexplored dimension is the misquotation of legitimate news articles on social media. Misquotations, whether deliberate or accidental distortions of original news content, can mislead audiences and erode trust in credible news sources, thereby amplifying the reach of misinformation. Addressing this issue requires a robust dataset that captures both the original news content and its associated social media representations. To this end, we construct a comprehensive dataset by performing multi-source triangulation across four established datasets: FAKENEWSNET, NELA-GT, TruthSeekers, and Twitter15/16. This process yields approximately 158,400 aligned pairs of news stories and related social media posts. Building on this dataset, we propose a multimodal binary classification framework that detects misquotations by jointly modeling textual, visual, and contextual features. The model integrates engineered features representing social context and event semantics within a shared latent space, enabling a nuanced understanding of content distortion. We further analyze the relative contribution of each modality (textual, visual, and contextual) through ablation studies and performance metrics to assess its impact on detection accuracy. This study introduces a novel approach to misquotation detection, contributing to the broader effort to combat misinformation through multimodal analysis.
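The fusion step described above, projecting each modality into a shared latent space before binary classification, can be sketched in miniature. The code below is an illustrative toy with randomly initialized weights and made-up embedding dimensions; the paper's actual architecture, feature extractors, and training procedure are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding sizes (illustrative only, not from the paper).
D_TEXT, D_VIS, D_CTX, D_LATENT = 8, 6, 4, 16

# Per-modality projection matrices into the shared latent space,
# plus a linear output head; all randomly initialized for the sketch.
W_text = rng.normal(0.0, 0.1, (D_TEXT, D_LATENT))
W_vis = rng.normal(0.0, 0.1, (D_VIS, D_LATENT))
W_ctx = rng.normal(0.0, 0.1, (D_CTX, D_LATENT))
w_out = rng.normal(0.0, 0.1, D_LATENT)

def fuse_and_classify(text_emb, vis_emb, ctx_emb):
    """Project textual, visual, and contextual features into one
    latent space, fuse by summation, and score with a sigmoid.
    A score above 0.5 would flag the post as a misquotation."""
    latent = text_emb @ W_text + vis_emb @ W_vis + ctx_emb @ W_ctx
    latent = np.tanh(latent)  # shared latent representation
    return 1.0 / (1.0 + np.exp(-(latent @ w_out)))  # sigmoid score

# Toy inputs standing in for real embeddings of one story/post pair.
score = fuse_and_classify(rng.normal(size=D_TEXT),
                          rng.normal(size=D_VIS),
                          rng.normal(size=D_CTX))
print(0.0 < score < 1.0)  # sigmoid output is always a probability
```

Summation fusion is one simple choice; concatenation followed by a joint projection, or attention-weighted fusion, are common alternatives for weighing modalities unevenly, which is what the ablation analysis probes.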