Large Language Models and Social Media Information Integrity: Opportunities, Challenges, and Research Directions

Abstract

Large Language Models (LLMs) have emerged as powerful tools that affect information integrity on social media platforms. This comprehensive review examines the dual role of LLMs in both exacerbating and mitigating information integrity challenges, including misinformation, disinformation, fake news, social bots, and privacy risks. Through a systematic analysis of papers drawn from academic databases, we identify key patterns in how LLMs influence social media ecosystems. Our findings reveal that while LLMs can enhance the detection of malicious content and enable sophisticated defense mechanisms, they simultaneously pose risks by enabling the generation of highly convincing deceptive content. We categorize and analyze the opportunities and challenges across different dimensions of information integrity, examining technical capabilities, ethical implications, and privacy concerns. The study identifies critical gaps in current approaches, particularly in cross-lingual detection, real-time monitoring, and privacy-preserving implementations. We conclude by proposing future research directions and recommendations for stakeholders seeking to leverage LLMs while mitigating risks to social media information integrity.