A Generative Prompting Framework for Robust Misinformation Mitigation on Social Platforms

Abstract

Combating the spread of misinformation on social media is a critical challenge. Manual fact-checking cannot keep pace with the scale and speed of online information dissemination, while existing Large Language Models (LLMs) often struggle with up-to-date information and multimodal content. This paper introduces MUSE+, a novel and efficient prompt-based approach for correcting misinformation with LLMs. MUSE+ leverages in-context learning through carefully engineered prompts that define the task, provide context, specify correction guidelines, and include illustrative examples. Our experimental evaluation demonstrates that MUSE+ significantly outperforms GPT-4, the original MUSE model, and even human laypeople in correction quality, as assessed by both quantitative metrics and expert human evaluation. Ablation studies confirm the importance of each prompt component, and further analyses reveal MUSE+'s robustness across misinformation types and prompt variations, alongside its efficient correction generation time. These results highlight the potential of prompt engineering for creating scalable, adaptable, and high-quality misinformation correction systems, offering a promising pathway for mitigating the negative impacts of online misinformation.
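The abstract names four prompt components: a task definition, context, correction guidelines, and illustrative examples. The sketch below shows, purely as an illustration and not as the authors' actual MUSE+ prompt, how such an in-context-learning prompt might be assembled; the function name, guideline wording, and layout are all assumptions.

```python
# Hypothetical sketch of assembling a four-part correction prompt
# (task definition, context, guidelines, few-shot examples).
# This is NOT the MUSE+ prompt itself, only an illustration of the structure.

def build_correction_prompt(post: str, context: str,
                            examples: list[tuple[str, str]]) -> str:
    # 1. Task definition: tell the model what to do.
    task = ("You are a fact-checking assistant. Rewrite the social media "
            "post below so that it is factually accurate.")
    # 2. Correction guidelines constraining the output.
    guidelines = ("Guidelines:\n"
                  "- Correct only the inaccurate claims; keep the tone.\n"
                  "- Rely on the provided context, not outside knowledge.\n"
                  "- If the post is already accurate, return it unchanged.")
    # 3. Few-shot illustrative examples as (misinformation, correction) pairs.
    shots = "\n\n".join(f"Misinformation: {m}\nCorrection: {c}"
                        for m, c in examples)
    # 4. Assemble the full prompt, ending where the model should continue.
    return (f"{task}\n\nContext:\n{context}\n\n{guidelines}\n\n"
            f"Examples:\n{shots}\n\nMisinformation: {post}\nCorrection:")
```

The assembled string would then be sent to an LLM of choice; because each component is a separate piece, the ablations the abstract mentions (dropping guidelines, examples, or context) amount to omitting one part of the template.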
