SReLLM: A Strategic Retrieval-Enhanced Large Language Model for Fake News Detection

Alok Mishra, Halima Sadia

Abstract

Fake news poses serious political, economic, and social risks. While large language model (LLM)-based approaches have improved fake news detection through sophisticated reasoning and generative capabilities, they still face limitations such as outdated information and poor performance on uncommon subjects. Retrieval-augmented models offer some improvement but are hindered by low-quality evidence and context-length restrictions. To overcome these challenges, we present SReLLM, a Strategic Retrieval-Enhanced Large Language Model framework that strategically collects relevant web-based evidence to support accurate claim verification. Our system improves fake news detection performance through a multi-round retrieval mechanism that ensures comprehensive, high-quality evidence collection. Furthermore, our approach enhances interpretability by generating clear, human-readable explanations alongside accurate verdict predictions. Experimental results show that SReLLM achieves an accuracy of 90.93 percent, outperforming traditional machine learning models such as naive Bayes and SVM, as well as deep learning approaches like LSTM and BERT. Compared to other retrieval-augmented LLMs such as FLARE and REPLUG, SReLLM delivers better accuracy and improved transparency through human-readable justifications. Future work will focus on enhancing multimodal misinformation detection by integrating text, image, video, and audio-based verification while optimizing computational efficiency for real-time applications.
