Detecting Fake News Using Deep Learning Approaches
Abstract
Deep learning (DL)-based fake news detection systems must improve and adapt as fake news grows more sophisticated if they are to continue protecting the integrity of information in digital society. Beyond addressing a serious technological challenge, building robust DL-based detection systems is an important means of preserving the authenticity of information. Because Arabic data are difficult to obtain, the method also provides a basis for understanding the significance and difficulties of collecting Arabic-language data. Owing to the limited dataset, the proposed system was translated from English into Arabic, and its performance, along with the potential of DL for identifying fake news, was verified on another available dataset. Existing studies that apply DL methods to fake news detection, such as recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and convolutional neural networks (CNNs), are discussed, and AraBERT is used for comparison. Using state-of-the-art DL techniques, the paper proposes a new method for identifying fake news and offers a comprehensive framework for automatically detecting and classifying false information on digital platforms by combining deep neural networks (DNNs) with natural language processing (NLP). The approach addresses key challenges in fake news detection, including contextual understanding, temporal dependencies, and content complexity. From deliberately fabricated news articles to thinly veiled misleading content, the system performs strongly in detecting various types of misinformation. Furthermore, advanced feature extraction (FE) methods are applied that consider metadata as well as textual content, such as propagation patterns and source credibility. Compared with traditional machine learning (ML) methods, the experimental results show notable gains in detection speed and accuracy. Four word-embedding methods were employed, among them spaCy and fastText.
The best DNN model, using spaCy embeddings, achieved a strong performance of 78%, whereas the LSTM model reached 49% with spaCy and 51% with fastText. On the accuracy scale, BERT scored a 99.1% success rate after conversion, whereas AraBERT earned 99.3%; AraBERT is therefore the better model. This is because both BERT and AraBERT were trained on a variety of Arabic articles.
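To make the embedding-based pipeline concrete, the following is a minimal, self-contained sketch (not the paper's actual code) of the core idea behind feeding spaCy- or fastText-style word embeddings to a classifier: each document is reduced to the mean of its word vectors, and a new text is assigned to whichever class centroid it is closest to. The tiny vocabulary, the 4-dimensional vectors, and the `predict` helper are all hypothetical stand-ins for a trained model.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings for a toy vocabulary; a real system
# would load pretrained spaCy or fastText vectors instead.
EMB = {
    "official": np.array([0.9, 0.1, 0.0, 0.2]),
    "report":   np.array([0.8, 0.2, 0.1, 0.1]),
    "shocking": np.array([0.1, 0.9, 0.8, 0.0]),
    "secret":   np.array([0.0, 0.8, 0.9, 0.1]),
    "cure":     np.array([0.1, 0.7, 0.8, 0.2]),
}

def embed(text: str) -> np.ndarray:
    """Mean-pool the embeddings of known tokens (zero vector if none match)."""
    vecs = [EMB[t] for t in text.lower().split() if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, guarded against zero-norm inputs."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Class centroids built from labeled examples; these stand in for the
# decision boundary a trained DNN or LSTM would learn.
REAL = embed("official report")
FAKE = embed("shocking secret cure")

def predict(text: str) -> str:
    """Label a text by its nearest class centroid in embedding space."""
    v = embed(text)
    return "real" if cosine(v, REAL) >= cosine(v, FAKE) else "fake"
```

In the models reported above, this nearest-centroid step is replaced by a learned classifier (DNN, LSTM, or a fine-tuned BERT/AraBERT encoder), but the representation idea, mapping text into a dense vector space before classification, is the same.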