Illusory Citations and Academic Hallucination: Generative AI and the Erosion of Scholarly Authenticity — A Comparative Study of Arab and International Open-Access Journals


Abstract

This study analyzes the impact of risks associated with the use of generative artificial intelligence tools on scholarly integrity in research published in open-access journals, through a comparative analysis across three levels: the global level, developed countries, and the Arab region. The study adopts a critical analytical methodology based on a systematic literature review (23 previous studies) and the analysis of open-access data from platforms such as OSF Preprints, CrossRef, and Semantic Scholar. The findings reveal that entirely fabricated ("hallucinated") citations account for 19.9% of citations, while a further 45.4% contain significant errors (Enago, 2026). At the Arab level, the results indicate a clear knowledge gap: although 84.2% of researchers call for mandatory disclosure of AI use, no more than 12% of Arab academic libraries have implemented clear ethical policies. The study concludes that open-access journals face an unprecedented "crisis of trust," and proposes an integrated ethical governance framework combining transparency, accountability, institutional training, and automated citation verification.
