Comparative Analysis of AI-Generated Research Content: Evaluating ChatGPT and Google Gemini


Abstract

Background: The advent of natural language generation (NLG) models such as ChatGPT and Google Gemini (formerly Bard) has transformed academic writing by automating the drafting of research articles. This study evaluates the effectiveness of these AI tools for academic content generation, focusing on authenticity, relevance, and plagiarism risk, and examines the ethical concerns raised by AI-generated research articles.

Methods: The study performs a comparative analysis of articles generated by ChatGPT and Google Gemini, using the plagiarism detection tool Turnitin to assess the originality of the content. Citation accuracy, reference authenticity, and the similarity index were examined to evaluate the validity and ethical use of these tools.

Results: Both ChatGPT and Google Gemini generate coherent articles, but both frequently fabricate citations and references. ChatGPT adhered to APA citation style yet cited non-existent sources, while Google Gemini cited some authentic sources but failed to follow proper citation formats. The similarity index for AI-generated content was lower than anticipated, but repeated reliance on a small set of sources limited the comprehensiveness of the work.

Conclusion: AI tools such as ChatGPT and Google Gemini can streamline research article generation, but human supervision remains critical. The study underscores the need for ethical guidelines and robust content verification methods to mitigate plagiarism and fabricated data. Researchers and institutions should adopt AI tools responsibly, so that their use strengthens academic integrity rather than undermines it.
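The similarity index reported in the Methods section is Turnitin's proprietary metric; as a rough illustration only (not the study's procedure, and with function names invented for this sketch), the underlying idea can be approximated as the fraction of a candidate text's word n-grams that also appear in a source text:

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_index(candidate, source, n=3):
    """Toy analogue of a plagiarism similarity index: the share of the
    candidate's word n-grams that also occur in the source
    (0.0 = no overlap, 1.0 = complete overlap)."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

src = "the quick brown fox jumps over the lazy dog"
copy = "the quick brown fox jumps over a sleeping cat"
print(round(similarity_index(copy, src), 2))  # → 0.57
```

Commercial tools compare against large indexed corpora and apply more sophisticated matching, but this captures why verbatim reuse of a few sources raises the score while paraphrased or fabricated content can keep it deceptively low.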
