Fixing Reference Hallucinations of LLMs
Abstract
In October and November 2024, we queried popular LLMs, including OpenAI ChatGPT (version 4 and below), Azure OpenAI and its Copilot instantiations, Google Gemini, and GenAI LLMs tuned for scientific papers such as Zendy. Asking a question and requesting references produced, with every LLM, fake references: well constructed, but with titles or authors different from the web or journal reference actually associated with the citation, or sometimes entirely invented. Prompting the model to ensure that each reference exists and is correct may help in some cases, but in general it does not. Others have reported similar issues when using these LLM/GenAI services to produce legal briefs and other legal documents. This paper suggests simple ways to address the problem, rather than merely trying to improve the LLMs and hoping hallucinations will be reduced; they will not, no matter what, because they are inherent to LLMs. It is very surprising, even mind-boggling, that LLM providers have not implemented this kind of solution: simply check that the references exist, are correctly cited, and are relevant to the paper or context. We also extend this with our MultiAI approach, which improves on the reference checks, addresses other hallucinations, and in our tests actually eliminated them.
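As one illustration of the "just check if the references exist" idea, the following minimal sketch (not the paper's implementation; the function name, fuzzy-match threshold, and use of the public Crossref API are our own illustrative assumptions) looks up a citation's title and first author against Crossref and reports whether a closely matching work actually exists:

```python
# Illustrative sketch: verify that an LLM-supplied citation exists by querying
# the public Crossref REST API. Function name and threshold are assumptions.
import difflib
import requests


def reference_exists(title: str, first_author: str | None = None) -> bool:
    """Return True if Crossref lists a work whose title closely matches `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = (item.get("title") or [""])[0]
        # Fuzzy-match titles; 0.9 is an arbitrary illustrative threshold.
        ratio = difflib.SequenceMatcher(None, title.lower(), candidate.lower()).ratio()
        if ratio > 0.9:
            if first_author:
                # Also check that the claimed first author appears among the authors.
                authors = " ".join(a.get("family", "") for a in item.get("author", []))
                return first_author.lower() in authors.lower()
            return True
    return False


# Example: a fabricated reference would typically return False.
# print(reference_exists("A Totally Invented Paper Title", "Doe"))
```

A check of this kind could be run on every reference an LLM returns, and any citation that fails the lookup could be flagged, dropped, or regenerated before the answer is shown to the user.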