Explainable Document-Level Question Answering with Adaptive Granularity and Reasoning Path Generation


Abstract

Document-level Question Answering (QA) in domains such as finance and law requires accurate retrieval and interpretable reasoning over long and complex documents. However, existing Retrieval-Augmented Generation (RAG) frameworks suffer from fixed retrieval granularity and opaque reasoning processes, limiting their adaptability and transparency. This paper presents AdaptiRAG LLM, a Llama 3-based framework that integrates adaptive multi-granularity retrieval with explicit multi-hop reasoning path generation. The system dynamically adjusts retrieval granularity according to query intent and constructs interpretable reasoning chains to enhance both accuracy and explainability. Experiments on multiple financial QA benchmarks demonstrate that AdaptiRAG LLM achieves superior retrieval performance, answer quality, and reasoning interpretability compared to existing RAG baselines, establishing a robust solution for professional document-level QA.
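The abstract's two core ideas — choosing retrieval granularity from query intent and emitting an explicit evidence trail — can be illustrated with a minimal sketch. This is not the paper's implementation: the intent heuristic, chunking scheme, overlap scoring, and `answer_with_path` helper below are all simplified stand-ins for the learned components AdaptiRAG LLM would use.

```python
# Illustrative sketch (NOT the paper's implementation): adaptive-granularity
# retrieval plus a simple "reasoning path" trace over the retrieved evidence.

def choose_granularity(query: str) -> str:
    """Pick a retrieval unit from coarse query-intent cues (hypothetical heuristic)."""
    q = query.lower()
    if any(w in q for w in ("overall", "summary", "summarize")):
        return "section"       # broad intent -> coarse chunks
    if any(w in q for w in ("why", "how", "explain")):
        return "paragraph"     # explanatory intent -> medium chunks
    return "sentence"          # factoid intent -> fine chunks

def split_document(document: str, granularity: str) -> list[str]:
    """Chunk the document at the chosen granularity."""
    if granularity == "section":
        return [s.strip() for s in document.split("\n\n") if s.strip()]
    if granularity == "paragraph":
        return [s.strip() for s in document.split("\n") if s.strip()]
    return [s.strip() + "." for s in document.replace("\n", " ").split(".") if s.strip()]

def overlap_score(query: str, chunk: str) -> int:
    """Toy lexical relevance: shared lowercase tokens (a real system would embed)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def answer_with_path(query: str, document: str, k: int = 2):
    """Retrieve top-k chunks at an adaptive granularity and record the hop trail."""
    gran = choose_granularity(query)
    chunks = split_document(document, gran)
    ranked = sorted(chunks, key=lambda c: overlap_score(query, c), reverse=True)[:k]
    # The "reasoning path" here is just the ordered evidence trail per hop.
    path = [f"hop {i + 1} [{gran}]: {c}" for i, c in enumerate(ranked)]
    return ranked, path

doc = ("Revenue grew 12% in 2023.\nGrowth was driven by cloud services.\n\n"
       "Operating costs rose 5%.\nCost increases came from hiring.")
evidence, path = answer_with_path("Why did revenue grow in 2023?", doc)
```

In this toy run, the "why" cue selects paragraph-level chunks, and the returned `path` exposes which chunks supported the answer and in what order — the transparency property the abstract claims, in miniature.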