Explainable Multi-Hop Question Answering: A Rationale-Based Approach
Abstract
Multi-hop question answering tasks involve identifying relevant supporting sentences from a given set of documents, which serve as the rationale for deriving answers. Most approaches in this area consist of two main components: a rationale identification module and a reader module. Since the rationale identification module often relies on retrieval models or supervised learning, annotated rationales are typically essential. However, approaches that depend on such annotations struggle to adapt to open-domain settings. Moreover, explainable artificial intelligence (XAI) requires a clear account of how rationales are derived, which models trained directly on annotated rationales cannot provide. Therefore, traditional multi-hop question answering approaches that depend on annotated rationales are unsuitable for XAI, which demands transparency in the model’s reasoning process. To address this issue, we propose a rationale reasoning framework that can effectively infer rationales and expose the model’s reasoning process, even in open-domain environments without annotations. The proposed framework is applicable to various tasks without structural constraints, and experimental results demonstrate significantly improved rationale reasoning in multi-hop question answering, relation extraction, and sentence classification tasks.
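To make the two-component design mentioned above concrete, the following minimal Python sketch illustrates a generic rationale-selection stage feeding a reader stage. It is an illustrative toy only, not the proposed framework: the overlap-based scorer and the `read_answer` stub are hypothetical placeholders standing in for learned models.

```python
# Minimal sketch of a generic two-stage multi-hop QA pipeline:
# a rationale identification module followed by a reader module.
# The scoring and reading logic are toy placeholders, not any
# specific trained model or the framework proposed in this paper.

from typing import List, Tuple


def select_rationales(question: str, sentences: List[str], k: int = 2) -> List[str]:
    """Rank candidate sentences by token overlap with the question and
    keep the top-k as the rationale (stand-in for a learned selector)."""
    q_tokens = set(question.lower().split())
    scored: List[Tuple[int, str]] = [
        (len(q_tokens & set(s.lower().split())), s) for s in sentences
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for _, s in scored[:k]]


def read_answer(question: str, rationale: List[str]) -> str:
    """Toy reader: a real system would use a trained model that derives
    the answer from the selected rationale sentences."""
    return " / ".join(rationale)  # placeholder "answer"


if __name__ == "__main__":
    docs = [
        "Alice was born in Paris.",
        "Paris is the capital of France.",
        "Bob enjoys hiking.",
    ]
    question = "In which country was Alice born?"
    rationale = select_rationales(question, docs)
    print("Rationale:", rationale)
    print("Answer:", read_answer(question, rationale))
```

In pipelines of this shape, the selected rationale doubles as the model's explanation; the paper's contribution is to obtain such rationales without the sentence-level annotations that this kind of selector is usually trained on.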