Towards Explainable Language Reasoning via Multi-Modal Knowledge Graphs

Abstract

The increasing complexity of natural language reasoning poses significant challenges for transparency, interpretability, and trustworthiness in artificial intelligence systems. While large-scale language models have demonstrated remarkable success in generating contextually relevant responses, their decision-making processes often remain opaque. To address this gap, we propose a novel framework for explainable language reasoning based on multi-modal knowledge graphs (MMKGs). The framework integrates textual, visual, and structural knowledge sources into a unified graph representation, enabling models to ground language reasoning in explicit, semantically rich relationships. We introduce mechanisms for reasoning over MMKGs to generate interpretable inference paths, thus providing human-understandable justifications for model outputs. Experiments conducted on benchmark datasets demonstrate that our approach achieves competitive reasoning performance while significantly improving explainability, as measured by both automated and human evaluation metrics. The proposed framework contributes to bridging the gap between accuracy and interpretability, offering a pathway toward trustworthy and explainable language reasoning in real-world applications such as question answering, dialogue systems, and decision support.
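To make the core idea concrete, the sketch below shows one plausible minimal realization of an MMKG with path-based, human-readable justifications. It is an illustrative assumption, not the paper's actual implementation: the names Node, MMKG, and explain_path are invented here, nodes are tagged with a modality ("text", "image", or "structure"), and a breadth-first search over typed edges stands in for the paper's reasoning mechanism, returning the chain of relations that links a query entity to an answer.

    # Illustrative sketch only: Node, MMKG, and explain_path are
    # hypothetical names, not the framework's published API.
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Node:
        name: str
        modality: str  # "text", "image", or "structure"

    @dataclass
    class MMKG:
        # Adjacency map: head node -> list of (relation, tail node)
        edges: dict = field(default_factory=dict)

        def add_edge(self, head: Node, relation: str, tail: Node) -> None:
            self.edges.setdefault(head, []).append((relation, tail))

        def explain_path(self, start: Node, goal: Node):
            """Breadth-first search from start to goal; returns the
            relation chain, which doubles as a human-readable
            justification for the inferred answer."""
            queue = deque([(start, [])])
            visited = {start}
            while queue:
                node, path = queue.popleft()
                if node == goal:
                    return path
                for relation, nxt in self.edges.get(node, []):
                    if nxt not in visited:
                        visited.add(nxt)
                        queue.append(
                            (nxt, path + [(node.name, relation, nxt.name)])
                        )
            return None  # no grounded path found

    # Toy example: ground a textual entity in visual and structural facts.
    kg = MMKG()
    dog_txt = Node("dog", "text")
    dog_img = Node("dog_photo_017", "image")
    mammal = Node("mammal", "structure")
    kg.add_edge(dog_txt, "depicted_by", dog_img)
    kg.add_edge(dog_txt, "is_a", mammal)

    print(kg.explain_path(dog_txt, mammal))
    # [('dog', 'is_a', 'mammal')] -- the inference path shown to the user

In this toy run, the returned triple chain is exactly the kind of explicit inference path the abstract describes: instead of an opaque score, the system can surface "dog is_a mammal" as the justification for its output.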
