Combined GNN specialized in inductive prediction and PLM for natural language inductive reasoning
Abstract
Inductive reasoning involves abstracting general principles from specific instances. It relies primarily on rules derived from relationships and depends minimally on specific data about entities such as people, places, or organizations. Pre-trained language models (PLMs) tend to focus on learning statistical features within a corpus; thus, when inductive reasoning is expressed in natural language, PLMs struggle to learn the logical relations behind the text. Recently, researchers have explored graph neural network (GNN) architectures that excel at inductive inference on knowledge graphs (KGs) for inductive link prediction tasks; however, their application to natural language remains limited. To address natural language inductive reasoning tasks, we propose a framework that uses a GNN module specialized in inductive link prediction as its reasoning mechanism. To construct inputs the GNN can reason over from natural language, we first apply insights from relation extraction research and use a PLM to obtain embeddings for edge-related inference. A newly designed module then scores the edges and initializes the relation embeddings; the scores are used to prune edges and are learned through edge weighting within the decoder. Experimental results on text datasets requiring logical inductive reasoning demonstrate that the proposed method notably improves PLM performance, outperforming the baselines. Furthermore, a robustness evaluation on the subsets provided by CLUTRR shows that our model surpasses other relational-reasoning-based models in its ability to learn from and generalize over noisy data.
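The edge-scoring and pruning step summarized above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the linear-plus-sigmoid scorer, the 50% keep ratio, and the random vectors standing in for PLM-derived edge embeddings are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_edges(edge_embeddings, w):
    # Hypothetical scorer: a sigmoid over a linear projection of each
    # PLM-derived edge embedding, yielding a confidence in (0, 1) per edge.
    logits = edge_embeddings @ w
    return 1.0 / (1.0 + np.exp(-logits))

def prune_edges(edges, scores, keep_ratio=0.5):
    # Keep only the top-scoring fraction of edges; the rest are dropped
    # before the GNN performs message passing over the graph.
    k = max(1, int(len(edges) * keep_ratio))
    top = np.argsort(scores)[::-1][:k]
    return [edges[i] for i in top], scores[top]

# Toy graph: 6 candidate edges with 4-dim stand-in "PLM" embeddings.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
emb = rng.normal(size=(6, 4))
w = rng.normal(size=4)

scores = score_edges(emb, w)
kept_edges, kept_scores = prune_edges(edges, scores, keep_ratio=0.5)
print(len(kept_edges))  # 3 of the 6 edges survive pruning
```

In the full framework, the scores would not be fixed: they are also used as edge weights inside the decoder, so the scorer's parameters receive gradients and the pruning criterion is learned end to end.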