Hyperbolic Graph Convolutional Network Relation Extraction Model Combining Dependency Syntax and Contrastive Learning

Abstract

In current relation extraction tasks, when the input sentence structure is complex, in-context learning methods based on large language models still perform worse than traditional pre-train fine-tune models. For complex sentence structures, dependency syntax can provide effective prior information about text structure for relation extraction. However, most studies are affected by noise in the syntactic information automatically extracted by natural language processing toolkits. Additionally, traditional pre-trained encoders suffer from issues such as overly concentrated word embedding representations for high-frequency words, which adversely affects the model's ability to learn contextual semantic information. To address these problems, this paper proposes a hyperbolic graph convolutional network relation extraction model combining dependency syntax and contrastive learning. Building on a hyperbolic graph neural network, dependency syntax information and an information optimization strategy are introduced to alleviate the concentration of word embeddings. Simultaneously, to mitigate the impact of noise in the dependency syntax information on the relation extraction task, a contrastive learning approach is employed: after the model learns contextual semantics separately from the original dependency syntax information and from dependency syntax information with added random noise, it maximizes the mutual information between entity words to help the model distinguish noise in the dependency syntax. Experiments indicate that the proposed model effectively improves relation extraction performance on public datasets; in particular, it achieves significantly higher precision than in-context learning on datasets with complex sentence structures.
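The abstract names two core mechanisms: graph convolution over the dependency graph in hyperbolic space, and a contrastive objective that maximizes mutual information between entity representations computed from the clean and the noise-perturbed dependency graph. The PyTorch sketch below is an illustration of these ideas under stated assumptions, not the authors' implementation: the Poincaré-ball curvature `c`, random edge dropping as the noise scheme, InfoNCE as the mutual-information estimator, and all function and class names (`expmap0`, `logmap0`, `HyperbolicGCNLayer`, `drop_edges`, `info_nce`) are hypothetical.

```python
# Minimal sketch (assumptions noted above) of hyperbolic graph convolution
# over a dependency graph plus an InfoNCE contrastive loss between two
# entity views. Not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

EPS = 1e-6

def expmap0(v, c):
    """Map tangent vectors at the origin onto the Poincare ball of curvature -c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x, c):
    """Map points on the Poincare ball back to the tangent space at the origin."""
    sqrt_c = c ** 0.5
    norm = x.norm(dim=-1, keepdim=True).clamp_min(EPS).clamp_max(1.0 / sqrt_c - EPS)
    return torch.atanh(sqrt_c * norm) * x / (sqrt_c * norm)

class HyperbolicGCNLayer(nn.Module):
    """One graph convolution carried out in the tangent space of the ball."""
    def __init__(self, dim, c=1.0):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.c = c

    def forward(self, x_hyp, adj):
        # x_hyp: (N, d) node embeddings on the ball;
        # adj:   (N, N) row-normalized dependency adjacency with self-loops.
        h = logmap0(x_hyp, self.c)              # pull nodes into the tangent space
        h = adj @ self.lin(h)                   # standard GCN aggregation there
        return expmap0(torch.relu(h), self.c)   # push the result back onto the ball

def drop_edges(adj, p=0.1):
    """Random dependency-edge dropping: one simple form of 'added random noise'."""
    mask = (torch.rand_like(adj) > p).float()
    noisy = adj * mask
    deg = noisy.sum(-1, keepdim=True).clamp_min(1.0)
    return noisy / deg                          # re-normalize rows after dropping

def info_nce(z1, z2, tau=0.1):
    """InfoNCE lower bound on mutual information between the two entity views:
    matching entities across views are positives, all other pairs negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```

In such a setup, `z1` and `z2` would be the pooled entity-token representations obtained by encoding the sentence once with the original dependency adjacency and once with `drop_edges(adj)`, and `info_nce` would be added as an auxiliary term to the relation classification loss.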
