Addressing the Limitations of Graph Neural Networks on Node-level Tasks

Abstract

As a generic data structure, the graph can model complex relations among objects in many real-world problems. By integrating deep learning with graph signal processing, Graph Neural Networks (GNNs) have made significant progress in solving large, complex, graph-structured problems over the past decade. GNNs extend basic Neural Networks (NNs) by incorporating graph structure, grounded in the relational inductive bias, and are commonly believed to outperform NNs on real-world tasks. Despite their efficacy, the development of deep and shallow GNNs faces two main challenges:

• Limited expressive power of deep GNNs: Since graph convolution can be viewed as a special form of Laplacian smoothing (see the sketch after this abstract), stacking many GNN layers in the manner of deep NNs leads to an over-smoothing issue, where distant nodes become less identifiable and hard to discriminate.

• Performance degradation of shallow GNNs on heterophilic graphs: When the homophily principle does not hold and nodes from different classes are more likely to be connected, the representations of nodes from distinct classes are erroneously blended, rendering the nodes indistinguishable.

This dissertation delves into these two obstacles in depth, analyzing them thoroughly and proposing methods to address them efficiently.
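For context, a minimal sketch of the connection between graph convolution and Laplacian smoothing, using the standard GCN propagation rule; the symbols below (H, W, A, D, L, gamma) are conventional notation assumed here rather than defined in the abstract:

\[
H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\tilde{A}\,\tilde{D}^{-1/2} H^{(l)} W^{(l)}\right),
\qquad \tilde{A} = A + I, \quad \tilde{D}_{ii} = \sum_j \tilde{A}_{ij}.
\]

Laplacian smoothing with coefficient \(\gamma\) replaces each node's features by a weighted average of its own and its neighbors' features:

\[
Y = \left(I - \gamma\,\tilde{D}^{-1/2}\tilde{L}\,\tilde{D}^{-1/2}\right)X,
\qquad \tilde{L} = \tilde{D} - \tilde{A}.
\]

Setting \(\gamma = 1\) gives \(I - \tilde{D}^{-1/2}\tilde{L}\,\tilde{D}^{-1/2} = \tilde{D}^{-1/2}\tilde{A}\,\tilde{D}^{-1/2}\), which is exactly the GCN propagation matrix. Applying this operator repeatedly drives the features of nodes within a connected component toward values determined only by node degree, which is the over-smoothing effect noted above.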
