Addressing Node Integration Skewness in Graph Neural Networks Using Hop-Wise Attention
Abstract
Graph neural networks (GNNs) often suffer performance degradation as their layer count grows, typically due to the well-known problems of over-smoothing and over-squashing. In this work, we identify an additional factor contributing to this degradation, which we term the K-skewed-traversal problem: certain hop distances are disproportionately emphasized during aggregation, and this emphasis intensifies as the number of layers grows. To address this, we introduce the Hop-wise Graph Attention Network (HGAT), which ensures uniform aggregation across hops to eliminate the K-skewed-traversal problem and employs a hop-wise attention mechanism to adaptively prioritize specific hop distances. We theoretically prove that HGAT removes this skewness by balancing contributions from different hop distances before applying hop-wise attention. Moreover, in our extensive empirical evaluation, we observe notable improvements in solution quality over state-of-the-art GNN models, particularly as the number of layers increases.
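To make the two ingredients of the abstract concrete, the following is a minimal NumPy sketch of the general idea: per-hop representations are computed separately so that no hop distance is over-weighted by repeated stacking, and a softmax attention over hop distances then recombines them. All names (`hop_wise_attention`, the shared scoring vector `a`) are illustrative assumptions, not the paper's exact HGAT architecture.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hop_wise_attention(A, X, K, a):
    """Illustrative sketch (not the paper's exact model).

    A: (n, n) adjacency matrix; X: (n, d) node features;
    K: maximum hop distance; a: (d,) hop-scoring vector (a stand-in
    for a learned parameter).
    """
    # Row-normalize the adjacency so each hop's representation stays on a
    # comparable scale -- the "uniform aggregation across hops" idea.
    deg = A.sum(axis=1, keepdims=True)
    A_norm = A / np.maximum(deg, 1e-12)

    # Per-hop representations H_0 .. H_K, kept separate per hop distance
    # instead of being mixed by stacked layers.
    hops = [X]
    for _ in range(K):
        hops.append(A_norm @ hops[-1])

    # One attention score per hop distance (shared across nodes here for
    # simplicity), then a softmax-weighted combination of the hops.
    scores = np.array([(H @ a).mean() for H in hops])
    alpha = softmax(scores)
    return sum(w * H for w, H in zip(alpha, hops)), alpha
```

Because the softmax weights sum to one, every hop distance contributes a bounded, adaptively chosen share of the output, rather than a share that grows or shrinks mechanically with depth.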