Extending the RANGE of Graph Neural Networks: Relaying Attention Nodes for Global Encoding

Abstract

Graph Neural Networks (GNNs) are routinely used in molecular physics, the social sciences, and economics to model many-body interactions in graph-like systems. However, GNNs are inherently local and can suffer from information-flow bottlenecks. This is particularly problematic when modeling large molecular systems, where dispersion forces and local electric-field variations drive collective structural changes. Existing solutions face challenges related to computational cost and scalability. We introduce RANGE, a model-agnostic framework that employs an attention-based aggregation-broadcast mechanism that significantly reduces oversquashing effects and achieves remarkable accuracy in capturing long-range interactions at negligible computational cost. Notably, RANGE is the first virtual-node message-passing implementation to integrate attention with positional encodings and regularization to dynamically expand virtual representations. This work lays the foundation for the next generation of machine-learned force fields, offering accurate and efficient modeling of long-range interactions for simulating large molecular systems.
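To illustrate the aggregation-broadcast idea in the abstract, the sketch below shows a single virtual relay node that attends over all real nodes (aggregation) and then sends its state back to every node in one hop (broadcast), bypassing the locality bottleneck of standard message passing. All names, shapes, and projection matrices here are illustrative assumptions, not the paper's actual RANGE implementation.

```python
import numpy as np

# Minimal sketch of a virtual-node aggregation-broadcast step.
# Assumed, not from the paper: feature size d, projections Wk/Wv,
# a single learned query q for the virtual node.

rng = np.random.default_rng(0)
n_nodes, d = 5, 8
h = rng.normal(size=(n_nodes, d))   # real-node features
q = rng.normal(size=d)              # virtual node's query (assumed learned)
Wk = rng.normal(size=(d, d))        # key projection (assumed)
Wv = rng.normal(size=(d, d))        # value projection (assumed)

# Aggregation: attention weights of the virtual node over all real nodes.
scores = (h @ Wk) @ q / np.sqrt(d)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                # softmax over nodes
v = alpha @ (h @ Wv)                # virtual-node state, shape (d,)

# Broadcast: every node receives the global summary in a single hop,
# here as a simple residual update.
h_updated = h + v
```

Because every node exchanges information with the virtual node directly, any two nodes are at most two hops apart regardless of graph size, which is what lets such a mechanism capture long-range interactions at low cost.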