SuperiorGAT: Graph Attention Networks for Sparse LiDAR Point Cloud Reconstruction in Autonomous Systems
Abstract
LiDAR-based perception in autonomous systems is fundamentally constrained by fixed vertical beam resolution and is further degraded by structured beam dropout caused by occlusions or reduced-cost sensing hardware. This paper introduces SuperiorGAT, a graph attention–based framework for reconstructing missing elevation information in sparse LiDAR point clouds under structured beam loss. By modeling LiDAR scans as beam-aware graphs and augmenting standard graph attention networks with gated residual fusion and lightweight feed-forward refinement, the proposed approach improves vertical reconstruction accuracy without increasing network depth.

The effectiveness of SuperiorGAT is evaluated through extensive experiments on multiple KITTI environments, including Person, Road, Campus, and City, as well as cross-dataset validation on nuScenes with lower vertical resolution. Robustness is assessed under increasing structured beam dropout, demonstrating that SuperiorGAT consistently achieves lower reconstruction error and improved geometric consistency compared to interpolation-based methods, PointNet-based models, and deeper GAT baselines. Qualitative X–Z projection analyses further confirm the model's ability to preserve structural continuity with minimal vertical distortion. Overall, the results indicate that targeted architectural refinement provides a computationally efficient solution for enhancing LiDAR vertical reconstruction without reliance on additional sensor modalities or hardware upgrades.
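To make the architectural idea concrete, the combination of graph attention with gated residual fusion mentioned in the abstract can be sketched as a minimal single-head layer in numpy. This is an illustrative assumption of how such a layer might be wired, not the paper's actual parameterization: all names (`a_src`, `a_dst`, `W_gate`), shapes, and the sigmoid-gate formulation are hypothetical.

```python
import numpy as np


def softmax_masked(scores, mask):
    # Row-wise softmax restricted to the graph's edges (mask = adjacency).
    scores = np.where(mask, scores, -1e9)  # suppress non-neighbors
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


def gat_gated_layer(x, adj, W, a_src, a_dst, W_gate):
    """Single-head graph attention with a gated residual fusion (sketch).

    x:      (N, F) node features (e.g. per-point LiDAR features)
    adj:    (N, N) boolean adjacency including self-loops
            (a beam-aware graph would connect points on nearby beams)
    W:      (F, F) feature projection
    a_src, a_dst: (F,) attention vectors (standard GAT decomposition)
    W_gate: (2F, F) gate projection -- hypothetical fusion parameterization
    """
    h = x @ W                                   # projected features
    s = h @ a_src                               # per-source attention term
    d = h @ a_dst                               # per-destination attention term
    scores = s[:, None] + d[None, :]            # e_ij before nonlinearity
    scores = np.where(scores > 0, scores, 0.2 * scores)  # LeakyReLU(0.2)
    alpha = softmax_masked(scores, adj)         # attention coefficients
    agg = alpha @ h                             # attention-weighted aggregation
    # Gated residual fusion: a learned sigmoid gate blends the layer input
    # with the aggregated message instead of a plain additive skip.
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([x, agg], axis=1) @ W_gate)))
    return g * x + (1.0 - g) * agg
```

The gate lets the layer fall back to the input features where attention adds little, which is one plausible way to deepen reconstruction capacity without stacking more attention layers, in line with the abstract's "without increasing network depth" claim.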