Point-HRRP-Net: A Deep Fusion Framework via Bi-Directional Cross-Attention for Robust Radar Object Classification in Remote Sensing

Abstract

Robust radar object classification is challenging, primarily due to the aspect sensitivity of one-dimensional High-Resolution Range Profile (HRRP) data. To address this limitation, we propose Point-HRRP-Net, a multi-modal framework that integrates HRRP with 3D LiDAR point clouds via a Bi-Directional Cross-Attention (Bi-CA) mechanism to enable deep feature interaction. Because paired real-world data are scarce, we constructed a high-fidelity simulation dataset to validate our approach. Experiments conducted under strict angular separation demonstrated that Point-HRRP-Net consistently outperformed single-modality baselines. Our results also verified the effectiveness of Dynamic Graph CNN (DGCNN) for feature extraction and highlighted the high inference speed of Mamba-based architectures and their potential for future efficient designs. Finally, this work validates the feasibility of the proposed approach in simulated environments, establishing a foundation for robust object classification in real-world scenarios.
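To make the fusion idea concrete, the sketch below shows one possible form a bi-directional cross-attention (Bi-CA) block could take; it is not the authors' implementation. It assumes both modalities have already been encoded to a common embedding dimension (for example, HRRP range cells by a 1D encoder and LiDAR points by DGCNN), and all names, layer choices (PyTorch nn.MultiheadAttention, residual connections with LayerNorm, mean pooling), and sizes are illustrative assumptions.

# Hypothetical Bi-CA fusion block: HRRP tokens attend to point-cloud tokens
# and vice versa, so each modality's features are refined by the other.
import torch
import torch.nn as nn

class BiDirectionalCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Cross-attention in both directions (names and sizes are assumptions).
        self.hrrp_to_pc = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pc_to_hrrp = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_hrrp = nn.LayerNorm(dim)
        self.norm_pc = nn.LayerNorm(dim)

    def forward(self, hrrp_feat: torch.Tensor, pc_feat: torch.Tensor):
        # hrrp_feat: (B, N_range_cells, dim); pc_feat: (B, N_points, dim)
        hrrp_attn, _ = self.hrrp_to_pc(query=hrrp_feat, key=pc_feat, value=pc_feat)
        pc_attn, _ = self.pc_to_hrrp(query=pc_feat, key=hrrp_feat, value=hrrp_feat)
        # Residual connections preserve each modality's own features.
        hrrp_out = self.norm_hrrp(hrrp_feat + hrrp_attn)
        pc_out = self.norm_pc(pc_feat + pc_attn)
        return hrrp_out, pc_out

# Usage sketch: fuse a 512-cell HRRP sequence with 1024 point features,
# then pool both streams into one vector for a classification head.
if __name__ == "__main__":
    fusion = BiDirectionalCrossAttention(dim=256, num_heads=8)
    hrrp = torch.randn(2, 512, 256)
    points = torch.randn(2, 1024, 256)
    h, p = fusion(hrrp, points)
    fused = torch.cat([h.mean(dim=1), p.mean(dim=1)], dim=-1)  # shape (2, 512)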
