Cascaded Dual-Directional Cross-Attention Transformer Network for Hyperspectral Imaging Classification

Abstract

Hyperspectral image classification faces the challenges of inadequate utilization of spectral information and insufficient extraction of spatial features. Traditional methods often rely heavily on high-reflectance bands while neglecting crucial spectral information in low-reflectance bands; under small-sample conditions, this degrades the discriminability of class information and limits classification accuracy. To address this issue, this paper proposes a Cascaded Dual-Directional Cross-Attention Transformer Network for Hyperspectral Imaging Classification (CDCATNet). The network fully exploits complementary spectral-spatial information from a small number of labeled samples through a Dual-Directional Cross-Attention (DCA) module; it enhances feature extraction through cross-branch interaction and a multi-scale local-global fusion unit; and it models long-range dependencies and fuses features through a lightweight Transformer. Experiments on four publicly available datasets demonstrate that CDCATNet outperforms existing mainstream methods in overall accuracy, average accuracy, and Kappa coefficient, exhibiting excellent classification performance and generalization ability.
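
The abstract describes the DCA module as exchanging information between spectral and spatial representations. As a minimal sketch only, assuming "dual-directional" means cross-attention applied in both directions between a spectral-token branch and a spatial-token branch, such a block might look like the following PyTorch fragment; the class name, dimensions, and residual layout are hypothetical illustrations, not the paper's exact design.

    import torch
    import torch.nn as nn

    class DualDirectionalCrossAttention(nn.Module):
        """Hypothetical sketch of a dual-directional cross-attention block.

        Assumes two token sequences -- a spectral branch and a spatial
        branch -- that attend to each other in both directions. All names
        and shapes here are illustrative assumptions.
        """

        def __init__(self, dim: int, num_heads: int = 4):
            super().__init__()
            # Direction 1: spectral tokens query the spatial tokens.
            self.spec_from_spat = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            # Direction 2: spatial tokens query the spectral tokens.
            self.spat_from_spec = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm_spec = nn.LayerNorm(dim)
            self.norm_spat = nn.LayerNorm(dim)

        def forward(self, spec: torch.Tensor, spat: torch.Tensor):
            # spec: (B, N_spec, dim) spectral tokens; spat: (B, N_spat, dim) spatial tokens.
            spec_upd, _ = self.spec_from_spat(query=spec, key=spat, value=spat)
            spat_upd, _ = self.spat_from_spec(query=spat, key=spec, value=spec)
            # Residual connections preserve each branch's original information.
            return self.norm_spec(spec + spec_upd), self.norm_spat(spat + spat_upd)

    if __name__ == "__main__":
        dca = DualDirectionalCrossAttention(dim=64)
        spectral_tokens = torch.randn(2, 30, 64)  # e.g. 30 band-group tokens
        spatial_tokens = torch.randn(2, 49, 64)   # e.g. 7x7 patch tokens
        s1, s2 = dca(spectral_tokens, spatial_tokens)
        print(s1.shape, s2.shape)  # (2, 30, 64) and (2, 49, 64)

Because each branch only queries the other while keeping its own residual path, this kind of design lets spectral and spatial features complement one another without either stream overwriting the other, which is consistent with the small-sample motivation stated above.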