Super-Resolution Reconstruction of Remote Sensing Images Using Generative Adversarial Network with Permuted Self-Attention

Abstract

Image super-resolution (SR) reconstruction plays a key role in meeting the ever-growing demand for high-spatial-resolution remote-sensing imagery. Although generative adversarial networks (GANs) have been widely adopted for SR, their dependence on high-frequency information learned from training data often produces artifacts or distortions in complex remote-sensing scenes. Optical satellite images exhibit more intricate spatial distributions and richer multi-scale ground features than natural images; therefore, directly applying existing SR methods to them usually causes unstable convergence and noticeable visual artifacts, seriously degrading reconstruction quality and usability. To address these issues, we propose an improved GAN-based SR network that embeds a Permuted Self-Attention (PSA) module to strengthen global modeling. The PSA module employs a global-context-aware mechanism to adaptively select useful information and suppress noise, markedly improving the reconstruction of multi-scale objects in remote-sensing images. Extensive experiments on standard remote-sensing datasets demonstrate that the proposed method outperforms state-of-the-art alternatives in both objective metrics (PSNR, SSIM) and subjective visual quality, confirming its robustness and effectiveness in complex remote-sensing scenarios.
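
To make the mechanism concrete, the sketch below shows one plausible PyTorch realization of a permuted self-attention block. It is an illustrative assumption rather than the paper's exact module: the class name PermutedSelfAttention, the shrink factor r, the head count, and the specific permutation (folding r x r spatial neighborhoods of the keys and values into the channel axis so the attention map shrinks while queries keep full spatial resolution) are all hypothetical choices consistent with the abstract's description of a global-context-aware attention mechanism.

    import torch
    import torch.nn as nn

    class PermutedSelfAttention(nn.Module):
        """Illustrative permuted self-attention block (hypothetical layout,
        not the authors' exact design). Keys/values are projected to fewer
        channels, then r x r spatial neighborhoods are permuted into the
        channel axis, so attention stays global but cheaper to compute."""

        def __init__(self, dim: int, num_heads: int = 4, r: int = 2):
            super().__init__()
            assert dim % (r * r) == 0 and dim % num_heads == 0
            self.num_heads = num_heads
            self.r = r
            self.q = nn.Linear(dim, dim)
            # Project K/V to dim // r^2 channels; the permutation below
            # restores dim by folding spatial positions into channels.
            self.kv = nn.Linear(dim, 2 * dim // (r * r))
            self.proj = nn.Linear(dim, dim)

        def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
            # x: (B, H*W, C) token sequence from an H x W feature map;
            # h and w must be divisible by r.
            b, n, c = x.shape
            q = self.q(x).reshape(b, n, self.num_heads,
                                  c // self.num_heads).transpose(1, 2)
            k, v = self.kv(x).chunk(2, dim=-1)  # each (B, N, C / r^2)

            def fold(t: torch.Tensor) -> torch.Tensor:
                # Permute r x r spatial neighborhoods into channels:
                # (B, H*W, C/r^2) -> (B, H/r * W/r, C), shrinking the
                # number of key/value tokens by a factor of r^2.
                t = t.reshape(b, h // self.r, self.r,
                              w // self.r, self.r, c // self.r ** 2)
                t = t.permute(0, 1, 3, 2, 4, 5)
                t = t.reshape(b, (h // self.r) * (w // self.r), c)
                return t.reshape(b, -1, self.num_heads,
                                 c // self.num_heads).transpose(1, 2)

            k, v = fold(k), fold(v)
            # Scaled dot-product attention: every full-resolution query
            # attends over the permuted (downsampled) keys and values.
            attn = (q @ k.transpose(-2, -1)) * (c // self.num_heads) ** -0.5
            out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, n, c)
            return self.proj(out)

    # Usage with illustrative shapes: tokens from a 32 x 32 feature map.
    psa = PermutedSelfAttention(dim=64, num_heads=4, r=2)
    tokens = torch.randn(2, 32 * 32, 64)
    out = psa(tokens, h=32, w=32)
    print(out.shape)  # torch.Size([2, 1024, 64])

Because only the keys and values are downsampled by the permutation, every query still attends over the entire image; under the stated assumptions, this is one way to realize the abstract's global, noise-suppressing selection of useful information while keeping the attention map small enough for large multi-scale remote-sensing scenes.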
