Generative Adversarial Network-Based Super-Resolution Reconstruction of Remote Sensing Images
Abstract
Advances in satellite imaging have positioned remote sensing as a critical tool for resource management and urban planning. However, current systems grapple with sensor limitations and atmospheric interference that cause image defects such as blurred edges and missing textures, while traditional super-resolution methods suffer from computational inefficiency and limited generalization. This work proposes SDGAN (Super-Densely Connected Generative Adversarial Network), a lightweight framework that integrates a super-dense residual module with 2D convolution kernels to enhance local feature representation and edge optimization while minimizing artifacts. A dual-discriminator system based on an attention U-Net further improves efficiency by prioritizing critical features and reducing computational demands. Experiments on the UCMerced-LandUse and WHU-RS19 datasets demonstrate state-of-the-art performance, with runtime reductions of 21.72% and 14.47% over baselines, respectively, while effectively balancing reconstruction quality and computational efficiency in real-world remote sensing scenarios.
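To make the dense-residual idea named in the abstract concrete, the following is a minimal, purely illustrative sketch of the wiring pattern: each inner layer receives the concatenation of all preceding feature outputs, and a residual skip adds the block input to the fused result. This is a toy stand-in (plain lists and an averaging "layer" replace real 2D convolutions), not the actual SDGAN module; the function and parameter names are hypothetical.

```python
def dense_residual_block(x, num_layers=3):
    """Toy dense-connectivity + residual-skip topology (not the real SDGAN).

    x: feature vector as a list of floats; each toy "layer" sees the
    concatenation of ALL previous features (dense connectivity).
    """
    feats = [x]
    for _ in range(num_layers):
        # Dense connection: concatenate every earlier feature output.
        cat = [v for f in feats for v in f]
        # Hypothetical stand-in for a conv layer: broadcast the mean.
        out = [sum(cat) / len(cat)] * len(x)
        feats.append(out)
    fused = feats[-1]
    # Residual skip: add the block input back to the fused features.
    return [a + b for a, b in zip(x, fused)]
```

Dense connectivity reuses early features at every depth (helping recover fine textures and edges), while the residual skip lets the block learn only the correction to its input, which is the usual motivation for such modules in super-resolution generators.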