Deep Super-Resolution for Autonomous Aerial Robotics
Abstract
The performance of autonomous aerial robots depends heavily on visual perception, yet limitations in onboard hardware often result in low-resolution imaging and degraded visual quality. This paper introduces AESR++, an enhanced autoencoder-based super-resolution model tailored for aerial robotics. The architecture extends traditional encoder–decoder designs with skip connections, self-attention mechanisms, attention regularization modules, and contrastive latent learning to achieve improved reconstruction robustness and generalization. Evaluated on the Drone Super-Resolution (DSR) dataset, AESR++ demonstrates superior results over recent state-of-the-art methods across multiple upscaling factors (2×, 4×, and 8×) and evaluation metrics (PSNR, SSIM, LPIPS, DISTS, NIQE, BRISQUE). Qualitative analyses further confirm its ability to preserve fine details and natural appearance under diverse aerial scenarios. By enabling high-quality reconstructions from low-resolution inputs, AESR++ provides an effective and scalable solution for remote image processing, with direct implications for tasks such as navigation, mapping, environmental monitoring, and decision-making in aerial robotics.
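The abstract describes an encoder–decoder super-resolution design augmented with skip connections and self-attention. The following is a minimal conceptual sketch of how those three ingredients compose, not the AESR++ implementation: the pooling "encoder", nearest-neighbour "decoder", single-head attention, and the toy 8×8 input are all illustrative assumptions.

```python
import numpy as np

def self_attention(x):
    # Scaled dot-product self-attention over tokens x of shape (n, d).
    # Single head, no learned projections -- a deliberate simplification.
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def encode(img):
    # Stand-in "encoder": 2x average pooling of a (h, w) image.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode(latent, skip):
    # Stand-in "decoder": nearest-neighbour 2x upsampling,
    # averaged with the full-resolution skip connection.
    up = latent.repeat(2, axis=0).repeat(2, axis=1)
    return 0.5 * (up + skip)

# Toy forward pass on a synthetic 8x8 "image".
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
z = encode(img)                                   # (4, 4) latent
z_att = self_attention(z.reshape(-1, 1)).reshape(4, 4)  # attention over latent tokens
out = decode(z_att, img)                          # (8, 8) reconstruction
print(out.shape)
```

A trained model would replace the pooling and upsampling with learned convolutions and add the attention-regularization and contrastive-latent terms to the loss; this sketch only shows the data flow through encoder, attention, skip connection, and decoder.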