Deep Generative Reconstruction of Near Infrared Band for Crop Health Monitoring under Variable Illumination Conditions

Abstract

Monitoring crop health is a pivotal pillar of sustainable agricultural production, requiring non-invasive sensing methods to detect biotic and abiotic stress early. Although multispectral and hyperspectral imaging have enabled advances in crop phenomics, their large-scale adoption remains limited by high costs, restricted spatial resolution, and strong sensitivity to lighting conditions. This work analyzes how varying illumination affects generative artificial intelligence models designed to reconstruct the near-infrared (NIR) band for agricultural and forestry applications. Two deep learning architectures were developed and evaluated: Convolutional Variational Autoencoders (CNN-VAEs) and Conditional Generative Adversarial Networks (cGANs). Each model was trained on aerial datasets collected over two forest sites under four shadow scenarios (no shadow, minimal, short, and moderate) to assess robustness across heterogeneous lighting conditions. Model performance was evaluated using SSIM, PSNR, and NRMSE to enable a standardized comparison between architectures. The experimental findings revealed that the multi-dataset VAE achieved the highest performance, reaching 0.96 SSIM, 35.06 dB PSNR, and 0.018 NRMSE under moderate shadows. These results demonstrate that VAE-based architectures provide more stable and reliable multispectral reconstructions than GANs under diverse illumination conditions.
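As context for the evaluation metrics named in the abstract, the following is a minimal sketch of how PSNR and NRMSE can be computed between a measured NIR band and its reconstruction. This is an illustrative implementation using NumPy only, not the authors' evaluation code; the array names and the noise-based stand-in data are assumptions for demonstration (SSIM is typically computed with a library such as scikit-image's `structural_similarity` and is omitted here for brevity).

```python
import numpy as np

def psnr(reference, reconstructed, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given data range."""
    mse = np.mean((reference - reconstructed) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nrmse(reference, reconstructed):
    """RMSE normalized by the dynamic range of the reference band."""
    rmse = np.sqrt(np.mean((reference - reconstructed) ** 2))
    return rmse / (reference.max() - reference.min())

# Hypothetical example: a synthetic NIR band and a slightly noisy "reconstruction"
rng = np.random.default_rng(0)
nir_true = rng.random((64, 64))
nir_pred = nir_true + rng.normal(0.0, 0.01, nir_true.shape)

print(f"PSNR:  {psnr(nir_true, nir_pred):.2f} dB")
print(f"NRMSE: {nrmse(nir_true, nir_pred):.4f}")
```

Lower NRMSE and higher PSNR both indicate a closer match to the reference band, which is why the paper reports them alongside SSIM for a standardized comparison.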
