HydroVision: Predicting Optically Active Parameters in Surface Water Using Computer Vision


Abstract

Ongoing advancements in computer vision, particularly in pattern recognition and scene classification, have paved the way for innovative environmental monitoring applications. Deep learning has demonstrated promising results in enabling non-contact water quality monitoring and contamination assessment, both essential for disaster response and public health protection. This manuscript proposes HydroVision, a novel deep learning-based scene classification framework that estimates optically active water quality parameters, namely Chlorophyll-α, Chlorophylls, Colored Dissolved Organic Matter (CDOM), Phycocyanins, Suspended Sediments, and Turbidity, from Red-Green-Blue (RGB) images of surface water. It is trained on a diverse and extensive dataset of over 500,000 seasonally varied images sourced from the United States Geological Survey (USGS) Hydrologic Imagery Visualization and Information System (HIVIS) database between early 2022 and late 2024. The proposed model introduces an innovative approach to water quality monitoring and assessment using widely available RGB imagery, serving as a scalable and cost-effective alternative to traditional multispectral and hyperspectral remote sensing. The model is trained with four state-of-the-art Convolutional Neural Network (CNN) architectures (VGG-16, ResNet50, MobileNetV2, and DenseNet121) and a Vision Transformer (ViT), leveraging transfer learning to determine the optimal framework for predicting the six optically active parameters. Among these, the best-performing model, DenseNet121, achieves a validation R² (coefficient of determination) of 0.89 when predicting CDOM, underscoring its potential for real-world water quality assessment across diverse environmental conditions. Although the model is trained on well-lit images, enhancing its robustness under low-light or obstructed conditions offers a promising direction for expanding its practical utility.
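To make the transfer-learning setup concrete, the sketch below shows one plausible way to adapt an ImageNet-pretrained DenseNet121 to six-output regression over the optically active parameters, using PyTorch. This is not the authors' released code; the frozen-backbone choice, loss function, learning rate, and preprocessing pipeline are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Load DenseNet121 pretrained on ImageNet as the transfer-learning backbone.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Replace the 1000-class ImageNet head with a 6-output regression head,
# one output per optically active parameter (Chlorophyll-α, Chlorophylls,
# CDOM, Phycocyanins, Suspended Sediments, Turbidity).
model.classifier = nn.Linear(model.classifier.in_features, 6)

# Assumption: freeze the convolutional features and train only the new head;
# the paper does not specify which layers were fine-tuned.
for param in model.features.parameters():
    param.requires_grad = False

# Standard ImageNet preprocessing applied to RGB surface-water images.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Mean-squared-error regression loss over the six parameters.
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

The same head-replacement pattern extends to the other backbones compared in the paper (VGG-16, ResNet50, MobileNetV2, and a ViT), with only the final-layer attribute name changing per architecture.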