UAV-based Pipeline for Road Marking Condition Assessment and Localization

Abstract

This study introduces an end-to-end framework for assessing the condition of road markings in high-resolution drone imagery by jointly localizing markings, classifying their type, and quantifying damage. Candidate markings are first detected with YOLOv9, enabling robust instance discovery across complex urban scenes. For fine delineation, detections from YOLOv9 are cropped and segmented by a standalone VGG16-UNet, which refines boundaries and object structure. Condition is then estimated at the pixel level by modeling appearance statistics with kernel density estimation (KDE) and Gaussian mixture modeling (GMM) to separate intact from distressed material. From these distributions, we derive a per-instance damage ratio summarizing the proportion of degraded pixels within each marking. All outputs are georeferenced to real-world coordinates, supporting map-based visualization and integration into road asset inventories. Experiments on unseen areas demonstrate consistent generalization, with performance reported using standard detection (precision, recall, mAP) and segmentation (IoU) metrics, alongside analyses of damage-ratio stability and runtime. The results show that the proposed pipeline reliably identifies road markings, estimates their damage levels, and anchors findings in geographic space, offering actionable evidence for inspection prioritization and maintenance planning. Limitations and future work include broader category coverage, improved modeling under extreme lighting conditions, and cross-city validation.
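As a rough illustration of the damage-ratio step described in the abstract, the sketch below fits a two-component Gaussian mixture to the grayscale intensities inside a marking's segmentation mask and reports the fraction of pixels assigned to the darker component. This is a minimal sketch under stated assumptions: the function name `damage_ratio`, the grayscale feature space, and the "brighter component is intact paint" heuristic are illustrative choices, since the abstract does not specify the exact features, KDE bandwidth, or thresholds used in the paper's pipeline.

```python
# Minimal sketch of a per-instance damage ratio via a 2-component GMM.
# Assumptions (not from the source): grayscale intensity is the feature,
# and the lower-mean component corresponds to distressed/worn material.
import numpy as np
from sklearn.mixture import GaussianMixture


def damage_ratio(crop_gray: np.ndarray, mask: np.ndarray) -> float:
    """Estimate the proportion of degraded pixels within one marking instance.

    crop_gray : HxW grayscale crop around a detected marking (0-255).
    mask      : HxW boolean segmentation mask of the marking.
    """
    pixels = crop_gray[mask].reshape(-1, 1).astype(np.float64)
    if pixels.size == 0:
        return 0.0

    # Fit a two-component mixture: one mode for intact (bright) paint,
    # one for distressed material (darker, closer to exposed asphalt).
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    labels = gmm.predict(pixels)

    # Treat the component with the lower mean intensity as "distressed".
    distressed = int(np.argmin(gmm.means_.ravel()))
    return float(np.mean(labels == distressed))


# Usage with synthetic data: 70% bright "intact" pixels, 30% dark "worn" pixels.
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(200, 10, 700), rng.normal(90, 15, 300)])
crop = values.reshape(25, 40)
mask = np.ones_like(crop, dtype=bool)
print(f"estimated damage ratio: {damage_ratio(crop, mask):.2f}")  # ~0.30
```

In a full pipeline, `crop_gray` would come from the YOLOv9 detection crop and `mask` from the VGG16-UNet output, with the resulting ratio attached to the instance's georeferenced coordinates.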
