Lightweight MobileUNet-FPN for Real-Time Visual Navigation of Agricultural Robots


Abstract

Precise segmentation of drivable areas and real-time extraction of navigation lines are essential for the autonomous navigation of agricultural robots. This study proposes MobileUNet-FPN, a lightweight semantic segmentation framework that improves visual navigation efficiency under complex field conditions. The model combines an improved MobileNetV4 encoder for efficient feature extraction with a Feature Pyramid Network (FPN) decoder built on depthwise separable convolutions for multi-scale feature fusion. A progressive thinning algorithm then refines the segmentation results into smooth, topologically consistent navigation lines. Evaluations on a Paeonia lactiflora field dataset show that MobileUNet-FPN strikes a favorable balance between accuracy and computational efficiency: data augmentation improves the model's generalization, and ablation experiments confirm the soundness of the network design. The model attains a mean pixel accuracy (MPA) of 97.16% and a mean intersection over union (MIoU) of 94.11%, and the extracted navigation lines exhibit an average yaw deviation of 1.55° and an average lateral deviation of 2.29 pixels. Deployed on a weeding robot, the model runs at 16.5 FPS with a peak memory usage of 0.38 GB, meeting the requirements for real-time, reliable visual navigation in field environments.
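The abstract's pipeline (segmentation mask → navigation line → yaw and lateral deviation) can be sketched as follows. The paper's progressive thinning algorithm is not described here, so as a simpler stand-in this sketch reduces a binary drivable-area mask to row-wise centroids and fits a straight line; the function name and the error definitions (yaw measured from the vertical image axis, lateral offset at the bottom row relative to the image centre) are illustrative assumptions, not the authors' method.

```python
import numpy as np

def extract_navigation_line(mask: np.ndarray):
    """Illustrative navigation-line fit for a binary drivable-area mask.

    Stand-in for the paper's progressive thinning: for each image row,
    take the centroid column of the drivable pixels, then least-squares
    fit col = a*row + b. Returns (yaw_deg, lateral_px), where yaw is the
    line's tilt from the vertical axis in degrees and lateral_px is the
    line's offset from the image centre at the bottom row.
    """
    rows, cols = [], []
    for r in range(mask.shape[0]):
        xs = np.flatnonzero(mask[r])
        if xs.size:                      # skip rows with no drivable pixels
            rows.append(r)
            cols.append(xs.mean())
    a, b = np.polyfit(np.asarray(rows, float), np.asarray(cols, float), 1)
    yaw_deg = np.degrees(np.arctan(a))   # 0° means the line runs straight ahead
    bottom = mask.shape[0] - 1
    lateral_px = (a * bottom + b) - mask.shape[1] / 2
    return yaw_deg, lateral_px

# Synthetic vertical corridor centred on column 32 of a 64x64 mask:
mask = np.zeros((64, 64), dtype=bool)
mask[:, 28:37] = True                    # centroids at column 32 = image centre
yaw, lateral = extract_navigation_line(mask)
print(f"yaw={yaw:.1f} deg, lateral={abs(lateral):.1f} px")
```

On the synthetic mask both deviations come out (numerically) zero; on real segmentation output these two quantities correspond to the yaw and lateral deviations the paper reports (1.55° and 2.29 pixels on average).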
