A Multi-Module Perception-Based Robot Dynamic Target Following Method—Collaboration of Improved KCF Tracking, Distance Mapping Function, and Obstacle Avoidance
Abstract
In monocular robot dynamic target following, visual tracking is susceptible to scale variation and occlusion, monocular ranging errors are large, and the response to dynamic obstacles is often insufficient. To address these issues, this paper proposes a high-precision dynamic target following method based on multimodal perception. The method improves the KCF algorithm with multi-scale feature fusion and an occlusion re-detection mechanism, enhancing tracking robustness in complex scenarios. A distance mapping function that integrates geometric distortion correction and pose compensation is designed to substantially improve monocular ranging accuracy. LiDAR point cloud data are further incorporated to construct an obstacle threat model, and an improved trajectory optimization algorithm is employed to achieve coordinated control of following and obstacle avoidance. Experimental results demonstrate that the proposed method maintains a 95.2% following success rate under occlusion, scale variation, and dynamic obstacle scenarios, with a distance estimation error of no more than 3.5%, showing strong practicality and reliability.
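To make the geometric intuition behind monocular distance mapping concrete, the sketch below estimates ground-plane range from the height of a tracked bounding box using a pinhole camera model with a simple pitch-compensation term. The function name, parameters, and the compensation formula are illustrative assumptions only; the paper's actual distance mapping function additionally incorporates geometric distortion correction and its own pose-compensation design.

import numpy as np

def estimate_ground_range(bbox_height_px, target_height_m, fy, cy,
                          v_center_px, camera_pitch_rad=0.0):
    # Pinhole-model range along the optical axis:
    # range ≈ f_y * real_height / pixel_height
    raw_range = fy * target_height_m / bbox_height_px

    # Vertical angle of the target center relative to the optical axis
    ray_angle = np.arctan2(v_center_px - cy, fy)

    # Simple pose compensation (illustrative): project the range onto the
    # ground plane using the camera pitch plus the viewing-ray angle
    return raw_range * np.cos(camera_pitch_rad + ray_angle)

# Example: fy ≈ 525 px, a 1.7 m tall target appearing 120 px tall,
# image center row cy = 240, camera pitched down by 5 degrees
print(estimate_ground_range(bbox_height_px=120, target_height_m=1.7,
                            fy=525.0, cy=240.0, v_center_px=260.0,
                            camera_pitch_rad=np.deg2rad(5.0)))

In this simplified form, accuracy depends directly on calibrated intrinsics (fy, cy) and a known target height, which is why the paper's combination of distortion correction and pose compensation matters for keeping the reported ranging error low.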