CHAMELEON-SLAM: Adaptive Feature Selection and Uncertainty-Aware Matching for Robust Monocular Visual SLAM

Abstract

Visual SLAM systems typically employ a fixed feature extractor regardless of scene content. This is suboptimal: high-texture outdoor environments favor fast, sparse features, while low-texture indoor scenes or motion-blurred frames demand denser or more robust alternatives. We present CHAMELEON-SLAM, a visual SLAM system that adapts its feature extraction strategy to scene characteristics and incorporates per-match uncertainty into pose optimization. Our approach introduces two complementary components: (1) a lightweight scene classifier that selects among XFeat, ALIKED, and SuperPoint based on texture density, edge structure, and blur metrics, adding under 1 ms of overhead, and (2) a multi-scale descriptor consistency measure that estimates per-keypoint uncertainty and down-weights unreliable matches in bundle adjustment without requiring additional training. We integrate both components into ORB-SLAM3's tracking and local mapping threads. Experiments on KITTI and EuRoC show that, relative to ORB-SLAM3, adaptive feature selection alone reduces absolute trajectory error (ATE) by 41–65% depending on the dataset, uncertainty weighting alone yields a 34–59% reduction, and the combined system achieves 45–70% lower ATE (57% on average) while maintaining real-time performance at 32 FPS. The combined system also improves on the strongest fixed baseline with learned matching (SuperPoint + LightGlue) by 11–25%.
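
The abstract does not give implementation details, but a minimal sketch of the scene-classification step might look like the following. The specific metrics (gradient-based texture density, Canny edge density, variance-of-Laplacian blur score), all thresholds, and the mapping to extractor names are illustrative assumptions for readability, not the authors' actual design.

```python
import cv2
import numpy as np

def classify_scene(gray):
    """Illustrative scene classifier: compute cheap per-frame statistics
    (texture density, edge structure, blur) and pick a feature extractor.
    Expects a single-channel uint8 image. Thresholds and selection rules
    here are assumptions, not values from the paper."""
    # Texture density: fraction of pixels with strong gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    texture_density = float(np.mean(grad_mag > 30.0))

    # Blur metric: variance of the Laplacian (low variance -> blurred frame).
    blur_score = float(cv2.Laplacian(gray, cv2.CV_32F).var())

    # Edge structure: density of Canny edge pixels.
    edge_density = float(np.mean(cv2.Canny(gray, 50, 150) > 0))

    # Hypothetical selection logic: robust features for blurred frames,
    # fast sparse features for texture-rich scenes, a denser fallback otherwise.
    if blur_score < 50.0:
        return "ALIKED"
    if texture_density > 0.2 and edge_density > 0.05:
        return "XFeat"
    return "SuperPoint"

# Example use: classify the incoming frame before feature extraction.
# frame = cv2.imread("frame.png")
# extractor_name = classify_scene(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```

Because all three statistics are simple image filters, a classifier of this kind plausibly stays within the sub-millisecond budget the abstract claims; the per-keypoint uncertainty described in component (2) would then enter bundle adjustment by scaling each match's residual weight, analogous to a robust or information-weighted cost term.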
