Learning Traversable Scene Structures for Embodied Navigation with Movable Object Constraints

Abstract

Understanding how movable objects affect navigability is critical for embodied agents operating in realistic environments. This study proposes a learning-based approach to infer traversable scene structures under object mobility constraints. A neural graph encoder is trained to predict passability relations between spatial regions, conditioned on object states, using RGB-D observations and interaction feedback. The model is trained on 15,000 simulated navigation trajectories generated in rearranged indoor scenes. Quantitative evaluation shows that the learned scene structure reduces navigation failures caused by blocked paths by 28.4% and improves average navigation efficiency by 16.7% compared with static scene graph representations.
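The core contrast with static scene graphs can be illustrated with a minimal pure-Python sketch: edge passability is re-evaluated from the current states of movable objects rather than fixed when the map is built. All names, classes, and the boolean `blocking` state below are invented for illustration; the paper's actual model is a learned neural graph encoder over RGB-D features, not this hand-written rule.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the state-conditioned passability idea:
# each edge's traversability depends on the movable objects near it.

@dataclass
class MovableObject:
    name: str
    blocking: bool  # current state: does this object obstruct the edge?

@dataclass
class Edge:
    src: str
    dst: str
    objects: list  # movable objects that can affect this edge

    def passable(self) -> bool:
        # A static scene graph would hard-code this value at map time;
        # here it is recomputed from object state on every query.
        return not any(o.blocking for o in self.objects)

chair = MovableObject("chair", blocking=True)
graph = [Edge("hallway", "kitchen", [chair]),
         Edge("hallway", "bedroom", [])]

print([(e.src, e.dst, e.passable()) for e in graph])
# → [('hallway', 'kitchen', False), ('hallway', 'bedroom', True)]

# After the agent moves the chair aside, the same edge becomes passable
# without rebuilding the graph:
chair.blocking = False
```

In the paper, the rule inside `passable()` is replaced by a learned predictor conditioned on observed object states and interaction feedback, which is what allows the 28.4% reduction in blocked-path failures over a static representation.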