A Dual‑Agent Learning Framework Enables Level 3+ Autonomous Robotic Bronchoscopy
Abstract
Accurate access to peripheral lung lesions is critical for diagnosis and treatment, yet clinical bronchoscopy remains limited by operator variability and loss of spatial accuracy during respiratory motion. Conventional navigation systems depend on preoperative maps and external tracking, which fail in the deformable and continuously moving bronchial tree. We developed a fully autonomous, adaptive bronchoscopy platform that performs real‑time navigation directly from intraluminal visual‑spatial cues, without predefined trajectories or external localization. A hierarchical dual‑agent framework enables a Navigator to infer dynamic waypoints and a Driver to execute fine‑scale steering with closed‑loop precision. In dynamically ventilated ex vivo lungs and live canine models, the system achieved consistent, lesion‑targeted navigation under rapid respiration without human input. Beyond bronchoscopy, this approach defines a general computational paradigm for autonomy in deformable living anatomy, offering a clinically translatable path toward operator‑independent intervention across organ systems.
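The hierarchical dual‑agent pattern described above can be illustrated with a minimal control‑loop sketch. All class names, the lumen‑center abstraction, and the proportional control law below are illustrative assumptions for exposition only, not the authors' implementation, which uses learned models operating on intraluminal imagery.

```python
from dataclasses import dataclass


@dataclass
class Waypoint:
    """Target position in the camera frame (hypothetical representation)."""
    x: float  # lateral offset of the target lumen
    y: float  # vertical offset of the target lumen


class Navigator:
    """High-level agent: infers the next dynamic waypoint each frame.

    In this sketch the 'image' is abbreviated to a pre-detected lumen
    center; a real system would run a learned perception model on the
    endoscopic frame.
    """

    def infer_waypoint(self, lumen_center):
        return Waypoint(*lumen_center)


class Driver:
    """Low-level agent: closed-loop fine-scale steering toward the waypoint."""

    def __init__(self, gain=0.5):
        self.gain = gain            # proportional gain (assumed control law)
        self.bend = [0.0, 0.0]      # current bending-actuator state

    def step(self, wp):
        # Proportional correction: shrink the error between the current
        # actuator state and the Navigator's waypoint on every frame.
        self.bend[0] += self.gain * (wp.x - self.bend[0])
        self.bend[1] += self.gain * (wp.y - self.bend[1])
        return self.bend


def navigate(frames, nav, drv):
    # One Navigator inference per frame, followed by a Driver correction;
    # re-inferring the waypoint every frame is what lets the loop track
    # a continuously deforming airway.
    for lumen_center in frames:
        drv.step(nav.infer_waypoint(lumen_center))
    return drv.bend


# With a stationary target, the Driver converges toward it.
bend = navigate([(1.0, -0.5)] * 20, Navigator(), Driver())
```

The key design point the sketch captures is the separation of timescales: the Navigator re-plans the waypoint at frame rate, so respiratory deformation is absorbed by continuous re-inference rather than by correcting a fixed preoperative trajectory.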