Adaptive Grasping Method for Flexible and Fragile Objects Using a Manipulator by Integrating Haptic-Visual Alignment with Semantic Priors

Abstract

Deformation and damage frequently occur when grasping flexible and fragile objects, so traditional force-control and vision-based strategies fail to meet robustness and safety requirements. This paper proposes an adaptive grasping method that integrates tactile-visual feature alignment with semantic prior guidance. First, a large model extracts semantic constraints such as "material–deformation threshold–operation region" and aligns them with tactile encodings. Then, adaptive impedance control enables real-time adjustment of grasping force and pose. Experiments covering 60 categories of flexible and fragile objects and more than 5,000 grasping trials show that, compared with a diffusion policy without tactile feedback, the method improves the grasping success rate by 15.2%, reduces the target damage rate by 33.5%, and decreases 6D pose error by 18.4%. Ablation analysis indicates that the tactile and semantic priors contribute performance gains of +6.3% and +4.5%, respectively. These results demonstrate the method's effectiveness for real-time grasping of flexible objects.
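The abstract's core control idea, adaptive impedance control gated by a semantic deformation threshold, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, signature, and the linear stiffness-scaling rule are assumptions chosen for illustration: stiffness is reduced toward zero as the tactile deformation estimate approaches the material-specific threshold supplied as a semantic prior.

```python
def adaptive_impedance_force(x_d, x, v_d, v, k_base, d_base,
                             deformation, deformation_threshold):
    """Commanded force along one axis of an impedance controller.

    x_d, x : desired / measured position
    v_d, v : desired / measured velocity
    k_base : nominal stiffness; d_base : nominal damping
    deformation           : tactile deformation estimate
    deformation_threshold : semantic prior (material-specific limit)

    Hypothetical sketch: scales stiffness linearly to zero as the
    measured deformation approaches the threshold, softening the
    grasp before the fragile object is damaged.
    """
    # Clamp the deformation ratio to [0, 1] so stiffness never goes negative.
    ratio = min(max(deformation / deformation_threshold, 0.0), 1.0)
    k = k_base * (1.0 - ratio)
    # Classic impedance law: F = K (x_d - x) + D (v_d - v).
    return k * (x_d - x) + d_base * (v_d - v)
```

With zero deformation the controller behaves like a standard stiff impedance law; at or beyond the threshold the stiffness term vanishes and only damping remains, which is one simple way to realize the "force adjustment under a deformation threshold" behavior the abstract describes.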