Somatosensory-Driven Perception in Embodied Systems for Hand-Object Interaction

Abstract

Somatosensation is a powerful perceptual modality that enables accurate and robust sensing in challenging scenarios. It allows blind individuals to explore their surroundings with a white cane in darkness and assists surgeons who operate with a scalpel under occluded vision. However, robots lack somatosensory capabilities comparable to those of humans. To address this limitation, we introduce a perception framework that treats touch and proprioception as primary signals. Current neuroscience provides sufficient insight to define an analogous four-stage processing pipeline comprising afferent integration, perceptual inference, error compensation, and gated convergence. Building on these principles, our artificial framework mirrors key elements of the cortical sensorimotor cascade. Experiments across wearable systems and dexterous robotic platforms equipped with tactile hands show that the framework overcomes previously unsolved challenges in estimating object orientation, relative position, and contact points under real-world non-convexities with wearable sensors, and enables tasks infeasible for vision-based perception, including estimating contact force, tip torque, and object mass, with accuracy surpassing human performance and state-of-the-art baselines. This framework provides a robust pathway toward human-like capabilities and a precise foundation for next-generation human–machine interaction, dexterous manipulation, and embodied intelligence applications.
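
To make the staging concrete, the sketch below illustrates one plausible reading of the four-stage pipeline named in the abstract (afferent integration, perceptual inference, error compensation, gated convergence). All function names, dimensions, and gains here are hypothetical placeholders for illustration only; they are not the authors' implementation.

```python
import numpy as np

def afferent_integration(tactile, proprioception):
    """Stage 1 (hypothetical): fuse raw tactile and joint signals into one feature vector."""
    return np.concatenate([tactile, proprioception])

def perceptual_inference(features, weights):
    """Stage 2 (hypothetical): map fused afferents to a state estimate, e.g. pose or contact force."""
    return weights @ features

def error_compensation(estimate, prediction, gain=0.5):
    """Stage 3 (hypothetical): correct the estimate using its discrepancy from an internal prediction."""
    return estimate + gain * (prediction - estimate)

def gated_convergence(previous_belief, corrected, gate=0.8):
    """Stage 4 (hypothetical): gate how much of the corrected estimate updates the running belief."""
    return gate * corrected + (1.0 - gate) * previous_belief

# Toy usage: 8 tactile taxels + 4 joint angles mapped to a 3-D state (placeholder quantities).
rng = np.random.default_rng(0)
tactile = rng.random(8)
proprio = rng.random(4)
W = rng.random((3, 12))      # placeholder inference weights
belief = np.zeros(3)         # running state belief

features = afferent_integration(tactile, proprio)
estimate = perceptual_inference(features, W)
corrected = error_compensation(estimate, prediction=belief)
belief = gated_convergence(belief, corrected)
print(belief)
```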
