Multimodal Control of Manipulators: Coupling Kinematics and Vision for Self-Driving Laboratory Operations


Abstract

Autonomous experimental platforms increasingly rely on robust, vision-guided robotic manipulation to support reliable and repeatable laboratory operations. This work presents a modular motion-execution subsystem designed for integration into self-driving laboratory (SDL) workflows, focusing on the coupling of real-time visual perception with smooth and stable manipulator control. The framework enables autonomous detection, tracking, and interaction with textured objects through a hybrid scheme that couples advanced motion planning algorithms with real-time visual feedback. Kinematic analysis of the manipulator is performed using screw theory formulations, which provide a rigorous foundation for deriving the forward kinematics and the space Jacobian. These formulations are further employed to compute inverse kinematic solutions via the Damped Least Squares (DLS) method, ensuring stable and continuous joint trajectories even in the presence of redundancy and singularities. Motion trajectories toward target objects are generated using the RRT* algorithm, providing asymptotically optimal path planning under dynamic constraints. Object pose estimation is achieved through a vision workflow integrating feature-driven detection and homography-guided depth analysis, enabling adaptive tracking and dynamic grasping of textured objects. The manipulator’s performance is quantitatively evaluated using smoothness metrics, RMSE pose errors, and joint motion profiles, including velocity continuity, acceleration, jerk, and snap. Simulation results demonstrate that the proposed subsystem delivers stable, smooth, and reproducible motion execution, establishing a validated baseline for the manipulation layer of next-generation SDL architectures.
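To illustrate the DLS inverse-kinematics step mentioned in the abstract, the sketch below applies the standard damped update Δθ = Jᵀ(JJᵀ + λ²I)⁻¹e to a planar 2-link arm. The 2-link geometry, link lengths, damping factor, and iteration count are illustrative assumptions, not the paper's actual manipulator or its screw-theory Jacobian; the damping term λ²I is what keeps the update bounded near singularities.

```python
# Minimal sketch of Damped Least Squares (DLS) inverse kinematics on an
# assumed planar 2-link arm (not the paper's manipulator).
import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths

def fk(theta):
    """Forward kinematics: end-effector (x, y) position."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t12),
                     L1 * np.sin(t1) + L2 * np.sin(t12)])

def jacobian(theta):
    """Analytic 2x2 Jacobian of fk with respect to the joint angles."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t12), -L2 * np.sin(t12)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t12),  L2 * np.cos(t12)]])

def dls_ik(target, theta0, lam=0.1, iters=200):
    """Iterate dtheta = J^T (J J^T + lam^2 I)^-1 e toward the target.

    The damping lam trades tracking accuracy per step for bounded joint
    velocities near singular configurations.
    """
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        e = target - fk(theta)            # task-space position error
        J = jacobian(theta)
        theta += J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
    return theta

theta = dls_ik(np.array([1.2, 0.6]), [0.3, 0.3])
print(np.linalg.norm(fk(theta) - np.array([1.2, 0.6])))  # small residual error
```

Because the damped pseudoinverse never inverts a singular JJᵀ directly, the same update remains well defined for redundant manipulators, which is the property the abstract attributes to the DLS stage.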
