Geometry-Aware Super-Resolution Fusion Calibration for Binocular Structured Light 3D Reconstruction

Abstract

High-precision binocular camera calibration is the cornerstone of 3D reconstruction. Despite advances in calibration methods, the sub-pixel accuracy of feature points remains compromised by degradations such as noise, blur, and distortion, which fundamentally constrain accuracy and practical performance in uncontrolled real-world scenarios. This paper proposes a novel geometric-fusion calibration framework that, for the first time, combines saddle point preservation with unbiased cell centroids under binocular epipolar-line constraints, preserving the flexibility and robustness of global feature points and substantially advancing the accuracy of structured light 3D reconstruction. Extensive experiments demonstrate that the proposed method improves the robustness of corner coordinates against noise, blur, and distortion and reduces reprojection error by 17%. More importantly, we validate significant improvements in point cloud accuracy on real-world 3D reconstruction tasks, establishing the practical value of our approach for precision-critical applications such as medical endoscopy, AR/VR systems, and embodied AI platforms.
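To make the abstract's epipolar-constraint idea concrete, the sketch below is an illustrative (not the paper's) pipeline: it refines checkerboard corners to sub-pixel precision in a stereo pair and measures how well the matched corners satisfy the epipolar constraint x'ᵀF x = 0. The image paths, board size, and the use of standard OpenCV routines are assumptions for demonstration only.

```python
# Minimal sketch (assumed setup): sub-pixel corner refinement on a stereo pair
# and an epipolar-consistency check on the matched corners.
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners per row/column (assumed board size)

def subpixel_corners(gray):
    """Detect checkerboard corners and refine them to sub-pixel precision."""
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        raise RuntimeError("checkerboard not found")
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
pts_l = subpixel_corners(left).reshape(-1, 2)
pts_r = subpixel_corners(right).reshape(-1, 2)

# Fundamental matrix from the matched corners; in a calibrated rig it could
# instead be derived from the known intrinsics and extrinsics.
F, _ = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_8POINT)

# Point-to-epipolar-line distance: how far each right-image corner lies from
# the epipolar line induced by its left-image counterpart (lower = more consistent).
lines_r = cv2.computeCorrespondEpilines(pts_l.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
hom_r = np.hstack([pts_r, np.ones((len(pts_r), 1))])
dist = np.abs(np.sum(lines_r * hom_r, axis=1)) / np.linalg.norm(lines_r[:, :2], axis=1)
print("mean epipolar residual (px):", dist.mean())
```

In this toy setup the mean epipolar residual plays a role analogous to the reprojection error reported in the abstract: corner refinement that respects the binocular epipolar geometry should drive it down.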
