Scene Reconstruction Based on a Liquid Lens Integrated with a Custom Diffuser


Abstract

This study proposes a passive depth-sensing system that integrates a liquid lens with a custom-fabricated diffuser to achieve 2.5D scene reconstruction. The liquid lens leverages the electrowetting effect, enabling swift and precise adjustments of focal length through voltage control, thereby replacing conventional mechanical focusing mechanisms. To enhance depth discrimination, a high-transmittance diffuser featuring randomly structured microfeatures is implemented. This diffuser compresses the system’s depth of field and accentuates variations in image sharpness. Multi-focal images are captured by scanning the liquid lens, and distances to objects are estimated by assessing image sharpness using the Laplacian operator. Experimental results indicate an absolute error of less than ±1.3 cm and a relative error below 3% within a measurement range of 20 to 70 cm. Moreover, the multi-focal images can be merged to reconstruct 2.5D models containing depth information, which can be exported in the GLB format for cross-platform compatibility. The proposed framework operates independently of the light source, demonstrates robustness under varying environmental conditions, and is computationally efficient, making it ideal for low-cost depth sensing and embedded vision applications.
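The depth-estimation step described above, scanning the liquid lens through several focal settings and assigning each region the focus distance at which its Laplacian sharpness peaks, follows the standard depth-from-focus recipe. The sketch below illustrates that idea in Python with OpenCV, assuming a pre-captured focal stack with known focus distances; the function names, smoothing window, and file paths are illustrative assumptions, not the authors' implementation.

```python
# Minimal depth-from-focus sketch (illustrative only; not the authors' code).
# Assumes a focal stack of grayscale images captured at known liquid-lens
# focus distances, and estimates per-pixel depth from Laplacian sharpness.
import cv2
import numpy as np

def sharpness_map(img, ksize=5):
    """Local sharpness: squared Laplacian response, smoothed over a window."""
    lap = cv2.Laplacian(img.astype(np.float64), cv2.CV_64F)
    return cv2.GaussianBlur(lap ** 2, (ksize, ksize), 0)

def depth_from_focus(stack, focus_distances_cm):
    """Pick, per pixel, the focus distance whose frame is locally sharpest."""
    scores = np.stack([sharpness_map(img) for img in stack], axis=0)
    best = np.argmax(scores, axis=0)                 # index of sharpest frame
    return np.asarray(focus_distances_cm, dtype=np.float64)[best]

# Hypothetical usage: frames focused at 20, 30, ..., 70 cm.
# stack = [cv2.imread(f"focus_{d}cm.png", cv2.IMREAD_GRAYSCALE)
#          for d in range(20, 71, 10)]
# depth_map = depth_from_focus(stack, list(range(20, 71, 10)))
```

The per-pixel argmax over the focal stack is what the custom diffuser makes reliable: by compressing the depth of field, it sharpens the contrast between in-focus and out-of-focus frames, so the sharpness peak is easier to localize.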
