Depth Information Encoding in 2D Images via Projection of Multiple Parallel Laser Lines
Abstract
This paper proposes a monocular 3D perception method based on the projection of multiple parallel laser lines with fixed spacing. By exploiting a fundamental perspective property, namely that the spacing of parallel laser stripes on the camera image plane is inversely proportional to their distance from the sensor, the method embeds depth information directly within RGB images. The ultra-narrow laser stripes can be compressed efficiently into 2D vector graphics, yielding a static LiDAR (Light Detection and Ranging) architecture with no mechanical scanning components. Preliminary modeling confirms that the laser stripes inherently encode scene contour features, allowing deterministic spatial parameters of drivable areas and obstacles to be extracted directly, in contrast to probabilistic 3D occupancy prediction techniques that rely on complex neural network inference. Further analysis shows that the depth-encoded laser stripe features simplify training workflows for machine learning models, substantially reducing the convolutional computation required for feature extraction. The system employs a fully static optical design without moving parts or programmable components, complies with the Class 1 laser safety standard (IEC 60825-1:2014), and establishes a novel hardware-algorithm co-design paradigm for autonomous driving perception.
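The inverse-proportionality claim can be illustrated with the standard pinhole camera model: if parallel lines are separated by a fixed physical distance D and the camera has focal length f (in pixels), two adjacent stripes at depth Z appear roughly d = f·D / Z pixels apart, so Z can be recovered as f·D / d. The sketch below is a minimal illustration of this relationship; the function name and the numeric values for focal length and line spacing are assumptions for demonstration, not parameters from the paper.

```python
def depth_from_spacing(f_px: float, line_spacing_m: float, pixel_spacing: float) -> float:
    """Recover depth Z = f * D / d under a pinhole model.

    f_px           -- focal length in pixels
    line_spacing_m -- fixed physical spacing D between parallel laser lines (m)
    pixel_spacing  -- observed spacing d between adjacent stripes in the image (px)
    """
    return f_px * line_spacing_m / pixel_spacing

# Illustrative (assumed) camera and projector parameters:
f_px = 800.0   # focal length in pixels
D = 0.05       # physical spacing between adjacent laser lines, metres

# Halving the observed pixel spacing doubles the inferred depth:
z_near = depth_from_spacing(f_px, D, 40.0)  # stripes 40 px apart -> 1.0 m
z_far = depth_from_spacing(f_px, D, 20.0)   # stripes 20 px apart -> 2.0 m
print(z_near, z_far)
```

This simple closed-form mapping is what allows the stripe pattern itself to act as a depth code: measuring local stripe spacing in the RGB image yields a deterministic depth estimate without any learned inference step.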