DCNN-Transformer Hybrid Network for Robust Feature Extraction in FMCW LiDAR Ranging
Abstract
Frequency-Modulated Continuous-Wave (FMCW) Light Detection and Ranging (LiDAR) systems are widely used for their high accuracy and high resolution. However, traditional distance extraction methods often lack robustness in high-noise and complex-background environments. To address this issue, this manuscript proposes a deep learning-based signal information extraction method that integrates a Dual Convolutional Neural Network (DCNN) with a Transformer model to enhance system performance. The DCNN employs multi-layer depthwise convolutions and pointwise convolutions to extract multi-scale spatial features, while the Transformer leverages the self-attention mechanism to efficiently capture the global temporal dependencies of the beat-frequency signals. The proposed DCNN-Transformer network is applied to beat-frequency signal inversion experiments at various distances. On a test dataset covering ranging distances from 3 m to 40 m, the proposed method achieves a mean absolute error (MAE) of 4.1 mm and a root mean square error (RMSE) of 3.08 mm. These results indicate that the method delivers stable, highly accurate predictions and exhibits strong generalization capability and robustness for FMCW LiDAR systems.
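The hybrid pipeline described above can be sketched in miniature: a depthwise-plus-pointwise convolutional stage extracts local features from a beat-frequency window, a self-attention stage captures global temporal dependencies, and a pooled regression head emits a single distance. This is an illustrative NumPy sketch with untrained random weights and invented dimensions (channel count, kernel size, embedding width), not the authors' implementation; only the operator structure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic beat signal: a noisy sinusoid whose frequency encodes range ---
T = 256                                   # samples per beat-signal window (assumed)
t = np.arange(T)
beat = np.sin(2 * np.pi * 0.07 * t) + 0.1 * rng.standard_normal(T)
x = beat[None, :]                         # shape (C=1, T)

def pointwise_conv1d(x, W):
    """1x1 convolution mixing channels. x: (C_in, T), W: (C_out, C_in)."""
    return W @ x

def depthwise_conv1d(x, kernels):
    """One spatial filter per channel. x: (C, T), kernels: (C, K), K odd."""
    K = kernels.shape[1]
    xp = np.pad(x, ((0, 0), (K // 2, K // 2)))
    win = np.lib.stride_tricks.sliding_window_view(xp, K, axis=1)  # (C, T, K)
    return np.einsum("ctk,ck->ct", win, kernels)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention. x: (T, D)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v

# --- DCNN stage: pointwise expand -> depthwise filter -> pointwise mix ---
C, D = 8, 16                              # illustrative channel / embedding sizes
h = np.maximum(pointwise_conv1d(x, rng.standard_normal((C, 1))), 0)   # ReLU
h = np.maximum(depthwise_conv1d(h, rng.standard_normal((C, 5))), 0)
feats = pointwise_conv1d(h, rng.standard_normal((D, C)))              # (D, T)

# --- Transformer stage: self-attention along the time axis ---
Wq, Wk, Wv = (0.1 * rng.standard_normal((D, D)) for _ in range(3))
ctx = self_attention(feats.T, Wq, Wk, Wv)                             # (T, D)

# --- Regression head: pool over time, map to one scalar distance ---
# (Random weights, so the value is meaningless until trained.)
predicted_distance = float(ctx.mean(axis=0) @ rng.standard_normal(D))
```

A trained version would learn all of the weight matrices end-to-end against ground-truth ranges; the sketch only shows how the depthwise/pointwise features and the attention context compose into a single-distance regression.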