DCNN–Transformer Hybrid Network for Robust Feature Extraction in FMCW LiDAR Ranging

Abstract

Frequency-Modulated Continuous-Wave (FMCW) Light Detection and Ranging (LiDAR) systems are widely used for their high accuracy and resolution. Conventional distance extraction methods, however, often lack robustness in noisy and complex environments. To address this limitation, we propose a deep learning-based signal extraction framework that integrates a Dual Convolutional Neural Network (DCNN) with a Transformer model. The DCNN extracts multi-scale spatial features through multi-layer and pointwise convolutions, while the Transformer employs a self-attention mechanism to capture global temporal dependencies of the beat-frequency signal. The proposed DCNN–Transformer network is evaluated through beat-frequency signal inversion experiments over distances from 3 m to 40 m. The method achieves a mean absolute error (MAE) of 4.1 mm and a root-mean-square error (RMSE) of 3.08 mm, demonstrating stable and accurate predictions with strong generalization ability and robustness for FMCW LiDAR systems.
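In the standard FMCW model, the beat frequency f_b between the transmitted chirp and its echo scales with target range, d = c·f_b·T / (2B) for chirp bandwidth B and sweep period T, so the network effectively learns this inversion directly from the sampled beat signal. As a rough illustration of the architecture the abstract describes, the following PyTorch sketch pairs a convolutional front end (stacked plus pointwise convolutions) with a Transformer encoder and a regression head; all layer sizes, kernel widths, and head counts here are illustrative assumptions, not the authors' configuration, and the positional encoding a full model would use is omitted for brevity.

```python
import torch
import torch.nn as nn


class DCNNTransformer(nn.Module):
    """Hedged sketch of a DCNN-Transformer for beat-signal inversion.

    Hyperparameters are assumptions chosen for illustration only.
    """

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # DCNN branch: multi-layer convolutions extract multi-scale local
        # features; a pointwise (1x1) convolution then mixes channels.
        self.dcnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, d_model, kernel_size=1),  # pointwise convolution
            nn.ReLU(),
        )
        # Transformer encoder: self-attention over the feature sequence
        # captures global temporal dependencies of the beat signal.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Regression head: pool over time and predict a single distance.
        self.head = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, samples) raw beat-frequency signal
        feats = self.dcnn(x.unsqueeze(1))        # (batch, d_model, samples)
        seq = feats.transpose(1, 2)              # (batch, samples, d_model)
        ctx = self.encoder(seq)                  # self-attention over time
        return self.head(ctx.mean(dim=1)).squeeze(-1)  # predicted distance


# Usage: invert a batch of (here random, stand-in) beat signals to distances.
model = DCNNTransformer()
beat = torch.randn(8, 1024)   # 8 signals, 1024 samples each
print(model(beat).shape)      # torch.Size([8])
```

Mean-pooling the encoder output is one simple way to collapse the time axis before regression; a learned class token or attention pooling would be an equally plausible reading of the abstract.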
