Abstract
Light detection and ranging (LiDAR) odometry plays a crucial role in autonomous mobile robots and unmanned ground vehicles (UGVs). This paper presents a deep learning–based odometry system that uses two successive three-dimensional (3D) point clouds to estimate their scene flow and then predict their relative pose. The network consumes consecutive 3D point clouds directly and outputs their scene flow and an uncertainty mask in a coarse-to-fine fashion. A pose estimation layer without trainable parameters is designed to compute the pose from the scene flow. We also introduce a scan-to-map optimization algorithm to enhance the robustness and accuracy of the system. Our experiments on the KITTI odometry data set and our campus data set demonstrate the effectiveness of the proposed deep learning–based point cloud odometry.
| Original language | English |
|---|---|
| Pages (from-to) | 274-286 |
| Number of pages | 13 |
| Journal | Transactions of the Institute of Measurement and Control |
| Volume | 45 |
| Issue number | 2 |
| Early online date | 2 Aug 2022 |
| DOIs | |
| Publication status | Published - Jan 2023 |
Keywords
- 3D LiDAR point clouds
- deep learning
- odometry estimation
- scene flow
ASJC Scopus subject areas
- Instrumentation