A 3D LiDAR odometry for UGVs using coarse-to-fine deep scene flow estimation

Chi Li, Fei Yan*, Sen Wang, Yan Zhuang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


Light detection and ranging (LiDAR) odometry plays a crucial role in autonomous mobile robots and unmanned ground vehicles (UGVs). This paper presents a deep learning–based odometry system that takes two successive three-dimensional (3D) point clouds, estimates the scene flow between them, and then predicts their relative pose. The network consumes consecutive 3D point clouds directly and outputs their scene flow and an uncertainty mask in a coarse-to-fine fashion. A pose estimation layer without trainable parameters is designed to compute the pose from the scene flow. We also introduce a scan-to-map optimization algorithm to enhance the robustness and accuracy of the system. Experiments on the KITTI odometry data set and our campus data set demonstrate the effectiveness of the proposed deep learning–based point cloud odometry.
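The abstract's "pose estimation layer without trainable parameters" admits a classical closed-form interpretation: given points and their predicted scene flow, a rigid transform aligning each point to its flowed position can be recovered by the weighted Kabsch algorithm (SVD of a cross-covariance matrix). The sketch below illustrates that idea only; it is not the authors' implementation, and the function name and weighting scheme are assumptions.

```python
# Minimal sketch (not the paper's code): recover a rigid pose (R, t)
# from per-point scene flow via the weighted Kabsch algorithm.
import numpy as np

def pose_from_scene_flow(points, flow, weights=None):
    """Estimate R, t such that R @ p + t ~= p + f for each point p with flow f.

    weights: optional per-point confidence (e.g. from an uncertainty mask);
    low-confidence points contribute less to the least-squares alignment.
    """
    targets = points + flow
    if weights is None:
        weights = np.ones(len(points))
    w = weights / weights.sum()

    # Weighted centroids of the source and flowed point sets.
    src_c = (w[:, None] * points).sum(axis=0)
    tgt_c = (w[:, None] * targets).sum(axis=0)

    # Weighted cross-covariance, then SVD for the optimal rotation.
    H = (w[:, None] * (points - src_c)).T @ (targets - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Because the solution is a closed-form least-squares fit, such a layer has no trainable parameters yet remains differentiable with respect to the predicted flow, which is what allows end-to-end training in architectures of this kind.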

Original language: English
Pages (from-to): 274-286
Number of pages: 13
Journal: Transactions of the Institute of Measurement and Control
Issue number: 2
Early online date: 2 Aug 2022
Publication status: Published - Jan 2023


Keywords

  • 3D LiDAR point clouds
  • deep learning
  • odometry estimation
  • scene flow

ASJC Scopus subject areas

  • Instrumentation


