TY - JOUR
T1 - Season-Invariant and Viewpoint-Tolerant LiDAR Place Recognition in GPS-Denied Environments
AU - Cao, Fengkui
AU - Yan, Fei
AU - Wang, Sen
AU - Zhuang, Yan
AU - Wang, Wei
N1 - Funding Information:
Manuscript received March 19, 2019; revised June 27, 2019 and October 23, 2019; accepted November 30, 2019. Date of publication January 1, 2020; date of current version October 19, 2020. This work was supported by the National Natural Science Foundation of China under Grant U1913201 and Grant 61375088. (Corresponding author: Fei Yan.) F. Cao, F. Yan, Y. Zhuang, and W. Wang are with the School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).
Publisher Copyright:
© 2020 IEEE.
PY - 2021/1
Y1 - 2021/1
AB - Place recognition remains a challenging problem under various perceptual conditions, e.g., all weather, times of day, seasons, and viewpoint shifts. Unlike most existing place recognition methods that rely on pure vision, this article studies light detection and ranging (LiDAR)-based approaches. Point clouds offer some benefits for place recognition since they do not suffer from illumination changes; on the other hand, they are dramatically affected by structural changes from different viewpoints or across seasons. In this article, a novel LiDAR-based place recognition system is proposed to achieve long-term robust localization, even under severe seasonal changes and viewpoint shifts. To improve efficiency, a compact cylindrical image model is designed to convert three-dimensional point clouds into two-dimensional images representing the prominent geometric relationships of scenes. The contexts of scenes (buildings, trees, road structures, etc.) are utilized for efficient place recognition. A sequence-based temporal consistency check is also introduced for postverification. Extensive real-world experiments on three datasets (Oxford RobotCar [1], NCLT [2], and DUT-AS) show that the proposed system outperforms both state-of-the-art visual and LiDAR-based methods, verifying its robust performance in challenging scenarios.
KW - across seasons
KW - light detection and ranging (LiDAR) sensors
KW - long-term localization
KW - mobile robots
KW - place recognition
UR - http://www.scopus.com/inward/record.url?scp=85094881318&partnerID=8YFLogxK
U2 - 10.1109/TIE.2019.2962416
DO - 10.1109/TIE.2019.2962416
M3 - Article
SN - 0278-0046
VL - 68
SP - 563
EP - 574
JO - IEEE Transactions on Industrial Electronics
JF - IEEE Transactions on Industrial Electronics
IS - 1
ER -