TY - JOUR
T1 - Robust 3D Reconstruction of Dynamic Scenes From Single-Photon Lidar Using Beta-Divergences
AU - Legros, Quentin
AU - Tachella, Julián
AU - Tobin, Rachael
AU - McCarthy, Aongus
AU - Meignen, Sylvain
AU - Buller, Gerald Stuart
AU - Altmann, Yoann
AU - McLaughlin, Stephen
AU - Davies, Mike E.
N1 - Funding Information:
Manuscript received April 22, 2020; revised November 2, 2020 and December 8, 2020; accepted December 11, 2020. Date of publication December 31, 2020; date of current version January 14, 2021. This work was supported in part by the Royal Academy of Engineering through the Research Fellowship Scheme under Grant RF201617/16/31; in part by the ERC Advanced Grant C-SENSE under Project 694888; in part by the U.K. Defence Science and Technology Laboratory under Grant DSTL X1000114765; in part by the Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/N003446/1, Grant EP/T00097X/1, and Grant EP/S000631/1; and in part by the MOD University Defence Research Collaboration (UDRC) in Signal Processing. The work of Michael E. Davies was supported by the Royal Society Wolfson Research Merit Award. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Lisimachos P. Kondi. (Corresponding author: Yoann Altmann.) Quentin Legros, Rachael Tobin, Aongus McCarthy, Gerald S. Buller, Yoann Altmann, and Stephen McLaughlin are with the School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, U.K. (e-mail: [email protected]).
Acknowledgment:
The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
Publisher Copyright:
© 1992-2012 IEEE.
Copyright:
Copyright 2021 Elsevier B.V., All rights reserved.
PY - 2021
Y1 - 2021
N2 - In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using times of arrival of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination, which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise complicates not only the observation model classically used for 3D reconstruction but also the estimation procedure, which requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while being robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. This new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.
AB - In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using times of arrival of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination, which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise complicates not only the observation model classically used for 3D reconstruction but also the estimation procedure, which requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while being robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. This new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.
KW - 3D reconstruction
KW - Bayesian filtering
KW - robust estimation
KW - single-photon lidar
KW - variational methods
UR - http://www.scopus.com/inward/record.url?scp=85099083647&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.3046882
DO - 10.1109/TIP.2020.3046882
M3 - Article
C2 - 33382656
SN - 1057-7149
VL - 30
SP - 1716
EP - 1727
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -