TY - JOUR
T1 - Video super-resolution for single-photon LIDAR
AU - Mora-Martín, Germán
AU - Scholes, Stirling
AU - Ruget, Alice
AU - Henderson, Robert
AU - Leach, Jonathan
AU - Gyongy, Istvan
N1 - Publisher Copyright:
© 2023 OSA - The Optical Society. All rights reserved.
PY - 2023/2/27
Y1 - 2023/2/27
N2 - 3D time-of-flight (ToF) image sensors are used widely in applications such as self-driving cars, augmented reality (AR), and robotics. When implemented with single-photon avalanche diodes (SPADs), compact, array-format sensors can be made that offer accurate depth maps over long distances, without the need for mechanical scanning. However, array sizes tend to be small, leading to low lateral resolution, which, combined with low signal-to-background ratio (SBR) levels under high ambient illumination, may lead to difficulties in scene interpretation. In this paper, we use synthetic depth sequences to train a 3D convolutional neural network (CNN) for denoising and upscaling (×4) depth data. Experimental results, based on synthetic as well as real ToF data, are used to demonstrate the effectiveness of the scheme. With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for low-latency imaging, as required for obstacle avoidance.
AB - 3D time-of-flight (ToF) image sensors are used widely in applications such as self-driving cars, augmented reality (AR), and robotics. When implemented with single-photon avalanche diodes (SPADs), compact, array-format sensors can be made that offer accurate depth maps over long distances, without the need for mechanical scanning. However, array sizes tend to be small, leading to low lateral resolution, which, combined with low signal-to-background ratio (SBR) levels under high ambient illumination, may lead to difficulties in scene interpretation. In this paper, we use synthetic depth sequences to train a 3D convolutional neural network (CNN) for denoising and upscaling (×4) depth data. Experimental results, based on synthetic as well as real ToF data, are used to demonstrate the effectiveness of the scheme. With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for low-latency imaging, as required for obstacle avoidance.
UR - http://www.scopus.com/inward/record.url?scp=85148295055&partnerID=8YFLogxK
U2 - 10.1364/OE.478308
DO - 10.1364/OE.478308
M3 - Article
C2 - 36859845
SN - 1094-4087
VL - 31
SP - 7060
EP - 7072
JO - Optics Express
JF - Optics Express
IS - 5
ER -