Deep Learning for LiDAR Waveforms with Multiple Returns

Andreas Aßmann, Brian Stewart, Andrew Michael Wallace

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)
70 Downloads (Pure)


We present LiDARNet, a novel data-driven approach to LiDAR waveform processing that uses convolutional neural networks to extract depth information. To effectively leverage deep learning, an efficient LiDAR toolchain was developed, which can generate realistic waveform datasets at scale, based on either specific experimental parameters or synthetic scenes. This enables us to generate a large volume of waveforms under varying conditions with meaningful underlying data. To validate our simulation approach, we model a super-resolution benchmark and cross-validate the network with real, unseen data. We demonstrate the ability to resolve peaks in close proximity, as well as to simultaneously extract multiple returns from waveforms with low signal-to-noise ratio, with over 99% accuracy. This approach is fast, flexible, and highly parallelizable for arrayed imagers. We provide explainability in the deep learning process by matching intermediate outputs to a robust underlying signal model.
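To make the task concrete, the sketch below simulates a multi-return LiDAR waveform as a sum of Gaussian pulses plus noise and extracts the returns with a classical matched-filter baseline. This is purely illustrative: the pulse model, bin counts, and thresholds are assumptions, and the peak detector stands in for the learned network described in the abstract, not for LiDARNet itself.

```python
import math
import random

def simulate_waveform(peak_positions, amplitudes, pulse_width=5.0,
                      n_bins=400, noise_std=0.02, seed=0):
    """Toy waveform model: each surface return is a Gaussian pulse
    (hypothetical stand-in for the paper's simulation toolchain)."""
    rng = random.Random(seed)
    waveform = []
    for t in range(n_bins):
        signal = sum(a * math.exp(-0.5 * ((t - p) / pulse_width) ** 2)
                     for p, a in zip(peak_positions, amplitudes))
        waveform.append(signal + rng.gauss(0.0, noise_std))
    return waveform

def detect_peaks(waveform, pulse_width=5.0, threshold=0.3, min_sep=15):
    """Classical baseline: matched filter with the known pulse shape,
    then greedy peak picking with a minimum bin separation."""
    half = int(3 * pulse_width)
    kernel = [math.exp(-0.5 * (k / pulse_width) ** 2)
              for k in range(-half, half + 1)]
    norm = sum(kernel)
    filtered = []
    for i in range(len(waveform)):
        acc = 0.0
        for k, w in zip(range(-half, half + 1), kernel):
            j = i + k
            if 0 <= j < len(waveform):
                acc += waveform[j] * w
        filtered.append(acc / norm)
    # Greedily accept the strongest responses, suppressing neighbours.
    order = sorted(range(len(filtered)), key=lambda i: filtered[i],
                   reverse=True)
    peaks = []
    for i in order:
        if filtered[i] < threshold:
            break
        if all(abs(i - p) >= min_sep for p in peaks):
            peaks.append(i)
    return sorted(peaks)

# Two returns 20 bins apart, the second at 60% amplitude.
wf = simulate_waveform([120, 140], [1.0, 0.6])
print(detect_peaks(wf))
```

A learned model replaces the hand-tuned filter and thresholds above, which is what allows close peaks and low-SNR returns to be resolved more reliably than this kind of baseline.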
Original language: English
Title of host publication: 2020 28th European Signal Processing Conference (EUSIPCO)
Number of pages: 5
ISBN (Electronic): 9789082797053
Publication status: Published - 18 Dec 2020
Event: 28th European Signal Processing Conference - Amsterdam, Netherlands
Duration: 18 Jan 2021 - 22 Jan 2021

Publication series

Name: European Signal Processing Conference
ISSN (Electronic): 2076-1465


Conference: 28th European Signal Processing Conference
Abbreviated title: EUSIPCO 2020


