Deep Learning for LiDAR Waveforms with Multiple Returns

Andreas Aßmann, Brian Stewart, Andrew Michael Wallace

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)


We present LiDARNet, a novel data-driven approach
to LiDAR waveform processing utilising convolutional neural
networks to extract depth information. To effectively leverage
deep learning, an efficient LiDAR toolchain was developed, which
can generate realistic waveform datasets at scale based on either specific
experimental parameters or synthetic scenes. This enables
us to generate a large volume of waveforms under varying conditions
with meaningful underlying data. To validate our simulation
approach, we model a super-resolution benchmark and cross-validate
the network on real, unseen data. We demonstrate
the ability to resolve peaks in close proximity, as well as to
simultaneously extract multiple returns from waveforms with low signal-to-noise
ratio with over 99% accuracy. The approach is
fast, flexible, and highly parallelizable for arrayed imagers. We
provide explainability in the deep learning process by matching
intermediate outputs to a robust underlying signal model.
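To make the simulation idea concrete, the following is a minimal sketch of how a multi-return LiDAR waveform might be synthesised: each surface contributes a pulse (modelled here as a Gaussian) at the range bin corresponding to its depth, with additive noise. All parameter names, the pulse shape, and the noise model are illustrative assumptions, not the toolchain described in the paper.

```python
import numpy as np

def synthetic_waveform(depths, amplitudes, n_bins=1024, bin_width=0.075,
                       pulse_sigma=2.0, noise_level=0.05, seed=None):
    """Simulate a LiDAR return waveform as a sum of Gaussian pulses.

    Each surface at range depths[i] (metres) contributes a pulse centred
    at bin depths[i] / bin_width; additive Gaussian noise stands in for
    background and detector effects. This is an illustrative sketch only,
    not the paper's simulator.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_bins)
    waveform = np.zeros(n_bins)
    for depth, amplitude in zip(depths, amplitudes):
        centre = depth / bin_width  # convert range (m) to a bin index
        waveform += amplitude * np.exp(-0.5 * ((t - centre) / pulse_sigma) ** 2)
    return waveform + rng.normal(0.0, noise_level, n_bins)

# Two surfaces in close proximity: returns at 10.0 m and 10.6 m
wave = synthetic_waveform([10.0, 10.6], [1.0, 0.6], seed=0)
print(wave.shape)
```

Waveforms generated this way, paired with their known surface depths, are the kind of labelled data a network can be trained on to resolve closely spaced returns.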
Original language: English
Title of host publication: Proceedings of the 28th European Signal Processing Conference
Publication status: Accepted/In press - 21 Sep 2020
Event: 28th European Signal Processing Conference - Amsterdam, Netherlands
Duration: 18 Jan 2021 - 22 Jan 2021


Conference: 28th European Signal Processing Conference
Abbreviated title: EUSIPCO 2020

