DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks

Sen Wang, Ronald Clark, Hongkai Wen, Niki Trigoni

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

714 Citations (Scopus)

Abstract

This paper studies the monocular visual odometry (VO) problem. Most existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (a video) without adopting any module from the conventional VO pipeline. Based on the RCNNs, it not only automatically learns an effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show competitive performance against state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to traditional VO systems.
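The abstract describes a CNN-plus-RNN pipeline: a convolutional network extracts features from consecutive frame pairs, and a recurrent network carries motion context across time to regress a 6-DoF relative pose per step. The following is a minimal, hypothetical sketch of that pipeline shape only, not the authors' implementation: the CNN is stood in by a fixed random projection, and DeepVO's LSTM is replaced by a tiny plain-Python tanh RNN with made-up dimensions.

```python
import math
import random

def conv_features(frame_pair, dim=8):
    """Stand-in for the CNN stage (a FlowNet-style network in DeepVO).
    Here: a deterministic random projection of a stacked frame pair to a
    feature vector. Purely illustrative, not a real feature extractor."""
    rnd = random.Random(hash(tuple(frame_pair)))
    return [rnd.uniform(-1.0, 1.0) for _ in range(dim)]

class TinyRNNPoseRegressor:
    """Minimal recurrent regressor: a hidden state carries sequential
    dynamics across time steps; a linear head emits a 6-DoF relative
    pose (translation + rotation) per step. Dimensions are arbitrary."""
    def __init__(self, in_dim=8, hid_dim=16, out_dim=6, seed=42):
        rnd = random.Random(seed)
        self.Wx = [[rnd.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(hid_dim)]
        self.Wh = [[rnd.gauss(0, 0.1) for _ in range(hid_dim)] for _ in range(hid_dim)]
        self.Wo = [[rnd.gauss(0, 0.1) for _ in range(hid_dim)] for _ in range(out_dim)]
        self.hid_dim = hid_dim

    def forward(self, feature_seq):
        h = [0.0] * self.hid_dim          # recurrent state across the video
        poses = []
        for x in feature_seq:
            # h_t = tanh(Wx x_t + Wh h_{t-1})
            h = [math.tanh(sum(w * xi for w, xi in zip(row_x, x)) +
                           sum(w * hi for w, hi in zip(row_h, h)))
                 for row_x, row_h in zip(self.Wx, self.Wh)]
            # pose_t = Wo h_t  (one 6-DoF relative pose per frame pair)
            poses.append([sum(w * hi for w, hi in zip(row, h)) for row in self.Wo])
        return poses

# Usage: 5 consecutive "frames" -> 4 stacked frame pairs -> 4 relative poses.
rng = random.Random(0)
frames = [tuple(rng.uniform(0, 1) for _ in range(4)) for _ in range(5)]
feats = [conv_features(frames[t] + frames[t + 1]) for t in range(4)]
poses = TinyRNNPoseRegressor().forward(feats)
```

In the actual paper the whole stack is trained end-to-end, so pose supervision shapes both the feature extractor and the recurrent dynamics; the sketch above only illustrates the data flow from raw frames to per-step pose estimates.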
Original language: English
Title of host publication: 2017 IEEE International Conference on Robotics and Automation (ICRA)
Publisher: IEEE
Pages: 2043-2050
Number of pages: 8
ISBN (Print): 9781509046331
DOIs
Publication status: Published - 24 Jul 2017
