Abstract
Navigation and obstacle detection require robust and efficient algorithms to compute ego-motion and model the changing scene, and these algorithms must cope with the high video data rate of the input sensor. In this paper, we present an approach to improved motion tracking from a monocular image sequence acquired by a camera attached to a pedestrian. The human gait is modelled from the motion history of the camera and used to predict feature positions in successive frames. This prediction is encoded within a maximum a posteriori (MAP) framework to obtain fast and robust motion estimation. Experimental results show that use of the gait model can reduce the computational load by allowing longer gaps between successive frames while retaining the ability to track features robustly.
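The abstract describes predicting feature positions from a gait model and combining that prediction with image measurements in a MAP framework. The sketch below is a minimal, hypothetical 1-D illustration of that kind of fusion, not the paper's implementation: the sinusoidal gait model, the Gaussian prior/likelihood assumptions, and all names and parameter values are assumptions made for illustration only.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): fuse a periodic "gait"
# prediction of a feature coordinate with a noisy image measurement under a
# Gaussian-prior / Gaussian-likelihood MAP model. All names, the sinusoidal
# gait model, and the parameter values below are hypothetical.

def gait_prediction(t, mean_pos, amplitude, freq_hz, phase):
    """Predict a feature coordinate (pixels) from a sinusoidal gait model."""
    return mean_pos + amplitude * np.sin(2.0 * np.pi * freq_hz * t + phase)

def map_estimate(prior_mean, prior_sigma, meas, meas_sigma):
    """MAP estimate for a Gaussian prior and Gaussian likelihood:
    a precision-weighted average of prediction and measurement."""
    w_prior = 1.0 / prior_sigma ** 2
    w_meas = 1.0 / meas_sigma ** 2
    return (w_prior * prior_mean + w_meas * meas) / (w_prior + w_meas)

# Example: predict where the feature should be at time t, then fuse the
# prediction with the matched position found in the new frame.
t = 0.4                                    # seconds into the sequence
pred = gait_prediction(t, mean_pos=120.0, amplitude=6.0, freq_hz=2.0, phase=0.0)
measured = 127.5                           # position returned by the feature matcher
fused = map_estimate(pred, prior_sigma=2.0, meas=measured, meas_sigma=4.0)
print(f"gait prediction: {pred:.2f} px  MAP estimate: {fused:.2f} px")
```

In a sketch of this kind, a tighter prior (smaller `prior_sigma`) pulls the estimate toward the gait prediction, which suggests how a gait model can permit longer gaps between frames: the search for each feature can be confined to a small window around the predicted position.
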
Original language | English |
---|---|
Title of host publication | IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004. Proceedings |
Pages | III601-III604 |
Volume | 3 |
DOIs | |
Publication status | Published - 2004 |
Event | 29th IEEE International Conference on Acoustics, Speech, and Signal Processing 2004 - Montreal, Quebec, Canada. Duration: 17 May 2004 → 21 May 2004 |
Conference

Conference | 29th IEEE International Conference on Acoustics, Speech, and Signal Processing 2004 |
---|---|
Country/Territory | Canada |
City | Montreal, Quebec |
Period | 17/05/04 → 21/05/04 |