Neighborhood Discriminative Manifold Projection for face recognition in video

John See*, Mohammad Faizal Ahmad Fauzi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

This paper presents a novel supervised manifold learning method called Neighborhood Discriminative Manifold Projection (NDMP) for face recognition in video. By constructing a discriminative eigenspace projection of the high-dimensional face manifold, NDMP learns an optimal low-dimensional projection by solving a constrained least-squares objective function subject to local and global constraints. Local geometry is preserved through intra-class and inter-class neighborhood information, while the global manifold structure is retained by imposing rotational invariance. The proposed method is comprehensively evaluated on a large video data set. Experimental results and comparisons with classical and state-of-the-art methods demonstrate the effectiveness of our method.
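For readers who want a concrete sense of the construction the abstract describes, the sketch below implements a supervised neighborhood-preserving projection of the same general family: intra-class and inter-class k-nearest-neighbor graphs define a least-squares style criterion, and an orthonormal (rotationally invariant) basis is recovered by eigendecomposition. This is a minimal illustration under assumptions, not the paper's exact NDMP formulation; the function name, binary graph weights, neighborhood sizes k_intra/k_inter, and the trade-off weight alpha are all hypothetical choices.

```python
# Minimal sketch of a supervised neighborhood-discriminative projection.
# NOTE: an illustrative approximation of the family of methods the
# abstract describes, not the authors' exact NDMP objective.
import numpy as np

def ndmp_like_projection(X, y, d=10, k_intra=5, k_inter=5, alpha=0.5):
    """X: (n_samples, n_features) vectorized face frames; y: class labels.
    Returns an (n_features, d) orthonormal projection matrix."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = X.shape[0]
    # Pairwise squared Euclidean distances (fine for modest n).
    D2 = np.square(X[:, None, :] - X[None, :, :]).sum(axis=-1)
    W_intra = np.zeros((n, n))  # same-class neighbors: pull together
    W_inter = np.zeros((n, n))  # different-class neighbors: push apart
    for i in range(n):
        same = np.flatnonzero(y == y[i])
        same = same[same != i]
        diff = np.flatnonzero(y != y[i])
        if same.size:
            nn = same[np.argsort(D2[i, same])[:k_intra]]
            W_intra[i, nn] = W_intra[nn, i] = 1.0
        if diff.size:
            nn = diff[np.argsort(D2[i, diff])[:k_inter]]
            W_inter[i, nn] = W_inter[nn, i] = 1.0

    def lap(W):  # graph Laplacian: degree matrix minus adjacency
        return np.diag(W.sum(axis=1)) - W

    # Least-squares flavored criterion: small intra-class spread,
    # large inter-class spread, traded off by alpha.
    M = X.T @ (lap(W_intra) - alpha * lap(W_inter)) @ X
    M = 0.5 * (M + M.T)  # symmetrize for numerical stability
    # Orthonormal eigenvectors (V.T @ V = I) play the role of the
    # global rotational-invariance constraint mentioned in the abstract.
    evals, evecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return evecs[:, :d]  # directions minimizing the criterion
```

In the recognition setting the abstract describes, both gallery and probe video frames would be projected through the learned basis (X @ V) and matched with a nearest-neighbor rule in the low-dimensional space.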

Original language: English
Title of host publication: 2011 International Conference on Pattern Analysis and Intelligent Robotics
Publisher: IEEE
Pages: 13-18
Number of pages: 6
ISBN (Electronic): 9781612844060
DOIs
Publication status: Published - 4 Aug 2011
Event: 2011 International Conference on Pattern Analysis and Intelligent Robotics - Putrajaya, Malaysia
Duration: 28 Jun 2011 - 29 Jun 2011

Conference

Conference: 2011 International Conference on Pattern Analysis and Intelligent Robotics
Abbreviated title: ICPAIR 2011
Country/Territory: Malaysia
City: Putrajaya
Period: 28/06/11 - 29/06/11

Keywords

  • Manifold learning
  • pattern recognition
  • subspace projection methods
  • video-based face recognition

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
