MMU GASPFA: A COTS multimodal biometric database

Chiung Ching Ho*, Hu Ng, Wooi Haw Tan, Kok Why Ng, Hau Lee Tong, Timothy Tzen Vun Yap, Pei Fen Chong, C. Eswaran, Junaidi Abdullah

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

This paper describes the baseline corpus of a new multimodal biometric database, the MMU GASPFA (Gait-Speech-Face) database. The corpus in GASPFA was acquired using commercial off-the-shelf (COTS) equipment, including digital video cameras, a digital voice recorder, a digital camera, a Kinect camera and accelerometer-equipped smartphones. The corpus consists of frontal face images from the digital camera, speech utterances recorded with the digital voice recorder, gait videos with their associated data recorded simultaneously using both the digital video cameras and the Kinect camera, and accelerometer readings from the smartphones. A total of 82 participants had their biometric data recorded. MMU GASPFA supports both multimodal biometric authentication and gait action recognition. This paper describes the acquisition setup and protocols used in MMU GASPFA, as well as the content of the corpus. Baseline results from a subset of the participants are presented for validation purposes.

Original language: English
Pages (from-to): 2043-2050
Number of pages: 8
Journal: Pattern Recognition Letters
Volume: 34
Issue number: 15
DOIs
Publication status: Published - Nov 2013

Keywords

  • COTS
  • Face
  • Gait action recognition
  • GASPFA
  • Multimodal biometric
  • Speech

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
