Action recognition in low quality videos by jointly using shape, motion and texture features

Saimunur Rahman*, John See, Chiung Ching Ho

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Citations (Scopus)

Abstract

Shape, motion and texture features have recently gained much popularity for human action recognition. While many of these descriptors have been shown to work well against challenging variations such as appearance, pose and illumination, the problem of low video quality is relatively unexplored. In this paper, we propose jointly employing these three features within a standard bag-of-features framework to recognize actions in low quality videos. The performance of these features was extensively evaluated and analyzed under three spatial downsampling and three temporal downsampling modes. Experiments conducted on the KTH and Weizmann datasets with several combinations of features and settings showed the importance of all three features (HOG, HOF, LBP-TOP), and how low quality videos can benefit from the robustness of textural features.
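The joint-feature idea in the abstract can be illustrated with a minimal sketch: build one bag-of-features histogram per descriptor channel (HOG for shape, HOF for motion, LBP-TOP for texture) and concatenate them into a single video representation. This is an assumption-laden illustration, not the paper's implementation — the random arrays below merely stand in for real extracted descriptors, the codebook size `k` and descriptor dimensions are placeholders, and the k-means is a toy version for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(video, sf=1, tf=1):
    """Crude spatial/temporal downsampling by skipping pixels and frames.
    video: (T, H, W) array; sf = spatial factor, tf = temporal factor.
    (Illustrative stand-in for the paper's downsampling modes.)"""
    return video[::tf, ::sf, ::sf]

def kmeans(X, k, iters=10):
    """Tiny k-means used to build a visual codebook (illustrative, not optimized)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def bof_histogram(descriptors, codebook):
    """Quantize local descriptors against the codebook; return an L1-normalized histogram."""
    labels = np.argmin(((descriptors[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

# Placeholder local descriptors standing in for HOG (shape), HOF (motion), LBP-TOP (texture);
# the per-channel dimensions here are arbitrary choices, not the paper's.
channels = {name: rng.normal(size=(200, dim))
            for name, dim in [("HOG", 72), ("HOF", 90), ("LBP-TOP", 48)]}

k = 16  # codebook size per channel (placeholder)
video_repr = np.concatenate([
    bof_histogram(desc, kmeans(desc, k)) for desc in channels.values()
])
print(video_repr.shape)  # one joint 3*k-dimensional histogram per video
```

In this scheme, each channel contributes an independent histogram, so a channel degraded by low resolution (e.g. shape) can be compensated by a more robust one (e.g. texture) in the concatenated representation.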

Original language: English
Title of host publication: IEEE 2015 International Conference on Signal and Image Processing Applications (ICSIPA)
Publisher: IEEE
Pages: 83-88
Number of pages: 6
ISBN (Electronic): 9781799989966
Publication status: Published - 25 Feb 2016
Event: 4th IEEE International Conference on Signal and Image Processing Applications 2015 - Kuala Lumpur, Malaysia
Duration: 19 Oct 2015 - 21 Oct 2015

Conference

Conference: 4th IEEE International Conference on Signal and Image Processing Applications 2015
Abbreviated title: ICSIPA 2015
Country/Territory: Malaysia
City: Kuala Lumpur
Period: 19/10/15 - 21/10/15

ASJC Scopus subject areas

  • Computer Science Applications
  • Signal Processing
