Activity Recognition for Ambient Assisted Living with Videos, Inertial Units and Ambient Sensors

Caetano Mazzoni Ranieri, Scott MacLeod, Mauro Dragone, Patricia Amancio Vargas, Roseli Aparecida Francelin Romero

Research output: Contribution to journal › Article › peer-review

43 Citations (Scopus)
191 Downloads (Pure)


Worldwide demographic projections point to a progressively older population. This fact has fostered research on Ambient Assisted Living, which includes developments on smart homes and social robots. To endow such environments with truly autonomous behaviours, algorithms must extract semantically meaningful information from whichever sensor data is available. Human activity recognition is one of the most active fields of research within this context. Proposed approaches vary according to the input modality and the environments considered. Unlike previous work, this paper addresses the problem of recognising heterogeneous activities of daily living centred on home environments, considering simultaneously data from videos, wearable IMUs and ambient sensors. Two contributions are presented. The first is the creation of the Heriot-Watt University/University of Sao Paulo (HWU-USP) activities dataset, recorded at the Robotic Assisted Living Testbed at Heriot-Watt University. This dataset differs from other multimodal datasets in that it consists of daily living activities with either periodical patterns or long-term dependencies, captured in a rich and heterogeneous sensing environment. In particular, it combines data from a humanoid robot's RGBD (RGB + depth) camera with inertial sensors from wearable devices and ambient sensors from a smart home. The second contribution is a Deep Learning (DL) framework that provides multimodal activity recognition based on videos, inertial sensors and ambient sensors from the smart home, used on their own or fused with each other. The DL classification framework was validated on our dataset and on the University of Texas at Dallas Multimodal Human Activities Dataset (UTD-MHAD), a widely used benchmark for activity recognition based on videos and inertial sensors, providing a comparative analysis between the results on the two datasets. Results demonstrate that the introduction of data from ambient sensors substantially improved the accuracy.
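As a rough illustration of the multimodal setting the abstract describes, the sketch below shows one common way to combine video, inertial and ambient-sensor streams in a single classifier: each modality is encoded separately and the resulting features are concatenated before a shared classification layer (late fusion). This is a minimal PyTorch example under assumed names and dimensions; `ModalityEncoder`, `LateFusionClassifier` and all sizes are hypothetical and do not reproduce the architecture proposed in the paper.

```python
# Minimal late-fusion sketch for multimodal activity recognition.
# All module names, feature dimensions and the fusion strategy are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Encodes one time-synchronised sensor stream into a fixed vector."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)

    def forward(self, x):          # x: (batch, time, in_dim)
        _, h = self.rnn(x)         # h: (1, batch, hidden)
        return h.squeeze(0)        # (batch, hidden)

class LateFusionClassifier(nn.Module):
    """Concatenates per-modality features, then classifies the activity."""
    def __init__(self, dims, n_classes: int, hidden: int = 64):
        super().__init__()
        self.encoders = nn.ModuleList(ModalityEncoder(d, hidden) for d in dims)
        self.head = nn.Linear(hidden * len(dims), n_classes)

    def forward(self, streams):    # list of (batch, time, dim) tensors
        feats = [enc(s) for enc, s in zip(self.encoders, streams)]
        return self.head(torch.cat(feats, dim=-1))

# Hypothetical inputs: pre-extracted per-frame visual features, a 6-axis
# IMU (accelerometer + gyroscope), and ambient-sensor state vectors.
model = LateFusionClassifier(dims=[512, 6, 10], n_classes=9)
video = torch.randn(4, 30, 512)
imu = torch.randn(4, 30, 6)
ambient = torch.randn(4, 30, 10)
logits = model([video, imu, ambient])
print(logits.shape)                # torch.Size([4, 9])
```

Late fusion of this kind makes it straightforward to train and evaluate each modality on its own (by using a single encoder and a matching head) or in any combination, which mirrors the "on their own or fused with each other" comparison reported in the abstract.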

Original language: English
Article number: 768
Issue number: 3
Early online date: 24 Jan 2021
Publication status: Published - 1 Feb 2021


Keywords

  • Deep learning
  • Human activity recognition
  • Human–robot interaction
  • Inertial sensors
  • Multimodal datasets
  • Video classification

ASJC Scopus subject areas

  • Analytical Chemistry
  • Biochemistry
  • Atomic and Molecular Physics, and Optics
  • Instrumentation
  • Electrical and Electronic Engineering


