Multimodal deep learning for group activity recognition in smart office environments

George Albert Florea, Radu-Casian Mihailescu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

Deep learning (DL) models have emerged in recent years as the state-of-the-art technique across numerous machine learning application domains. In particular, image processing tasks have seen significant performance improvements owing to the increased availability of large datasets and the rapid growth of computing power. In this paper we investigate the problem of group activity recognition in office environments using a multimodal deep learning approach that fuses audio and visual data from video. Group activity recognition is a complex classification task, since it extends beyond identifying the activities of individuals to the combinations of those activities and the interactions between them. The proposed fusion network was trained on the audio-visual stream of the AMI Corpus dataset. The procedure consists of two steps: first, we extract a joint audio-visual feature representation for activity recognition; second, we model the temporal dependencies in the video to complete the classification task. We provide a comprehensive set of experimental results showing that our proposed multimodal deep network architecture outperforms previous approaches designed for unimodal analysis on the AMI dataset.
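To make the two-step procedure above concrete, the following is a minimal PyTorch sketch of one plausible late-fusion design: per-modality encoders produce a joint audio-visual embedding (step one), and a recurrent layer models temporal dependencies before classification (step two). All module names, layer sizes, the concatenation-based fusion, and the LSTM choice are illustrative assumptions, not the authors' exact network.

```python
# Hedged sketch of a two-step audio-visual fusion classifier.
# Dimensions, fusion-by-concatenation, and the LSTM are assumptions
# for illustration, not the architecture from the paper.
import torch
import torch.nn as nn


class AudioVisualFusionNet(nn.Module):
    def __init__(self, num_classes: int = 4,
                 visual_dim: int = 512, audio_dim: int = 128,
                 hidden_dim: int = 256):
        super().__init__()
        # Step 1: per-frame encoders yielding a joint audio-visual embedding.
        self.visual_encoder = nn.Sequential(
            nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        self.audio_encoder = nn.Sequential(
            nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        # Step 2: an LSTM captures temporal dependencies across the clip.
        self.temporal = nn.LSTM(input_size=2 * hidden_dim,
                                hidden_size=hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, visual_feats, audio_feats):
        # visual_feats: (batch, time, visual_dim) pre-extracted frame features
        # audio_feats:  (batch, time, audio_dim) time-aligned audio features
        fused = torch.cat([self.visual_encoder(visual_feats),
                           self.audio_encoder(audio_feats)], dim=-1)
        _, (h_n, _) = self.temporal(fused)
        # Classify the group activity from the final hidden state.
        return self.classifier(h_n[-1])


# Usage on dummy tensors: a batch of 2 clips, 16 time steps each.
model = AudioVisualFusionNet()
logits = model(torch.randn(2, 16, 512), torch.randn(2, 16, 128))
print(logits.shape)  # torch.Size([2, 4])
```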

Original language: English
Article number: 133
Journal: Future Internet
Volume: 12
Issue number: 8
DOIs
Publication status: Published - 9 Aug 2020

Keywords

  • Activity recognition
  • Deep learning
  • Multimodal learning

ASJC Scopus subject areas

  • Computer Networks and Communications
