LiteEmo: Lightweight Deep Neural Networks for Image Emotion Recognition

Yan-Han Chew*, Lai-Kuan Wong, John See, Huai-Qian Khor, Balasubramanian Abivishaq

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)


Psychology studies have shown that an image can evoke various emotions, depending on both the visual features and the semantic content of the image. The ability to identify image emotion can be very useful for many applications, including image retrieval and aesthetics prediction. Notably, most existing deep learning-based emotion recognition models do not capitalize on additional semantic or contextual information and are computationally expensive. To overcome these limitations, we propose a lightweight multi-stream deep network that concatenates the outputs of several MobileNet networks for image emotion analysis. The streams of the multi-stream network represent the core emotion recognition, object recognition, and image category recognition models, respectively. Experimental results demonstrate that the additional contextual information yields performance comparable to state-of-the-art emotion models with fewer parameters, thus improving practicality.
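The multi-stream idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: small convolutional stacks stand in for the MobileNet backbones of each stream, and the emotion-class count of 8 is an assumption (common in image emotion datasets), not stated in the abstract.

```python
import torch
import torch.nn as nn

class LiteEmoSketch(nn.Module):
    """Hypothetical sketch of a multi-stream network in the spirit of
    LiteEmo: three lightweight backbones (placeholders for the paper's
    MobileNet streams) whose pooled features are concatenated before a
    shared classifier."""

    def __init__(self, num_emotions=8, feat_dim=32):
        super().__init__()

        # Placeholder backbone; the paper uses a MobileNet per stream.
        def tiny_backbone():
            return nn.Sequential(
                nn.Conv2d(3, feat_dim, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),  # global average pooling
                nn.Flatten(),
            )

        self.emotion_stream = tiny_backbone()  # core emotion features
        self.object_stream = tiny_backbone()   # object-recognition features
        self.scene_stream = tiny_backbone()    # image-category features
        self.classifier = nn.Linear(3 * feat_dim, num_emotions)

    def forward(self, x):
        # Each stream sees the same image; their features are concatenated.
        feats = torch.cat(
            [self.emotion_stream(x),
             self.object_stream(x),
             self.scene_stream(x)],
            dim=1,
        )
        return self.classifier(feats)

model = LiteEmoSketch()
logits = model(torch.randn(2, 3, 224, 224))
print(tuple(logits.shape))  # (2, 8): one score per emotion class
```

In practice the object and scene streams would be pretrained on object- and scene-recognition datasets, so that concatenation injects the contextual information the abstract credits for the performance gain.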

Original language: English
Title of host publication: 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP)
ISBN (Electronic): 9781728118178
Publication status: Published - 18 Nov 2019
Event: 21st IEEE International Workshop on Multimedia Signal Processing 2019 - Kuala Lumpur, Malaysia
Duration: 27 Sept 2019 - 29 Sept 2019


Conference: 21st IEEE International Workshop on Multimedia Signal Processing 2019
Abbreviated title: MMSP 2019
City: Kuala Lumpur


Keywords

  • Image emotion
  • lightweight
  • multi-stream network

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology


