Aesthetic evaluation of facial portraits using compositional augmentation for deep CNNs

Magzhan Kairanbay*, John See, Lai-Kuan Wong

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)


Digital facial portrait photographs make up a massive portion of photos on the web. A number of methods for evaluating the aesthetics of photographs have been proposed in recent years. However, there has been little work in the research community addressing the aesthetics of a targeted image domain, such as portraits. This paper introduces a new composition-based augmentation scheme for the aesthetic evaluation of portraits using well-known deep convolutional neural network (DCNN) models. We present a set of feature augmentation methods that take compositional photographic rules into account, ensuring that the aesthetics of portraits are not hindered by the standard transformations used for DCNN models. On a portrait subset of the large-scale AVA dataset, the proposed approach demonstrates a reasonable improvement in classification performance over the baseline and vanilla deep learning approaches.
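To illustrate the idea of composition-aware augmentation described in the abstract, the sketch below shows one plausible realization: instead of a random crop (a standard DCNN augmentation that can place the subject arbitrarily), the crop window is chosen so that the face centre lands on the nearest rule-of-thirds intersection. This is a hypothetical sketch of the general technique, not the authors' exact method; the function name, its signature, and the use of a pre-supplied face bounding box are all assumptions.

```python
import numpy as np

def thirds_preserving_crop(img, face_box, crop_hw):
    """Crop `img` so the face centre lies on the rule-of-thirds
    intersection of the crop window that it is closest to.

    img      : H x W x C image array
    face_box : (x0, y0, x1, y1) face bounding box in pixels
               (assumed given, e.g. from a face detector)
    crop_hw  : (crop_h, crop_w), each no larger than the image
    """
    H, W = img.shape[:2]
    ch, cw = crop_hw
    # Face centre in image coordinates.
    fx = (face_box[0] + face_box[2]) / 2.0
    fy = (face_box[1] + face_box[3]) / 2.0

    # The four rule-of-thirds intersections, as fractions of the crop.
    thirds = [(1 / 3, 1 / 3), (2 / 3, 1 / 3), (1 / 3, 2 / 3), (2 / 3, 2 / 3)]

    best = None
    for tx, ty in thirds:
        # Top-left corner that would put the face at this intersection,
        # clamped so the crop stays inside the image.
        x0 = int(np.clip(fx - tx * cw, 0, W - cw))
        y0 = int(np.clip(fy - ty * ch, 0, H - ch))
        # Squared distance between the face centre and the (possibly
        # shifted) intersection point after clamping.
        d = (fx - (x0 + tx * cw)) ** 2 + (fy - (y0 + ty * ch)) ** 2
        if best is None or d < best[0]:
            best = (d, x0, y0)

    _, x0, y0 = best
    return img[y0:y0 + ch, x0:x0 + cw]
```

In a training pipeline, this crop would replace the usual random crop for portrait images, so the augmented samples still respect the rule-of-thirds placement that the paper argues matters for portrait aesthetics.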

Original language: English
Title of host publication: Computer Vision – ACCV 2016 Workshops. ACCV 2016
Editors: Chu-Song Chen, Jiwen Lu, Kai-Kuang Ma
Number of pages: 13
ISBN (Electronic): 9783319544274
ISBN (Print): 9783319544267
Publication status: Published - 16 Mar 2017
Event: 13th Asian Conference on Computer Vision 2016 - Taipei, Taiwan, Province of China
Duration: 20 Nov 2016 - 24 Nov 2016

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 13th Asian Conference on Computer Vision 2016
Abbreviated title: ACCV 2016
Country/Territory: Taiwan, Province of China
City: Taipei

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science


