TY - JOUR
T1 - Semantic facial scores and compact deep transferred descriptors for scalable face image retrieval
AU - Banaeeyan, Rasoul
AU - Lye, Haris
AU - Fauzi, Mohammad Faizal Ahmad
AU - Karim, Hezerul Abdul
AU - See, John
N1 - Funding Information:
This research was fully funded by the Ministry of Science, Technology, and Innovation (MOSTI), Malaysia [project number 01-02-01-SF0232]. We would like to thank Ziwei Liu from the Multimedia Lab, Department of Information Engineering, CUHK, for providing identity annotations on the CelebA face dataset. We would also like to thank the two anonymous reviewers for their comments and suggestions, which improved and clarified the manuscript.
Publisher Copyright:
© 2018 Elsevier B.V.
PY - 2018/9/25
Y1 - 2018/9/25
N2 - Face retrieval systems aim to locate the indices of faces identical to a given query face. The performance of these systems relies heavily on the careful analysis of different facial attributes (gender, race, etc.), since these attributes can tolerate some degree of geometric distortion, expression, and occlusion. However, employing facial scores alone fails to provide scalability. Moreover, owing to the discriminative power of CNN (convolutional neural network) features, recent works have employed the complete set of high-dimensional deep transferred CNN features to improve performance; yet such systems require high computational power and are very resource-demanding. This study exploits the distinctive capability of semantic facial attributes while refining their retrieval results through a proposed subset feature selection that reduces the dimensionality of the deep transferred descriptors. The constructed compact deep transferred descriptors (CDTD) not only have a greatly reduced dimensionality but also stronger discriminative power. Lastly, we propose a new performance metric tailored to face retrieval, ImAP (Individual Mean Average Precision), which is used to evaluate retrieval results. Multiple experiments on two scalable face datasets demonstrate the superior performance of the proposed CDTD model, which outperforms state-of-the-art face retrieval results.
KW - Compact deep transferred descriptor
KW - Deep transferred CNN feature
KW - Facial attribute classification
KW - Scalable face retrieval
UR - http://www.scopus.com/inward/record.url?scp=85047080362&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2018.04.056
DO - 10.1016/j.neucom.2018.04.056
M3 - Article
AN - SCOPUS:85047080362
SN - 0925-2312
VL - 308
SP - 111
EP - 128
JO - Neurocomputing
JF - Neurocomputing
ER -