TY - JOUR
T1 - Continual Facial Features Transfer for Facial Expression Recognition
AU - Maharjan, Rahul Singh
AU - Bonicelli, Lorenzo
AU - Romeo, Marta
AU - Calderara, Simone
AU - Cangelosi, Angelo
AU - Cucchiara, Rita
PY - 2025/4/15
Y1 - 2025/4/15
N2 - Facial Expression Recognition (FER) models based on deep learning mostly rely on a supervised train-once-test-all approach. These approaches assume that a model trained on an in-the-wild facial expression dataset with one type of domain distribution will perform well on a test dataset with a domain distribution shift. However, facial images in the real world can come from domain distributions different from the one on which the model was trained, and re-training models on only the new domain distributions severely degrades performance on previous domains. Re-training on all previous and new data can improve overall performance but is computationally expensive. In this study, we oppose the train-once-test-all approach and propose a buffer-based continual learning approach to enhance performance across multiple in-the-wild datasets. We propose a model that continually leverages attention to important facial features from the pre-trained model to improve performance across multiple datasets. We validated our model on split in-the-wild datasets, where the data are provided to the model in an incremental setting rather than all at once. Furthermore, to evaluate model performance, we used three in-the-wild datasets representing different domains in a continual setting (Domain-FER). Extensive experiments on these datasets reveal that the proposed model achieves better results than other Continual FER models.
AB - Facial Expression Recognition (FER) models based on deep learning mostly rely on a supervised train-once-test-all approach. These approaches assume that a model trained on an in-the-wild facial expression dataset with one type of domain distribution will perform well on a test dataset with a domain distribution shift. However, facial images in the real world can come from domain distributions different from the one on which the model was trained, and re-training models on only the new domain distributions severely degrades performance on previous domains. Re-training on all previous and new data can improve overall performance but is computationally expensive. In this study, we oppose the train-once-test-all approach and propose a buffer-based continual learning approach to enhance performance across multiple in-the-wild datasets. We propose a model that continually leverages attention to important facial features from the pre-trained model to improve performance across multiple datasets. We validated our model on split in-the-wild datasets, where the data are provided to the model in an incremental setting rather than all at once. Furthermore, to evaluate model performance, we used three in-the-wild datasets representing different domains in a continual setting (Domain-FER). Extensive experiments on these datasets reveal that the proposed model achieves better results than other Continual FER models.
KW - Facial Expression Recognition
KW - Continual Learning
KW - Deep Learning
KW - Affective Computing
KW - Continual Domain Adaptation
UR - http://www.scopus.com/inward/record.url?scp=105003037212&partnerID=8YFLogxK
U2 - 10.1109/taffc.2025.3561139
DO - 10.1109/taffc.2025.3561139
M3 - Article
SN - 1949-3045
SP - 1
EP - 14
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
ER -