Abstract
Facial Expression Recognition (FER) models based on deep learning mostly rely on a supervised train-once-test-all approach, which assumes that a model trained on an in-the-wild facial expression dataset with one domain distribution will perform well on a test dataset with a domain distribution shift. However, facial images in the real world can be drawn from domain distributions different from the one on which the model was trained. Re-training a model on only the new domain distribution severely degrades its performance on previous domains, while re-training on all previous and new data can improve overall performance but is computationally expensive. In this study, we oppose the train-once-test-all approach and propose a buffer-based continual learning approach to enhance performance across multiple in-the-wild datasets. The proposed model continually leverages attention to important facial features from the pre-trained model to improve performance on multiple datasets. We validated our model using split in-the-wild datasets, where the data are presented to the model incrementally rather than all at once. Furthermore, to evaluate model performance, we continually used three in-the-wild datasets representing different domains (Domain-FER). Extensive experiments on these datasets reveal that the proposed model achieves better results than other Continual FER models.
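The abstract's buffer-based continual learning idea can be illustrated with a minimal sketch. The class below is a generic reservoir-sampled replay buffer, a common building block for such approaches; the names (`ReplayBuffer`, `add`, `sample`) and the reservoir-sampling strategy are illustrative assumptions, not details taken from the paper itself.

```python
import random

class ReplayBuffer:
    """Illustrative fixed-capacity buffer of past-domain samples.

    Uses reservoir sampling so every sample seen so far has an equal
    chance of being retained, regardless of arrival order.
    (Hypothetical sketch -- not the paper's actual buffer design.)
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total number of samples offered to the buffer

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Replace a stored sample with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = sample

    def sample(self, k):
        """Draw up to k stored samples to mix into the current batch."""
        return random.sample(self.data, min(k, len(self.data)))
```

During training on a new domain, each optimization step would combine a fresh batch with `buffer.sample(k)` from earlier domains, so that updates on the new distribution do not overwrite what was learned on the old ones.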
| Original language | English |
|---|---|
| Pages (from-to) | 2352-2364 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Affective Computing |
| Volume | 16 |
| Issue number | 3 |
| Early online date | 15 Apr 2025 |
| DOIs | |
| Publication status | Published - Jul 2025 |
Keywords
- Facial expression recognition
- affective computing
- continual domain adaptation
- continual learning
- deep learning
ASJC Scopus subject areas
- Software
- Human-Computer Interaction