Abstract
Recent efforts in interpreting Convolutional Neural Networks (CNNs) focus on translating the activations of CNN filters into stratified Answer Set Programming (ASP) rule-sets. CNN filters are known to capture high-level image concepts, so each predicate in the rule-set is mapped to the concept that its corresponding filter represents. The rule-set thus exemplifies the decision-making process of the CNN with respect to the concepts it learns for an image classification task. While these rule-sets help expose the biases of a CNN, correcting those biases remains a challenge. We introduce NeSyBiCor, a neurosymbolic framework for bias correction in a trained CNN. Given symbolic concepts, expressed as ASP constraints, that the CNN is biased towards, we convert the concepts to their corresponding vector representations. The CNN is then retrained using our novel semantic similarity loss, which pushes its filters away from learning the undesired concepts and towards the desired ones. The final ASP rule-set obtained after retraining satisfies the constraints to a high degree, demonstrating the revision in the knowledge of the CNN. We show that NeSyBiCor successfully corrects the biases of CNNs trained on subsets of classes from the Places dataset while sacrificing minimal accuracy and improving interpretability.
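The abstract does not spell out the loss, but a minimal sketch of how such a semantic similarity term could be realised is given below, assuming filter activations and concept representations are vectors in a shared embedding space. All names here (`semantic_similarity_loss`, `filter_acts`, etc.) are illustrative assumptions, not the paper's actual API or exact formulation.

```python
import torch
import torch.nn.functional as F

def semantic_similarity_loss(filter_acts: torch.Tensor,
                             undesired_emb: torch.Tensor,
                             desired_emb: torch.Tensor) -> torch.Tensor:
    """Sketch of a semantic similarity loss term.

    filter_acts:   (num_filters, d) pooled activation vectors of CNN filters
    undesired_emb: (num_undesired, d) vectors for concepts to unlearn
    desired_emb:   (num_desired, d) vectors for concepts to favour
    """
    # Cosine similarity of every filter against every concept vector.
    sim_undesired = F.cosine_similarity(
        filter_acts.unsqueeze(1), undesired_emb.unsqueeze(0), dim=-1)
    sim_desired = F.cosine_similarity(
        filter_acts.unsqueeze(1), desired_emb.unsqueeze(0), dim=-1)

    # Penalise alignment with undesired concepts and reward alignment with
    # desired ones; minimising this pushes filters in the stated directions.
    return sim_undesired.mean() - sim_desired.mean()
```

During retraining, a term like this would typically be added to the standard classification loss with a weighting coefficient (e.g. `loss = ce_loss + lam * semantic_similarity_loss(...)`), trading a small amount of accuracy for the revised concept alignment, as the abstract describes.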
| Original language | English |
| --- | --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 3799 |
| Publication status | Published - 24 Oct 2024 |
| Event | 40th International Conference on Logic Programming 2024, Dallas, United States. Duration: 12 Oct 2024 → 13 Oct 2024 |
Keywords
- Answer Set Programming
- CNN
- Neurosymbolic AI
- Representation Learning
- Semantic Loss
- XAI
ASJC Scopus subject areas
- General Computer Science