Abstract
Few-shot learning (FSL) algorithms are commonly trained through meta-learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic the tasks seen during evaluation. However, standard training procedures overlook real-world dynamics, where classes commonly occur at different frequencies. While it is generally understood that class imbalance harms the performance of supervised methods, limited research examines the impact of imbalance on the FSL evaluation task. Our analysis compares ten state-of-the-art ML and FSL methods on different imbalance distributions and rebalancing techniques. Our results reveal that: 1) some FSL methods display a natural disposition against imbalance, while most other approaches suffer a performance drop of up to 17% compared to the balanced task when no appropriate mitigation is applied; 2) many ML algorithms will not automatically learn to balance from exposure to imbalanced training tasks; 3) classical rebalancing strategies, such as random oversampling, can still be very effective, leading to state-of-the-art performance, and should not be overlooked.
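To make the third finding concrete, below is a minimal sketch of random oversampling applied to an imbalanced few-shot support set. It is not the paper's implementation; the `(example, label)` pair representation and the `random_oversample` helper are assumptions for illustration. The idea is simply to duplicate randomly chosen minority-class examples until every class matches the size of the largest class.

```python
import random
from collections import defaultdict

def random_oversample(support_set):
    """Rebalance an imbalanced few-shot support set (hypothetical helper).

    support_set: list of (example, label) pairs -- an assumed format,
    not the representation used in the paper.
    Returns a new list in which every class has as many examples as
    the largest class, achieved by duplicating random minority samples.
    """
    by_class = defaultdict(list)
    for example, label in support_set:
        by_class[label].append((example, label))

    # Size of the most frequent class in this task.
    max_count = max(len(items) for items in by_class.values())

    balanced = []
    for items in by_class.values():
        balanced.extend(items)
        # Duplicate random examples until this class reaches max_count
        # (random.choices with k=0 is a no-op for the majority class).
        balanced.extend(random.choices(items, k=max_count - len(items)))
    return balanced

# Example: a 3-way support set with shots 1, 2, and 4.
support = [("x1", 0),
           ("x2", 1), ("x3", 1),
           ("x4", 2), ("x5", 2), ("x6", 2), ("x7", 2)]
print(sorted(label for _, label in random_oversample(support)))
# -> [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
```

The same rebalanced support set can then be fed to any FSL method unchanged, which is what makes this classical strategy attractive as a drop-in mitigation.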
| Original language | English |
| --- | --- |
| Pages (from-to) | 1348-1358 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Artificial Intelligence |
| Volume | 4 |
| Issue number | 5 |
| Early online date | 24 Jul 2023 |
| DOIs | |
| Publication status | Published - Oct 2023 |
Keywords
- Class imbalance
- classification and regression
- few-shot learning (FSL)
- low-shot learning
- meta-learning (ML)
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Science Applications