Abstract
We investigate the geometric complexity of decision boundaries for robust training compared to standard training. By considering the local geometry of nearest neighbour sets, we study these boundaries in a model-agnostic way and theoretically derive a lower bound R∗ ∈ ℝ on the perturbation magnitude δ ∈ ℝ for which robust training provably requires a geometrically more complex decision boundary than accurate training. We show that state-of-the-art robust models learn more complex decision boundaries than their non-robust counterparts, confirming previous hypotheses. We then compute R∗ for common image benchmarks and find that it also empirically serves as an upper bound beyond which label noise is introduced. We demonstrate for deep neural network classifiers that perturbation magnitudes δ ≥ R∗ lead to reduced robustness and generalization performance. Therefore, R∗ bounds the maximum feasible perturbation magnitude for norm-bounded robust training and data augmentation. Finally, we show that R∗ < 0.5R for common benchmarks, where R is a distribution's minimum nearest neighbour distance; we thus improve on previous work on determining a distribution's maximum robust radius.
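The quantity R referenced in the abstract, a distribution's minimum nearest neighbour distance between differently labelled samples, can be estimated directly from data. The following is a minimal illustrative sketch, not the authors' code: it assumes an L2 norm on flattened pixel values in [0, 1], uses a small MNIST subsample as a stand-in for the paper's image benchmarks, and only computes R and the classical robust radius 0.5R; the paper's bound R∗ itself requires the derivation in the paper and is not implemented here.

```python
# Hypothetical sketch: estimate R, the minimum nearest neighbour distance between
# differently labelled samples, and the classical robust radius 0.5 * R that the
# paper's R* is shown to be strictly below on common benchmarks.
# Dataset (MNIST subsample), norm (L2 on flattened pixels), and subsample size
# are assumptions made for illustration only.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.neighbors import NearestNeighbors

# Load a small subset of MNIST (assumption; the paper evaluates common image benchmarks).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X[:5000].astype(np.float64) / 255.0
y = y[:5000]

# For every sample, find its nearest neighbour with a *different* label;
# R is the minimum such distance over the whole subsample.
R = np.inf
for label in np.unique(y):
    same = X[y == label]
    other = X[y != label]
    nn = NearestNeighbors(n_neighbors=1, metric="euclidean").fit(other)
    dists, _ = nn.kneighbors(same)
    R = min(R, float(dists.min()))

print(f"estimated R (min inter-class NN distance): {R:.4f}")
print(f"classical robust radius 0.5 * R          : {0.5 * R:.4f}")
# Per the abstract, R* < 0.5 * R on common benchmarks, i.e. the paper's bound is
# tighter than this classical estimate of the maximum robust radius.
```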
Original language | English |
---|---|
Title of host publication | Computer Vision – ACCV 2022 |
Publisher | Springer |
Pages | 627–645 |
Number of pages | 19 |
ISBN (Electronic) | 9783031263514 |
ISBN (Print) | 9783031263507 |
DOIs | |
Publication status | Published - 26 Feb 2023 |
Event | 16th Asian Conference on Computer Vision 2022 - Macau, Macao. Duration: 4 Dec 2022 → 8 Dec 2022. https://www.accv2022.org/ |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Volume | 13846 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 16th Asian Conference on Computer Vision 2022 |
---|---|
Abbreviated title | ACCV 2022 |
Country/Territory | Macao |
City | Macau |
Period | 4/12/22 → 8/12/22 |
Internet address | https://www.accv2022.org/ |