Comparing Complexities of Decision Boundaries for Robust Training: A Universal Approach

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



We investigate the geometric complexity of decision boundaries for robust training compared to standard training. By considering the local geometry of nearest neighbour sets, we study decision boundaries in a model-agnostic way and theoretically derive a lower bound R∗ ∈ ℝ on the perturbation magnitude δ ∈ ℝ for which robust training provably requires a geometrically more complex decision boundary than accurate training. We show that state-of-the-art robust models learn more complex decision boundaries than their non-robust counterparts, confirming previous hypotheses. Then, we compute R∗ for common image benchmarks and find that it also empirically serves as an upper bound, above which label noise is introduced. We demonstrate for deep neural network classifiers that perturbation magnitudes δ ≥ R∗ lead to reduced robustness and generalization performance. Therefore, R∗ bounds the maximum feasible perturbation magnitude for norm-bounded robust training and data augmentation. Finally, we show that R∗ < 0.5R for common benchmarks, where R is a distribution’s minimum nearest neighbour distance. Thus, we improve previous work on determining a distribution’s maximum robust radius.
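The quantity R in the abstract can be illustrated with a minimal sketch, assuming R is interpreted as the minimum L2 distance between any two differently labelled points of a dataset (the function name and the toy data below are illustrative, not from the paper). The intuition for the 0.5R comparison is that a perturbation of magnitude R/2 or more can push a point past the midpoint to its nearest neighbour of another class, introducing label noise:

```python
import numpy as np

def min_cross_class_nn_distance(X, y):
    """Minimum L2 distance between any two points with different labels.

    Illustrative reading of R from the abstract: perturbations of
    magnitude >= R/2 can carry a point past the midpoint towards a
    differently labelled neighbour, i.e. introduce label noise.
    """
    R = np.inf
    for i in range(len(X)):
        for j in range(len(X)):
            if y[i] != y[j]:
                R = min(R, np.linalg.norm(X[i] - X[j]))
    return R

# Toy 1-D example: two classes separated by a gap of 2.0.
X = np.array([[0.0], [1.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])
R = min_cross_class_nn_distance(X, y)
print(R)        # 2.0
print(0.5 * R)  # 1.0; the paper's bound R* lies strictly below this
```

For real image benchmarks this pairwise scan would be replaced by an approximate nearest-neighbour search, but the quantity being computed is the same.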
Original language: English
Title of host publication: Computer Vision – ACCV 2022
Number of pages: 19
ISBN (Electronic): 9783031263514
ISBN (Print): 9783031263507
Publication status: Published - 26 Feb 2023
Event: 16th Asian Conference on Computer Vision 2022 - Macau, Macao
Duration: 4 Dec 2022 – 8 Dec 2022

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 16th Asian Conference on Computer Vision 2022
Abbreviated title: ACCV 2022


