Relative Robustness of Quantized Neural Networks Against Adversarial Attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)
98 Downloads (Pure)


Neural networks are increasingly being moved to edge computing devices and smart sensors to reduce latency and save bandwidth. Neural network compression, such as quantization, is necessary to fit trained neural networks onto these resource-constrained devices. At the same time, their use in safety-critical applications raises the need to verify properties of neural networks. Adversarial perturbations have the potential to be used as an attack mechanism on neural networks, leading to "obviously wrong" misclassifications. SMT solvers have been proposed to formally prove robustness guarantees against such adversarial perturbations. We investigate how well these robustness guarantees are preserved when the precision of a neural network is quantized. We also evaluate how effectively adversarial attacks transfer to quantized neural networks. Our results show that quantized neural networks are generally robust relative to their full-precision counterparts (98.6%-99.7%), and that the transfer of adversarial attacks decreases to as low as 52.05% as the subtlety of the perturbation increases. These results show that quantization introduces resilience against the transfer of adversarial attacks while causing negligible loss of robustness.
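As an illustration of the kind of compression the abstract refers to, the sketch below applies uniform post-training int8 quantization to a weight matrix. This is a minimal, generic example, not the paper's code: the function names and the per-tensor symmetric scaling scheme are assumptions chosen for demonstration. The small relative error it prints hints at why robustness can be largely preserved after quantization.

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single per-tensor scale.

    Illustrative scheme (an assumption, not the paper's method):
    symmetric uniform quantization, scale chosen so the largest
    absolute weight maps to 127.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # stand-in weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Each weight moves by at most scale/2, so the relative perturbation
# that quantization itself introduces is small.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative quantization error: {rel_err:.4f}")
```

Each element of `w_hat` differs from `w` by at most half a quantization step, so the network computed with the int8 weights stays close to the full-precision one; the paper's question is whether formally verified robustness guarantees survive this perturbation.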
Original language: English
Title of host publication: 2020 International Joint Conference on Neural Networks (IJCNN)
ISBN (Electronic): 9781728169262
Publication status: Published - 28 Sept 2020

Publication series

Name: International Joint Conference on Neural Networks
ISSN (Electronic): 2161-4407


Keywords

  • adversarial attack
  • neural network
  • verification

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

