Abstract
We introduce a new approach to the robustness and generalisation of neural network models used in Network Intrusion Detection Systems (NIDS). NIDS models must be robust against both natural perturbations (accounting for typical network variation) and adversarial attacks (designed to conceal malicious traffic). The standard approach to robustness is a cycle of training models to recognise existing attacks, then generating new attack variations that defeat detection. A further problem with research NIDS models trained on limited datasets is their tendency to over-fit to the chosen dataset, which highlights the need for cross-dataset generalisation. We address both problems by incorporating recent formal verification tools for neural networks. These frameworks let us characterise the input space, and we use verification outputs to produce constrained counterexamples that yield new malicious and benign data. Adversarial training on this data then improves both generalisation and adversarial robustness. We demonstrate these ideas with novel specifications for network traffic, training simple, verifiable networks. We show that our models generalise well across datasets and attack types, and can outperform more complex, state-of-the-art models that cannot be verified in the same way.
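The counterexample-guided training cycle the abstract describes can be sketched in miniature. This is an illustrative toy only, not the paper's method: a logistic regression stands in for the paper's small, verifiable networks, and a gradient-sign perturbation search within an L-infinity ball stands in for a formal verifier returning constrained counterexamples. All names, the radius `eps`, and the toy "benign vs malicious" data are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=200, lr=0.5):
    """Fit a logistic-regression classifier by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def counterexamples(w, X, y, eps=0.3):
    """Perturb each input within an L-inf ball of radius eps in the
    loss-increasing direction; keep only those the model misclassifies
    (a cheap stand-in for counterexamples from a formal verifier)."""
    p = sigmoid(X @ w)
    grad_sign = np.sign(np.outer(p - y, w))  # per-sample ascent direction
    X_adv = X + eps * grad_sign
    wrong = (sigmoid(X_adv @ w) > 0.5) != y.astype(bool)
    return X_adv[wrong], y[wrong]

# Toy "benign vs malicious traffic" data: two separable Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.4, (100, 2)), rng.normal(1, 0.4, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = train(X, y)
for _ in range(5):  # alternate: find counterexamples, retrain on them
    X_cex, y_cex = counterexamples(w, X, y)
    if len(X_cex) == 0:
        break  # no counterexamples found within the ball
    X, y = np.vstack([X, X_cex]), np.concatenate([y, y_cex])
    w = train(X, y)

acc = np.mean((sigmoid(X @ w) > 0.5) == y.astype(bool))
print(f"accuracy on clean + counterexample data: {acc:.2f}")
```

In the paper's setting, the perturbation search is replaced by sound verification against specifications over network-traffic features, so a "no counterexamples" outcome is a proof of robustness within the specified region rather than a failure of the attack heuristic.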
Original language | English |
---|---|
Title of host publication | SAC '25: Proceedings of the 40th ACM/SIGAPP Symposium on Applied Computing |
Publisher | Association for Computing Machinery |
Pages | 1867-1876 |
Number of pages | 10 |
ISBN (Print) | 9798400706295 |
DOIs | |
Publication status | Published - 14 May 2025 |
Keywords
- IDS
- formal verification
- generalisation
- network security
- neural networks
- robustness
ASJC Scopus subject areas
- Software