Anomaly Detection for Insider Threats: An Objective Comparison of Machine Learning Models and Ensembles

Filip Bartoszewski, Mike Just, Michael Adam Lones, Oleksii Mandrychenko

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)


Abstract

Insider threat detection is challenging due to the wide variety of possible attacks and the limited availability of real threat data for testing. Most previous anomaly detection studies have relied on synthetic threat data, such as the CERT insider threat dataset. However, several previous studies have used approaches that arguably introduce bias, such as the selective use of metrics and the reuse of the same dataset with prior knowledge of the answer labels. In this paper, we create and test a range of models, following guidelines of good conduct, to produce what we believe is a more objective comparison of these models. Our results indicate that majority voting ensembles are a simple and cost-effective way of boosting the quality of results from individual machine learning models, both on the CERT data and on a version augmented with additional attacks. We also include a comparison of models with their hyperparameters optimized for different target metrics.
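The majority voting idea referenced above can be illustrated with a minimal sketch: each unsupervised detector casts a binary anomaly vote for a data point, and the ensemble flags the point when most detectors agree. The snippet below is only an assumption-laden illustration using scikit-learn detectors and a hypothetical feature matrix; it is not the paper's CERT pipeline or its chosen models.

```python
# Minimal sketch of a majority-voting ensemble over unsupervised anomaly
# detectors (illustrative only; synthetic data stands in for user-behaviour
# features, and these three detectors are assumptions, not the paper's models).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))             # "normal" behaviour features
X_test = np.vstack([
    rng.normal(size=(95, 8)),                   # mostly normal test points
    rng.normal(loc=5.0, size=(5, 8)),           # a few injected anomalies
])

# Individual detectors; predict() returns +1 (normal) or -1 (anomaly).
detectors = [
    IsolationForest(random_state=0).fit(X_train),
    OneClassSVM(nu=0.05).fit(X_train),
    LocalOutlierFactor(novelty=True).fit(X_train),
]

# Majority vote: flag a point as anomalous when most detectors flag it.
votes = np.stack([d.predict(X_test) == -1 for d in detectors])
is_anomaly = votes.sum(axis=0) > len(detectors) / 2

print(f"{is_anomaly.sum()} of {len(X_test)} test points flagged as anomalies")
```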
Original language: English
Title of host publication: IFIP International Information Security and Privacy Conference - IFIP Technical Committee 11 (IFIP SEC 2021)
Publisher: Springer
Publication status: Accepted/In press - 23 Mar 2021
