Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles

Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe, Xiaowei Huang

Research output: Contribution to journal › Conference article › peer-review

5 Citations (Scopus)

Abstract

The utilisation of Deep Learning (DL) is advancing into increasingly sophisticated applications. While DL shows great potential to provide transformational capabilities, it also raises new challenges regarding its reliability in critical functions. In this paper, we present a model-agnostic reliability assessment method for DL classifiers, based on evidence from robustness evaluation and the operational profile (OP) of a given application. We partition the input space into small cells and then "assemble" their robustness (to the ground truth) according to the OP, providing estimators for both the cells' robustness and the OP. Reliability estimates, in terms of the probability of misclassification per input (pmi), can then be derived together with confidence levels. A prototype tool is demonstrated with simplified case studies. Model assumptions and extensions to real-world applications are also discussed. While our model readily exposes the inherent difficulties of assessing DL dependability (e.g. the lack of data with ground truth and scalability issues), we provide preliminary, compromise solutions to advance this research direction.
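The "assembly" step described in the abstract admits a simple numerical sketch: the pmi is the OP-weighted sum of each cell's unrobustness. The following is a minimal illustration, not the authors' prototype tool; it assumes per-cell OP probabilities and robustness estimates have already been obtained by the estimators mentioned above, and all names and values are hypothetical.

```python
import numpy as np

def estimate_pmi(op_probs, cell_robustness):
    """Assemble a pmi estimate from per-cell evidence.

    op_probs        -- estimated OP probability of each cell
                       (non-negative, summing to 1)
    cell_robustness -- estimated probability that inputs in each cell
                       are classified to the ground-truth label
    """
    op_probs = np.asarray(op_probs, dtype=float)
    cell_robustness = np.asarray(cell_robustness, dtype=float)
    assert op_probs.shape == cell_robustness.shape
    assert np.isclose(op_probs.sum(), 1.0)
    # pmi = OP-weighted sum of per-cell unrobustness (misclassification prob.)
    return float(np.sum(op_probs * (1.0 - cell_robustness)))

# Toy example: three cells dominating the operational profile
op = [0.5, 0.3, 0.2]
rob = [0.999, 0.99, 0.9]
print(f"estimated pmi = {estimate_pmi(op, rob):.4f}")  # 0.0235
```

In the paper's setting these point estimates would additionally carry confidence levels; the sketch above shows only the assembly of the expectation.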
Original language: English
Article number: 16
Journal: CEUR Workshop Proceedings
Volume: 2916
Publication status: Published - 28 Jul 2021
Event: AISafety 2021
Duration: 19 Aug 2021 - 20 Aug 2021
https://www.aisafetyw.org/

Keywords

  • Deep learning
  • Reliability
  • Robustness analysis
  • Probabilistic analysis

ASJC Scopus subject areas

  • General Computer Science
