Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information

Iman Naja, Milan Markovic, Peter Edwards, Wei Pang, Caitlin Cottrill, Rebecca Williams

Research output: Contribution to journal › Article › peer-review

Abstract

To enhance the trustworthiness of AI systems, various requirements have been proposed for documenting how AI systems are built and used. A key facet of realizing trust in AI systems is making such systems accountable, a challenging task, not least because of the varying definitions of accountability and the differing perspectives on what information should be recorded and how it should be used (e.g., to inform audit). Such information originates across the various life cycle stages of an AI system and from a variety of sources (individuals, organizations, systems), raising numerous challenges around collection, management, and audit. In our previous work, we argued that semantic Knowledge Graphs (KGs) are ideally suited to address these challenges, and we presented an approach that utilizes KGs to aid in the tasks of modelling, recording, viewing, and auditing accountability information related to the design stage of AI system development. Moreover, because KGs store data in a structured format understandable by both humans and machines, we argued that this approach provides new opportunities for building intelligent applications that facilitate and automate such modelling, recording, viewing, and auditing. In this paper, we expand our earlier work by reporting additional detailed requirements for knowledge representation and capture in the context of AI accountability. These requirements extend the scope of our previous work beyond the design stage of the AI system life cycle to also include the implementation stage. Furthermore, we present the RAInS ontology, which has been extended to satisfy these requirements. We evaluate our approach against three popular baseline frameworks, namely Datasheets, Model Cards, and FactSheets, by comparing the range of information, relating to the design and implementation stages, that can be captured by our KGs against these three frameworks. We demonstrate that our approach subsumes and extends the capabilities of the baseline frameworks and discuss how KGs can be used to integrate and enhance accountability information collection processes.
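To illustrate the general idea of recording accountability information as a semantic Knowledge Graph, the sketch below builds a small RDF graph in Python with rdflib. It is a minimal, illustrative example only: the namespace URI, class names (e.g., AISystem, DesignDecision), and property names are hypothetical placeholders and are not the actual RAInS ontology terms described in the paper.

# Minimal sketch (not the authors' implementation): recording a
# design-stage decision about an AI system as RDF triples.
# All ex: terms are illustrative placeholders, not RAInS vocabulary.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import FOAF, XSD

EX = Namespace("http://example.org/accountability#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

system = EX["LoanScoringSystem"]      # hypothetical AI system
decision = EX["DesignDecision-001"]   # hypothetical design-stage decision
designer = EX["JaneDoe"]              # hypothetical responsible person

# Record the decision, who made it, and when.
g.add((system, RDF.type, EX.AISystem))
g.add((decision, RDF.type, EX.DesignDecision))
g.add((decision, EX.concernsSystem, system))
g.add((decision, EX.description,
       Literal("Chose an interpretable model family to ease auditing.")))
g.add((decision, EX.madeBy, designer))
g.add((designer, RDF.type, FOAF.Person))
g.add((designer, FOAF.name, Literal("Jane Doe")))
g.add((decision, EX.decidedOn, Literal("2022-01-15", datatype=XSD.date)))

# Because the data is structured, an auditor (or an application acting on
# their behalf) can traverse or query it, e.g. list all recorded decisions:
for d in g.subjects(RDF.type, EX.DesignDecision):
    print(d, g.value(d, EX.description))

Because the graph uses shared, machine-readable vocabulary, the same records can in principle be integrated with information captured at other life cycle stages or by other tools, which is the kind of integration opportunity the abstract refers to.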

Original language: English
Pages (from-to): 74383-74411
Number of pages: 29
Journal: IEEE Access
Volume: 10
DOIs
Publication status: Published - 6 Jul 2022

Keywords

  • Accountability
  • AI Systems
  • Artificial intelligence
  • Inspection
  • Law
  • Machine Learning
  • Ontologies
  • Ontology
  • Provenance
  • Rain
  • Recording
  • Task analysis

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
  • Electrical and Electronic Engineering

