Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information

Iman Naja, Milan Markovic, Peter Edwards, Wei Pang, Caitlin Cottrill, Rebecca Williams

Research output: Contribution to journal › Article › peer-review

Abstract

To enhance the trustworthiness of AI systems, a number of solutions have been proposed to document how such systems are built and used. A key facet of realizing trust in AI is making such systems accountable, a challenging task, not least due to the lack of an agreed definition of accountability and differing perspectives on what information should be recorded and how it should be used (e.g., to inform audit). This information originates across the life cycle stages of an AI system and from a variety of sources (individuals, organizations, systems), raising numerous challenges around its collection, management, and audit. In our previous work, we argued that semantic Knowledge Graphs (KGs) are ideally suited to address these challenges, and we presented an approach utilizing KGs to aid in the tasks of modelling, recording, viewing, and auditing accountability information related to the design stage of AI system development. Moreover, as KGs store data in a structured format understandable by both humans and machines, we argued that this approach provides new opportunities for building intelligent applications that facilitate and automate such tasks. In this paper, we expand our earlier work by reporting additional detailed requirements for knowledge representation and capture in the context of AI accountability; these extend the scope of our work beyond the design stage to also include system implementation. Furthermore, we present the RAInS ontology, which has been extended to satisfy these requirements. We evaluate our approach against three popular baseline frameworks, namely Datasheets, Model Cards, and FactSheets, by comparing the range of information that can be captured by our KGs with that covered by each framework. We demonstrate that our approach subsumes and extends the capabilities of the baseline frameworks, and we discuss how KGs can be used to integrate and enhance accountability information collection processes.
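
The sketch below illustrates the general idea the abstract describes: recording accountability information as a semantic knowledge graph so that it can be queried and audited by both humans and machines. It is a minimal example using Python's rdflib; the ex: namespace and terms such as ex:AISystem, ex:DesignDecision, and ex:rationale are illustrative placeholders, not the actual RAInS ontology terms (which are defined in the paper itself), and the use of W3C PROV-O properties for attribution is an assumption suggested by the paper's provenance keyword.

```python
# Minimal sketch: recording AI accountability information as an RDF knowledge graph.
# Ontology terms in the ex: namespace are illustrative placeholders, NOT RAInS terms.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import PROV, XSD

EX = Namespace("http://example.org/accountability#")

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

# Record a design-stage decision, the system it concerns, and the responsible agent.
system = EX["loan-scoring-system"]
decision = EX["design-decision-001"]
developer = EX["acme-ml-team"]

g.add((system, RDF.type, EX.AISystem))
g.add((decision, RDF.type, EX.DesignDecision))
g.add((decision, EX.concerns, system))
g.add((decision, PROV.wasAttributedTo, developer))
g.add((decision, EX.rationale,
       Literal("Chose gradient boosting over a deep model for interpretability")))
g.add((decision, PROV.generatedAtTime,
       Literal("2022-01-15T10:00:00", datatype=XSD.dateTime)))

# An auditor can then ask, e.g., "who was responsible for each design decision, and why?"
results = g.query("""
    PREFIX ex: <http://example.org/accountability#>
    PREFIX prov: <http://www.w3.org/ns/prov#>
    SELECT ?decision ?agent ?why WHERE {
        ?decision a ex:DesignDecision ;
                  prov:wasAttributedTo ?agent ;
                  ex:rationale ?why .
    }
""")
for row in results:
    print(row.decision, row.agent, row.why)
```

Because the records are plain RDF triples, information captured by different people, tools, and life cycle stages can be merged into one graph and queried uniformly with SPARQL, which is the kind of integration and audit support the abstract refers to.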

Original language: English
Pages (from-to): 74383-74411
Number of pages: 29
Journal: IEEE Access
Volume: 10
DOIs
Publication status: Published - 6 Jul 2022

Keywords

  • AI systems
  • accountability
  • machine learning
  • ontology
  • provenance

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering
  • Electrical and Electronic Engineering
