Abstract
Artificial Intelligence (AI) shows promising applications for the perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, highly complex AI systems exacerbate the existing challenge of safety assurance for AD. One way to mitigate this challenge is to utilize explainable AI (XAI) techniques. To this end, we present the first comprehensive systematic literature review of explainable methods for safe and trustworthy AD. We begin by analyzing the requirements for AI in the context of AD, focusing on three key aspects: data, model, and agency. We find that XAI is fundamental to meeting these requirements. Based on this, we explain the sources of explanations in AI and describe a taxonomy of XAI. We then identify five key contributions of XAI for safe and trustworthy AI in AD: interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, we propose a conceptual modular framework called SafeX to integrate the reviewed methods, enabling explanations to be delivered to users while simultaneously ensuring the safety of AI models.
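The abstract names SafeX only as a conceptual modular framework and does not specify its interfaces. The sketch below is a purely hypothetical illustration, in Python, of how the two aspects the abstract highlights (an interpretable runtime monitor and explanation delivery to the user) might be wired together as modules; all class, function, and parameter names (`SafetyMonitor`, `run_pipeline`, `Plan`, the threshold values) are assumptions for illustration, not the paper's actual design.

```python
# Hypothetical sketch of a SafeX-style modular pipeline (names and thresholds
# are illustrative assumptions, not taken from the paper): an AI model proposes
# a plan, a rule-based monitor checks it against a safety envelope, and the
# verdict's human-readable reasons double as the explanation shown to the user.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Plan:
    """A candidate driving plan: target speed (m/s) and lateral offset (m)."""
    target_speed: float
    lateral_offset: float


@dataclass
class Verdict:
    """Monitor output: acceptance flag plus human-readable reasons."""
    safe: bool
    reasons: List[str]


class SafetyMonitor:
    """Interpretable monitor: a list of named, rule-based checks (assumed design)."""

    def __init__(self) -> None:
        # Each check returns "" when satisfied, or an explanation string when violated.
        self.checks: List[Callable[[Plan], str]] = [
            lambda p: "" if p.target_speed <= 13.9 else "speed exceeds 50 km/h urban limit",
            lambda p: "" if abs(p.lateral_offset) <= 1.5 else "lateral offset leaves the lane",
        ]

    def evaluate(self, plan: Plan) -> Verdict:
        violations = [msg for check in self.checks if (msg := check(plan))]
        return Verdict(safe=not violations, reasons=violations or ["all safety checks passed"])


def run_pipeline(plan: Plan, monitor: SafetyMonitor) -> Plan:
    """Gate the model's plan through the monitor; fall back to a safe stop if rejected."""
    verdict = monitor.evaluate(plan)
    # Explanation delivery: report the decision and its reasons to the user.
    print(f"plan={plan} safe={verdict.safe} because: {'; '.join(verdict.reasons)}")
    if verdict.safe:
        return plan
    return Plan(target_speed=0.0, lateral_offset=0.0)  # minimal-risk fallback


if __name__ == "__main__":
    monitor = SafetyMonitor()
    run_pipeline(Plan(target_speed=12.0, lateral_offset=0.3), monitor)  # accepted
    run_pipeline(Plan(target_speed=20.0, lateral_offset=0.3), monitor)  # rejected, with reasons
```

This reflects only the interpretable-monitoring and explanation-delivery contributions named in the abstract; the reviewed methods and the actual SafeX components may differ substantially.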
| Original language | English |
| --- | --- |
| Pages (from-to) | 19342-19364 |
| Number of pages | 23 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| Volume | 25 |
| Issue number | 12 |
| Early online date | 14 Oct 2024 |
| DOIs | |
| Publication status | Published - Dec 2024 |
Keywords
- Safety
- Surveys
- Explainable AI
- Stakeholders
- Planning
- Taxonomy
- Standards
- Monitoring
- Autonomous vehicles
- Terminology
- Autonomous driving
- trustworthy AI
- AI safety