Trust in Robot Benchmarking and Benchmarking for Trustworthy Robots

Santosh Thoduka*, Deebul Nair, Praminda Caleb-Solly, Mauro Dragone, Filippo Cavallo, Nico Hochgeschwender

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Trustworthy evaluation of robots is necessary for their deployment and acceptance in society. Scientific benchmarking competitions provide a way to evaluate robots outside of lab conditions. We propose a progressive and iterative benchmarking process through competitions that incorporates objective dataset-based evaluation, evaluation on a remote robot, and field evaluations of individual robot functionalities and complete tasks, in a cyclical process similar to the machine learning lifecycle, with a view to achieving trustworthy evaluation. The inclusion of out-of-distribution data, failure scenarios, and user studies in the benchmarking process addresses the need to evaluate robot systems on non-functional qualities such as fault tolerance, adaptability, and social acceptance, in addition to their functional abilities, in order to improve trustworthiness.

Original language: English
Title of host publication: Producing Artificial Intelligent Systems
Publisher: Springer
Pages: 31-51
Number of pages: 21
ISBN (Electronic): 9783031558177
ISBN (Print): 9783031558160
DOIs
Publication status: Published - 5 Jun 2024

Publication series

Name: Studies in Computational Intelligence
Volume: 1150
ISSN (Print): 1860-949X
ISSN (Electronic): 1860-9503

Keywords

  • Robot benchmarking
  • Trustworthiness

ASJC Scopus subject areas

  • Artificial Intelligence
