Scoring robotic competitions: Balancing judging promptness and meaningful performance evaluation

Fausto Ferreira, Gabriele Ferri, Yvan Petillot, Xingkun Liu, Marta Palau Franco, Matteo Matteucci, Francisco Javier Perez Grau, Alan F. T. Winfield

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

To have a meaningful and fair competition, an adequate scoring system is necessary. Different scoring systems exist, each directly related to the nature and goals of the competition, e.g. student/educational, focused on benchmarking, Grand Challenge style, etc. However, the design of such systems is not an easy task. It is mandatory to design approaches which enable the judges to score teams within a reasonable time after the end of their performance. Scoring systems therefore cannot be overly complex. On the other hand, it is crucial to have a judging system which provides a meaningful and fair evaluation of what the teams achieved in the field. In this paper, several approaches to scoring are presented and compared. Our focus is on search and rescue competitions. Each approach is critically analysed and its features comparatively discussed. This analysis provides an overview of scoring systems tailored to different kinds of competitions. The reported examples can be used as building blocks to improve existing scoring systems or to design new approaches.
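The promptness/fairness tradeoff the abstract describes can be illustrated with a minimal sketch of an additive rubric that judges could tally quickly at the end of a run. The task names, point values, and intervention penalty below are illustrative assumptions for a search and rescue setting, not the actual scoring systems analysed in the paper.

```python
# Hypothetical additive rubric: simple enough to score promptly,
# while still rewarding what a team achieved in the field.
# All tasks, point values, and penalties are assumed for illustration.

TASK_POINTS = {
    "reach_search_area": 20,
    "detect_victim": 40,
    "report_victim_location": 30,
    "return_to_base": 10,
}

def score_run(completed_tasks, operator_interventions, intervention_penalty=5):
    """Sum points for completed tasks, minus a penalty per human intervention.

    Penalising interventions rewards autonomy without requiring judges
    to make fine-grained qualitative assessments after each run.
    """
    base = sum(TASK_POINTS.get(task, 0) for task in completed_tasks)
    return max(0, base - intervention_penalty * operator_interventions)

# Example: two tasks completed, one operator intervention.
print(score_run(["reach_search_area", "detect_victim"], operator_interventions=1))
```

A rubric like this is quick to apply but coarse; richer schemes (e.g. time bonuses or quality-weighted detections) score performance more faithfully at the cost of longer judging, which is precisely the balance the paper examines.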

Original language: English
Title of host publication: 18th IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
Publisher: IEEE
Pages: 179-185
Number of pages: 7
ISBN (Electronic): 9781538652213
DOIs
Publication status: Published - 7 Jun 2018

ASJC Scopus subject areas

  • Artificial Intelligence
  • Mechanical Engineering
  • Control and Optimization

