Adaptive fusion framework based on augmented reality training

P. Y. Mignotte, E. Coiras, H. Rohou, Y. Pétillot, J. Bell, K. Lebart

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


A framework for the fusion of computer-aided detection and classification algorithms for side-scan imagery is presented. The framework is based on the Dempster-Shafer theory of evidence, which permits fusion of the heterogeneous outputs of target detectors and classifiers. The use of augmented reality for training and evaluating the algorithms over a large test set permits the optimisation of their performance. In addition, the framework is adaptive in two respects. First, it allows contextual information to be added to the decision process, giving more weight to the outputs of those algorithms that perform better under particular mission conditions. Second, the fusion parameters are optimised online to correct for mistakes that occur during deployment. © The Institution of Engineering and Technology 2008.
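The fusion step described above rests on Dempster's rule of combination, which merges mass functions from independent evidence sources (here, detector and classifier outputs) and renormalises away conflicting mass. The sketch below is a minimal generic implementation of that rule, not the authors' code; the frame of discernment `{target, clutter}` and the example mass values are illustrative assumptions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions.

    Each mass function is a dict mapping a frozenset of hypotheses
    (a subset of the frame of discernment) to its belief mass.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Mass assigned to the intersection of the two focal elements
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            # Disjoint focal elements contribute to the conflict term K
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict  # renormalisation factor 1 - K
    return {h: w / norm for h, w in combined.items()}

# Illustrative frame for mine-countermeasures side-scan imagery
TARGET = frozenset({"target"})
CLUTTER = frozenset({"clutter"})
FRAME = TARGET | CLUTTER  # mass on the full frame models ignorance

# Hypothetical outputs of a detector and a classifier
detector = {TARGET: 0.6, FRAME: 0.4}
classifier = {TARGET: 0.5, CLUTTER: 0.3, FRAME: 0.2}

fused = dempster_combine(detector, classifier)
# fused[TARGET] ≈ 0.756, fused[CLUTTER] ≈ 0.146, fused[FRAME] ≈ 0.098
```

Assigning part of each source's mass to the full frame (ignorance) is what lets heterogeneous, differently calibrated algorithms be fused without forcing them to commit to a single hypothesis.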

Original language: English
Pages (from-to): 146-154
Number of pages: 9
Journal: IET Radar, Sonar and Navigation
Issue number: 2
Publication status: Published - 2008

