A dual source, parallel architecture for computer vision

A. M. Wallace, G. J. Michaelson, N. Scaife, W. J. Austin

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

We present a parallel architecture for object recognition and location based on concurrent processing of depth and intensity image data. Parallel algorithms are described for curvature computation and segmentation of depth data into planar or curved surface patches, and for edge detection and segmentation of intensity data into extended linear features. By comparing this feature data with a CAD model, objects can be located in either depth or intensity images using a parallel pose clustering algorithm. The architecture is based on cooperating stages for low/intermediate level processing and for high level matching. Here, we discuss the use of individual components for depth and intensity data, and their realisation and integration within each parallel stage. We then present an analysis of the performance of each component, and of the system as a whole, demonstrating good parallel execution from raw image data to final pose. © 1998 Kluwer Academic Publishers.
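
The pose-clustering step summarised in the abstract can be illustrated with a short, sequential sketch: each scene-feature/model-feature correspondence yields a candidate pose, the candidates are binned in a coarse accumulator, and the densest bin is taken as the estimated object pose. The function and parameter names below (cluster_poses, trans_bin, rot_bin) and the four-parameter pose representation are illustrative assumptions, not the authors' code, and the parallel distribution of work described in the paper is omitted.

# Minimal sequential pose-clustering sketch (illustrative only; not the paper's implementation).
# Each matched scene/model feature pair contributes one candidate pose; votes are
# binned in a coarse accumulator and the densest bin gives the estimated pose.
from collections import defaultdict
from math import floor

def cluster_poses(pose_hypotheses, trans_bin=5.0, rot_bin=0.1):
    """pose_hypotheses: iterable of (tx, ty, tz, theta) candidate poses,
    e.g. one per scene-feature/model-feature correspondence (hypothetical format)."""
    accumulator = defaultdict(list)
    for tx, ty, tz, theta in pose_hypotheses:
        # Quantise the pose into a coarse accumulator cell.
        key = (floor(tx / trans_bin), floor(ty / trans_bin),
               floor(tz / trans_bin), floor(theta / rot_bin))
        accumulator[key].append((tx, ty, tz, theta))

    # Take the most populated cluster; return its mean pose and vote count.
    best = max(accumulator.values(), key=len)
    n = len(best)
    mean_pose = tuple(sum(p[i] for p in best) / n for i in range(4))
    return mean_pose, n

# Example: three mutually consistent hypotheses and one outlier.
hypotheses = [(10.2, 4.9, 0.3, 0.52), (10.5, 5.1, 0.1, 0.50),
              (9.8, 5.0, 0.2, 0.49), (42.0, -7.0, 3.0, 1.8)]
print(cluster_poses(hypotheses))

In a parallel realisation of this idea, hypothesis generation and accumulation would be distributed across processors, with the stages cooperating as the abstract describes; the sketch above only shows the clustering principle.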

Original language: English
Pages (from-to): 37-56
Number of pages: 20
Journal: Journal of Supercomputing
Volume: 12
Issue number: 1-2
Publication status: Published - 1998

Keywords

  • Cooperative processing
  • Multi-source data
  • Parallel vision
