Abstract
We present a parallel architecture for object recognition and location based on concurrent processing of depth and intensity image data. We describe parallel algorithms for computing curvature and segmenting depth data into planar or curved surface patches, and for detecting edges and segmenting intensity data into extended linear features. By matching these features against a CAD model, objects can be located in either depth or intensity images using a parallel pose clustering algorithm. The architecture is based on cooperating stages for low/intermediate level processing and for high level matching. Here, we discuss the use of individual components for depth and intensity data, and their realisation and integration within each parallel stage. We then present an analysis of the performance of each component, and of the system as a whole, demonstrating good parallel execution from raw image data to final pose. © 1998 Kluwer Academic Publishers.
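The abstract refers to a parallel pose clustering algorithm that locates objects by matching extracted features against a CAD model. The sketch below illustrates the general pose-clustering idea in a minimal, sequential form: each pairing of model and scene features hypothesises a rigid 2-D pose, the hypotheses are voted into a coarse pose grid, and the densest cell is taken as the object pose. The 2-D point features, the rigid pose parameterisation, and the bin sizes are illustrative assumptions, not the authors' parallel implementation.

```python
# Minimal pose-clustering sketch (illustrative only; not the paper's parallel method).
import math
from collections import Counter
from itertools import combinations

def pose_from_pairs(m1, m2, s1, s2):
    """Hypothesise a rigid 2-D pose (theta, tx, ty) mapping a model pair onto a scene pair."""
    theta = (math.atan2(s2[1] - s1[1], s2[0] - s1[0])
             - math.atan2(m2[1] - m1[1], m2[0] - m1[0]))
    c, s = math.cos(theta), math.sin(theta)
    tx = s1[0] - (c * m1[0] - s * m1[1])
    ty = s1[1] - (s * m1[0] + c * m1[1])
    return theta, tx, ty

def cluster_poses(model_pts, scene_pts, ang_bin=0.05, trans_bin=2.0):
    """Vote every hypothesised pose into a coarse grid and return the densest cell."""
    votes = Counter()
    for m1, m2 in combinations(model_pts, 2):
        for s1, s2 in combinations(scene_pts, 2):
            theta, tx, ty = pose_from_pairs(m1, m2, s1, s2)
            key = (round(theta / ang_bin), round(tx / trans_bin), round(ty / trans_bin))
            votes[key] += 1
    (kt, kx, ky), count = votes.most_common(1)[0]
    return (kt * ang_bin, kx * trans_bin, ky * trans_bin), count

if __name__ == "__main__":
    model = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]
    # Synthetic scene: the model rotated by 30 degrees and translated;
    # real scene features would include noise and clutter.
    th = math.radians(30)
    scene = [(x * math.cos(th) - y * math.sin(th) + 20.0,
              x * math.sin(th) + y * math.cos(th) + 7.0) for x, y in model]
    pose, count = cluster_poses(model, scene)
    print("peak pose (theta, tx, ty):", pose, "votes:", count)
```

Because each model-scene feature pairing can be evaluated independently, the voting loop is the natural place to distribute work across processors, which is consistent with the parallel execution the abstract reports.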
| Original language | English |
|---|---|
| Pages (from-to) | 37-56 |
| Number of pages | 20 |
| Journal | Journal of Supercomputing |
| Volume | 12 |
| Issue number | 1-2 |
| Publication status | Published - 1998 |
Keywords
- Cooperative processing
- Multi-source data
- Parallel vision