Varying illumination conditions produce images of the same scene that differ widely in color and contrast. Accommodating such images is a ubiquitous problem in machine vision systems. A general approach is to map colors (or features extracted from colors within some pixel neighborhood) from a source image to those in a target image acquired under canonical conditions. This article reports two methods developed to address the problem: one based on neural networks and the other on multidimensional probability density function matching. We explain the problem, discuss the issues related to color correction, and show the results of such an effort for specific applications. © 2008 Springer-Verlag Berlin Heidelberg.
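To illustrate the general approach the abstract describes, the sketch below performs one-dimensional (per-channel) histogram matching, a simplified, hedged analogue of the multidimensional probability density function matching mentioned above, not the authors' actual method. The function name `match_histogram` and the NumPy-based implementation are assumptions for illustration only.

```python
import numpy as np

def match_histogram(source, target):
    """Remap source values so their empirical distribution matches target's.

    A 1-D simplification of PDF matching: each source intensity is
    replaced by the target intensity at the same cumulative-probability
    rank. (Illustrative sketch, not the method from the article.)
    """
    s_values, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    t_values, t_counts = np.unique(target.ravel(), return_counts=True)

    # Empirical cumulative distribution functions of both images.
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / target.size

    # For each source quantile, look up the target value at that quantile.
    mapped = np.interp(s_cdf, t_cdf, t_values)
    return mapped[s_idx].reshape(source.shape)
```

Applying such a per-channel mapping independently to each color channel is a common baseline; the multidimensional variant instead matches the joint color distribution, which preserves inter-channel correlations.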
Number of pages: 25
Journal: Studies in Computational Intelligence
Publication status: Published - 2008