IEICE TRANS. INF. & SYST., VOL.E90–D, NO.7 JULY 2007


LETTER

A Multi-Scale Adaptive Grey World Algorithm

Bing LI†a), Nonmember, De XU†b), Member, Moon Ho LEE††c), and Song-He FENG†, Nonmembers

SUMMARY  The Grey World algorithm is a well-known color constancy algorithm. It is based on the Grey-World assumption, i.e., that the average reflectance of surfaces in the world is achromatic. The algorithm is simple and has low computational cost. However, for images that contain only a few distinct colors, the light source color cannot be estimated correctly by the Grey World algorithm. In this paper, we propose a Multi-scale Adaptive Grey World algorithm (MAGW). First, multi-scale images are obtained by wavelet transformation and the illumination color is estimated from the images at different scales. Then, according to the estimated illumination color, the original image is mapped to the image under a canonical illumination, supervised by an adaptive reliability function based on image entropy. The experimental results show that our algorithm is effective and also has low computational cost.
key words: color constancy, Grey World, Multi-scale

Manuscript received December 7, 2006. Manuscript revised February 28, 2007.
† The authors are with the Institute of Computer Science and Engineering, Beijing Jiaotong University, Beijing, China.
†† The author is with the Division of Electronics and Information Engineering, College of Engineering, Chonbuk National University, Jeonju, Korea.
a) E-mail: [email protected]  b) E-mail: [email protected]  c) E-mail: [email protected]
DOI: 10.1093/ietisy/e90–d.7.1121

1. Introduction

The image recorded by a camera depends on three factors: the physical content of the scene, the illumination incident on the scene, and the characteristics of the camera [1]. The goal of color constancy is to recognize the colors of objects independently of the color of the light source. It generally includes two steps [2]: first, the color of the light source is estimated from the image data; second, illuminant-invariant descriptors are computed and used to adjust the image so that the objects' colors appear as they would under a known light source. Color constancy is important for a large variety of applications such as object recognition, scene understanding, image reproduction, and digital photography [1].

So far, a number of leading color constancy algorithms have been proposed. The Grey World (GW) algorithm, a classical color constancy method, assumes that the average reflectance in the scene is achromatic. Another simple method, max-RGB, estimates the light source color from the maximum response of each color channel [2]. Forsyth proposed a gamut-mapping approach [3]: a set of all possible (R, G, B) responses of surfaces in the world under a known, "canonical" illumination is first formed and represented by its convex hull; the set of all possible responses under the unknown illumination is likewise represented by its convex hull; the unique diagonal mapping between the two hulls is then estimated. Many other methods have been proposed. Finlayson et al. proposed the Color by Correlation method to improve the Color in Perspective method [4]. Neural networks have also been used to estimate the color of the illuminant [5], learning the relationship between an image of a scene and the chromaticity of the scene illumination through training. In addition to these methods, there are still many other color constancy algorithms, such as illumination color by voting, probabilistic algorithms, and color in perspective. Although more elaborate algorithms exist, methods like GW and max-RGB are still widely used because of their low computational costs [2]. Unfortunately, the GW method has many limitations when it faces an image that contains only a few colors. In this paper, based on the GW algorithm, we propose a novel Multi-scale Adaptive Grey World algorithm (MAGW), which improves the adaptivity and the accuracy of illumination color estimation.

This paper is organized as follows. In Sect. 2, the GW algorithm is described. The details of the proposed algorithm are explained in Sect. 3. The experimental results and comparisons are presented in Sect. 4. Section 5 concludes this paper.

2. Grey World Algorithm

According to the Lambertian reflectance model, the image f = (R, G, B)^T can be computed as follows:

f(x) = \int_w e(\lambda)\, s(x, \lambda)\, c(\lambda)\, d\lambda    (1)

where x is the spatial coordinate, \lambda is the wavelength, and w represents the visible spectrum. e(\lambda) is the color of the light source, s(x, \lambda) denotes the surface reflectance, and c(\lambda) = (R(\lambda), G(\lambda), B(\lambda)) is the camera sensitivity function. The goal of color constancy is to estimate e:

e = \int_w e(\lambda)\, c(\lambda)\, d\lambda    (2)
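As a concrete illustration of Eqs. (1) and (2), the following minimal sketch numerically integrates the product of an illuminant spectrum, a surface reflectance, and camera sensitivities over sampled wavelengths. The spectra and sensitivity curves are synthetic placeholders of our own, used purely for illustration; in practice, as noted next, e(\lambda) and c(\lambda) are unknown.

```python
import numpy as np

# Hypothetical discretization of the visible spectrum w (in nm).
wavelengths = np.linspace(400, 700, 31)           # lambda samples
d_lambda = wavelengths[1] - wavelengths[0]

# Placeholder spectra: a smoothly varying illuminant e(lambda), one surface
# reflectance s(x, lambda), and Gaussian camera sensitivities c(lambda) = (R, G, B).
e = 1.0 + 0.5 * (wavelengths - 400) / 300.0       # illuminant power
s = np.exp(-((wavelengths - 620) / 60.0) ** 2)    # reflectance at one pixel x
c = np.stack([np.exp(-((wavelengths - mu) / 40.0) ** 2)
              for mu in (600, 540, 460)])         # rows: R, G, B sensitivities

# Eq. (1): sensor response at pixel x, f(x) = integral over w of e * s * c.
f_x = (c * e * s).sum(axis=1) * d_lambda

# Eq. (2): the light-source color to be estimated, e = integral over w of e * c.
e_rgb = (c * e).sum(axis=1) * d_lambda

print("f(x) =", f_x, " light source color e =", e_rgb)
```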

Because both e(\lambda) and c(\lambda) are unknown, Eq. (2) cannot be solved without further assumptions. The GW algorithm is based on the grey-world assumption, i.e., that the average reflectance in a scene is achromatic:



\frac{\int s(x, \lambda)\, dx}{\int dx} = k    (3)

where k is a constant, representing the achromaticity. The light source color can now be estimated by computing the average pixel value:

\frac{\int f(x)\, dx}{\int dx} = \frac{\int \int_w e(\lambda)\, s(\lambda, x)\, c(\lambda)\, d\lambda\, dx}{\int dx} = k \int_w e(\lambda)\, c(\lambda)\, d\lambda = k e    (4)

Consequently, the normalized light source color can be generated as \hat{e} = ke / |ke|. Hence, this is indeed a very simple algorithm to compute the light source color of a scene.
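A minimal sketch of the grey-world estimate in Eqs. (3)–(4): the light-source color is taken to be the per-channel mean of the image and then normalized to unit length. The function and variable names are our own, not from the paper, and the sample image is synthetic.

```python
import numpy as np

def grey_world_estimate(image: np.ndarray) -> np.ndarray:
    """Estimate the light-source color of an H x W x 3 float image.

    Eq. (4): the per-channel average of f(x) equals k*e, so the mean RGB
    vector points in the direction of the illuminant color.
    """
    ke = image.reshape(-1, 3).mean(axis=0)   # average pixel value = k*e
    return ke / np.linalg.norm(ke)           # e_hat = k*e / |k*e|

# Usage: a random image with a greenish cast yields an estimate that
# leans toward the green channel.
rng = np.random.default_rng(0)
img = rng.random((120, 160, 3)) * np.array([0.8, 1.0, 0.6])
print(grey_world_estimate(img))
```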

3. Multi-Scale Adaptive Grey World Algorithm

3.1 Multi-Scale Analysis for Color Constancy

Wavelet multi-scale analysis is introduced into the color constancy algorithm in this paper. The GW algorithm has two disadvantages. The first is that it is not accurate to estimate the light source color from a single image scale only. The other is that the contours in the image (the high-frequency information) are not sensitive to the illumination [6]. To remove these two disadvantages while keeping the computational cost low, we apply the wavelet transformation to the image twice. If we used more wavelet transformations, the computational cost would become very high, and after a third transformation the images become too small to be useful for MAGW; on the other hand, if we used the wavelet transformation only once, the effect of the multi-scale analysis would not be obvious, because the image after the second transformation is still useful for MAGW. As shown in Fig. 1, the low-frequency part of the image is retained after every transformation. Consequently, we have three images at different scales, denoted I_0, I_1, and I_2, respectively. The color of the light source e is estimated from these three images:

Fig. 1  (A) Original image. (B) The image after one wavelet transformation. (C) The image after two wavelet transformations. The low-frequency parts of the images in (B) and (C) are retained for estimating the illumination color.

e = \xi_0 e_0 + \xi_1 e_1 + \xi_2 e_2 = \sum_{i=0}^{2} \xi_i e_i, \qquad \xi_0 + \xi_1 + \xi_2 = 1    (5)

where e_0, e_1, and e_2 denote the light source colors estimated from the images I_0, I_1, and I_2, respectively, and \xi_0, \xi_1, \xi_2 are the weights of the estimated light source colors. The weights are determined by experiments and experience; the values used in this paper are given in Sect. 4. In the multi-scale analysis the illumination color is estimated from images at three scales, and high-frequency information such as contours is partly removed after each transformation, so the illumination color estimation is more reliable.
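The sketch below illustrates the multi-scale step under stated assumptions: a Haar-style 2x2 block average stands in for the wavelet low-pass band that the paper retains at each level, an illuminant is estimated at each of the three scales, and the estimates are combined with the weights of Eq. (5). The default weights and the helper names are illustrative choices, not the paper's code.

```python
import numpy as np

def grey_world_estimate(image: np.ndarray) -> np.ndarray:
    """Per-scale illuminant estimate (as in the Sect. 2 sketch)."""
    ke = image.reshape(-1, 3).mean(axis=0)
    return ke / np.linalg.norm(ke)

def haar_lowpass(image: np.ndarray) -> np.ndarray:
    """Keep only the low-frequency part of one wavelet level.

    A 2x2 block average is the (rescaled) Haar approximation band; the
    high-frequency detail bands are discarded, as the paper does after
    each transformation.
    """
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
    img = image[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def magw_multiscale_estimate(image, weights=(0.7, 0.2, 0.1)):
    """Eq. (5): e = xi_0*e_0 + xi_1*e_1 + xi_2*e_2, with the xi summing to 1."""
    scales = [image, haar_lowpass(image), haar_lowpass(haar_lowpass(image))]
    estimates = [grey_world_estimate(s) for s in scales]   # e_0, e_1, e_2
    e = sum(xi * ei for xi, ei in zip(weights, estimates))
    return e / np.linalg.norm(e)
```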

3.2 Estimating the Illumination Color of Each Scale Image

In this section, how to compute e_i from each scale image is discussed. Before computing e_i, a preprocessing step is necessary. First, pixels whose color values are too low must be removed, because objects with dark colors do not reflect the light well:

newI_i = \{ I_i - P_i \mid R(P_i) + G(P_i) + B(P_i) \le \delta \}    (6)

where \delta is a threshold. \delta cannot be very large, otherwise many useful pixels would be removed; according to our experiments, \delta = 6 is suitable for most images. newI_i is the image with the dark pixels removed, P_i represents a pixel in the image I_i, and R(P_i), G(P_i), and B(P_i) are the red, green, and blue values of that pixel. After removing these pixels, we compute the illumination color e_i from the image newI_i.
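A sketch of the preprocessing step of Eq. (6): pixels whose R+G+B sum falls at or below the threshold \delta are discarded before estimation. The 0–255 value range and the function name are our assumptions.

```python
import numpy as np

def remove_dark_pixels(image: np.ndarray, delta: float = 6.0) -> np.ndarray:
    """Return the pixels of an H x W x 3 image kept by Eq. (6).

    A pixel P_i is discarded when R(P_i) + G(P_i) + B(P_i) <= delta, since
    very dark surfaces reflect too little light to carry illuminant
    information. Assumes 8-bit-style values, so delta = 6 is near black.
    """
    pixels = image.reshape(-1, 3)
    keep = pixels.sum(axis=1) > delta
    return pixels[keep]        # "newI_i": the image with dark pixels removed
```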

According to GW, e_i = (\bar{R}_i, \bar{G}_i, \bar{B}_i)^T, where \bar{R}_i, \bar{G}_i, and \bar{B}_i are the average color values of each scale image. Computing the average color (\bar{R}_i, \bar{G}_i, \bar{B}_i)^T directly from the image as e_i is easily affected by noise; it is better to compute e_i based on regions in the image, that is, from the average color of each region. A region-growing method is therefore used to segment the image under the constraint that all pixels in a region are within a certain absolute tolerance of each other. Because the region-growing algorithm only serves to segment the image into regions in MAGW, the selection of the initial seeds has nearly no effect on the performance of MAGW; we simply scan the image to obtain the seeds (the color distance threshold is 25). The image newI_i is segmented into Num_i regions A_i^1, A_i^2, \dots, A_i^{Num_i}:

newI_i = A_i^1 + A_i^2 + \dots + A_i^{Num_i} = \sum_{j=1}^{Num_i} A_i^j    (7)

After image segmentation, the Minkowski norm [7] is used to compute e_i from the regions' average colors:

\left( \frac{\int \bar{f}_i^{\,p}(j)\, dj}{\int dj} \right)^{1/p} = k e_i    (8)


where \bar{f}_i(j) is the average color of the jth region in the image newI_i. The estimate e_i can then be obtained from Eq. (8). This method is not easily affected by a surface that takes up the major part of the image: in that situation, an illumination color computed directly as the average color of the image tends to follow the color of that dominant surface. Finlayson and Trezzi [7] found that the best results were obtained with a Minkowski norm with p = 6, so we also use p = 6 in our experiments.
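The following sketch computes the per-scale estimate e_i of Eq. (8): given a segmentation label map (in the paper this comes from region growing with a color-distance threshold of 25; any segmentation can stand in here), it averages each region's color and then applies a Minkowski norm with p = 6 over the region means. The function name and the use of np.bincount are our own choices.

```python
import numpy as np

def region_minkowski_estimate(pixels: np.ndarray, labels: np.ndarray,
                              p: float = 6.0) -> np.ndarray:
    """Estimate the normalized e_i from region mean colors (Eqs. (7)-(8)).

    pixels: N x 3 array of the non-dark pixels of one scale image (newI_i).
    labels: length-N integer region labels, assumed contiguous 0 .. Num_i - 1,
            from any segmentation, e.g. the region growing described above.
    """
    num_regions = labels.max() + 1
    counts = np.bincount(labels, minlength=num_regions).astype(float)
    # Mean color f_bar_i(j) of every region j (one column per channel).
    region_means = np.stack(
        [np.bincount(labels, weights=pixels[:, ch], minlength=num_regions) / counts
         for ch in range(3)], axis=1)
    # Eq. (8): Minkowski (L_p) mean over regions; p = 6 follows Finlayson & Trezzi [7].
    ke_i = np.mean(region_means ** p, axis=0) ** (1.0 / p)
    return ke_i / np.linalg.norm(ke_i)
```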

3.3 Reliability Function Based on Entropy

After estimating the illumination color, we should adjust the image to obtain the objects' colors under a known light source. For this computation, the diagonal model is needed. The diagonal model maps the image taken under one illuminant to the image taken under another illuminant (e.g., the canonical one) by scaling each channel independently [1]. For concreteness, consider a scene with a patch. Suppose that the color of the unknown illuminant is (R_u, G_u, B_u)^T and the color of the known, canonical illuminant is (R_c, G_c, B_c)^T. The response to this patch can then be mapped from the unknown case to the canonical case simply by scaling the three channels by R_c/R_u, G_c/G_u, and B_c/B_u, respectively. The illumination color e = (R_e, G_e, B_e)^T has been computed with Eqs. (8) and (5). In order to keep the average color of the image, we map the image to the image under the canonical white illumination ((R_e + G_e + B_e)/3, (R_e + G_e + B_e)/3, (R_e + G_e + B_e)/3)^T, so the scaling coefficients of the three channels, K_c = (K_R, K_G, K_B)^T, are:

K_c = \left( \frac{R_e + G_e + B_e}{3R_e}, \frac{R_e + G_e + B_e}{3G_e}, \frac{R_e + G_e + B_e}{3B_e} \right)^T    (9)
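A sketch of the diagonal correction of Eq. (9): each channel is scaled so that the estimated illuminant is mapped to the equal-energy white with the same average, preserving the overall brightness of the image. Function names are illustrative.

```python
import numpy as np

def canonical_scaling(e: np.ndarray) -> np.ndarray:
    """Eq. (9): K_c = ((Re+Ge+Be)/(3Re), (Re+Ge+Be)/(3Ge), (Re+Ge+Be)/(3Be))."""
    return e.sum() / (3.0 * e)

def apply_diagonal(image: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Diagonal model: scale the R, G, B channels of an H x W x 3 image by k."""
    return image * k.reshape(1, 1, 3)
```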

However, the image may be wrongly adjusted if it is scaled by the coefficient K_c directly, because the illumination color is sometimes estimated with some error. According to Fig. 4 in [1], the fewer colors there are in the image, the larger the error in the illumination estimation. In order to avoid incorrectly adjusting the image, we propose a reliable coefficient (RC), based on the entropy of the image, to supervise the coefficient K_c:

RC = \begin{cases} 1 & \bar{H} \ge \varepsilon \\ \bar{H} / \varepsilon & \bar{H} < \varepsilon \end{cases}    (10)

where \bar{H} is the average of the entropies of the red, green, and blue color channels. According to our experiments, the threshold \varepsilon can be fixed at 3.5 in most situations. K_c is then adjusted as

K_c^* = (K_c - [1, 1, 1]^T) \times RC + [1, 1, 1]^T    (11)

In Eq. (11), if RC = 1, then \bar{H} \ge \varepsilon and K_c^* = K_c: the average entropy of the image is large enough, and the estimated illumination is reliable enough to adjust the image directly with K_c. If RC = 0, that is \bar{H} = 0, then K_c^* = [1, 1, 1]^T: the average entropy of the image is zero, the illumination estimated from the image is not reliable at all, and the image is not adjusted with that illumination. Under the supervision of the reliable coefficient RC, the larger \bar{H} is, the closer K_c^* approaches K_c; the smaller \bar{H} is, the closer K_c^* tends to [1, 1, 1]^T. Consequently, the image can adaptively adjust its own colors using our proposed method.
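The adaptive step of Eqs. (10)–(11) in sketch form: the average channel entropy \bar{H} is computed from 256-bin histograms (the bin count and the 0–255 value range are our assumptions), the reliability coefficient RC is derived from it, and the scaling vector is blended between K_c and the identity [1, 1, 1].

```python
import numpy as np

def mean_channel_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Average Shannon entropy (in bits) of the R, G, and B channels."""
    entropies = []
    for ch in range(3):
        hist, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))                 # H_bar

def reliable_coefficient(h_bar: float, eps: float = 3.5) -> float:
    """Eq. (10): RC = 1 if H_bar >= eps, otherwise H_bar / eps."""
    return 1.0 if h_bar >= eps else h_bar / eps

def adaptive_scaling(k_c: np.ndarray, rc: float) -> np.ndarray:
    """Eq. (11): K_c* = (K_c - [1,1,1]) * RC + [1,1,1]."""
    ones = np.ones(3)
    return (k_c - ones) * rc + ones
```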

4. Experiments

A large data set of colorful objects under varying light sources is used to test MAGW [1]. The data set consists of 321 images covering a total of 30 scenes under varying light sources. For all images the correct light source e_l has been measured. The angular error between the estimated light source e_e and the measured light source e_l is used as the error measure:

\text{angular error} = \cos^{-1}(\hat{e}_l \cdot \hat{e}_e)    (12)

where \hat{e}_l \cdot \hat{e}_e is the dot product of the two normalized vectors e_l and e_e. Ideally, the angle is 0, which is the case when both vectors have the same direction. Results of other color constancy algorithms on this standard data set are available in [1], [2].
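Eq. (12) in code: the angular error between the measured and estimated illuminant directions, with a clip to guard against floating-point values just outside [-1, 1]. The conversion to degrees is an assumption consistent with the units reported in Table 1.

```python
import numpy as np

def angular_error_degrees(e_measured: np.ndarray, e_estimated: np.ndarray) -> float:
    """Eq. (12): angle between the normalized vectors e_l and e_e, in degrees."""
    el = e_measured / np.linalg.norm(e_measured)
    ee = e_estimated / np.linalg.norm(e_estimated)
    cos_angle = np.clip(np.dot(el, ee), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

# Usage: identical directions give an error of 0 degrees.
print(angular_error_degrees(np.array([2.0, 2.0, 2.0]), np.array([1.0, 1.0, 1.0])))
```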

Although there are five parameters in MAGW, according to large-scale experiments \delta and \varepsilon can be fixed in most situations, e.g. \delta = 6 and \varepsilon = 3.5. The parameters \xi_0, \xi_1, \xi_2 can be set to \xi_0 = 0.7, \xi_1 = 0.2, \xi_2 = 0.1, with small adjustments as required.

Table 1  Mean angular error (degrees) for various color constancy methods on the data set.

Method                        Mean
Grey-World (= L1)             9.8
White-Patch (= L∞)            9.2
Minkowski-norm (= L6)         6.3
Gamut-mapping                 5.6
Color-by-Correlation          9.9
GCIE Version 3, 11 lights     4.9
Grey-edge, L6                 5.7
MAGW                          5.5

According to Table 1, only one method performs better than our MAGW algorithm, but that method is considerably more complex and therefore has a higher computational cost. In conclusion, MAGW is a useful alternative when both computational speed and performance are needed.

A comparison between GW and MAGW is shown in Fig. 2. The value on the Y-axis is the mean angular error of each scene under 11 different light sources. From Fig. 2, the angular errors of MAGW are much smaller than those of GW, and MAGW is also more stable. However, the angular errors of MAGW for scenes 7 and 22 are clearly larger than those of GW: the images of these two scenes contain many dark pixels, which take up nearly 70% of the image area, so MAGW does not perform well on them.

Fig. 2  The comparison between GW and MAGW.

Figure 3 illustrates the effect of the reliable coefficient. Obviously, image (B), adjusted without the reliable coefficient, has a slight red cast over the whole image; the reason is that the estimated illumination color is not accurate. With the adaptive supervision of the reliable coefficient, image (C) is much better than image (B).

Fig. 3  The effect of the reliable coefficient. (A) Original image. (B) Adjusted without RC. (C) Adjusted with RC.

Comparison experiments have also been carried out on the experimental data set; the results are shown in Table 2. The smaller \bar{H} is, the more obvious the effect of the reliable coefficient.

Table 2  The effect of the reliable coefficient in color constancy.

\bar{H}               Image Number    Effect
3.3 < \bar{H} < 3.5   10              Fair
3.0 < \bar{H} ≤ 3.3   9               Good
0 ≤ \bar{H} ≤ 3.0     11              Very Good

Generally, through the experiments and analysis, MAGW has at least three advantages. Firstly, the angular error of MAGW is lower than that of most existing algorithms, and the computational cost of MAGW is much lower than that of some of them. Secondly, MAGW is more stable than some other algorithms. Lastly, and most importantly, our algorithm can adaptively adjust the coefficients that map the image to the image under the canonical illumination, which is not taken into account by other algorithms.

5. Conclusion

In this paper, we propose a Multi-scale Adaptive Grey World (MAGW) algorithm. Multi-scale analysis and a reliability function are introduced into the MAGW algorithm. The algorithm is tested on a large data set, and the results show that it is more accurate and adaptive than the original GW algorithm and most other algorithms. It also has low computational costs.

Acknowledgments

This work is supported by the Ministry of Information and Communication (MIC) under the IT Foreign Specialist Inviting Program (ITFSIP) supervised by IIFA, by ITRC supervised by IITA, by the International Cooperative Research Program of the Ministry of Science and Technology and KOTEF, and by the 2nd stage BK21, Korea.

References

[1] K. Barnard, V. Cardei, and B.V. Funt, "A comparison of computational color constancy algorithms—Part 1: Methodology and experiments with synthesized data," IEEE Trans. Image Process., vol.11, no.9, pp.972–983, Sept. 2002.
[2] J. van de Weijer and T. Gevers, "Color constancy based on the grey-edge hypothesis," IEEE Conf. on Image Processing, pp.722–725, Sept. 2005.
[3] D. Forsyth, "A novel algorithm for color constancy," Int. J. Comput. Vis., vol.5, pp.5–36, 1990.
[4] G. Finlayson, S. Hordley, and P. Hubel, "Color by correlation: A simple, unifying framework for color constancy," IEEE Trans. Pattern Anal. Mach. Intell., vol.23, no.11, pp.1209–1221, Nov. 2001.
[5] B.V. Funt, V. Cardei, and K. Barnard, "Learning color constancy," Proc. IS&T/SID Fourth Color Imaging Conference: Color Science, Systems and Applications, pp.58–60, 1996.
[6] P. Bao and X. Zhang, "Image retrieval based on multi-scale edge model," IEEE Conf. on Multimedia and Expo, vol.2, pp.417–420, 2006.
[7] G.D. Finlayson and E. Trezzi, "Shades of gray and colour constancy," The Twelfth Color Imaging Conference, IS&T – The Society for Imaging Science and Technology, pp.37–41, 2004.
