FAST SEPARATION OF REFLECTION COMPONENTS USING A SPECULARITY-INVARIANT IMAGE REPRESENTATION

Kuk-Jin Yoon, Yoojin Choi, and In So Kweon
Robotics and Computer Vision Lab., Dept. of Electrical Engineering and Computer Science, KAIST, Korea

ABSTRACT

In this paper, we propose a fast method for separating reflection components using a single color image. We first propose a specular-free two-band image, a specularity-invariant color image representation. Reflection component separation is then achieved by comparing local ratios at each pixel and making those ratios equal in an iterative framework. The proposed method is very fast and produces reasonable results for textured indoor and outdoor images.

Index Terms: Image color analysis, image representations, reflection

1. INTRODUCTION

Most computer vision methods assume that scene surfaces are perfectly Lambertian and consider only diffuse reflection. However, specular reflection from non-Lambertian surfaces is common and causes severe errors. Separating reflection components therefore makes computer vision methods more robust to specular reflection.

Many methods have been proposed to separate reflection components. Polarization-based methods [7, 12] use polarizing filters to separate reflection components and show accurate results even for highly textured images. However, these methods are impractical because they need additional filters to obtain polarization images, and it is hard to find the maximum and minimum degrees of polarization of the same scene. Multiple-image methods [3, 4, 8] use many images of the same scene to separate reflection components. Although these methods show reasonable results, they are also impractical because obtaining multiple images of the same scene is difficult in many cases. For that reason, other methods [1, 2, 9] try to separate reflection components using a single color image. Although these methods can be used practically, they require color segmentation, which is hard for highly textured images. Recently, Tan et al. [10, 11] proposed methods that do not require precise color segmentation; however, these methods are time-consuming.

* This research was supported by the Korean Ministry of Science and Technology through the National Research Laboratory Program (grant number M1-0302-00-0064).
In this paper, we propose a fast method for separating reflection components using a single color image without explicit color segmentation. We first propose a new specularity-invariant color image representation and then an iterative reflection component separation method. Throughout this work, we make some reasonable assumptions. First, the input image is assumed to be taken under a uniform illuminant color.¹ Second, all pixels in the input image are assumed to be chromatic and not saturated. In addition, we assume that each color region contains at least one diffuse pixel.

2. DICHROMATIC REFLECTION MODEL AND IMAGE FORMATION

Under the dichromatic reflection model there are two kinds of reflection: diffuse and specular. The dichromatic reflection model for dielectric materials, proposed by Shafer [9], expresses the spectral factor as a linear weighted sum of two reflectance functions. When an image is taken by a camera, image formation can be described as

[I_r(x), I_g(x), I_b(x)]^T = m_d(x)\,[\Lambda_r(x), \Lambda_g(x), \Lambda_b(x)]^T + m_s(x)\,[\Gamma_r(x), \Gamma_g(x), \Gamma_b(x)]^T    (1)

where \Lambda(x) = [\Lambda_r(x), \Lambda_g(x), \Lambda_b(x)]^T denotes the diffuse chromaticity at x and \Gamma(x) = [\Gamma_r(x), \Gamma_g(x), \Gamma_b(x)]^T the specular (illuminant) chromaticity at x. m_d(x) and m_s(x) are the diffuse and specular reflection coefficients, which depend on the scene geometry at x. When the input image is taken under white illumination, the color of specular reflection is pure white regardless of image position, and Eq. (1) simplifies to

[I_r(x), I_g(x), I_b(x)]^T = m_d(x)\,[\Lambda_r(x), \Lambda_g(x), \Lambda_b(x)]^T + m_s(x)\,[1/3, 1/3, 1/3]^T    (2)

¹ If the input image is taken under non-white illumination, we normalize it using an illuminant color estimated by existing color constancy methods.
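As an illustration of Eq. (2), the following minimal NumPy sketch (ours, not from the paper; the chromaticity and coefficient values are arbitrary) synthesizes pixel colors under white illumination:

```python
import numpy as np

# Minimal sketch of the image formation of Eq. (2) under white illumination.
# The chromaticity and coefficient values are arbitrary illustrative choices,
# not values from the paper.
Lambda_x = np.array([0.6, 0.3, 0.1])            # diffuse chromaticity at x (sums to 1)
Gamma_white = np.full(3, 1.0 / 3.0)             # specular chromaticity under white light

def pixel_color(m_d, m_s):
    """I(x) = m_d(x) * Lambda(x) + m_s(x) * [1/3, 1/3, 1/3]."""
    return m_d * Lambda_x + m_s * Gamma_white

print(pixel_color(m_d=0.8, m_s=0.0))            # purely diffuse pixel
print(pixel_color(m_d=0.8, m_s=0.6))            # same surface point inside a highlight
```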
[Fig. 1. Specularity-invariant value and ratio. The figure plots two pixel intensities I(x_1) and I(x_2) in the I_r–I_g plane, decomposed into diffuse components R_d(x_1), R_d(x_2) and specular components R_s(x_1), R_s(x_2); the specular direction is pure white (slope 1), and \hat{I}(x_1), \hat{I}(x_2) are the corresponding specularity-invariant values.]

3. SPECULARITY-INVARIANT COLOR IMAGE REPRESENTATION

To separate reflection components using a single color image, we first derive a specularity-invariant quantity for each pixel, i.e., a quantity that is independent of the specularity of the pixel, and then propose a specular-free two-band image, a new specularity-invariant color image representation, as in [5, 6, 10, 11]. The proposed specular-free two-band image is free from specular reflection and has the same geometric profile as the diffuse reflection component of the input image. In addition, it can be generated in real time.

3.1. Specularity-Invariant Value and Ratio

The idea behind the proposed specular-free two-band image is shown in Fig. 1.² Suppose that we have two adjacent pixels at x_1 and x_2 with the same diffuse color. Denoting the diffuse and specular reflection components of the two pixels as R_d(x_1), R_d(x_2), R_s(x_1), and R_s(x_2), the pixel intensities at x_1 and x_2 can be expressed as

I(x_1) = R_d(x_1) + R_s(x_1),  I(x_2) = R_d(x_2) + R_s(x_2)    (3)

Here, when I_g(x_1) ≥ I_r(x_1) and I_g(x_2) ≥ I_r(x_2) as shown in Fig. 1, we can compute \hat{I}(x_1) and \hat{I}(x_2) simply as

\hat{I}(x_1) = I_g(x_1) - I_r(x_1),  \hat{I}(x_2) = I_g(x_2) - I_r(x_2)    (4)

since the color of specular reflection is pure white. From Fig. 1, we can see that \hat{I} is independent of the specular reflection component and depends only on the diffuse reflection component; in other words, \hat{I} is specularity-invariant. In addition, the ratio between \hat{I} values depends only on the diffuse reflection components.

² A two-dimensional representation is given for visualization.
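This invariance can be checked numerically. The following sketch (our own toy example; the chromaticity and coefficients are arbitrary choices) builds two pixels with the same diffuse color but different specular coefficients and verifies that the ratio of \hat{I} values equals the ratio of the diffuse coefficients:

```python
import numpy as np

# Toy check (our own numbers) that I_hat = I_g - I_r is unaffected by the
# specular coefficient m_s and that the ratio of I_hat values equals the
# ratio of the diffuse coefficients m_d, as claimed in Section 3.1.
Lam = np.array([0.2, 0.5, 0.3])                 # shared diffuse chromaticity (I_g >= I_r)
white = np.full(3, 1.0 / 3.0)

def intensity(m_d, m_s):
    return m_d * Lam + m_s * white              # Eq. (2)

I1 = intensity(m_d=0.9, m_s=0.4)                # highlight pixel at x1
I2 = intensity(m_d=0.3, m_s=0.0)                # diffuse pixel at x2

I_hat1 = I1[1] - I1[0]                          # Eq. (4): I_g(x1) - I_r(x1)
I_hat2 = I2[1] - I2[0]
print(I_hat1 / I_hat2)                          # -> m_d(x1) / m_d(x2) = 3.0, independent of m_s
```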
3.2. Specular-Free Two-Band Image Generation

Let \tilde{I}(x) = min{I_r(x), I_g(x), I_b(x)} and \tilde{\Lambda}(x) = min{\Lambda_r(x), \Lambda_g(x), \Lambda_b(x)}. Then, the relationship between \tilde{I}(x) and \tilde{\Lambda}(x) follows directly from Eq. (2):

\tilde{I}(x) = m_d(x)\,\tilde{\Lambda}(x) + m_s(x)/3    (5)

Since \tilde{I}(x) can be computed simply, we can obtain the following values for each pixel:

[\hat{I}_r(x), \hat{I}_g(x), \hat{I}_b(x)]^T = [I_r(x) - \tilde{I}(x), I_g(x) - \tilde{I}(x), I_b(x) - \tilde{I}(x)]^T = m_d(x)\,[\Lambda_r(x) - \tilde{\Lambda}(x), \Lambda_g(x) - \tilde{\Lambda}(x), \Lambda_b(x) - \tilde{\Lambda}(x)]^T    (6)

As shown in Eq. (6), \hat{I}_r(x), \hat{I}_g(x), and \hat{I}_b(x) are independent of the specular reflection coefficient m_s(x). Therefore, a specularity-invariant color image representation can be obtained in real time simply by subtracting \tilde{I}(x) from all color bands as in Eq. (6). The resulting image is called a specular-free two-band image because one of \hat{I}_r(x), \hat{I}_g(x), and \hat{I}_b(x) is zero according to the definitions of \tilde{I}(x) and \tilde{\Lambda}(x). It is worth noting that a specular-free two-band image has the same geometric profile as the diffuse reflection component of the input image.

4. REFLECTION COMPONENTS SEPARATION

Based on the properties of a specular-free two-band image, reflection component separation can be achieved by comparing local ratios computed from the input image and from the specular-free two-band image, and by making those ratios equal.

4.1. Image Local-Ratios

For two adjacent pixels at x_1 and x_2 with the same diffuse color (i.e., \Lambda(x_1) = \Lambda(x_2) = \Lambda), the following value is independent of the specularities (i.e., specular reflection coefficients) of the two pixels:

r_d = \sum_{c\in\{r,g,b\}} \hat{I}_c(x_1) / \sum_{c\in\{r,g,b\}} \hat{I}_c(x_2) = m_d(x_1) / m_d(x_2)

On the other hand, we can also compute another ratio from the same two pixels in the input image:

r_{d+s} = \sum_{c\in\{r,g,b\}} I_c(x_1) / \sum_{c\in\{r,g,b\}} I_c(x_2) = (m_d(x_1) + m_s(x_1)) / (m_d(x_2) + m_s(x_2))    (7)

If the pixels at x_1 and x_2 are both diffuse, then r_d and r_{d+s} are the same because m_s(x_1) and m_s(x_2) are zero. Therefore, assuming that at least one diffuse pixel with m_s = 0 exists in each color region, we can generate a diffuse image and separate the reflection components by making r_{d+s} equal to r_d at every pixel. To this end, we propose an iterative framework in which the specular reflection coefficients are iteratively decreased until r_{d+s} = r_d.
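As a concrete illustration of the two steps described so far, the sketch below (our own, not the authors' code; it assumes a floating-point RGB image in [0, 1]) generates the specular-free two-band image of Eq. (6) and computes the local ratios r_d and r_{d+s} for a pair of pixel positions:

```python
import numpy as np

def specular_free_two_band(img):
    """Subtract the per-pixel minimum band from every band (Eq. (6)).
    img: H x W x 3 float array; the result has one zero channel at every
    pixel and no dependence on the specular coefficient m_s."""
    return img - img.min(axis=2, keepdims=True)

def local_ratios(img, sf_img, x1, x2):
    """Local ratios of Section 4.1 for two pixel positions that are assumed
    to share the same diffuse color: r_d from the specular-free image and
    r_{d+s} from the input image (Eq. (7))."""
    r_d = sf_img[x1].sum() / sf_img[x2].sum()
    r_ds = img[x1].sum() / img[x2].sum()
    return r_d, r_ds

# Toy usage on a synthetic 1 x 2 image (pixel values are arbitrary assumptions):
img = np.array([[[0.31, 0.58, 0.40],            # pixel x1 with some specularity
                 [0.06, 0.15, 0.09]]])          # diffuse pixel x2, same chromaticity
sf = specular_free_two_band(img)
print(local_ratios(img, sf, (0, 0), (0, 1)))    # r_d and r_{d+s} differ while x1 is specular
```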
4.2. Iterative Framework

Suppose that we have a diffuse image I^{dn} and the specular-free two-band image \hat{I} at the n-th iteration.³ For the pixels at x_1 and x_2 in the diffuse image, we can compute r^{(n)}_{d+s} as in (7):

r^{(n)}_{d+s} = (m_d(x_1) + m^{(n)}_s(x_1)) / (m_d(x_2) + m^{(n)}_s(x_2))    (8)

Here, m^{(n)}_s(x_1) and m^{(n)}_s(x_2) are the specular reflection coefficients in the diffuse image I^{dn}, satisfying 0 ≤ m^{(n)}_s(x_1) ≤ m^{(n-1)}_s(x_1) and 0 ≤ m^{(n)}_s(x_2) ≤ m^{(n-1)}_s(x_2). We update the diffuse image to make r^{(n)}_{d+s} equal to r_d by adaptively decreasing m^{(n)}_s(x_1) or m^{(n)}_s(x_2). When r^{(n)}_{d+s} is larger than r_d, we have

(m_d(x_1) + m^{(n)}_s(x_1)) / (m_d(x_2) + m^{(n)}_s(x_2)) > m_d(x_1) / m_d(x_2) = r_d    (9)

This means that

m_d(x_1) + m^{(n)}_s(x_1) > r_d \times m_d(x_2) + r_d \times m^{(n)}_s(x_2)    (10)

Since r_d \times m_d(x_2) = m_d(x_1), Eq. (10) can be rewritten as

m_d(x_1) + m^{(n)}_s(x_1) > m_d(x_1) + r_d \times m^{(n)}_s(x_2)    (11)

Finally, we obtain the inequality

m^{(n)}_s(x_1) > r_d \times m^{(n)}_s(x_2)    (12)

From (9) and (12), we can make r^{(n)}_{d+s} equal to r_d by decreasing m^{(n)}_s(x_1) as

m^{(n)}_s(x_1) \Leftarrow r_d \times m^{(n)}_s(x_2)    (13)

Because we do not know the exact values of m^{(n)}_s(x_1) and m^{(n)}_s(x_2), we cannot update m^{(n)}_s(x_1) directly. Fortunately, this can be accomplished as

[I^{dn}_r(x_1), I^{dn}_g(x_1), I^{dn}_b(x_1)]^T \Leftarrow [I^{dn}_r(x_1), I^{dn}_g(x_1), I^{dn}_b(x_1)]^T - m \times [1/3, 1/3, 1/3]^T    (14)

where

m = m^{(n)}_s(x_1) - r_d \times m^{(n)}_s(x_2)
  = \{m_d(x_1) + m^{(n)}_s(x_1)\} - \{r_d \times m_d(x_2) + r_d \times m^{(n)}_s(x_2)\}
  = \sum_{c\in\{r,g,b\}} I^{dn}_c(x_1) - r_d \times \sum_{c\in\{r,g,b\}} I^{dn}_c(x_2)    (15)

On the other hand, when r^{(n)}_{d+s} is smaller than r_d, we can update m^{(n)}_s(x_2) in the same manner:

m^{(n)}_s(x_2) \Leftarrow m^{(n)}_s(x_1) / r_d    (16)

This can be accomplished as

[I^{dn}_r(x_2), I^{dn}_g(x_2), I^{dn}_b(x_2)]^T \Leftarrow [I^{dn}_r(x_2), I^{dn}_g(x_2), I^{dn}_b(x_2)]^T - m \times [1/3, 1/3, 1/3]^T    (17)

where

m = \sum_{c\in\{r,g,b\}} I^{dn}_c(x_2) - \sum_{c\in\{r,g,b\}} I^{dn}_c(x_1) / r_d    (18)

Since we assume that at least one diffuse pixel with m_s = 0 exists in each color region, we can remove the specular reflection component from the input image and separate the reflection components within this iterative framework. Here, boundary pixels in the specular-free two-band image are detected first by chromaticity thresholding and are excluded from the iterative framework; we use a simple pixel-wise thresholding method to detect color boundary pixels, as in [10].

³ Note that \hat{I} and the r_d values do not change throughout the proposed method, while the diffuse image is updated at each iteration.
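The per-pair update can be sketched as follows (our own simplified illustration, not the authors' implementation: the pair scheduling, the fixed iteration count, and the omission of the color-boundary exclusion are our assumptions):

```python
import numpy as np

def update_pixel_pair(I_d, sf, x1, x2, eps=1e-6):
    """One update of the iterative framework for a pixel pair (x1, x2) that is
    assumed to share the same diffuse color. I_d is the current diffuse-image
    estimate (modified in place); sf is the fixed specular-free two-band image."""
    if sf[x2].sum() < eps:
        return
    r_d = sf[x1].sum() / sf[x2].sum()                 # ratio from Section 4.1
    r_ds = I_d[x1].sum() / max(I_d[x2].sum(), eps)    # Eq. (8)
    if r_ds > r_d + eps:                              # x1 carries more specularity
        m = I_d[x1].sum() - r_d * I_d[x2].sum()       # Eq. (15)
        I_d[x1] -= m / 3.0                            # Eq. (14)
    elif r_ds < r_d - eps:                            # x2 carries more specularity
        m = I_d[x2].sum() - I_d[x1].sum() / r_d       # Eq. (18)
        I_d[x2] -= m / 3.0                            # Eq. (17)

def separate(I, n_iters=20):
    """Sweep horizontally adjacent pixel pairs for a fixed number of iterations.
    Boundary-pixel exclusion and the exact stopping test of the paper are omitted."""
    I = np.asarray(I, dtype=np.float64)
    sf = I - I.min(axis=2, keepdims=True)             # specular-free two-band image, Eq. (6)
    I_d = I.copy()
    H, W, _ = I.shape
    for _ in range(n_iters):
        for y in range(H):
            for x in range(W - 1):
                update_pixel_pair(I_d, sf, (y, x), (y, x + 1))
    return I_d, I - I_d                               # diffuse and specular components
```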
5. EXPERIMENTS
The proposed method was applied to synthetic and real images.⁴ Specular-free two-band images for textured images with specular highlights are shown in Fig. 2. Although the pixel colors change, the specularity of each pixel is successfully removed while the shading information is correctly preserved. The separation result for a synthetic image is shown in Fig. 3, and results for indoor and outdoor real images are shown in Fig. 4. In addition, Fig. 5 compares the result of the proposed method with the result of Tan's method [10]. Although there are some errors due to saturated pixels, the proposed method accurately and robustly separates reflection components for indoor and outdoor images even when the images are highly textured. In all experiments, the processing time of the proposed method is less than one second on an AMD 2.4 GHz machine for a 640×480 input image, whereas Tan's method takes a few minutes.

6. CONCLUSION

In this paper, we have proposed a fast method for separating specular and diffuse reflection components using a single color image. We first proposed the specular-free two-band image, a new specularity-invariant color image representation. Using the specular-free two-band image, we successfully separated reflection components by comparing local ratios at each pixel and making those ratios equal in an iterative framework. Experimental results show that the proposed method rapidly produces reasonable results for textured indoor and outdoor images.

⁴ We did not use HDR images in our experiments.
[Fig. 2. Results of specular-free two-band image generation: (a) input images; (b) specular-free two-band images.]
[Fig. 3. Result for a synthetic image: (a) input image; (b) true diffuse component; (c) true specular component; (d) separated diffuse component; (e) separated specular component.]

7. REFERENCES

[1] R. Bajcsy, S.W. Lee, and A. Leonardis, "Detection of Diffuse and Specular Interface Reflections and Inter-Reflections by Color Image Segmentation," Int'l J. Comp. Vis., vol. 17, no. 3, pp. 241–272, 1996.
[2] G.J. Klinker, S.A. Shafer, and T. Kanade, "A Physical Approach to Color Image Understanding," Int'l J. Comp. Vis., vol. 4, no. 1, pp. 7–38, 1990.
[Fig. 4. Results for indoor and outdoor real images: (a) input images; (b) diffuse components; (c) specular components.]
[3] Y. Li, S. Lin, H. Lu, S.B. Kang, and H.-Y. Shum, "Multibaseline Stereo in the Presence of Specular Reflections," in Proc. Int'l Conf. Pat. Recog., pp. 573–576, 2002.
[4] S. Lin, Y. Li, S.B. Kang, X. Tong, and H.-Y. Shum, "Diffuse-Specular Separation and Depth Recovery from Image Sequences," in Proc. European Conf. Comp. Vis., vol. 3, pp. 210–224, 2002.
[5] S. Mallick, T. Zickler, D. Kriegman, and P. Belhumeur, "Beyond Lambert: Reconstructing Specular Surfaces Using Color," in Proc. IEEE Conf. Comp. Vis. Patt. Recog., vol. 2, pp. 619–626, 2005.
[6] D. Miyazaki, R.T. Tan, K. Hara, and K. Ikeuchi, "Polarization-based Inverse Rendering from a Single View," in Proc. Int'l Conf. Comp. Vis., pp. 982–987, 2003.
[7] S.K. Nayar, X.S. Fang, and T. Boult, "Separation of Reflection Components using Color and Polarization," Int'l J. Comp. Vis., vol. 21, no. 3, pp. 163–186, 1996.
[8] Y. Sato and K. Ikeuchi, "Temporal-Color Space Analysis of Reflection," J. Opt. Soc. Am. A, vol. 11, no. 11, pp. 2990–3002, 1994.
[Fig. 5. Result comparison: (a) input image; (b) result of [10]; (c) our result.]

[9] S. Shafer, "Using Color to Separate Reflection Components," Color Res. Appl., vol. 10, pp. 210–218, 1985.
[10] R.T. Tan and K. Ikeuchi, "Separating Reflection Components of Textured Surfaces Using a Single Image," IEEE Trans. Pat. Anal. and Mach. Intel., vol. 27, no. 2, pp. 178–193, 2005.
[11] R.T. Tan and K. Ikeuchi, "Reflection Components Decomposition of Textured Surfaces using Linear Basis Functions," in Proc. IEEE Conf. Comp. Vis. Patt. Recog., vol. 1, pp. 125–131, 2005.
[12] L.B. Wolff and T.E. Boult, "Constraining Object Features using Polarization Reflectance Model," IEEE Trans. Pat. Anal. and Mach. Intel., vol. 13, no. 7, pp. 635–657, 1991.