OPTICS LETTERS / Vol. 34, No. 17 / September 1, 2009
Optimal illumination for discriminating objects with different spectra

Moon-Hyun Lee, Dong-Kyun Seo, Byung-Kuk Seo, and Jong-Il Park*

Department of Electronics and Computer Engineering, Hanyang University, 17 Haengdang-dong, Seongdong-ku, Seoul, Korea 133-791
*Corresponding author: [email protected]

Received May 18, 2009; revised July 23, 2009; accepted July 28, 2009; posted August 3, 2009 (Doc. ID 111430); published August 28, 2009

Color differences are determined by the illumination, the spectral reflectance of objects, and the spectral sensitivity of the imaging sensor. We explore the illumination conditions that best separate one object from another. Given two objects with distinct spectra, we derive the optimal illumination spectrum that maximizes their color distance when captured with a plain RGB camera. In practice, since creating an arbitrary illumination spectrum is unrealistic, it is crucial to compose the most appropriate illumination from available lighting sources; we therefore also derive the optimal linear combination of a given set of illumination sources. Finally, we verify the effectiveness of the methods through experiments. © 2009 Optical Society of America

OCIS codes: 110.2945, 170.2945.
Color is the most important factor in low-level image processing tasks such as classification and segmentation in computer vision. Three-channel color models are widely used in most computer vision systems, but they have inherent limitations because they are crude approximations of a continuous spectrum. Many researchers have therefore turned to spectrum-based imaging as a way to overcome the limitations of channel color models. Much research has been done on multispectral or hyperspectral imaging that computationally estimates the spectral reflectance of an object and synthesizes images of the object under a variety of illumination conditions. Moreover, because illumination strongly affects the color of an object, it has been widely exploited for spectrum-based object recognition, and several approaches have been proposed for distinguishing the colors of objects by varying the illumination conditions [1–4]. In particular, spectrum-based imaging has been steadily developed for the diagnosis of disease in medical imaging. For instance, illumination in the IR band has been used to distinguish irregular tissues from normal tissues, and a method that enhances the sensitivity to the spectral reflectance of scattering features in the superficial tissue layers has been proposed [5]. Spectral reflectance for object distinction has also been a focus of research [6], but that work has concentrated mainly on estimating and analyzing differences in spectral reflectance, with little attention to illumination.

The light emitted from an illumination source is reflected from a scene element and then enters an imaging device. The color of a scene element is therefore obtained by integrating the product of the spectral power distribution of the illumination source, the spectral reflectance of the object, and the spectral sensitivity of the imaging device. Let c_k(λ), s(λ), and l(λ) represent the spectral response of the camera in channel k, the spectral reflectance of a scene point, and the spectral power distribution of the illumination, respectively. Then the value I_k measured at a pixel is given by
    I_k = ∫ c_k(λ) s(λ) l(λ) dλ.    (1)
Equation (1) can be rewritten in vector-matrix form as

    I_k = c_k^T S l,    (2)

where the spectra are sampled at discrete wavelengths, c_k and l are the corresponding camera-sensitivity and illumination vectors, and S is a diagonal matrix whose diagonal entries are the sampled spectral reflectance s. Let two objects with distinct spectra S_a and S_b be given. The color distance d between them can be obtained as

    d² = Σ_k [c_k^T (S_a − S_b) l]².    (3)
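As a concrete illustration of Eqs. (2) and (3), the following sketch (Python with NumPy; not part of the original Letter) evaluates the per-channel values and the squared color distance for spectra sampled at discrete wavelengths. All spectra here are random placeholders rather than measured data, and the 10 nm sampling grid is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 31)   # nm, assumed 10 nm sampling
C = rng.random((3, 31))                   # rows: camera sensitivities c_k (placeholders)
s_a = rng.random(31)                      # spectral reflectance of object a (placeholder)
s_b = rng.random(31)                      # spectral reflectance of object b (placeholder)
l = np.ones(31)
l /= np.linalg.norm(l)                    # unit-norm illumination, ||l|| = 1

S_a, S_b = np.diag(s_a), np.diag(s_b)     # diagonal reflectance matrices

I_a = C @ S_a @ l                         # Eq. (2): channel values for object a
I_b = C @ S_b @ l                         # Eq. (2): channel values for object b
d2 = np.sum((C @ (S_a - S_b) @ l) ** 2)   # Eq. (3): squared color distance
print(I_a, I_b, d2)
```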
To derive the optimal illumination spectrum that maximizes this distance in color space, we formulate the following maximization problem:

    l_opt = arg max_l Σ_k [c_k^T (S_a − S_b) l]²,  subject to ‖l‖ = 1,    (4)

where the constraint ‖l‖ = 1 is enforced for fair comparison. Denoting q_k^T = c_k^T (S_a − S_b), we can rewrite Eq. (4) as

    l_opt = arg max_l  l^T [Σ_k q_k q_k^T] l.    (5)
Because Eq. (5) is a quadratic form, the maximum of d² is the largest eigenvalue of Σ_k q_k q_k^T, and the corresponding eigenvector is l_opt, according to the constrained extremum theorem (see Appendix A). Since any spectral illumination function must be nonnegative, the condition l_i ≥ 0 for all i must also be enforced. Therefore, we obtain a solution that satisfies both ‖l‖ = 1 and l_i ≥ 0 for all i by setting the negative elements of the eigenvector to zero and renormalizing the vector,

    l_opt = ẽ_1,    (6)

where ẽ_1 is the nonnegative, renormalized eigenvector of Σ_k q_k q_k^T corresponding to the largest eigenvalue.
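A minimal sketch of the eigenvector solution of Eqs. (4)–(6) is given below, reusing the sampled C, S_a, and S_b arrays from the earlier sketch; the sign flip and the clip-and-renormalize step reflect our reading of the procedure and are not the authors' implementation.

```python
import numpy as np

def optimal_illumination(C, S_a, S_b):
    """Return the unit-norm, nonnegative illumination spectrum of Eq. (6)."""
    Q = C @ (S_a - S_b)               # rows are q_k^T = c_k^T (S_a - S_b)
    M = Q.T @ Q                       # sum_k q_k q_k^T (symmetric, positive semidefinite)
    _, eigvecs = np.linalg.eigh(M)    # eigenvalues returned in ascending order
    e1 = eigvecs[:, -1]               # eigenvector of the largest eigenvalue
    if e1.sum() < 0:                  # eigenvector sign is arbitrary; pick the positive lobe
        e1 = -e1
    e1 = np.clip(e1, 0.0, None)       # enforce l_i >= 0
    return e1 / np.linalg.norm(e1)    # renormalize so that ||l|| = 1

# Example: l_opt = optimal_illumination(C, np.diag(s_a), np.diag(s_b))
```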
Fig. 1. (Color online) Left: our endoscope system using controllable illumination. It consists of an endoscope, an RGB camera, a lens array, LEDs, and an LED controller. Right: spectra of the five types of LEDs (solid lines) and the spectral responses of the three color channels of the PointGrey Dragonfly Express camera (dashed lines) used in our system.
Since creating an arbitrary illumination spectrum is unrealistic, it is crucial to compose the most appropriate illumination from the lighting sources at hand. If controllable light sources with spectra l_1, l_2, ..., l_N are given, the problem of finding the best linear combination of these sources can be formulated as

    x_opt = arg max_x Σ_k [c_k^T (S_a − S_b) L x]²,    (7)
where L is a matrix whose columns are the source spectra l_1, l_2, ..., l_N and x is a weight vector for the light sources. Here the conditions ‖x‖ = 1 and x_i ≥ 0 for all i must be satisfied, because the illumination power is limited and nonnegative. Then we have

    x_opt = arg max_x  x^T [Σ_k w_k w_k^T] x,    (8)
where w_k^T = c_k^T (S_a − S_b) L. The optimal weight vector x_opt can be derived by the same process as described above. Finally, we obtain the optimal linear combination of light sources,

    l_composed = L x_opt.    (9)
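To make the composition step concrete, the sketch below implements Eqs. (7)–(9) under the same assumptions as the earlier sketches: L is a matrix whose columns are the measured spectra of the available sources (e.g., the five LEDs of Fig. 1), and the function name is ours, chosen for illustration.

```python
import numpy as np

def optimal_led_combination(C, S_a, S_b, L):
    """Return the nonnegative unit weight vector x_opt and the composed spectrum L x_opt."""
    W = C @ (S_a - S_b) @ L           # rows are w_k^T = c_k^T (S_a - S_b) L
    M = W.T @ W                       # sum_k w_k w_k^T (N x N)
    _, eigvecs = np.linalg.eigh(M)
    x = eigvecs[:, -1]                # eigenvector of the largest eigenvalue
    if x.sum() < 0:
        x = -x
    x = np.clip(x, 0.0, None)         # weights must be nonnegative
    x_opt = x / np.linalg.norm(x)     # ||x|| = 1
    return x_opt, L @ x_opt           # Eq. (9): composed illumination spectrum
```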
The proposed method is well suited to endoscopic imaging, which aims at the diagnosis of disease by distinguishing one tissue from another. Moreover, the illumination of an endoscope is easily controllable, since it is completely isolated from exterior light.

In our experiments, we designed a new endoscope system to verify the effectiveness of our method. We used LEDs as illumination sources because they are inexpensive, small, and controllable. The illumination consists of power LEDs (Z-Power LED P4) of five different colors (white, red, amber, green, and blue), whose spectra were measured with a spectroradiometer (Luchem SPR-4001). The light from the LEDs is concentrated onto an optical fiber through a lens array, and the optical fiber delivers the light to the tip of the endoscope. An RGB camera (PointGrey Dragonfly Express) was used for imaging, and an LED controller was used to regulate the power ratio of the LEDs.

If the spectral reflectance of the target objects is known, it is easy to compute an optimal illumination. Otherwise, the spectral reflectance must first be measured or estimated, for example with a spectrophotometer. However, such devices are difficult to apply to endoscopic imaging, where the working space (i.e., the measuring environment) is limited. We therefore circumvented this problem by estimating the spectral reflectance with the multispectral imaging method proposed by Park et al. [2], which can be implemented with the same hardware setup.

Figure 1 shows our endoscope system, and a multispectral image of part of a throat surface captured with the system is shown in Fig. 2. The image consists of two regions, internal tissue and blood vessels, and their respective spectral reflectances were estimated as shown in Fig. 2. With the estimated spectral reflectance and the camera spectral sensitivities of Fig. 1, we obtained the optimal illumination spectrum, shown at the top left of Fig. 3. When we relit the multispectral image using the computed optimal illumination spectrum, the blood vessels were more clearly distinguished than under a xenon lamp, a halogen lamp, or white LEDs, which are commonly used for endoscopy [Fig. 3 (clockwise)].
Fig. 2. (Color online) Multispectral image using the endoscope system shown in Fig. 1. The image is of a human throat. The spectral reflectance of the tissue is shown on the left, and the blood vessel reflectance spectrum is shown on the right.
Fig. 3. (Color online) Comparison of relighting results using the derived optimal illumination, a xenon lamp, white LEDs, and a halogen lamp (clockwise). The reillumination spectra are normalized for fair comparison.
Fig. 4. (Color online) Left: multispectral relighting result using the optimized LED spectrum. Right: optimized LED spectra. The dashed curve is the weighted sum of the LED spectra.
When we calculated the color distances between the tissue and the vessel regions, the optimal illumination consistently resulted in greater distances. For example, the distances between the O-marked pixel (blood vessel) and the X-marked pixel (tissue) were 132.33, 113.64, 126.27, and 100.27 for the optimal illumination, a xenon lamp, a halogen lamp, and white LEDs, respectively. These results demonstrate the validity of the proposed method.

LEDs are widely used as an efficient light source. We used a variety of LEDs to show that the proposed optimization with a given set of illumination sources is efficient and practical. To distinguish blood vessels from internal tissue, we computed the optimal linear combination of the LEDs using Eq. (9). The optimized LED spectrum and the multispectral image relit with it are shown in Fig. 4. The tissue and blood vessels are well distinguished in both cases. Careful examination reveals that the optimal illumination gives slightly better results than the optimized LED combination, because the LED combination is constrained by the shapes of the available LED spectra.

We proposed spectrum-based optimal illumination to efficiently discriminate objects with distinct spectral absorption and scattering characteristics. In our approach, we derived the optimal illumination spectrum that maximizes the color distance when captured with a plain RGB camera, and we also derived the optimal linear combination of practically available illumination sources. To verify the effectiveness of our approach, we designed an endoscope system capable of controlling its illumination and applied the proposed method to endoscopic imaging. The experimental results show that the color of the target object was clearly discriminated from that of the other objects.

One big advantage of the proposed approach is its flexibility: we can easily apply target-specific illumination.
If we combine the proposed technique with multispectral imaging, an even more flexible detection methodology would be possible. A limitation of the method in this Letter is that it maximizes the overall color distance, whereas human vision is more sensitive to chromaticity than to intensity when discriminating objects; however, optimizing the light spectrum for chromaticity should be possible using nonlinear numerical methods. Currently, we are trying to resolve these problems by using virtual reillumination and numerical optimization methods.

Appendix A

Constrained extremum theorem [7]: Let A be a symmetric n × n matrix whose eigenvalues, in order of decreasing size, are λ_1 ≥ λ_2 ≥ ... ≥ λ_n. Then:
(a) There is a maximum value and a minimum value for x^T A x on the unit sphere ‖x‖ = 1.
(b) The maximum value is λ_1 (the largest eigenvalue), and this maximum occurs when x is a unit eigenvector of A corresponding to λ_1.
(c) The minimum value is λ_n (the smallest eigenvalue), and this minimum occurs when x is a unit eigenvector of A corresponding to λ_n.

This work was supported by the IT R&D program of the Ministry of Knowledge Economy / Ministry of Culture, Sports and Tourism / Institute for Information Technology Advancement [2008-F-031-01, Development of Computational Photography Technologies for Image and Video Contents].

References
1. C. Chi, H. Yoo, and M. Ben-Ezra, Int. J. Comput. Vis., doi:10.1007/s11263-008-0176-y (2008).
2. J. Park, M. Lee, M. D. Grossberg, and S. K. Nayar, in Proceedings of the IEEE 11th International Conference on Computer Vision (IEEE, 2007), pp. 1–8.
3. D. W. Fletcher-Holmes and A. R. Harvey, J. Opt. A 7, S298 (2005).
4. M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, Proc. SPIE 6062, 129 (2006).
5. K. Gono, M. Igarashi, T. Obi, M. Yamaguchi, and N. Ohyama, Opt. Lett. 29, 971 (2004).
6. M. Sambongi, M. Igarashi, T. Obi, M. Yamaguchi, N. Ohyama, M. Kobayashi, Y. Sano, S. Yoshida, and K. Gono, Opt. Rev. 9, 238 (2002).
7. H. Anton and R. C. Busby, Contemporary Linear Algebra (Wiley, 2003).