Recognition of polychromatic three-dimensional objects

José J. Esteve-Taboada, Nicolas Palmer, Jean-Christophe Giannessini, Javier García, and Carlos Ferreira

We propose to use optical multichannel correlation in various chromatic systems to obtain a setup for recognition of polychromatic three-dimensional (3-D) objects based on Fourier-transform profilometry. Because the red–green–blue color components are not able to split the luminance information of objects into a separate component, when the 3-D objects are brighter than the reference objects the correlation result gives false alarms. We demonstrate that it is possible to use different color spaces that can split luminance from chromatic information to yield adequate recognition of polychromatic 3-D objects. We show experimental results that prove the utility of the proposed method. © 2004 Optical Society of America OCIS codes: 100.4550, 100.6890, 330.1690.

1. Introduction

Recognition of three-dimensional (3-D) objects is an increasingly important issue in the field of optical pattern recognition. However, most of the existing mechanisms for pattern recognition that use optical setups, such as the VanderLugt optical correlator1 and the joint transform correlator2 (JTC), have been devoted to the recognition of bidimensional (2-D) images. As there are applications in which the necessary information is contained not just in one 2-D projection of the target but in all the target's 3-D shape, a full 3-D treatment will be required. Much research has been devoted to the task of optically recognizing 3-D objects by use of range images,3–5 of several cameras in an optoelectronic setup,6,7 or even of digital holography as a method for recording 3-D information.8–10 Recently, a real-time optical technique for the recognition of 3-D objects was proposed.11 This method is based on using Fourier-transform profilometry12 (FTP) to introduce the 3-D information on the object into the system. The FTP technique relies on projecting a grating onto the surface of a 3-D object and capturing the resultant 2-D image, which is a deformed fringe pattern that carries all the 3-D information on the object. As was demonstrated in Ref. 11, the analysis of such patterns is the basis of the method for recognizing 3-D objects.

The authors are with the Departament d'Òptica, Facultat de Física, Universitat de València, C/Doctor Moliner, 50, 46100 Burjassot, Spain. C. Ferreira's e-mail address is carlos.ferreira@uv.es. Received 16 April 2003; revised manuscript received 23 July 2003; accepted 4 August 2003. 0003-6935/04/020433-09$15.00/0 © 2004 Optical Society of America

The discrimination capabilities of any object-recognition technique, and in particular of the 3-D-object recognition method based on FTP, may be improved by addition of object color information in the detection process. Several techniques for obtaining polychromatic 2-D-object recognition have been proposed. These techniques are based on trichromatic decomposition,13–15 matched filtering,16–18 optical correlation,19 projection preprocessing,20 or nonlinear morphological correlation.21 The most common way to introduce the object color information into a recognition system is by capturing the input scene with a color camera, thus obtaining a trichromatic decomposition of the image in the red–green–blue (RGB) channels. Then, one can easily carry out polychromatic recognition by optical RGB multichannel correlation, processing each channel independently. From the correlation results obtained in each channel the targets can be recognized by both shape and color information. However, there are some cases in which the use of the RGB chromatic system is not appropriate for obtaining effective polychromatic object recognition. In such cases, other well-known color specification systems, such as the 1976 CIE L*a*b* (CIELAB; Ref. 22) and achromatic–tritan–deutan (ATD) color spaces,19 can be used to improve the correlation results obtained in the conventional RGB system. In this paper we propose a method for obtaining effective recognition of polychromatic 3-D objects.

10 January 2004 / Vol. 43, No. 2 / APPLIED OPTICS


The method relies on using the FTP-based technique for obtaining 3-D-object recognition combined with decomposition of the 2-D deformed fringe patterns (which carry all the 3-D information on the object) in different chromatic systems. In Section 2 we review the main aspects of the 3-D-object recognition method introduced in Ref. 11 that are relevant for our purposes. In Section 3 we introduce four scenes that we use to demonstrate the abilities of the new techniques. In Section 4 we show the correlation results obtained when RGB channels are used as the color-definition system. In Sections 5 and 6 the ATD and the CIELAB color spaces, respectively, are used to improve the 3-D recognition results obtained when decomposition in RGB channels is used. Finally, in Section 7 the main conclusions are outlined.

2. Recognition by Fourier-Transform Profilometry of a Three-Dimensional Object

The FTP technique12 is used for obtaining volume information on a 3-D object. It is based on projecting a grating onto the 3-D object's surface and capturing the resultant 2-D image with a camera. If the axes of the projector and the camera are not coincident, we obtain a distorted fringe pattern that codifies all the 3-D information of the object. In our experiment we employ parallel-axis geometry, in which the optical axes of the projector and of the camera lie in the same plane and are parallel. As is explained in Ref. 11, for a general 3-D object with various values of h(x, y) the distorted grating pattern can be described by the following expression:

g(x, y) = r(x, y) Σ_{n=−∞}^{+∞} A_n exp{i[2πnf0x + nφ(x, y)]},   (1)

where f0 is the fundamental frequency of the observed grating image and r(x, y) is the reflectivity distribution on the object's surface [r(x, y) is zero outside the object]. Function φ(x, y) contains all the information about the 3-D shape because the connection between φ(x, y) and the height of the object h(x, y) can be written as

φ(x, y) = −2πf0 d h(x, y) / [L − h(x, y)],   (2)

where d is the distance between the projector and the camera and L is the distance between the camera and the object. In particular, if L >> h(x, y) it is clear that the phase is just proportional to the height of the object. One can obtain this phase function, which contains all the 3-D information on the object, digitally (as is explained in Ref. 11) by selecting only the first order of the 2-D Fourier transform of the distorted grating pattern and performing an inverse 2-D Fourier transform of the centered result. This allows one to obtain a complex function whose phase is just function φ(x, y), whereas the amplitude is directly proportional to the reflectivity of the object. Therefore we are able to encode the 3-D object in a complex image, which is called a phase-encoded height function (PEHF). Then one can obtain 3-D-object recognition by encoding the 3-D input objects into PEHFs and correlating them. As is shown in Ref. 11, this idea can be experimentally implemented with a modified JTC. The outline of the method is as follows: The joint input plane is prepared with the scene to be analyzed and the target. Both are obtained by projecting fringes onto the 3-D objects and grabbing the result as 2-D images. After performing a Fourier transform, in the first diffraction order we have the joint spectrum of the PEHFs that correspond to the 3-D objects in the input plane. Taking the intensity of this first order and retransforming, we obtain the correlation between the PEHFs that correspond to the input objects. This permits 3-D-object recognition by correlation of PEHFs. As is shown in Ref. 23, the method can also be implemented by use of a classic convergent correlator24 as the optical system that permits the correlation between the corresponding PEHFs to be obtained.

3. Scenes Used in the Experiments

In practice, real 3-D objects are polychromatic. As both the 3-D shape and the characteristic colors of 3-D objects are essential features for purposes of pattern recognition, the aim of this study is to extend the detection capabilities of the 3-D recognition method to recognizing polychromatic objects. The objective is, therefore, to find a method with which to recognize a specific 3-D object (the reference target) in a scene composed of objects with the same 3-D shape but with different colors. We use the method for detecting 3-D targets that was explained in Ref. 11, which is based on the use of a modified JTC. To test the discrimination abilities of the various techniques we use four different scenes. Each of them can be divided into two parts. The upper part is composed of three different 3-D objects, the first two (from left to right) with the same 3-D shape but with different colors and a third that has a completely different 3-D shape. The lower part is composed of only one 3-D object, which is considered to be the reference target, that is, the 3-D object to be detected in the scene. Therefore the first two objects in the upper part of the scene are used to test the color-discrimination capabilities of the system, and the third one is used to test the 3-D shape discrimination. From each of these four 3-D scenes, a 2-D image containing the deformed fringe patterns obtained by the FTP method is captured by a three-CCD color camera (Sony Model DXC-950P). The four images are presented in Figs. 1–4; Figs. 1(a)–4(a) show gray-scale versions of the corresponding images, and Figs. 1(b)–4(b) locate the colors of the 3-D objects (the letters W, R, G, and B refer to the white, red, green, and blue colors, respectively). Just for reference, Table 1 lists the decomposition of each of the colors of the 3-D objects in the RGB chromatic components.
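For readers reproducing the experiments without the optical setup, a deformed fringe pattern of the kind captured here can be simulated from a height map through Eqs. (1) and (2). This is only a sketch: the grating is truncated to a simple cosine (orders n = 0, ±1), and the geometry values d and L as well as the unit reflectivity are illustrative assumptions (numpy only):

```python
import numpy as np

def deformed_fringes(height, f0, d=100.0, L=1000.0):
    """Forward FTP model.  Projects a cosinusoidal grating of fundamental
    frequency f0 (cycles/pixel) onto a surface h(x, y): the phase
    deformation follows Eq. (2) and the captured intensity is a truncated
    form of Eq. (1) with A0 = 1, A(+/-1) = 1/2, and unit reflectivity."""
    ny, nx = height.shape
    x = np.arange(nx)[None, :]                            # pixel coordinates
    phi = -2.0 * np.pi * f0 * d * height / (L - height)   # Eq. (2)
    return 1.0 + np.cos(2.0 * np.pi * f0 * x + phi)       # Eq. (1), n = 0, ±1
```

A flat surface (h = 0) yields straight fringes; any relief shifts them locally, and for L >> h the shift is proportional to the height.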

Fig. 1. (a) Gray-scale version of the first input scene. (b) Description of the colors of the 3-D objects (here and in the figures that follow, the letters W, R, G, and B refer to the colors white, red, green, and blue).

4. Three-Dimensional Recognition in the RGB Chromatic System

As was stated above, a common way to introduce color information into a 3-D-object recognition setup is to capture the input scene with a color camera, thus obtaining a trichromatic decomposition of the input image in the RGB channels. In this case, the easiest color-recognition process consists in carrying out a multichannel operation, processing each RGB channel independently. Properly combining the correlation results obtained in all the channels allows for recognition of polychromatic 3-D objects.

Fig. 2. (a) Gray-scale version of the second input scene. (b) Description of the colors of the 3-D objects.

Fig. 3. (a) Gray-scale version of the third input scene. (b) Description of the colors of the 3-D objects.

Fig. 4. (a) Gray-scale version of the fourth input scene. (b) Description of the colors of the 3-D objects.

The whole experimental setup for obtaining 3-D-object recognition by use of a modified JTC is depicted in Fig. 5. We prepare the joint input scene with the 3-D objects to be analyzed in the upper part and the reference target in the lower part by projecting fringes onto the 3-D objects and grabbing the result as 2-D images (Figs. 1–4). After performing a Fourier transform with the modified JTC and taking the intensity of the first diffraction order (which contains the joint spectrum of the PEHFs that correspond to the 3-D objects), we retransform the result to obtain the correlation plane. This procedure is performed for each of the RGB chromatic channels of the input scene, and the resultant correlation planes are combined by multiplication to produce a final correlation plane for polychromatic 3-D objects. In Fig. 6 we show the experimental correlation planes obtained in the RGB channels when the input scene in Fig. 1 is considered. Figures 6(a), 6(b), and 6(c) are the correlation signals that correspond to channels R, G, and B, respectively, taken from the correlation planes at a horizontal line that crosses the 3-D objects in the input scene. We can see only two correlation peaks, which permit the detection of the two 3-D objects with the same shape as the reference target. The 3-D object with a completely different 3-D shape (but the same 2-D ground plan) is completely discarded. Figure 6(d) shows the multiplication of the three RGB correlation signals, in this case allowing for adequate polychromatic recognition. The second 3-D object, which has the same 3-D shape as the reference target but a different color distribution, can be discarded by use of an appropriate threshold operation in the final plane. However, in those cases in which the colors of the 3-D object to be discarded are brighter than those of the reference target, recognition of the polychromatic 3-D object by use of the RGB chromatic channels fails. This is what occurs in the scenes shown in Figs. 2–4.
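The per-channel PEHF correlation pipeline described above has a direct numerical analogue. The sketch below is not the optical setup itself: the spectral-window width, the combination by multiplication, and all function names are our own assumptions, and only numpy is assumed.

```python
import numpy as np

def pehf(fringe_image, f0):
    """Phase-encoded height function (PEHF): keep only the first
    diffraction order of the fringe spectrum, recenter it on the origin,
    and inverse transform.  The phase of the result is phi(x, y) of
    Eq. (2); the magnitude follows the object's reflectivity."""
    ny, nx = fringe_image.shape
    F = np.fft.fftshift(np.fft.fft2(fringe_image))
    carrier = int(round(f0 * nx))          # column offset of the first order
    cx = nx // 2 + carrier
    half = carrier // 2                    # crude bandwidth choice (assumption)
    first_order = np.zeros_like(F)
    first_order[:, cx - half:cx + half] = F[:, cx - half:cx + half]
    first_order = np.roll(first_order, -carrier, axis=1)   # recenter on DC
    return np.fft.ifft2(np.fft.ifftshift(first_order))

def correlate(pehf_scene, pehf_target):
    """Cross-correlation of two PEHFs through the Fourier domain (the
    digital counterpart of the modified-JTC correlation plane)."""
    S = np.fft.fft2(pehf_scene)
    T = np.fft.fft2(pehf_target)
    return np.abs(np.fft.ifft2(S * np.conj(T)))

def multichannel_plane(scene_rgb, target_rgb, f0):
    """Process each chromatic channel independently and combine the
    correlation planes by multiplication, as in Section 4."""
    planes = [correlate(pehf(scene_rgb[..., c], f0),
                        pehf(target_rgb[..., c], f0))
              for c in range(scene_rgb.shape[-1])]
    return np.prod(planes, axis=0)
```

A peak in the combined plane above a chosen fraction of the reference autocorrelation height then signals a match in both shape and color; the same three functions apply unchanged to the ATD and CIELAB channels of the following sections.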

For example, Fig. 7 gives the experimental correlation plane obtained in each RGB channel when the input scene in Fig. 2 is considered. As can be seen, in this case polychromatic recognition fails because the 3-D object to be rejected is brighter than the reference target (in each chromatic channel the correlation signal that corresponds to the object to be discarded is greater than the signal that corresponds to the reference object). This problem can be solved by use of different systems for the specification of color information. This is what we do in the following sections.

Table 1. Decomposition of the Colors of the 3-D Objects in the RGB Chromatic Components (arbitrary units)

Channel   White   Red    Green   Blue
R         0.36    0.74   0.23    0.20
G         0.34    0.13   0.64    0.34
B         0.30    0.13   0.13    0.46

Fig. 5. Whole experimental setup for recognition of 3-D objects by use of a modified JTC: SLM, spatial light modulator; L1, lens.

Fig. 6. (a), (b), (c) Experimental correlation signals obtained in channels R, G, and B, respectively, when the input scene in Fig. 1 is considered. (d) Result of multiplication of the RGB correlation signals.

Fig. 7. (a), (b), (c) Experimental correlation signals obtained in channels R, G, and B, respectively, when the input scene in Fig. 2 is considered. (d) Result of multiplication of the RGB correlation signals.

Fig. 8. (a), (b), (c) Experimental correlation signals obtained in channels A, T, and D, respectively, when the input scene in Fig. 2 is considered. (d) Result of multiplication of the ATD correlation signals.

5. Three-Dimensional Recognition in ATD Coordinates

The ATD spaces are more-or-less sophisticated models of human color vision. They define, from the RGB signals, a luminance channel and two opponent color channels. The ATD model consists of an achromatic channel, A, that may be regarded as the luminance channel, and of two opponent color channels: the T (tritan) channel, which corresponds to the opponent response red–green, and the D (deutan) channel, which corresponds to the opponent response yellow–blue. Although many ATD models have been proposed and tested in the past 30 years, two models, one by Boynton25 and the other by Guth et al.,26 have been used lately in optical pattern recognition.19 In what follows we consider the definition proposed by Guth et al. From the RGB responses we compute the ATD descriptors in Guth's model as follows:

(A)   ( 0.5967   0.3654   0      ) (R)
(T) = ( 0.9553  −1.2836   0      ) (G) .   (3)
(D)   (−0.0248   0        0.0483 ) (B)
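Because Eq. (3) acts independently on each pixel, the conversion is a single matrix product. A sketch assuming numpy and an (..., 3) RGB array; the matrix is transcribed from Eq. (3), and the placement of its zero entries follows our reading of it:

```python
import numpy as np

# Guth-model RGB -> ATD matrix as printed in Eq. (3).
ATD_MATRIX = np.array([
    [ 0.5967,  0.3654,  0.0   ],   # A: achromatic (luminance) channel
    [ 0.9553, -1.2836,  0.0   ],   # T: tritan, red-green opponency
    [-0.0248,  0.0,     0.0483],   # D: deutan, yellow-blue opponency
])

def rgb_to_atd(rgb):
    """Convert an (..., 3) RGB array to ATD channels via Eq. (3)."""
    return np.asarray(rgb, dtype=float) @ ATD_MATRIX.T
```

Applying this to each scene yields the A, T, and D images that feed the same multichannel correlation used for RGB.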

By applying Eq. (3) to the RGB color components that define the different scenes shown above, we can obtain the same scenes but defined in ATD space. We can then consider these scenes in the same experimental setup as that shown in Fig. 5 to obtain the corresponding correlation planes for the ATD space. The result for the scene shown in Fig. 2 is shown in Fig. 8. Figures 8(a), 8(b), and 8(c) are the correlation signals that correspond to channels A, T, and D, respectively, taken from the correlation planes at a horizontal line crossing the 3-D objects in the input scene, and Fig. 8(d) shows the result of multiplication of the three ATD correlation signals. As can be seen, the ATD space permits the polychromatic recognition of the 3-D reference object in this scene. The enhancement occurs because this color space is able to split the luminance information from the chromatic information. Table 2 summarizes the results obtained for the four input scenes depicted in Figs. 1–4. We can see that recognition succeeds even in those cases in which the colors of the 3-D object to be discarded are brighter than those of the reference target, as occurs in the scenes shown in Figs. 2–4. The brightness difference shows up in the correlation peak values of the luminance channel, in which the correlation peak that corresponds to the object to be rejected is greater than that of the reference target. However, the discrimination performed in the opponent color channels T and D permits polychromatic recognition when the three correlation channels are combined.

Table 2. Correlation Peak Values Obtained in the ATD Chromatic Channels for All the Input Scenes (Figs. 1–4) and Their Product for All Channels. In each cell the value that corresponds to the reference target is 1, and the value for the false object is listed.

Scene    A      T      D      Product
1        0.73   0.64   0.40   0.25
2        1.26   0.30   0.43   0.05
3        1.60   0.47   1.42   0.69
4        0.98   0.36   0.53   0.15

6. Three-Dimensional Recognition in CIELAB Coordinates

The CIELAB coordinates L*a*b* (as defined in the CIE 1976 standard) can be calculated by use of the formulas for nondark samples22:

L* = 116(Y/Y0)^(1/3) − 16,   (4)

a* = 500[(X/X0)^(1/3) − (Y/Y0)^(1/3)],   (5)

b* = 200[(Y/Y0)^(1/3) − (Z/Z0)^(1/3)],   (6)

where X, Y, and Z are the tristimulus values of the color and X0, Y0, and Z0 are the tristimulus values of the reference white. The CIELAB system is a cylindrical representation in which L* gives the central axis. The circular plane sections of the cylinder correspond to the chromatic coordinates (a*, b*). The reference white used in this study corresponds to a real white sample with RGB color components (255, 255, 255). Two useful magnitudes, chroma C* and hue h*, are defined from the a* and b* coordinates through a change to polar coordinates as follows:

C* = (a*² + b*²)^(1/2),   (7)

h* = arctan(b*/a*).   (8)

Both C* and h* values define the chromaticity of a stimulus, and together with the L* value they constitute the cylindrical coordinates of the CIELAB chromatic system. In Ref. 27 Corbalán et al. showed a method based on a linear transformation for obtaining the X, Y, and Z tristimulus values from the RGB values of a color sample registered by a color CCD camera. A linear transformation that takes into account the particular spectral radiant power distribution of the light source provides the best approach to obtaining a color-imaging system with color constancy.

Fig. 9. (a), (b), (c) Experimental correlation signals obtained in CIELAB channels L, C*, and h*, respectively, when the input scene in Fig. 2 is considered. (d) Result of multiplication of the L, C*, and h* correlation signals.

Table 3. Correlation Peak Values Obtained in the CIELAB Chromatic Channels for All the Input Scenes (Figs. 1–4) and Their Product for All Channels. In each cell the value that corresponds to the reference target is 1, and the value for the false object is listed.

Scene    L      C*     h*     Product
1        1.12   0.20   0.16   0.03
2        1.00   0.23   0.14   0.05
3        1.39   0.20   0.83   0.14
4        0.97   0.98   0.14   0.05

The transformation obtained by the authors of Ref. 27 to get the XYZ tristimulus values from the RGB components for incandescent illumination is the following:

(X)   ( 1.21   0.26   0.23) (R)
(Y) = ( 0.71   0.98  −0.02) (G) .   (9)
(Z)   (−0.01   0.03   1.70) (B)
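Equations (4)–(9) chain into a per-pixel RGB → XYZ → (L*, C*, h*) conversion. A sketch assuming numpy, nonnegative tristimulus values (the nondark samples for which Eqs. (4)–(6) hold), and the (255, 255, 255) reference white used above:

```python
import numpy as np

# RGB -> XYZ for incandescent illumination, Eq. (9), after Ref. 27.
RGB_TO_XYZ = np.array([
    [ 1.21, 0.26,  0.23],
    [ 0.71, 0.98, -0.02],
    [-0.01, 0.03,  1.70],
])

def rgb_to_lch(rgb, white_rgb=(255.0, 255.0, 255.0)):
    """Return (L*, C*, h*) for an (..., 3) RGB array via Eqs. (4)-(9).
    Assumes nondark samples whose X, Y, Z components are nonnegative."""
    xyz = np.asarray(rgb, dtype=float) @ RGB_TO_XYZ.T
    xyz0 = np.asarray(white_rgb, dtype=float) @ RGB_TO_XYZ.T  # reference white
    f = (xyz / xyz0) ** (1.0 / 3.0)        # cube roots of the ratios
    fx, fy, fz = np.moveaxis(f, -1, 0)
    L = 116.0 * fy - 16.0                  # Eq. (4), lightness
    a = 500.0 * (fx - fy)                  # Eq. (5)
    b = 200.0 * (fy - fz)                  # Eq. (6)
    C = np.hypot(a, b)                     # Eq. (7), chroma
    h = np.arctan2(b, a)                   # Eq. (8), hue angle (radians)
    return L, C, h
```

For the reference white this yields L* = 100 and zero chroma, so the chromatic channels C* and h* are, as intended, insensitive to brightness alone.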

Considering Eq. (9), we can obtain the XYZ values that correspond to the RGB color components of the input scenes. From these tristimulus values, and by use of Eqs. (4)–(8), it is easy to obtain the L*, C*, and h* CIELAB coordinates for each of the scenes. As happened with the ATD representation, CIELAB space splits the chromatic information from the luminance information. This permits polychromatic recognition even when the colors of the object to be rejected are brighter than the colors of the reference target. This can be seen, for instance, in Fig. 9, which shows the correlation signals obtained for the second input scene in the CIELAB color space. As was previously done for the other color spaces, Figs. 9(a), 9(b), and 9(c) are the correlation signals that correspond to channels L, C*, and h*, respectively, taken from the correlation planes at a horizontal line crossing the 3-D objects in the input scene, and Fig. 9(d) shows the multiplication of the three L, C*, and h* correlation signals. Table 3 lists the correlation peak values obtained in the CIELAB chromatic channels for the four input scenes. It can be seen that, for all the scenes, 3-D polychromatic recognition can be obtained in the correlation plane. Although in the luminance channel L we obtain false alarms owing to the similar brightness of the 3-D objects, channels C* and h*, which carry the chromatic information, allow us to discriminate among the colors of the reference target, permitting the combination of the three correlation results to give the proper correlation signal.

7. Conclusions

We have proposed the use of different chromatic systems to obtain a setup for recognition of polychromatic 3-D objects. One introduces the object color information into a recognition system by capturing the input scene with a color camera, thus obtaining a trichromatic decomposition of the image in the RGB channels. Then the polychromatic recognition is carried out by optical multichannel correlation, in which each chromatic channel is processed independently. However, because the RGB color components are not able to split the luminance information of the objects into a defined component, in those cases in which the 3-D objects are brighter than the reference objects the correlation result, which is proportional to the luminance of the objects, gives false alarms. Therefore, as has been demonstrated, it is possible to use different chromatic systems that can split the luminance information from the chromatic information to produce adequate recognition of polychromatic 3-D objects. The color representations that we used are the ATD and the CIELAB color spaces. It is likely that changes in illumination are associated with changes in chromaticity. However, this does not affect the choice of method, provided that the 3-D targets and the 3-D objects to be tested share the same illumination. Experimental results have shown the utility of the proposed method.

José J. Esteve-Taboada acknowledges a grant from the Conselleria de Cultura, Educació i Ciència (Generalitat Valenciana), Spain. This study was supported by the Spanish Ministerio de Ciencia y Tecnología and Fondo Europeo de Desarrollo Regional (project BFM2001-3004) and by the Direcció General de Recerca of the Generalitat of Catalunya (project XT01-0015).

References
1. A. VanderLugt, "Signal detection by complex spatial filtering," IEEE Trans. Inf. Theory IT-10, 139–145 (1964).
2. C. S. Weaver and J. W. Goodman, "A technique for optically convolving two functions," Appl. Opt. 5, 1248–1249 (1966).
3. E. Paquet, M. Rioux, and H. H. Arsenault, "Invariant pattern recognition for range images using the phase Fourier transform and a neural network," Opt. Eng. 34, 1178–1183 (1995).
4. E. Paquet, P. García-Martínez, and J. García, "Tridimensional invariant correlation based on phase-coded and sine-coded range images," J. Opt. 29, 35–39 (1998).
5. J. J. Esteve-Taboada and J. García, "Detection and orientation evaluation for three-dimensional objects," Opt. Commun. 217, 123–131 (2003).
6. J. Rosen, "Three-dimensional electro-optical correlation," J. Opt. Soc. Am. A 15, 430–436 (1998).
7. J. Rosen, "Three-dimensional joint transform correlator," Appl. Opt. 37, 7538–7544 (1998).
8. T. Poon and T. Kim, "Optical image recognition of three-dimensional objects," Appl. Opt. 38, 370–381 (1999).
9. T. Kim and T. Poon, "Extraction of 3-D location of matched 3-D object using power fringe-adjusted filtering and Wigner analysis," Opt. Eng. 38, 2176–2183 (1999).
10. B. Javidi and E. Tajahuerce, "Three-dimensional object recognition by use of digital holography," Opt. Lett. 25, 610–612 (2000).
11. J. J. Esteve-Taboada, D. Mas, and J. García, "Three-dimensional object recognition by Fourier transform profilometry," Appl. Opt. 38, 4760–4765 (1999).
12. M. Takeda and K. Mutoh, "Fourier transform profilometry for the automatic measurement of 3-D object shapes," Appl. Opt. 22, 3977–3982 (1983).
13. E. Badiqué, Y. Komiya, N. Ohyama, J. Tsujiuchy, and T. Honda, "Colour image correlation," Opt. Commun. 61, 181–186 (1987).
14. E. Badiqué, Y. Komiya, N. Ohyama, T. Honda, and J. Tsujiuchy, "Use of color image correlation in the retrieval of gastric topography by endoscopic stereopair matching," Appl. Opt. 27, 941–948 (1988).
15. M. Corbalán, M. S. Millán, and M. J. Yzuel, "Color pattern recognition with CIELAB coordinates," Opt. Eng. 41, 130–138 (2002).
16. F. T. S. Yu, "Color image recognition by spectral-spatial matched filtering," Opt. Eng. 23, 690–694 (1984).
17. F. T. S. Yu and B. Javidi, "Experiments on real-time polychromatic signal detection by matched spatial filtering," Opt. Commun. 56, 384–388 (1986).
18. M. S. Millán, J. Campos, C. Ferreira, and M. J. Yzuel, "Matched filter and phase only filter performance in colour image recognition," Opt. Commun. 73, 277–284 (1989).
19. M. S. Millán, M. Corbalán, J. Romero, and M. J. Yzuel, "Optical pattern recognition based on color vision models," Opt. Lett. 20, 1722–1724 (1995).
20. V. Kober and T. S. Choi, "Color optical pattern recognition based on projection preprocessing," in Algorithms, Devices, and Systems for Optical Information Processing, B. Javidi and D. Psaltis, eds., Proc. SPIE 3159, 144–152 (1999).
21. P. García, C. Ferreira, and J. García, "Color optical pattern recognition using nonlinear morphological correlation," in 18th Congress of the International Commission for Optics, A. J. Glass, J. W. Goodman, M. Chang, A. H. Guenther, and T. Asakura, eds., Proc. SPIE 3749, 204–206 (1999).
22. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae (Wiley, New York, 1982).
23. J. J. Esteve-Taboada, J. García, and C. Ferreira, "Optical recognition of three-dimensional objects with scale invariance using a classical convergent correlator," Opt. Eng. 41, 1324–1330 (2002).
24. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, Singapore, 1996).
25. R. M. Boynton, "A system of photometry and colorimetry based on cone excitations," Color Res. Appl. 11, 244–252 (1986).
26. S. L. Guth, R. W. Massof, and T. Benzschawel, "Vector model for normal and dichromatic vision," J. Opt. Soc. Am. 70, 197–212 (1980).
27. M. Corbalán, M. S. Millán, and M. J. Yzuel, "Color measurement in standard CIELAB coordinates using a 3CCD camera: correction for the influence of the light source," Opt. Eng. 39, 1470–1476 (2000).
