2012 Ninth Conference on Computer and Robot Vision

Combination of Color and Binary Pattern Codification for an Error Correcting M-array Technique

Udaya Wijenayake, Sung-In Choi and Soon-Yong Park
School of Computer Science & Engineering, Kyungpook National University, Daegu, South Korea
[email protected], [email protected], [email protected]

The pattern is designed so that code-words are assigned to sets of pixels, and thus there is a direct mapping from every code-word to the corresponding pixel coordinates in the pattern. There are different ways to represent a code-word in the pattern, such as gray levels, colors or geometrical representations, and there are many strategies to assign a code-word to a set of pixels. Structured light pattern projection techniques can therefore be classified into three categories according to the coding strategy of the projected pattern: time multiplexing, neighborhood codification and direct codification [1]. Time multiplexing methods project a sequence of simple patterns over time and generate the code-word by combining all the patterns. The main drawback of these methods is their weakness in capturing a dynamic scene, since they use multiple patterns. Direct codification techniques provide good spatial resolution because they define a code-word for every pixel, equal to its gray level or color, but their applicability is limited by their sensitivity to noise and light variation. Neighborhood techniques concentrate the entire coding scheme in a single pattern, so that the code-word of a certain point is obtained from the neighborhood of points around it. Having a single pattern makes this technique suitable for dynamic scene capturing. However, the decoding stage of neighborhood codification is more difficult, since the spatial neighborhood cannot always be recovered when uncertain shadows or occlusions occur. This technique therefore needs a good encoding method that also increases the robustness of the decoding algorithm. Our research is directed at finding such an enhanced encoding and decoding method based on spatial neighborhood theories.

Abstract—Much research has been conducted to find a perfect structured light coding system. Among the proposed methods, spatial neighborhood techniques, which use a single pattern, have become popular because they can be used for dynamic scene capturing. However, decoding the pattern when a few pattern symbols are lost remains a problem. As a solution to this problem, we introduce a new strategy that encodes two patterns into a single pattern image. In our experiments, we show that our decoding method can decode the pattern even when it has some lost symbols.

Keywords—Computer vision; structured light; M-array; spatial neighborhood; color coding; code-word

I. INTRODUCTION

With the improvement of computer capabilities and control theories, the application areas of computer vision and robotics have grown considerably in the past few decades; 3D object modeling, automatic vehicle driving, biometrics and automatic inspection are a few of them. One of the important problems in these applications is how to measure a 3D surface correctly. Several 3D measuring techniques have been introduced over the years, such as laser scanning, silhouette carving, voxel coloring, stereo vision and structured light. Laser scanning uses a single laser stripe and can achieve very accurate reconstructions, but it cannot reconstruct the scene at once and also needs bulky mechanical devices. Silhouette carving uses several images of the scene, but cannot reconstruct it exactly as the original. Stereo vision uses two or more cameras to image the scene from different views and measures 3D points using triangulation, but it suffers from the difficult problem of accurately finding corresponding points between the different views. To simplify this correspondence problem, structured light techniques have been introduced, which replace one camera of the stereo vision system with a light pattern projector.

A. M-array Based Pattern Coding

Some authors have invented spatial neighborhood codification systems based on non-formal coding [2] [3] [4] and De Bruijn sequences [5] [6] [7]. De Bruijn sequences have some limitations due to their 1D spatial coding: the baseline between the camera and the projector should be orthogonal, which reduces the flexibility of the system configuration. Apart from these techniques, some authors have adopted the theory of Perfect Maps to encode unique patterns [10]–[14].

II. STRUCTURED LIGHT CODING

Coded structured light systems are based on projecting one or more patterns onto a target scene and measuring the surface of the scene by imaging the illuminated scene with a single camera or a set of cameras.


The proposed pattern contains two patterns: a binary pattern with a (7 × 7) window property and a color pattern with a (3 × 3) window property. The task of assigning color primitives to the pattern is divided into two stages, binary encoding and color encoding.

A perfect map is a matrix of dimension (m × n) in which each element is taken from an alphabet P of p symbols and which has the window property, i.e., each different sub-matrix of dimension (x × y) appears exactly once. If the matrix contains all the possible sub-matrices except the one filled with 0's, it is called an M-array or pseudo random array [9] [8]. A perfect sub-map is an (m × n) matrix in which all the (x × y) sub-matrices are unique, but which does not necessarily contain all the possible (x × y) sub-matrices. The first work on pseudo random arrays was done by MacWilliams in 1976; his encoding method was based on arranging a larger pseudo random sequence into a matrix according to a defined pattern [9]. Griffin introduced another encoding method using horizontal and vertical pseudo random sequences [11]; this encoding scheme has the drawback of producing a pattern wider than it is tall. Another interesting encoding method was introduced by Morano in 1998, who used a brute-force algorithm to fill the matrix with random colors starting from the top left corner [10]. Albitar also used the same brute-force method, but used different shapes to represent the pattern symbols, since the pattern was used to illuminate colored scenes such as the human abdomen [13]. In 2008, Chen proposed a 3D imaging system that used Griffin's method with some modifications; he added a connectivity condition and eliminated the gaps between symbols to achieve dense reconstruction [14]. All of these methods encode only one pattern into a single image; hence, they have difficulty decoding the pattern when some pattern symbols are lost due to uncertainties in the scene. In this paper, we introduce a new color encoding method that can encode two M-arrays into a single pattern image. Further, we discuss a decoding method that uses this property to handle uncertain shadows and occlusions in the scene. Finally, we describe experiments conducted to analyze the performance of our encoding and decoding methods, along with their results.
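To make the window property concrete, the following sketch (our own illustration in Python, not code from any of the cited works; the function name is hypothetical) checks whether every (x × y) sub-matrix of a given matrix is unique, i.e., whether the matrix is at least a perfect sub-map.

```python
# Sketch: check the window property of a perfect sub-map.
# Illustrative only; not the implementation of the cited works.
from typing import List, Tuple

def has_window_property(m: List[List[int]], x: int, y: int) -> bool:
    """Return True if every (x, y) sub-matrix of m appears exactly once."""
    rows, cols = len(m), len(m[0])
    seen = set()
    for i in range(rows - x + 1):
        for j in range(cols - y + 1):
            window: Tuple[int, ...] = tuple(
                m[i + di][j + dj] for di in range(x) for dj in range(y)
            )
            if window in seen:        # the same sub-matrix occurred before
                return False
            seen.add(window)
    return True
```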

A. Binary Pattern Encoding

In the binary encoding stage, we assign only one color primitive to the pattern, in such a way that each (7 × 7) window becomes unique within the pattern. We use the background color and the assigned color primitive to form the code-word of a given point. If x_{ij} is an assigned point of the matrix M, where i is the row number and j is the column number, the code-word w_{ij} defining this location is the sequence

w_{ij} = \{ x_{i-3,j-3}, x_{i-3,j-2}, \ldots, x_{i,j}, \ldots, x_{i+3,j+2}, x_{i+3,j+3} \}   (1)

which consists of the color primitives of the (7 × 7) window centered at the given point. The matrix M contains (m − 6) × (n − 6) code-words in total. A lookup table is maintained to record all the code-words and their locations in the pattern.

In this binary encoding stage, two conditions are considered when assigning the color primitive to a point of the matrix M. The first condition is that there are no repeated code-words in the matrix. Equation (2) represents this condition, where W is the set of all used code-words:

W = \{ w_{ij} \mid w_{ij} \neq w_{kl} \text{ for } (i,j) \neq (k,l),\; 4 \le i,k \le (m-3),\; 4 \le j,l \le (n-3) \}   (2)

The second condition is that there are no eight-connected color primitives within the pattern. In the matrix M, x_{ij} can be assigned either the color primitive (c) or the background primitive (b). According to the second condition, if a selected point is a color primitive, then none of its eight neighbors (S) can be assigned the color primitive; they must remain background. Equation (3) represents this condition:

M = \{ x_{ij} \mid x_{ij} = c \Rightarrow x_{kl} \neq c \;\forall x_{kl} \in S,\; \text{otherwise } x_{ij} = b \}   (3)
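As an illustration of how such a lookup table might be built, the sketch below (our own, with assumed helper names) slides the (7 × 7) window over a finished matrix M and maps each code-word of equation (1) to its center location.

```python
# Sketch: build the lookup table that maps each (7 x 7) code-word to its
# pattern location, as described above (our illustration; names are ours).

def build_lookup(M):
    rows, cols = len(M), len(M[0])
    lookup = {}
    for i in range(3, rows - 3):          # centers whose 7 x 7 window fits
        for j in range(3, cols - 3):
            w = tuple(M[i + di][j + dj]
                      for di in range(-3, 4) for dj in range(-3, 4))
            lookup[w] = (i, j)            # unique by condition (2)
    return lookup
```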

III. PROPOSED PATTERN ENCODING METHOD

Here we propose a new color codification method that avoids the drawbacks of the traditional coding systems. The main concern of the codification method is to address the requirement of uniquely indexing the pattern in order to handle uncertain occlusions and discontinuities in the scene. Previous M-array structured light systems encode only one pattern into a single image, and losing a few pattern symbols in the captured scene then leads to great difficulties in decoding. As a solution to this problem, we introduce a new method that encodes two patterns into a single image. The proposed M-array pattern is an (m × n) matrix M filled with color primitives taken from the alphabet P = {1, 2, ..., p}, where different numbers represent different colors.

The first condition simplifies the correspondence problem of stereo matching. When the pattern is projected onto a scene, the code-word of an image point (u, v) of the captured image can be identified by analyzing the colors of that point and of its neighbors within the (7 × 7) window. If that code-word is unique within the pattern, we can easily find the corresponding point (i, j) in M by searching the lookup table. The second condition simplifies the pattern segmentation process in the decoding stage.


When we create dense structured light patterns with only one color primitive, connected primitives lead to great difficulties in identifying each primitive separately in the captured image. Thus, in our pattern encoding method we avoid any code-word with connected color primitives. Fig. 1 shows a sample binary pattern encoded by applying the above two conditions.
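A minimal sketch of the two constraints, assuming the matrix stores 'c' for the assigned color primitive and 'b' for the background (our own naming, not the authors' code), could look as follows; the code-word tuple is the one produced by the window extraction shown earlier.

```python
# Sketch of the two binary-encoding constraints (our illustration; 'c' marks
# the single color primitive, 'b' the background, as assumed in the lead-in).

def is_unique(codeword, used_codewords):
    """Condition (2): the (7 x 7) code-word must not repeat within the pattern."""
    return codeword not in used_codewords

def respects_connectivity(M, i, j):
    """Condition (3): a color primitive at (i, j) may not have another color
    primitive among its eight neighbors."""
    if M[i][j] != 'c':
        return True                      # background points are always allowed
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and M[i + di][j + dj] == 'c':
                return False
    return True
```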

B. Color Pattern Encoding

The color pattern encoding stage starts after the binary pattern encoding is finished, and the color pattern is encoded on top of the binary pattern. In this stage we assign a color primitive from the alphabet P to every background primitive of the binary image; any color other than the primitive used in the binary encoding can be used to fill the background primitives of the matrix M. The two conditions applied to the binary pattern are also applied to the color pattern, with a few changes.

In the color encoding system, the code-word of a selected point x_{ij} of the matrix M is defined by the color primitives of the (3 × 3) window centered at the selected point (the color primitive of the selected point and its eight neighbors). If w'_{ij} is the code-word of the point x_{ij}, it can be represented by the sequence

w'_{ij} = \{ x_{i-1,j-1}, x_{i-1,j}, x_{i-1,j+1}, x_{i,j-1}, x_{i,j}, x_{i,j+1}, x_{i+1,j-1}, x_{i+1,j}, x_{i+1,j+1} \}.   (4)

To fill the (m × n) matrix M, (m − 2) × (n − 2) code-words are needed, and another lookup table is maintained to record all the code-words and their locations in the pattern image. To solve the correspondence problem, the uniqueness condition in (5) is applied to the color pattern:

W' = \{ w'_{ij} \mid w'_{ij} \neq w'_{kl} \text{ for } (i,j) \neq (k,l),\; 2 \le i,k \le (m-1),\; 2 \le j,l \le (n-1) \}   (5)

The color pattern does not have any background primitives: all the elements of the matrix M are assigned a color primitive taken from the alphabet P. In addition, for the purpose of generating dense 3D reconstructions, we eliminate the gaps between pattern primitives, so all the primitives are connected to their eight neighbors. Therefore, to simplify the separate identification of pattern primitives in the decoding stage, we apply the second condition, that every element has a color different from its eight neighbors, as in (6). Fig. 2 shows an example color pattern encoded on top of the binary pattern shown in Fig. 1.

M = \{ x_{ij} \mid x_{ij} \neq x_{kl} \text{ for } (i-1) \le k \le (i+1),\; (j-1) \le l \le (j+1),\; (k,l) \neq (i,j),\; 2 \le i \le (m-1),\; 2 \le j \le (n-1) \}   (6)
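For the color stage, a similar sketch (again our own illustration, with hypothetical names and M holding one symbol of P per cell) builds the (3 × 3) code-word of equation (4) and tests a candidate color against the conditions of equations (6) and (5) before it is placed.

```python
# Sketch of the color-pattern constraints (our illustration, not the paper's code).
# M holds one color index from P = {1, ..., p} at every assigned position.

def codeword_3x3(M, i, j, center=None):
    """Color code-word w'_ij of Eq. (4); `center` optionally overrides M[i][j]."""
    win = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            v = M[i + di][j + dj]
            if (di, dj) == (0, 0) and center is not None:
                v = center
            win.append(v)
    return tuple(win)

def color_is_valid(M, i, j, color, color_lut):
    """Candidate `color` at (i, j) must differ from all eight neighbors (Eq. 6)
    and must not repeat an already used (3 x 3) code-word (Eq. 5)."""
    neighbors = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                 if (di, dj) != (0, 0)]
    if any(M[k][l] == color for k, l in neighbors):
        return False
    return codeword_3x3(M, i, j, center=color) not in color_lut
```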

C. Pattern Codification Algorithm


In 1998, Morano et al. [10] introduced an algorithm based on a brute-force approach to generate a perfect sub-map of size (m × n), in which each element of the pattern is taken from an alphabet P with p symbols. The resulting matrix has the window property, so that each (x × y) sub-matrix appears exactly once within the whole pattern, while a Hamming distance h is kept between every pair of windows.

Figure 1. Part of a binary pattern encoded with the (7 × 7) window property and the uniqueness and connectivity conditions. 0's represent the background color and 1's represent the assigned color primitive. The code-word below the pattern is the code-word of the selected (7 × 7) window.

Figure 2. Part of a color pattern encoded with the (3 × 3) window property on top of the binary pattern shown in Fig. 1. All the background points (represented by 0's) in the binary pattern are replaced by a color primitive taken from the alphabet P. The color primitive used in the binary pattern (represented by 1's) remains unchanged in the color pattern.



IV. PROPOSED PATTERN DECODING METHOD

For robust 3D reconstruction, the most important step is to correctly map a given point of the captured scene to the projected pattern. To do this in M-array structured light systems, we have to correctly identify the given image point and a few of its neighbors. In our color codification method we use small colored squares as the pattern primitives, and there are no gaps between the squares. The first step of pattern decoding is to identify the square grid point (the center of a small pattern square) at a given image point in the captured scene. To identify a square grid point, we find the quadrangle (the deformed pattern square) using the color similarity measurement in (7), which searches the nearby area for pixels whose color changes only slightly.

For example, to generate a pattern using three symbols with a (3 × 3) window property, the algorithm starts by seeding the top left (3 × 3) window with a random symbol assignment. Consecutive (3 × 1) columns with random symbol assignments are then added to the right of the initial window, maintaining the window property and the Hamming distance between windows. This is followed by adding random (1 × 3) rows beneath the initial window in a similar way. Both the horizontal and the vertical processes are then repeated, incrementing the starting position by one in both directions, until the whole pattern is filled; in this last step only a single point is added to the pattern before the constraints are checked. Whenever the process reaches a position where no possible symbol can be placed, the whole pattern is cleared and the algorithm starts again with another initial window.

\mathrm{Sim}(p_1, p_2) = \frac{c_r (r_1 - r_2)^2 + c_g (g_1 - g_2)^2 + c_b (b_1 - b_2)^2}{3}   (7)

Here p_1 and p_2 are two image pixels, and (r_1, g_1, b_1) and (r_2, g_2, b_2) are their respective color values. c_r, c_g and c_b are the related coefficient values obtained from color calibration. The value returned by this function is compared with a pre-defined threshold to decide whether the pixel is an outside pixel or not.
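The paper does not spell out the search procedure in detail, so the following sketch is only one plausible reading of it: starting from a seed pixel, it grows the deformed pattern square by accepting neighboring pixels whose similarity under (7) stays below a threshold, and returns the centroid as the square grid point. The coefficient values are placeholders, not calibrated numbers.

```python
# Sketch: grow the deformed pattern square around a seed pixel using Eq. (7)
# and take its centroid as the square grid point. Illustrative only.
import numpy as np
from collections import deque

CR, CG, CB = 1.0, 1.0, 1.0      # calibration coefficients c_r, c_g, c_b (placeholders)

def similarity(p1, p2):
    """Eq. (7): weighted squared color difference of two RGB pixels."""
    d = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    return float(CR * d[0] ** 2 + CG * d[1] ** 2 + CB * d[2] ** 2) / 3.0

def grid_point(image, seed, threshold):
    """Flood-fill pixels whose color stays close to their neighbor; return the centroid."""
    h, w, _ = image.shape
    seen = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                if similarity(image[y, x], image[ny, nx]) < threshold:
                    seen.add((ny, nx))
                    queue.append((ny, nx))
    ys, xs = zip(*seen)
    return sum(ys) / len(seen), sum(xs) / len(seen)
```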

For both the binary and the color pattern codification in our encoding method, we use a similar algorithm with some modifications that impose the additional conditions we have introduced. In the binary codification the alphabet is P = {0, 1}, but we use only one color to represent the 1's, while the 0's are represented by the background color. Therefore, instead of checking the Hamming distance between windows, we check for connected 1's within each added window. If a window has connected 1's, we change the added (3 × 1) column, (1 × 3) row or single point to another possible one and check the connectivity again. If a newly added window passes the connectivity constraint, we then check the uniqueness of the code-word formed by the window. We continue these steps until the whole array is filled. If the process reaches a state where there is no possible way to place a primitive, we restart the algorithm with another initial window, as in Morano's algorithm. Fig. 3 illustrates the algorithm used to encode a 5 × 7 perfect sub-map for the binary pattern; to simplify the explanation, only a (3 × 3) window property is used. The same brute-force algorithm is used to encode the color pattern on top of the binary pattern: it first loads the completed binary pattern into memory and then replaces all the background primitives with other color primitives from the alphabet P, following the procedure discussed above. Fig. 4 shows an example of how a color pattern is encoded on top of the binary pattern of Fig. 3. In this example the color pattern also has the same (3 × 3) window property, but in an actual pattern, to get the real advantage of having two patterns encoded into a single image, the window property of the binary pattern should be larger than that of the color pattern.
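A compact sketch of this brute-force fill is given below, under our own simplifications: the matrix is filled cell by cell in scan order rather than by the column/row growth shown in Fig. 3, and the constraint test is passed in as a callback that is expected to ignore windows that are not yet complete.

```python
# Sketch of a Morano-style brute-force fill with restart (our simplification).
import random

def generate_pattern(rows, cols, symbols, violates_constraints, max_restarts=1000):
    """Fill a rows x cols matrix symbol by symbol in scan order.
    `symbols` is a list of available primitives; `violates_constraints(M, i, j)`
    should implement the window-uniqueness and connectivity checks of the
    binary or color stage, ignoring windows that are still incomplete."""
    for _ in range(max_restarts):
        M = [[None] * cols for _ in range(rows)]
        if _fill(M, rows, cols, symbols, violates_constraints):
            return M
    return None                      # no pattern found within the restart budget

def _fill(M, rows, cols, symbols, violates_constraints):
    for i in range(rows):
        for j in range(cols):
            for s in random.sample(symbols, len(symbols)):  # try symbols in random order
                M[i][j] = s
                if not violates_constraints(M, i, j):
                    break
                M[i][j] = None
            else:
                return False         # dead end: trigger a restart
    return True
```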

Figure 3. The process of encoding a binary pattern into a 5 × 7 matrix with the (3 × 3) window property. (a) Step 1: Seeding the first window with a random symbol assignment. (b) Step 2: Assigning random (3 × 1) columns to the right of the initial window. (c) Step 3: Assigning random (1 × 3) rows beneath the initial window. (d) Step 4: Incrementing the starting position by one row and one column. (e) Step 5: Adding a random color primitive to complete the (3 × 3) window and continuing the process horizontally and vertically.


Centroid detection on the identified quadrangle then defines the square grid point. After detecting the square grid point, we have to find the color primitive assigned to that point. First, we define a color profile for each color primitive by color calibration of the captured scene. Using p colors, their color profiles can be defined as

C_1 = (R_1, G_1, B_1),\; C_2 = (R_2, G_2, B_2),\; \ldots,\; C_p = (R_p, G_p, B_p)   (8)

We then use the distance measure in (9) to find the color primitive most similar to the color of the square grid point:

\mathrm{Dist}(C_p) = (r - R_p)^2 + (g - G_p)^2 + (b - B_p)^2   (9)

Here (r, g, b) are the color values of the grid point. We compute this distance for every color primitive, and the color with the minimum distance is taken as the primitive of the square grid point. If all the distances are larger than a pre-defined threshold, the point may belong to a shadow or lie outside the pattern area, and it is not considered a square grid point of the pattern.

The next step of the decoding is to identify the code-word of every square grid point. The flow chart in Fig. 5 explains our approach to decoding all the code-words in the pattern. Each square grid point x_{ij}, where 4 ≤ i ≤ (m − 3) and 4 ≤ j ≤ (n − 3) in the projected pattern, has two code-words, defined by its (3 × 3) window and its (7 × 7) window. Decoding a small (3 × 3) window is more efficient than decoding larger window sizes in terms of time and computational complexity; therefore, in our proposed method we first try to decode the (3 × 3) window filled with color primitives. Based on the square grid point, we try to locate its eight adjacent neighbors by simply adding or subtracting an offset that is set to the nominal size of a pattern square. Using these points we can extract the eight neighboring quadrangles, their square grid points and the assigned color primitives. We can then generate the code-word of that image point and, by searching the lookup table, find the corresponding point in the projected pattern. If any of the adjacent grid points cannot be found, or they are not located in a regular arrangement, that area of the pattern may have been affected by shadows or occlusion.
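A small sketch of this classification step is shown below, using for illustration the six primitive colors chosen in the experiments (Section V); the profile values and the threshold are placeholders rather than calibrated numbers from the paper.

```python
# Sketch: classify a grid point's color against calibrated profiles (Eqs. 8-9).
# Profile values and the threshold are placeholders, not the paper's numbers.

PROFILES = {                      # C_k = (R_k, G_k, B_k) from color calibration
    "red":     (200,  40,  40),
    "green":   ( 40, 200,  40),
    "blue":    ( 40,  40, 200),
    "cyan":    ( 40, 200, 200),
    "magenta": (200,  40, 200),
    "yellow":  (200, 200,  40),
}

def dist(c, profile):
    """Eq. (9): squared RGB distance between a pixel and a color profile."""
    return sum((float(a) - float(b)) ** 2 for a, b in zip(c, profile))

def classify(c, threshold=10000.0):
    """Return the closest primitive, or None for shadow / off-pattern points."""
    name, d = min(((n, dist(c, p)) for n, p in PROFILES.items()),
                  key=lambda t: t[1])
    return name if d <= threshold else None
```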

Figure 4. The process of encoding a color pattern on top of the binary pattern. (a) Step 1: Load the binary pattern into memory. (b) Step 2: Remove the background color primitives from the matrix and assign random color primitives to the initial window. (c) Step 3, (d) Step 4 and (e) Step 5 are the same as in the binary encoding. (f) Completed color pattern. The color primitives (1's) assigned in the binary encoding stage remain unchanged in the final pattern.

Figure 5. Flow chart of the code-word identification process.

Practical digital projectors have a resolution with a 4:3 or 16:9 width-to-height ratio; therefore, we want to generate a pattern that adheres to such a resolution or to a square shape. In the results we can see that the largest possible resolution with the (7 × 7) window is 80 × 60. Using a larger resolution raises another problem in the color pattern encoding stage. In our color pattern encoding we use a (3 × 3) window and also consider the eight-connectivity of identical color primitives, so filling a larger resolution needs more color primitives. On the other hand, since this is a dense pattern with no gaps, we have to keep enough distance between colors to reduce the complexity of identifying the color codes of a grid point. To balance these two concerns we select six colors, Red, Green, Blue, Cyan, Magenta and Yellow, which have enough distance between each other. With only these six colors, the largest color pattern resolution that can be encoded on top of a binary pattern is 45 × 45. Fig. 7 shows the 45 × 45 binary pattern and the color pattern generated using our proposed method. As a preliminary experiment, we project our pattern onto simple flat and curved planes. Our color decoding method is then applied to extract the grid points and the code-words from the captured image. Fig. 8 shows some captured images and their decoding results. Using our decoding algorithm, we successfully decoded the full pattern even when it had some missing symbols. A quantitative performance analysis of the pattern decoding is still at an experimental stage and is not reported here.

In such a situation we fall back to the (7 × 7) binary decoding. In binary decoding, we first try to locate the (7 × 7) window centered at the given square grid point. Using the already known offset, we can detect the foremost corners and extract the window area. For binary decoding we only need the color primitive assigned in the binary pattern encoding stage, so while examining the extracted window we keep that color primitive and set all points of other colors to background. We then find the vectors from the center square grid point to the centroids of the remaining color quadrangles. Dividing these vectors by the offset gives the index of every quadrangle within the (7 × 7) window, and the binary code-word of the grid point is defined from these indexes. We then search for similar code-words in the lookup table using Hamming distance. After finding similar code-words, we match the corresponding window area of the captured image with the (7 × 7) window areas of the projected pattern defined by these similar code-words. Using this color matching, we can determine the exact binary code-word of the square grid point. Fig. 6 explains the process of extracting the binary code-word.
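The Hamming-distance search over the lookup table might be sketched as follows (our illustration; the data structures are assumptions): unrecovered symbols are treated as wildcards, and every stored code-word within a small distance of the observed one is returned as a candidate, to be disambiguated by the color matching described above.

```python
# Sketch: Hamming-distance search over the binary lookup table (our illustration).
# `lookup` maps each 49-symbol binary code-word (tuple of 0/1) to its (i, j)
# location in the projected pattern; `observed` may contain None for symbols
# that could not be recovered from the image.

def hamming(observed, candidate):
    """Count disagreements, ignoring positions that were not observed."""
    return sum(1 for o, c in zip(observed, candidate) if o is not None and o != c)

def closest_codewords(observed, lookup, max_distance=2):
    """Return the pattern locations whose code-words are within `max_distance`.
    The final choice among them is made by the color matching step."""
    matches = []
    for candidate, location in lookup.items():
        if hamming(observed, candidate) <= max_distance:
            matches.append(location)
    return matches
```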

V. EXPERIMENT

Experiments were conducted to test both the encoding and the decoding methods. The first experiment finds a suitable pattern size for projection. For binary encoding, the window size is important: it should be large enough to handle uncertain shadows and occlusions, yet small enough to decode in real time. Considering these two facts, we select a (7 × 7) window size for our binary pattern. An experiment was then conducted to find a suitable resolution, and Table I shows the results.

Table I. Binary encoding with the (7 × 7) window property (1000 trials per resolution).

Resolution              20×20   40×30   45×45   80×60   100×100
Total code-words        196     816     1521    3996    8836
Average filled (%)      99.4    91.9    75.2    36.3    17.1
Maximum completed (%)   100     100     100     100     50.1
Number completed        982     763     390     7       0

The table summarizes 1000 searches for each resolution in terms of Total code-words, Average, Maximum completed and Number completed. Total code-words is the number of code-words needed to fill the pattern at the given resolution. Number completed gives the number of trials out of 1000 in which the full pattern was successfully generated. Maximum completed is 100% if the full pattern was generated in any trial; otherwise it gives the maximum percentage filled by the algorithm. Average gives the percentage of the full pattern that was filled, averaged over all 1000 trials.

VI. CONCLUSION

We have introduced a new structured light codification method based on pseudo random arrays. This method encodes two different patterns into a single image, so that each point in the pattern has two code-words. The first pattern is a binary pattern with a larger window size that uses only one color primitive; the second is a color pattern with a smaller window size and more color primitives, encoded on top of the binary pattern. Using the introduced decoding algorithm, we showed that it is possible to decode the code-word of a given point even when a few of its nearby pattern symbols are lost. As future work we will improve the performance of the decoding algorithms. Furthermore, we plan to implement a real-time 3D imaging device that is more robust to uncertainties of the scene.

ACKNOWLEDGMENT

This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the Core Technology Development for Breakthrough of Robot Vision Research support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C7000-1101-0006).


Figure 6. The process of decoding the binary pattern with the (7 × 7) window. (a) A captured scene with a simulated shadow. (b) Enlarged view showing the pattern primitives near the shadow. (c) Finding the foremost corners of the (7 × 7) window centered at the square grid point x_ij. (d) Extracting the window. (e) Removing all the colors except the one used for the binary pattern. (f) Finding the square grid point of each remaining quadrangle and the vector from the center grid point to it. By finding the index of each grid point within the window, we can generate the code-word of x_ij.

Figure 7. (a) 45 × 45 binary pattern with the (7 × 7) window property. (b) Color pattern encoded on top of the binary pattern with the (3 × 3) window property.


Figure 8. (a) Pattern projected onto a flat plane. (b) Pattern image decoded using only the (3 × 3) color coding. (c) Pattern projected onto a curved plane, with missing pattern symbols simulated on the image. (d) Pattern image decoded using only the color coding; holes still appear in the image. (e) After decoding the missing symbols using the (7 × 7) binary code.

REFERENCES

[8] T. Etzion, "Constructions for perfect maps and pseudorandom arrays," IEEE Transactions on Information Theory, 34 (5) 1308-1316, 1988

[1] J. Salvi, J. Pages and J. Batlle, ”Pattern codification strategies in structured light systems,” Pattern Recognition, 37 (4) (2004) 827-849

[9] F. MacWilliams and N. Sloane, ”Pseudo-random sequences and arrays,” Proceedings of the IEEE, 64 (12) 1715-1729, 1976

[2] M. Maruyama, S. Abe, ”Range sensing by projecting multiple slits with random cuts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 15 (6) 647-651, 1993

[10] A. Morano, C. Ozturk, R. Conn, S. Dubin, S. Zietz, and J. Nissanov, ”Structured light using pseudo-random codes,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(3):322-327, 1998.

[3] M. Ito and A. Ishii, ”A three-level checkerboard pattern (TCP) projection method for curved surface measurement,” Pattern Recognition, 28 (1) 27-40, 1995

[11] P. Griffin, L. Narasimhan and S. Yee, ”Generation of uniquely encoded light patterns for range data acquisition,” Pattern Recognition, 25 (6) 609-616, 1992

[4] F. Forster, "A high-resolution and high accuracy real-time 3D sensor based on structured light," in: Proceedings of the 3rd International Symposium on 3D Data Processing, Visualization, and Transmission, pp.208-215, 2006

[12] J. Pages, C. Collewet, F. Chaumette, J. Salvi, S. Girona and F. Rennes, ”An approach to visual servoing based on coded light,” in: IEEE International Conference on Robotics and Automation, ICRA, vol.6, 2006, pp.4118-4123

[5] J. Salvi, J. Batlle and E. Mouaddib, ”A robust-coded pattern projection for dynamic 3D scene measurement,” Pattern Recognition Letters, 19 (11) 1055-1065, 1998

[13] C. Albitar, P. Graebling and C. Doignon, ”Design of a monochromatic pattern for a robust structured light coding,” in: IEEE International Conference on Image Processing, ICIP, vol.6, 2007, pp.529-532

[6] T. Monks, J. Carter and C. Shadle, ”Colour-encoded structured light for digitization of real-time 3D data,” in: Proceedings of the IEE 4th International Conference on Image Processing, pp.327-330, 1992

[14] S.Y. Chen, Y.F. Li and J. Zhang, "Vision Processing for Realtime 3-D Data Acquisition Based on Coded Structured Light," IEEE Transactions on Image Processing, vol.17, no.2, pp.167-176, Feb. 2008

[7] K. Boyer and A. Kak, ”Color-encoded structured light for rapid active ranging,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 9 (1) 14-28, 1987

