PHASE CHANNEL MULTIPLEXING PATTERN STRATEGY FOR ACTIVE STEREO VISION

Kai Liu,1 Yongchang Wang,2,∗
1 School of Electrical Engineering and Information, Sichuan University, Chengdu, Sichuan 610065, China
2 KLA-Tencor, Milpitas, CA 95134, USA
∗ Corresponding author: [email protected]

ABSTRACT
To achieve more accurate 3-D measurements, Active Stereo Vision through structured-light illumination involves the projection of a series of high-frequency fringe patterns to suppress uncertainty from additive noise. Typically, the phase derived from the high-frequency patterns must be unwrapped spatially or temporally. Spatial approaches fail for discontinuous targets, while temporal methods need additional fringe patterns. In this paper, we propose a phase channel multiplexing pattern strategy with improved resilience to errors along surface discontinuities, without the need for intermediate patterns, making it well suited for real-time applications.

Index Terms— Active stereo vision, structured light illumination, high-frequency patterns, phase channel multiplexing pattern, noise, real-time 3-D scanning.

1. INTRODUCTION

Active Stereo Vision (ASV), or Structured Light Illumination (SLI) [1], is a method of optical, active, triangulation-based three-dimensional (3-D) shape measurement that is widely used in industry [2, 3, 4]. The ultimate goal of ASV/SLI is to obtain 3-D data both instantaneously and accurately. In particular, ASV/SLI typically requires less computation while yielding higher accuracy compared to traditional Passive Stereo Vision (PSV) [5], but researchers and developers have had to trade speed against accuracy, where higher accuracy has traditionally been achieved by using more ASV/SLI patterns. As the number of ASV/SLI patterns increases, the scanning time likewise increases, making the system more susceptible to error caused by object motion [6]. So the challenge for ASV/SLI is to maximize the achievable accuracy while minimizing the number of patterns.
For those methods of ASV/SLI that attempt to scan moving objects, a large category of techniques are one-shot methods, which attempt to reconstruct depth from a single, continuously projected pattern by spatially decoding the captured image to determine each neighborhood's correspondence inside the projected pattern. Using a single captured image, these systems are invariant to object motion, but in practice these schemes behave similarly to passive stereo-vision systems, with high computational cost [7]. Separate from one-shot pattern schemes, real-time ASV/SLI methods that rely on multi-shot pattern schemes achieve real-time operation by driving the camera/projector pair at very high frame rates to minimize the effect of motion. With regard to reconstruction accuracy, multi-shot/high-speed ASV/SLI systems are able to exploit the low computational cost traditionally associated with ASV/SLI systems [8], but they behave poorly in areas of surface and/or texture discontinuities of moving objects. To this end, methods have been introduced to efficiently detect these boundaries and their related artifacts; even so, the existence of these artifacts limits the number of component patterns. Limiting the number of component patterns then has the undesired effect of reducing the reconstruction quality in areas of smooth surface and texture, where ASV/SLI traditionally outperforms other methods of 3-D vision.

With the overall goal of minimizing the number of component patterns while not sacrificing reconstruction quality, we note that high-frequency fringe pattern strategies are effective at noise suppression [5], but they also introduce ambiguities in the reconstruction caused by improperly mapping a camera pixel to one of several available pattern stripes. For this reason, considerable effort has been made toward developing phase unwrapping strategies that take into account spatial or temporal phase information. Spatial approaches fail for discontinuous targets, while temporal methods need additional intermediate patterns [8]. In light of the ambiguities when using high-frequency fringe patterns, we propose an instantly-decodable, high-frequency fringe pattern strategy, based on the previous works of Liu et al. [8] and Zhang et al. [9], which employs 3 or 4 patterns to reconstruct 3-D data more accurately than previously achieved in a real-time system.
The new method works by embedding a unit-frequency fringe pattern into a high-frequency pattern such that two phase terms are extracted during pattern processing, where the unit-phase term is used in place of phase unwrapping of the high-frequency phase. The technical novelties and contributions of this work include:

• further reducing the number of patterns down to 3 or 4, compared to the 5-pattern scheme proposed in [8], by analyzing a parameter-detailed ASV/SLI model [9] and utilizing the features of harmonics [10];
• extracting, separately from phase, the reflectivity of the scanned object along with the intensity of the ambient light;
• employing the surface reflectivity as a robust shadow noise filter and as the texture for the scanned object;
• employing the ambient light measure to improve phase decoding;
• achieving real-time performance for phase generation in practice.

© 978-1-4799-1580-4/12/$31.00 2012 IEEE

2. RELATED WORK
In ASV/SLI, the key stage is the design of the patterns [11], i.e., the coding method for the projected patterns and the corresponding decoding algorithm for the captured patterned images, such that the correspondences between camera and projector can be determined accurately with minimum computation. Numerous pattern strategies have been studied and proposed. For real-time applications, the pattern strategies in ASV/SLI can be classified into one-shot and multi-shot schemes. One-shot schemes, such as spatial frequency multiplexing [7, 12], dynamic adapting [13], color channel multiplexing [14], direct coding [15], and spatial codeword coding [16], are in theory insensitive to motion of the scanned objects. However, in practice, one-shot schemes mainly suffer from the limited bandwidth of the hardware, e.g., the intensity resolution of the cameras, so one-shot pattern strategies perform well in simulation but are accuracy-limited in practice. Also, the computational cost of data processing for one-shot schemes is very high [7]. Multi-shot pattern schemes achieve real-time operation by driving the camera/projector pair as fast as possible to minimize the effect of motion and by using as few patterns as possible [17, 8, 18]. Typically, multi-shot schemes are not suitable for scanning moving targets, but they have advantages with regard to computational cost [8, 18]. Additionally, if the motion is relatively slow, multi-shot schemes can obtain more accurate 3-D reconstructions [8]. Methods of motion detection [19] and compensation [20] for multi-shot schemes have also been proposed. With regard to the accuracy of the 3-D measurement, high-frequency stripe pattern strategies are effective for suppressing additive noise [5]. However, high-frequency patterns introduce ambiguities in the phase, requiring unwrapping strategies, either spatial [21, 22] or temporal [5, 23].
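The ambiguity of a high-frequency fringe can be illustrated in a few lines: two projector rows exactly one fringe period apart produce identical wrapped phase, so the wrapped value alone cannot resolve the correspondence (a minimal numpy sketch; the frequency f = 8 is an arbitrary illustrative choice, not a value from this paper).

```python
import numpy as np

f = 8.0                                  # fringe frequency (illustrative)
wrap = lambda p: np.mod(p, 2 * np.pi)    # wrapped phase in [0, 2*pi)

y1, y2 = 0.20, 0.20 + 1.0 / f            # two rows one fringe period apart
phi1 = wrap(2 * np.pi * f * y1)
phi2 = wrap(2 * np.pi * f * y2)

# Identical wrapped phase: the row correspondence is ambiguous
# without spatial or temporal phase unwrapping.
assert np.isclose(phi1, phi2)
```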
Spatial unwrapping fails near surface discontinuities while temporal approaches require additional patterns. If a particular pattern strategy achieves temporal phase unwrapping without the need for additional patterns beyond those necessary for measuring the wrapped phase, we call it
an instantly-decodable pattern strategy, since it extracts an unwrapping parameter along with the wrapped phase. Li et al. [24] proposed such a scheme using a two-frequency pattern strategy that integrated both a high-frequency and a unit-frequency component into a single pattern, but 2N (N ≥ 3) patterns were needed and the high frequency had to be equal to N. Su and Liu [25] treated the algorithm of Li et al. as one of two bases in their one-shot pattern strategy. Separately, Kim et al. [26] proposed an algorithm without phase unwrapping, but it required 4N (N ≥ 2) patterns. Wang et al. [27] proposed a non-ambiguous high-frequency pattern strategy with maximum SNR; however, the number of non-ambiguous high frequencies is limited by the number of patterns. Liu et al. [8] proposed a one-equation, dual-frequency pattern scheme that combined, into a single pattern, a high-frequency SLI pattern with a unit-frequency pattern, requiring a minimum of 5 patterns. In this paper, motivated by Liu et al.'s work, we further reduce the number of patterns down to 4, and even 3, by analyzing a parameter-detailed ASV/SLI model [9, 10].

3. PARAMETERS ANALYSIS FOR ASV/SLI

In many ASV/SLI implementations, Phase Measuring Profilometry (PMP) [6] is employed for its simplicity and depth accuracy [28]. The projected patterns are expressed as

Inp(xp, yp) = αp [1/2 + (1/2) cos(2πf yp − 2πn/N)] + βp,  (1)

where xp, yp ∈ [0, 1] and (xp, yp) are the normalized coordinates of a pixel in the projector; Inp is the light intensity of that pixel; f is the frequency of the sine wave; n is the phase-shift index; N ≥ 3 is the total number of phase-shifted patterns; αp is the amplitude constant of the sine wave; and βp is the balance constant preventing images captured by a camera from underflow. In the camera, the corresponding captured images are defined according to

Inc(xc, yc) = Ac(xc, yc) + Bc(xc, yc) cos[φ(xc, yc) − 2πn/N],  (2)

where (xc, yc) is the camera coordinate. The coordinate indices in both the projector and the camera will henceforth be dropped from our equations to simplify the notation. In the captured images, the terms Ac and Bc are computed according to

Ac = (1/N) Σ_{n=0}^{N−1} Inc  and  Bc = (2/N) sqrt(SN² + CN²),

where SN = Σ_{n=0}^{N−1} Inc sin(2πn/N) and CN = Σ_{n=0}^{N−1} Inc cos(2πn/N). The phase value of the captured sinusoid pattern, φ, is obtained as

φ = arctan(SN / CN),  (3)

which is related to yp through φ = 2πf yp. With the phase φ and the camera coordinate (xc, yc), the 3-D world coordinates of the scanned object can, therefore, be derived through triangulation with the projector [5].

For convenience of analysis, we rewrite Eq. (2) as

Inc = α {αp [1/2 + (1/2) cos(φ − 2πn/N)] + βp + β} + β,  (4)

where α is the reflectivity of the scanned object for the camera, with a value range of [0, 1], and β is the intensity of the ambient light for the camera. The signal terms inside {·} of Eq. (4), carrying the coefficient α, represent the light reflected off the target object originating from either the pattern projector or any ambient light sources, while the last term β in Eq. (4) represents the ambient light directly entering the camera without reflection [9]. Both α and β are functions of (xc, yc) in the camera. From Eq. (4), the terms Ac and Bc can be expressed as Ac = α(αp/2 + βp + β) + β and Bc = (1/2) α αp. The ambient light intensity, β, can be derived according to

β = [Ac − α(αp/2 + βp)] / (α + 1).  (5)

The reflectivity, α, can be computed according to

α = 2Bc / αp,  (6)

where α, in the normalized range of [0, 1], is the reflectivity of the scanned object at each pixel in the common view of the camera and the projector. The term Ac depends on α, αp, β, and βp, while α alone is the reflectivity and thus represents the texture more naturally. The value range of Bc depends on the amplitude, αp, of the projected patterns, while that of α does not.

Now if β is positive, the term βp will typically be set to zero in order to make αp as large as possible; however, if β is negative, it is necessary to choose a suitable positive value for βp in order to keep Inc from an underflow condition. That is, if we let αp = 0 and solve the inequality Inc > 0, then a suitable value for βp in Eq. (4) can be derived from

βp > −(1 + 1/α) β,  (7)

which is very sensitive to α: replacing the inequality of Eq. (7) with an equality gives dβp = (β/α²) dα. So if we select βp according to a particular reflectivity α1, all the pixels with α ≤ α1 will no longer be reliable. On the other hand, once the value of βp is fixed, the reliable values of α are determined according to

α > −β / (βp + β),  (8)

if β is negative. If β is positive, then α can usually be any value in the range [0, 1], with smaller α leading to lower reliability in the phase measurement, which we will analyze in a later section. In previous studies, researchers employed Ac as a measure of the texture of scanned objects [18], with Bc [28] or Bc/Ac [29] as a phase quality indicator. In this paper, we instead use α as both the texture and the quality indicator.

4. PHASE CHANNEL MULTIPLEXING PATTERN STRATEGY

In the previous section, we analyzed the basic parameters of the PMP technique. In this section, we discuss how to embed an auxiliary signal, used for phase decoding, into the main signal, based on an analysis of sinusoid-wave harmonics [10]. Suppose we have a group of phase-shifted sinusoid wave signals, mixed with harmonics, defined as

In = A + B cos(φ − 2πn/N) + Σ_{k=2}^{∞} Bk cos[k(φk − 2πn/N)],  (9)

where N ≥ 3 is the total number of the signals, n is the phase-shift index, A is the basic direct-current (DC) component of the signals, cos(φ − 2πn/N) is the main signal with phase φ and energy B, and cos[k(φk − 2πn/N)] are the harmonics with phases φk and energies Bk. Depending on the number of signals, N, one phase channel may be shared by many harmonics. The two basic phase channels are the DC channel and the main channel, in which the main signal lives. We encode harmonics differently according to the value of N. For N = 3, the only additional available phase channel is the DC channel, so after we keep the first signal and remove the others in the DC channel, Eq. (9) becomes
In = A + B cos(φ − 2πn/3) + B3 cos(3φ3).  (10)
Then for N = 4, besides the DC channel, one more phase channel is available at the phase shift π. Because cos(nπ) = ±1 and sin(nπ) = 0, we call this channel the half phase channel. When only the main signal and the first harmonic signal in the half phase channel are kept, Eq. (9) becomes

In = A + B cos(φ − πn/2) + B2 cos(2φ2 − πn).  (11)
The case for N ≥ 5 was proposed by Liu et al. [8]. In this paper, our proposed pattern strategies are based on Eqs. (10) and (11) according to the different cases of N, and both of them are called phase channel multiplexing patterns (PCMP).
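Why the embedded harmonic does not disturb the main phase can be checked numerically. The sketch below (with arbitrary illustrative amplitudes and phases, not values from the paper) builds the four signals of Eq. (11) and demultiplexes the three channels: the DC channel returns A, the main channel returns φ, and the half phase channel returns B2 cos(2φ2).

```python
import numpy as np

# Illustrative values only (not from the paper's experiments)
A, B, B2 = 0.50, 0.30, 0.15
phi, phi2 = 1.2, 0.4

n = np.arange(4)
# Eq. (11): main signal plus one harmonic parked in the half phase channel
I = A + B * np.cos(phi - np.pi * n / 2) + B2 * np.cos(2 * phi2 - np.pi * n)

dc = I.mean()                              # DC channel -> A
S = np.sum(I * np.sin(np.pi * n / 2))      # main-channel sums
C = np.sum(I * np.cos(np.pi * n / 2))
phi_hat = np.arctan2(S, C)                 # main channel -> phi
half = np.mean(I * np.cos(np.pi * n))      # half phase channel -> B2*cos(2*phi2)

# Each channel recovers its own signal, untouched by the other two.
assert np.isclose(dc, A)
assert np.isclose(phi_hat, phi)
assert np.isclose(half, B2 * np.cos(2 * phi2))
```

The same orthogonality argument underlies the N = 3 case of Eq. (10), where the B3 cos(3φ3) term is constant in n and therefore lands entirely in the DC channel.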
Fig. 1. Cross-sections of proposed PCMP for N = 3 with fh = 3, αp = 200, βp = 55, c1 = 0.5, c2 = 0.5, and R(yp) = 2yp − 1.

4.1. Three patterns

When the number of patterns is 3, taking advantage of the features of Eq. (10), we design the projected patterns as

Inp = αp [1/2 + (1/2) c1 cos(2πfh yp − 2πn/3) + (1/2) c2 R(yp)] + βp,  (12)

where yp is the normalized projector coordinate with range [0, 1], αp is the amplitude of the pattern, βp is the minimum value of the pattern, and c1 and c2 are the modulation coefficients of the main signal and the reference signal, respectively. The terms c1 and c2 are related through c1 + c2 = 1. R(yp) is the reference signal, and it can be any monotonic function with the value range [−1, 1] over the domain of yp. Figure 1 shows a cross-section of the proposed PCMP for N = 3 with fh = 3, αp = 200, βp = 55, c1 = 0.5, c2 = 0.5, and R(yp) = 2yp − 1. The patterned images captured by the camera are expressed by

Inc = Ac + Bc cos(φh − 2πn/3),  (13)

where Ac, Bc, and φh can each be computed as before. As in the parameter analysis of the previous section, the terms Ac and Bc can be expressed as

Ac = α {αp [1/2 + (1/2) c2 R(φu)] + βp + β} + β  (14)

and

Bc = (1/2) α αp c1.  (15)

Then the reference signal, R(φu), containing the coarse unit-frequency phase φu, can be derived from Eq. (14) according to

R(φu) = (1/c2) [2(Ac − β)/(ααp) − 2(βp + β)/αp − 1],  (16)

where

α = 2Bc / (αp c1)  (17)
and β is calibrated by traditional PMP patterns using Eq. (5). Once φu is solved through the inverse function of R(·), phase decoding can be performed for φh.

Fig. 2. The patterns of proposed PCMP for N = 4 with fh = 6, αp = 200, βp = 55, c1 = 0.6, c2 = 0.4, and R(yp) = cos(πyp).

4.2. Four patterns

In the case of N = 3, the DC phase channel was employed to embed a reference signal into the main signal. The shortcoming of that method is that we have to calibrate the ambient light intensity before scanning. In this subsection, when the number of patterns is 4, by employing the half phase channel described in Eq. (11), the issue arising for N = 3 can be avoided, and the projected patterns are designed as

Inp = αp [1/2 + (1/2) c1 cos(2πfh yp − πn/2) + (1/2) c2 S[R(yp), 2n, 4]] + βp,  (18)

where R(yp) is an arbitrary monotonic reference signal, and S(t, n, N) is the phase-shifting function for the reference signal, having the same action as cos(2πf t − 2πn/N). Figure 2 shows our proposed PCMP for N = 4 with fh = 6, αp = 200, βp = 55, c1 = 0.6, c2 = 0.4, and R(yp) = cos(πyp). The patterned images captured by the camera are expressed by

Inc = Ac + B1c cos(φh − πn/2) + B2c S[R(yp), 2n, 4],  (19)

where Ac, B1c, and φh can each be computed as before. B2c is the modulation of the embedded auxiliary signal in the half phase channel and cannot be calculated directly, but in theory it equals B2c = B1c c2/c1. The information containing the reference signal in the half phase channel can be computed by

Bc_cos = (1/4) Σ_{n=0}^{3} Inc cos(πn).  (20)
Table 1. Percentage of pixels successfully phase decoded for N = 3 with different fh.

fh       3        4        6
(%)      99.19    96.28    85.17

Table 2. Percentage of pixels successfully phase decoded for N = 4 with different fh and c1.

c1       fh = 4   fh = 6   fh = 8
0.5      99.79    98.65    96.37
0.6      99.80    98.63    96.30
0.7      99.71    98.15    95.22
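Success rates of the kind reported in Tables 1 and 2 can be approximated with a short Monte-Carlo sketch. The following simplified simulation of the 3-pattern case decodes per Eqs. (12)-(17) under zero-mean Gaussian noise with σ² = 1.55; it assumes β is known exactly, uses R(yp) = 2yp − 1, and counts a pixel as successfully decoded when the recovered fringe order is correct (the paper's exact success criterion is not stated, so the numbers are only indicative).

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-pattern PCMP settings as in Sec. 4.3
alpha, beta = 0.5, -7.0
alpha_p, beta_p = 200.0, 55.0
c1 = c2 = 0.5
fh, sigma = 3, np.sqrt(1.55)
n = np.arange(3)

def decode(y):
    """Simulate one pixel's capture (Eqs. (12), (4)) and decode its fringe order."""
    phi_h = 2 * np.pi * fh * y                      # true high-frequency phase
    Ip = alpha_p * (0.5 + 0.5 * c1 * np.cos(phi_h - 2 * np.pi * n / 3)
                    + 0.5 * c2 * (2 * y - 1)) + beta_p
    Ic = alpha * (Ip + beta) + beta + rng.normal(0, sigma, 3)  # Eq. (4) + noise
    Ac = Ic.mean()
    S = np.sum(Ic * np.sin(2 * np.pi * n / 3))
    C = np.sum(Ic * np.cos(2 * np.pi * n / 3))
    Bc = (2 / 3) * np.hypot(S, C)
    a_hat = 2 * Bc / (alpha_p * c1)                 # Eq. (17)
    R = (2 * (Ac - beta) / (a_hat * alpha_p)        # Eq. (16), beta assumed known
         - 2 * (beta_p + beta) / alpha_p - 1) / c2
    phi_u = np.pi * (np.clip(R, -1, 1) + 1)         # invert R(y) = 2y - 1
    phi_w = np.mod(np.arctan2(S, C), 2 * np.pi)     # wrapped phase, Eq. (3)
    K = np.round((fh * phi_u - phi_w) / (2 * np.pi))
    return K == np.floor(fh * y)                    # correct fringe order?

ys = rng.uniform(0, 1, 5000)
rate = np.mean([decode(y) for y in ys])
print(f"decoded correctly: {100 * rate:.2f}%")      # expect a rate near Table 1's fh = 3 entry
```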
With the detailed parameters, the terms B1c and Bc_cos can be expressed as

B1c = (1/2) α αp c1  (21)

and

Bc_cos = (1/2) α αp c2 R(φu),  (22)

so the reference signal can be solved by

R(φu) = 2Bc_cos / (α αp c2),  (23)

where α = 2B1c/(αp c1). Once φu is obtained through the inverse function of R(·), phase decoding for φh can be achieved.

4.3. Noise effect on PCMP

In practice, the captured images are always polluted by additive noise, so we evaluate the performance of PCMP by adding noise to Eqs. (13) and (19), respectively, as Ĩnc = Inc + wn, where Ĩnc is the noised image and wn is additive zero-mean Gaussian noise with σ² = 1.55. The simulations were performed for 3- and 4-pattern PCMP, respectively. For 3-pattern PCMP with the settings α = 0.5, β = −7, c1 = c2 = 0.5, R(yp) = 2yp − 1, αp = 200, and fh = 3, 4, and 6, respectively, in Eq. (13), the percentage of pixels successfully phase decoded is listed in Table 1. For 4-pattern PCMP with the settings α = 0.5, β = −7, αp = 200, c2 = 1 − c1, R(yp) = cos(πyp), c1 = 0.5, 0.6, and 0.7, respectively, and fh = 4, 6, and 8, respectively, in Eq. (19), the percentage of pixels successfully phase decoded is listed in Table 2.

5. EXPERIMENTS

In order to demonstrate the advantages and performance of our proposed PCMP, we employed the ASV/SLI experimental system shown in Fig. 3. The imaging sensor is an 8-bit-per-pixel, monochrome, Prosilica GC640M gigabit-ethernet camera with 640 × 480 pixel resolution. The exposure time of the camera was set to 1 ms in our experiments. The projector is composed of a Texas Instruments Discovery 1100 board with an ALP-1 controller and an LED-OM with 225 ANSI lumens. The projector has a resolution of 1024 × 768 and 8-bit-per-pixel grayscale depth resolution. The camera and projector are synchronized by an external triggering circuit. As our processing unit, we used a Dell Optiplex 960 with an Intel Core 2 Quad Q9650 processor running at 3.0 GHz.

Fig. 3. Experimental setup.

The first experiment measures the intensity of the ambient light, and the second group of experiments covers 3-D reconstructions by means of our proposed 3- and 4-pattern PCMP. The scanned objects include a white foam board for performance discussion and a textured plastic angel, shown in Fig. 3, for demonstrating the effects.

5.1. Measuring the intensity of the ambient light

With Eq. (5), we employ classic PMP patterns to calibrate the ambient light intensity. The parameters for Eq. (1) are N = 240, f = 1, αp = 200, and βp = 55. The calibration target is a white foam board with a much higher α at each measurable pixel. Figure 4 shows the histogram of computed β over 133,644 pixels with α ≥ 0.5. The mean, median, and mode values of β over those pixels are −7.17, −7.22, and −7, respectively. The variance of β is 0.63, which means the ambient light intensity is almost uniformly distributed. We will employ the mode, −7, as the intensity of the ambient light in what follows.

Fig. 4. The histogram of computed β for the pixels with α ≥ 0.7.
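The per-pixel calibration of this subsection can be sketched as follows, a noise-free simulation with assumed ground-truth values (α = 0.6, β = −7, chosen for illustration; N is reduced from the paper's 240 to 8 for brevity): the classic PMP captures of Eqs. (1) and (4) are generated, and α and β are recovered via Eqs. (6) and (5).

```python
import numpy as np

# Assumed ground truth for one pixel (illustrative, not measured values)
alpha_true, beta_true = 0.6, -7.0
alpha_p, beta_p, f, N = 200.0, 55.0, 1, 8
n = np.arange(N)
y = 0.3                                  # projector row seen by this pixel

# Classic PMP capture, Eqs. (1) and (4)
Ip = alpha_p * (0.5 + 0.5 * np.cos(2 * np.pi * f * y - 2 * np.pi * n / N)) + beta_p
Ic = alpha_true * (Ip + beta_true) + beta_true

Ac = Ic.mean()                                            # DC term
S = np.sum(Ic * np.sin(2 * np.pi * n / N))
C = np.sum(Ic * np.cos(2 * np.pi * n / N))
Bc = (2 / N) * np.hypot(S, C)                             # modulation term

alpha_hat = 2 * Bc / alpha_p                              # Eq. (6)
beta_hat = (Ac - alpha_hat * (alpha_p / 2 + beta_p)) / (alpha_hat + 1)  # Eq. (5)

assert np.isclose(alpha_hat, alpha_true)
assert np.isclose(beta_hat, beta_true)
```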
Fig. 6. Scanned with 3-pattern PCMP and 3-pattern unit-frequency PMP, the cross-section at the 320th column of the phase error of the board.
Fig. 7. Scanned with 3-pattern PMP (left) and PCMP (right), the side view of the 3-D reconstructed angel.

According to Eq. (8), with βp = 55 and β = −7, a reliable value of α should be larger than 0.1458. If we only scan an object with uniform texture and high reflectivity, for example α > 0.5, an optimized value of βp can be 18 according to Eq. (7). But for an object with rich texture, such as our angel model, a lot of phase data would be removed for pixels with α ≤ 0.5. So we still use βp = 55 and take α = 0.15 as the quality indicator for phase in the following experiments.

5.2. PCMP scanning

For the 3-pattern PCMP described by Eq. (12), the parameters are set as fh = 3, c1 = c2 = 0.5, αp = 200, βp = 55, and R(yp) = 2yp − 1. When we compute R(φu) with Eq. (16),
Fig. 5. The cross-section at the 120th column of φh, φu, and decoded φh of the board scanned with 3-pattern PCMP.

Fig. 8. Scanned with 4-pattern PCMP, the cross-section at the 120th column of φh, φu, and decoded φh of the board.
the term β = −7 obtained in the previous subsection is applied. First we scan the white foam board; Fig. 5 shows the cross-section at the 120th column of φh, φu, and decoded φh of the board. The percentage of pixels successfully phase decoded in this case was 97.61%, which is less than the prediction listed in Table 1 because, besides being affected by noise, the decoding error is also caused by the non-uniformly distributed ambient light intensity, which is assumed to be uniform; such error typically occurs at the edge of the phase map. For the successfully decoded pixels, the variance of the absolute error of 3-pattern PCMP is 1.13 × 10−4, while the variance of the absolute error of 3-pattern PMP is 2.43 × 10−4. The ratio of 2.43 × 10−4 over 1.13 × 10−4 is 2.15, so with more than 97% of the pixels successfully decoded, the phase quality of 3-pattern PCMP is roughly twice that of 3-pattern unit-frequency PMP. Figure 6 shows, for scans with 3-pattern PCMP and 3-pattern unit-frequency PMP, the cross-section at the 320th column of the phase error of the board. Figure 7 shows the side views of the 3-D point clouds of the angel scanned with 3-pattern unit-frequency PMP (left) and 3-pattern PCMP (right), respectively.

For the 4-pattern PCMP described by Eq. (18), the parameters are set as c2 = 1 − c1, αp = 200, βp = 55, R(yp) = cos(πyp), c1 = 0.5, 0.6, and 0.7, respectively, and fh = 4, 6, and 8, respectively, giving 9 scans in total. The term R(φu) is computed with Eq. (23), which is independent of the ambient light intensity. First we scan the white foam board: Fig. 8 shows the cross-section at the 120th column of φh, φu, and decoded φh of the board with fh = 4 and c1 = 0.5, and Fig. 9 shows, for scans with 4-pattern PCMP and 4-pattern unit-frequency PMP, the cross-section at the 320th column of the phase error of the board. The percentages of pixels successfully phase decoded in this case are listed in Table 3; they are a little higher than the predictions listed in Table 2 because the averaged α of the board was higher than the 0.5 employed in the simulation. For the successfully decoded pixels, the improvements of 4-pattern PCMP compared to 4-pattern unit-frequency PMP, whose variance of absolute error is 1.80 × 10−4, are also listed in Table 3. Figure 10 shows the side views of the 3-D reconstructed point clouds of the angel using 4-pattern unit-frequency PMP (top-1) and 4-pattern PCMP with different c1 and fh (the rest).

Fig. 9. Scanned with 4-pattern PCMP and 4-pattern unit-frequency PMP, the cross-section at the 320th column of the phase error of the board.

Table 3. Percentage of pixels successfully phase decoded for 4-pattern PCMP with different fh and c1, and the corresponding improvements compared to 4-pattern unit-frequency PMP.

c1       fh = 4           fh = 6           fh = 8
0.5      99.86 (3.47×)    99.52 (7.04×)    96.46 (12.88×)
0.6      99.82 (4.89×)    99.49 (9.77×)    96.35 (20.41×)
0.7      99.76 (7.39×)    99.38 (12.98×)   95.32 (25.93×)

Fig. 10. Side views of the 3-D reconstructed angel using 4-pattern unit-frequency PMP (top-1) and 4-pattern PCMP with c1 = 0.5 and fh = 4 (top-2), c1 = 0.5 and fh = 6 (top-3), c1 = 0.5 and fh = 8 (top-4), c1 = 0.6 and fh = 4 (top-5), c1 = 0.6 and fh = 6 (bottom-1), c1 = 0.6 and fh = 8 (bottom-2), c1 = 0.7 and fh = 4 (bottom-3), c1 = 0.7 and fh = 6 (bottom-4), and c1 = 0.7 and fh = 8 (bottom-5).

6. CONCLUSION

In this paper, we proposed the instantly-decodable, high-frequency PCMP, which combines a high-frequency signal and a unit-frequency signal by taking advantage of the available phase channels for N = 3 and 4, respectively. Traditional two-frequency PMP needs at least 6 patterns to achieve high-quality 3-D reconstruction, while our PCMP needs only 3 or 4. Our experiments supported the proposed algorithm well, demonstrating significant improvement in noise reduction over traditional phase-shift methods. The significance of these results is improved performance in real-time applications, where the number of patterns must be minimized in order to achieve accurate scans of moving objects. If the number of PCMP patterns is increased to 6, we can perform both phase decoding [8] and motion detection [19]. In the future, we will further analyze the effect of additive noise on PCMP and find optimized settings that suppress noise as much as possible while keeping a high rate of correctly decoded phase.

7. ACKNOWLEDGEMENTS

The authors would like to thank Dr. D. L. Lau and Dr. L. G. Hassebrook, at the University of Kentucky, for providing the experimental equipment.

8. REFERENCES
[1] A. Dipanda and S. Woo, “Efficient correspondence problem-solving in 3-D shape reconstruction using a structured light system,” Optical Engineering, vol. 44, no. 9, p. 093602, 2005. [2] Y. Wang, L. G. Hassebrook, and D. L. Lau, “Data acquisition and processing of 3-d fingerprints,” IEEE Transactions on Information Forensics and Security, vol. 5, pp. 750–760, 2010. [3] Q. Hu, J. Zhang, and U. Becker, “Growth process and crystallographic properties of vaterite,” in MRS Symp. Proceedings, vol. 1272, 2010, pp. 1–5. [4] S. Ma, R. Zhu, C. Quan, L. C. C. J. Tay, and B. Li, “Flexible structured-light-based three-dimensional profile reconstruction method considering lens projectionimaging distortion,” Applied Optics, vol. 51, pp. 2419– 2428, 2012. [5] J. Li, L. G. Hassebrook, and C. Guan, “Optimized twofrequency phase-measuring-profilometry light-sensor temporal-noise sensitivity,” Journal of the Optical Society of America A, vol. 20, no. 1, pp. 106–115, 1 2003. [6] V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-d diffuse objects,” Applied Optics, vol. 23, no. 18, pp. 3105–3108, 9 1984. [7] C. Guan, L. G. Hassebrook, and D. L. Lau, “Composite structured light pattern for three-dimensional video,” Optics Express, vol. 11, no. 5, pp. 406–417, 2003. [8] K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-d shape measurement,” Optics Express, vol. 18, no. 5, pp. 5229–5244, 3 2010.
[9] S. Zhang and P. S. Huang, “Phase error compensation for a 3-d shape measurement system based on the phaseshifting method,” Optical Engineering, vol. 46, no. 6, pp. 063 601:1–9, 6 2007. [10] K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Gamma model and its analysis for phase measuring profilometry,” Journal of the Optical Society of America A, vol. 27, no. 3, pp. 553–562, 3 2010.
[21] S. Li, W. Chen, and X. Su, “Reliability–guided phase unwrapping in wavelet-transform profilometry,” Applied Optics, vol. 47, no. 18, pp. 3369–3377, 6 2008. [22] Y. Shi, “Robust phase unwrapping by spinning iteration,” Optics Express, vol. 15, no. 13, pp. 8059–8064, 1 2007.
[11] J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognition, vol. 43, no. 8, pp. 2666–2680, 2010.
[23] D. S. Mehta, S. K. Dubey, M. M. Hossain, and C. Shakher, “Simple multifrequency and phase–shifting fringe–projection system based on two–wavelength lateral shearing interferometry for three–dimensional profilometry,” Applied Optics, vol. 44, no. 35, pp. 7515– 7521, 12 2005.
[12] M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-d object shapes,” Applied Optics, vol. 22, no. 24, pp. 3977–3982, 12 1983.
[24] J.-L. Li, H.-J. Su, and X.-Y. Su, “Two-frequency grating used in phase-measuring profilometry,” Applied Optics, vol. 36, no. 1, pp. 277–280, 1 1997.
[13] T. P. Koninckx and L. V. Gool, “Real-time range acquisition by adaptive structured light,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 432–445, 3 2006.
[25] W.-H. Su and H. Liu, “Calibration-based two-frequency projected fringe profilometry: a robust, accurate, and single-shot measurement for objects with large depth discontinuities,” Optics Express, vol. 14, no. 20, pp. 9178–9187, 2006.
[14] P. S. Huang, Q. Hu, F. Jin, and F.-P. Chiang, "Color-encoded digital fringe projection technique for high-speed three-dimensional surface contouring," Optical Engineering, vol. 38, no. 6, pp. 1065–1071, 1999. [15] L. Chen, C. Quan, C. J. Tay, and Y. Fu, "Shape measurement using one frame projected sawtooth fringe pattern," Optics Communications, vol. 246, no. 4-6, pp. 275–284, 2005. [16] S. Y. Chen, Y. F. Li, and J. Zhang, "Vision processing for realtime 3-d data acquisition based on coded structured light," IEEE Transactions on Image Processing, vol. 17, no. 2, pp. 167–176, 2 2008. [17] M. Schaffer, M. Grosse, and R. Kowarschik, "High-speed pattern projection for three-dimensional shape measurement using laser speckles," Applied Optics, vol. 49, no. 18, pp. 3622–3629, 2010. [18] S. Zhang and P. S. Huang, "High-resolution, real-time three-dimensional shape measurement," Optical Engineering, vol. 45, no. 12, p. 123601, 3 2006. [19] D. L. Lau, K. Liu, and L. G. Hassebrook, "Real-time three-dimensional shape measurement of moving objects without edge errors by time synchronized structured illumination," Optics Letters, vol. 35, no. 14, pp. 2487–2489, 2010. [20] T. Weise, B. Leibe, and L. V. Gool, "Fast 3d scanning with automatic motion compensation," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–8.
[26] E.-H. Kim, J. Hahn, H. Kim, and B. Lee, “Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection,” Optics Express, vol. 17, no. 10, pp. 7818–7830, 5 2009. [27] Y. Wang, K. Liu, Q. Hao, D. L. Lau, and L. G. Hassebrook, “Maximum snr pattern strategy for phase shifting methods in structured light illumination,” Journal of the Optical Society of America A, vol. 27, no. 9, pp. 1962– 1971, 2010. [28] X. Su, G. von Bally, and D. Vukicevic, “Phase-stepping grating profilometry: utilization of intensity modulation analysis in complex objects evaluation,” Optics Communications, vol. 98, no. 1, pp. 141–150, 4 1993. [29] S. Zhang, X. Li, and S.-T. Yau, “Multilevel qualityguided phase unwrapping algorithm for real-time threedimensional shape reconstruction,” Applied Optics, vol. 46, no. 1, pp. 50–57, 1 2007.