Fast surface approximation for volume and surface area measurements using distance transform

D. J. Lee
Brigham Young University
Department of Electrical and Computer Engineering
459 CB, Provo, Utah 84602
E-mail: [email protected]

Joseph Eifert
Virginia Polytechnic Institute and State University
Department of Food Science and Technology
Blacksburg, Virginia 24061

Pengcheng Zhan
Benjamin Westover*
Brigham Young University
Department of Electrical and Computer Engineering
Provo, Utah 84602
Abstract. The laser triangulation technique has been widely used to obtain three-dimensional (3-D) information because of its accuracy. It is a fast, noncontact method for 3-D measurement. However, 3-D data obtained from triangulation are not dense and usually not complete for surface reconstruction, especially for objects with irregular shapes. As a result of fitting surfaces to these sparse 3-D data, inaccuracy in measuring object surface area and volume is inevitable. Accurate surface reconstruction from incomplete 3-D data points becomes an important step toward accurate noncontact surface area and volume measurements of objects moving at high speed. A novel computer vision technique combining laser triangulation and a distance transform is developed to improve the 3-D measurement accuracy for objects with irregular shapes. The 2-D object image boundary points combined with the 3-D data obtained from laser triangulation are used to generate a 3-D wire frame. The distances from each pixel within the object boundary to its nearest boundary point are then used as the constraints for surface approximation. With this additional information from the distance transform, more accurate surface approximation can be achieved. This novel surface approximation technique was implemented, and its measurement accuracy is compared with that of other surface interpolation techniques for the volume measurement of moving objects. © 2003 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1605737]
Subject terms: distance transform; laser triangulation; three-dimensional wireframe model; three-dimensional reconstruction; machine vision; surface approximation; near infrared. Paper 030054 received Jan. 30, 2003; revised manuscript received Apr. 10, 2003; accepted for publication Apr. 14, 2003. This paper is a revision of two papers presented at SPIE conferences on Machine Vision and Three-Dimensional Imaging Systems for Inspection and Metrology, November 2000, Boston, MA, and on Image Reconstruction from Incomplete Data II, July 2002, Seattle, WA. The paper presented in Boston, MA, appears (unrefereed) in SPIE Proc. Vol. 4189 and the paper presented in Seattle, WA, appears (unrefereed) in SPIE Proc. Vol. 4792.
1 Introduction
Recently, machine vision technology has become a very useful method for inspection and measurement in various industries. The advantages of machine vision technology are that it is nondestructive, noncontact, and consistent, and it is capable of performing tasks in real time. In the agriculture and food processing industries, measurement of true 3-D volume is critical for many commodities and processes. One example is the volume measurement of oyster meats for quality grading. High grading accuracy determines pricing accuracy and is a very critical process for many food processing industries. Human grading of oyster meats is very subjective. A sampling of commercially hand-graded and sorted meats showed that 50 to 80% of the products were out of grade based on calculated volumes.1
*Current address: Department of Computer Science, Washington University in St. Louis, St. Louis, Missouri 63130.

Opt. Eng. 42(10) 2947–2955 (October 2003)
0091-3286/2003/$15.00
Oyster shucking and packing companies usually ship product weighing more than the labeled product weight, with the difference in favor of the purchaser.2 Also, multiple human handling of the product in the grading process may lead to weight loss and contamination. An automated oyster meat grading system integrated with the shucking and packing equipment could completely eliminate human handling. Limited research has been done on solving the high-speed volume measurement problem. In 1990, the geometric and physical properties of raw oyster meat were studied to understand the properties that are related most to grading.1 The investigators wanted to determine how measurable geometric and physical properties related to grading so that an automated measurement system could be developed to grade oyster meats. Weight, volume, 2-D projected area, and height were measured. Statistical correlations between the various properties were found, and weight was determined to be the best predictor of volume. However,
Lee et al.: Fast surface approximation . . .
measuring the weight of each individual piece of product at the production speed is not an easy task and is almost impossible for products with deformable shapes like oyster meats. An automated system using a camera to measure the 2-D area was proposed and developed3 in 1994. This oyster meat grading system photographs the product on a conveyor belt with a camera directly above the products. The system binarizes the images taken from the camera so that the pixels of the product are set to 1 and the pixels of the background are set to 0. The 2-D binary area of the product, calculated in number of pixels, was then measured to estimate the volume. The 2-D area measurement of the product was shown to be somewhat correlated with its volume.2 Estimation of the actual volume based on this method has an estimation error of ±3.9 cm³, which is equivalent to 20% of most product sizes and does not meet the industry's needs. A novel high-speed 3-D volume and surface area measurement technique has been developed and is presented in this paper. Unlike systems using a 2-D area projection to estimate the volume and surface area, this technique uses laser triangulation to measure the heights of many sample points on the object surface. These 3-D data points were used, in addition to the 2-D boundary points, to construct a 3-D surface for calculating volume and surface area. However, the available 3-D data points represent a small set of incomplete data for the 3-D surface. Surface approximation is necessary to determine the surface area and volume accurately. In this work, the distance information from the morphological 2-D distance transform was used as the constraint for 3-D surface approximation. This research focused on 3-D measurement of oyster meat volume. However, with minor adjustments, this system can also be used for other food or seafood processing applications that require sorting or quality grading by size.
The techniques for converting 2-D images to 3-D data are introduced in Sec. 2. Calibration and image processing techniques are also discussed in this section. Section 3 discusses 3-D model generation, the distance transform, and 3-D surface reconstruction. Section 4 shows the results of using this algorithm for volume measurement of oyster meats. The advantages and the comparisons with other surface approximation techniques are discussed in this section. Finally, the conclusions are drawn in Sec. 5.
Fig. 1 Multiple laser lines.
Fig. 2 Triangulation and side view.
2 3-D Measurement from a 2-D Image
2.1 Triangulation
Parallel laser lines were projected across the product while it was lying on a flat surface. The laser lines were projected from above the product at an angle to the vertical, as shown in Fig. 1. The camera was mounted vertically above the product. Without the product, the laser lines appeared as parallel straight lines on the flat surface, as shown in Fig. 1. The laser lines shift to the left as the thickness increases. Figure 2 shows the effect of a laser light projecting on two different sizes of product. The laser light strikes the larger product at a point higher than the point where it strikes the smaller one. In the figure, H is the height of the laser contact point for the large object, h is the height of the laser contact point for the small object, and o is the point on the flat surface where the laser light would strike in the absence of the product. From the perspective of the camera, the laser light is displaced a distance D from o by the large product and a distance d from o by the small product. The right triangle formed by sides H and D is similar to the right triangle formed by sides h and d. If the height changes, the displacement must change by the same factor. As a result, the height of the product can be measured from the observed displacement of the laser light. Figure 3 shows an object with multiple laser lines projected from an approximately 45-deg angle. The image was acquired in the near-IR range, and an interference filter was used to filter out the visible light and increase the contrast between the product and the background. As illustrated in Fig. 3, three parallel laser lines were projected on the object; they are not straight lines because of the displacement caused by the change in object thickness. The right half of the object is thicker than the left half because the laser line on the right half has larger shifts than the laser line on the left.
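The similar-triangles relation above means height is simply proportional to the observed displacement. A minimal sketch in Python, assuming the calibrated pixel/inch ratio introduced in the calibration section (the function name is ours):

```python
def height_from_displacement(displacement_px, pixels_per_inch):
    """Convert an observed laser-line shift (in pixels) to object height (in inches).

    By similar triangles, H/D = h/d is constant, so a single calibrated
    ratio R (pixels per inch of height) suffices: H = D / R.
    """
    return displacement_px / pixels_per_inch

# With the calibrated ratio of 164 pixels/in. from Sec. 2.2, a displacement
# of 82 pixels corresponds to a 1/2-in.-thick object.
h = height_from_displacement(82, 164.0)  # -> 0.5
```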
The laser line displacement is measured as the distance from each point on the curved laser line to its reference point obtained when projected on a flat reference surface.

2.2 Calibration
A near-IR-sensitive camera manufactured by Hitachi (model KP-F2A) was selected for this research. The effective
Fig. 4 Laser lines (a) on the reference surface and (b) on a 1-in. block.
number of pixels for this model is 658×496. A 16-mm Computar TV lens was used for imaging. Because the oyster meats will be moving on a conveyor belt and cannot be presented at a fixed location under the camera for imaging, the camera field of view must be slightly larger than the largest oyster meat. For this purpose, the vision system was adjusted to cover 4×3 in., which provides an image resolution of approximately 164 pixels/in. Figure 4 shows the image of three laser lines projecting from an angle onto a flat surface. The laser lines do not appear straight because of the geometric distortion of the laser line projector. Figure 4(a) shows the laser line position on the reference surface (zero height). Figure 4(b) shows the laser lines on the flat surface of a 1-in.-thick block. The displacement measured in number of pixels between each corresponding laser line pair is equivalent to 1 in. For example, if the displacement between the leftmost pair of laser lines is 164 pixels, then the calibrated displacement pixel/inch ratio is 164 pixels/in. With the laser lines projected on the object, the lines will appear displaced to an observer directly above the object looking down, as shown in Fig. 1. This displacement of a laser line is directly proportional to the height of the object at the point where the laser light strikes the object surface. In other words, a displacement of 82 pixels means the object is 1/2 in. thick if the calibrated displacement pixel/inch ratio is 164. During the calibration, the system records the x and y coordinates of each point on all three laser lines for both the zero height (reference flat surface) and the test height (the top surface of the 1-in. block). The displacement pixel/inch ratio R at the y'th row of the image can be expressed as

R_l(y) = [Z_l(y) − T_l(y)]/1.0 in.,  l = 1, 2, and 3 (laser line number),   (1)

where Z_l(y) and T_l(y) are the laser line centers for the zero height and the test height. To simplify the calibration procedure, each row has its own displacement pixel/inch ratio and reference point (zero height) to compensate for the geometric distortion caused by the laser line projector. Without losing accuracy (as proven in later sections), the weak-perspective camera model was used and a linear relation between the product height and laser line displacement was assumed. The height of each laser line point on the surface can then be calculated as

H_l(x,y) = [x − Z_l(y)]/R_l(y),  l = 1, 2, and 3 (laser line),   (2)

where x and y are the coordinates of the laser line point on the object surface.4,5

Fig. 3 Object and laser lines.

2.3 Image Processing
Figure 3 shows the original image of an oyster with the same background surface as that in Fig. 4(a). The oyster appears brighter than the background. A binary image shown in Fig. 5(a) was obtained using a single threshold. However, the laser lines were preserved because they were also brighter than the background. A filter removing thin objects was used to remove the thin laser lines from the binary image. Figure 5(b) shows the binary image with the laser lines removed. This binary image was used to calculate the 2-D area in pixels. The oyster length and width can
Fig. 5 (a) Binary image, (b) laser lines removed, and (c) the contour.
Fig. 6 Laser lines and 3-D data.
also be measured. However, the size of the binary image depends heavily on the threshold selected, and it is very sensitive to lighting variation. The use of a near-IR (NIR) camera and an interference filter to block the ambient light significantly improved the consistency of locating the contour points.4,5 A fast eight-neighborhood contour trace algorithm was developed to extract the x and y coordinates of the object boundary, as shown in Fig. 5(c). The binary image with laser lines removed was then used as a mask to remove the laser lines in the original image in Fig. 5(a). Once the object contour was defined and the image noise and laser lines outside the contour were removed, a vertical edge detector was used to detect the laser lines. This was done using a 5×1 kernel described as [1, 1, 0, −1, −1]. The locations of the three laser lines shown in Fig. 6(a) are derived from
L(x,y) = 255 if I(x−2,y) + I(x−1,y) − I(x+1,y) − I(x+2,y) > laser threshold, and 0 otherwise,   (3)
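The thresholded kernel response of Eq. (3) can be sketched with NumPy; the function name and the array layout (rows y, columns x) are our illustrative assumptions:

```python
import numpy as np

def detect_laser_lines(image, laser_threshold):
    """Apply the vertical edge detector of Eq. (3) to a 2-D grayscale array.

    For each pixel, the sum of the two pixels to its left is compared
    against the sum of the two to its right along the row; a response
    above the threshold marks a laser line pixel (255). The two border
    columns on each side are left at 0.
    """
    img = image.astype(np.int64)
    out = np.zeros_like(img)
    # response(x) = I(x-2,y) + I(x-1,y) - I(x+1,y) - I(x+2,y)
    resp = img[:, :-4] + img[:, 1:-3] - img[:, 3:-1] - img[:, 4:]
    out[:, 2:-2] = np.where(resp > laser_threshold, 255, 0)
    return out

# A single bright column produces a response just to its right:
row = np.array([[0, 0, 0, 100, 0, 0, 0]])
laser = detect_laser_lines(row, 50)  # -> [[0, 0, 0, 0, 255, 0, 0]]
```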
where I(x,y) is the input image with the laser lines outside the contour removed, as shown in Fig. 5(c), and L(x,y) is the output laser line image in Fig. 6(a). Depending on the laser threshold chosen and the object surface conditions, the detected laser lines may be discontinuous. Linear interpolation was used to link the "broken" laser lines together, and complete traces of the laser lines were obtained. The boundary points and the three continuous laser line traces [Fig. 6(b)] are expressed as 2-D points with x and y coordinates. Equation (2) can then be used to convert the 2-D points into 3-D data, which can be expressed as (x,y,z) = [x, y, H_l(x,y)].

3 Surface Approximation
3.1 3-D Wire-Frame Model
The x, y, and z coordinates of the boundary points were used as the base of the 3-D wire-frame model of the object, as shown in Fig. 7. It was assumed that the heights of all the boundary points were zero (z = 0). The three laser lines projected on the object surface provided three wires that lie on the actual 3-D surface where the laser lines strike. The 3-D measurement (x,y,z) of these sparse data points was calculated using Eq. (2). A 3-D wire-frame model was constructed from these 3-D data. The 3-D wire-frame model, although an incomplete 3-D surface, was used to calculate the volume with surface approximation. A fast and accurate surface approximation technique using the distance transform was developed for this purpose. The details of this novel technique are described in the following two sections.

Fig. 7 3-D wire frame model.

3.2 Distance Transform
The distance transform is an operator normally applied only to binary images. It assigns to each feature pixel of a binary image a value equal to its distance to the nearest nonfeature pixel.6,7 It can be used to extract information about the shape and the position of the foreground pixels relative to each other. It has been applied in many fields of research, such as medical imaging, pattern recognition, and surface reconstruction. There are different types of distance transforms, depending on how the distance is measured. The distance between pixels can be measured as Euclidean, city block, or chessboard distance. The distance metrics8 used are described as follows:

1. Euclidean distance: the straight-line distance between two pixels, D_E[(i,j),(h,k)] = [(i−h)^2 + (j−k)^2]^{1/2}. This metric matches the everyday notion of distance. Because it can extract an accurate skeleton that is reversible, rotationally invariant, and minimal, it is applied in a variety of fields (e.g., medical imaging or shape-based interpolation).

2. City block distance: D_4[(i,j),(h,k)] = |i−h| + |j−k|. The city block distance metric measures the path between the pixels based on a four-connected neighborhood, as shown in Fig. 8(a). For a pixel p
Fig. 8 (a) Four-connected and (b) eight-connected neighbors.
with the coordinates (x,y), the set of pixels given by N_4(p) = {(x+1,y), (x−1,y), (x,y+1), (x,y−1)} is called its four-neighbors.

3. Chessboard distance: D_8[(i,j),(h,k)] = max[|i−h|, |j−k|]. The chessboard distance metric measures the path between the pixels based on an eight-connected neighborhood, as shown in Fig. 8(b). The set of eight-neighbor pixels is defined as N_8(p) = N_4(p) ∪ {(x+1,y+1), (x+1,y−1), (x−1,y+1), (x−1,y−1)}.

Fig. 9 Distance transform and its output.
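As a concrete illustration of the city block metric, here is a sketch of the classic two-pass raster-scan distance transform; the function and variable names are ours, and production code would use an optimized library routine:

```python
import numpy as np

def cityblock_distance_transform(binary):
    """Two-pass city block (D_4) distance transform of a 0/1 binary image.

    The forward raster pass propagates distances from the top and left
    neighbors; the backward pass from the bottom and right. Nonfeature
    (0) pixels get distance 0; feature pixels get the D_4 distance to
    the nearest nonfeature pixel.
    """
    h, w = binary.shape
    inf = h + w  # larger than any achievable D_4 distance in the image
    d = np.where(binary > 0, inf, 0).astype(np.int64)
    # Forward pass: top-left to bottom-right.
    for y in range(h):
        for x in range(w):
            if d[y, x]:
                if y > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x] + 1)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    # Backward pass: bottom-right to top-left.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if d[y, x]:
                if y < h - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x] + 1)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d
```

On a 3×3 block of feature pixels surrounded by background, the ring pixels get distance 1 and the center pixel distance 2, matching the layered-erosion picture described below.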
Depending on the application requirements, any of these distance metrics can be used. Though the Euclidean distance transform (EDT) has advantages, its complexity requires extended computation time. Therefore, in some applications, for fast computation, the linear metrics, city block and chessboard, are often used. These two metrics enable fast calculation so that real-time processing can be achieved. Once the distance metric is chosen, different algorithms for computing the distance transform of a binary image can be selected. One simple, but extremely inefficient, method of computing the distance transform is to perform recursive morphological erosions until all feature pixels of the binary image have been eroded. This method works like removing the outermost boundary, one layer at a time; if each feature pixel is labeled with the number of layers that had to be removed before the pixel itself was removed, then the distance transform of that region is obtained. A more efficient algorithm can calculate the distance transform in only two passes.9,10 However, given the importance of the distance transform, different algorithms have been implemented for different applications. For example, to efficiently compute the shape signature, which is the key element in an online search engine for 3-D models,11 a fast method has been developed to calculate the EDT. Among the many algorithms, there is a sequential algorithm for computing the EDT of a k-dimensional binary image in time linear in the total number of voxels.12 By reducing the dimensionality, the EDT can be performed in the next lower dimension. Another approach applies Breu's algorithm to compute the EDT of 2-D images in O(N) time.13 Because of the
relative simplicity, easy implementation, and fast calculation, it can greatly improve the performance of the EDT. Figure 9(a) illustrates the distance transform output of a simple binary image. All the nonfeature pixels are set to 0. The outer boundary points are assigned a value of 1 because they are all 1 pixel away from nonfeature pixels. The values grow for pixels that are farther away from the edge of the binary image. The value of each pixel represents the closest distance from that pixel to the edge of the binary image. Figure 9(b) shows the distance transform of the binary image shown in Fig. 5(b) using the city block distance. The distance transform output looks similar to a contour map. The pixels on the same contour line have the same distance to the nonfeature pixels. Also, the center area is brighter than the boundary because the distance is greater in the center area.

3.3 Surface Reconstruction
Surface interpolation techniques must be applied to reconstruct the 3-D surface before the volume or surface area can be measured. Surface interpolation is a computation-intensive process. A fast surface approximation method that uses the 3-D wire frame and the output of a morphological distance transform was developed in this research to solve this problem. The distance transform output was used as the constraint for 3-D surface approximation. This method is very efficient, and the result is more accurate than that of other surface interpolation techniques. The distance transform output combined with the 3-D wire-frame model is shown in Fig. 10(a). The three dark solid lines represent the three laser lines whose 3-D measurements were calculated using the triangulation method, as described earlier. These three laser lines divide the distance transform output into four regions, as shown in Fig. 10(a). The outermost boundary was used as the base
Fig. 10 Distance transform output combined with the 3-D wire-frame model.
Fig. 11 3-D surface reconstruction using (a) distance transform and (b) bilinear interpolation.
of the 3-D model. A 3-D plot of this arrangement is shown in Fig. 10(b). Surface reconstruction was done based on the contour lines and the 3-D information from the three laser lines. All the boundary points on the outermost contour line have a zero height. The height of each pixel in the image was determined as follows:

1. For pixels in regions 1 and 2, heights were determined by the height of the point (already calculated) on the laser line that intersects the contour line. In most cases, in these two regions, the laser line intersects each contour line at two locations, as shown in Fig. 10(a). The height of the intersection point that is closer to that pixel was used.

2. In regions 3 and 4, where a contour line intersects either two laser lines or one laser line at two locations, the height is determined by the weighted sum of the two heights at the intersection points. The weight is determined by the distance between the pixel and the intersection points: the closer intersection point is weighted more heavily than the intersection point that is farther away from the pixel.

3. For pixels on contour lines that have no intersection with any laser line, the next lower contour line was used.

A complete 3-D surface reconstruction for each pixel in the feature (binary) area using these rules is shown in Fig. 11(a). Figure 11(b) shows the surface reconstruction result using bilinear interpolation with only the 3-D wire-frame model. The accuracy of the newly developed method is compared with that of the bilinear surface interpolation method in Sec. 4. Figures 12(a) and 12(b) show 3-D plots of the reconstructed surface using these two methods.
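The inverse-distance weighting of step 2, and the final height-map integration into a volume, can be sketched as follows; the function names are ours, and a full implementation would also handle steps 1 and 3:

```python
def blended_height(d1, h1, d2, h2):
    """Step-2 weighting: blend the heights h1, h2 of two laser-line
    intersection points, where d1, d2 are the pixel's distances to them.
    The closer point (smaller distance) receives the larger weight."""
    w1 = d2 / (d1 + d2)  # small d1 -> large w1
    w2 = d1 / (d1 + d2)
    return w1 * h1 + w2 * h2

def volume_from_heights(height_map, pixels_per_inch):
    """Integrate a per-pixel height map (heights in inches) into a volume
    in cubic inches: each pixel contributes height times pixel area."""
    pixel_area = (1.0 / pixels_per_inch) ** 2
    return sum(h * pixel_area for row in height_map for h in row)
```

For example, a pixel at distance 1 from an intersection of height 1.0 and distance 3 from one of height 0.0 is assigned a height of 0.75, three quarters of the way toward the closer point.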
4 Results
Object volumes can be measured or estimated using (1) a 2-D area projection, (2) a bilinear surface interpolation of a 3-D wire-frame model, and (3) a distance transform output as the constraint for surface approximation of a 3-D wire-frame model. Thirty oysters were collected, and their volumes were measured by each of these three techniques. Additionally, the volumes were also measured by a water displacement method and used as the ground truth. The comparison of the measured volumes with the known ground truth (water displacement method) determines the measurement accuracy.

4.1 Data Collection
Thirty oysters of various sizes (5 to 18 g) were selected for accuracy evaluation of the 3-D volume measurement techniques using a 2-D area projection, a bilinear surface interpolation, and a distance transform. Oysters were placed in random orientations on the platform of the imaging system to obtain volume information calculated by the computer vision measurement system utilizing all three techniques. Lighting, camera lens filters, and the calibration were all carefully adjusted and tested. The volume of each oyster was then carefully measured using the displacement of water from a standard 25-ml Hubbard pycnometer. A pycnometer is a sealable, lidded glass container designed so that no air remains trapped in the top when it is filled with liquid. The pycnometer was filled with distilled water and weighed. The volume of the oyster was equal to the volume of water displaced by the oyster.

4.2 Data and Accuracy Analysis
The volume estimation models were built using a linear regression model with the measured volume of the oyster in cubic centimeters as the output variable and the number of pixels obtained from the machine vision system as the predictor variable. A strong linear relation between the actual volume and the number of pixels obtained from the volume measurement system was observed for all three methods, as shown in the scatter plots in Fig. 13.
However, statistical analysis shows significant differences among the three methods. The R^2 value (squared correlation coefficient) between the real volume (RV) measured by water displacement and the volume measurement (VM) was calculated for each method. The R^2 value can be interpreted as the proportion of the variance in the VM in number of pixels attributable to the variance in the RV in cubic centimeters.
Fig. 12 3-D plot of the reconstructed 3-D surface: (a) distance transform and (b) bilinear interpolation.
Fig. 13 Data obtained from all three techniques and the linear regression results.
The higher the R^2 value, the closer the VM is to the RV. From Table 1, the distance transform surface approximation technique had the highest R^2 value (0.987). The second comparison was done using the standard error of the VM in number of pixels for each RV in cubic centimeters in the regression form. The standard error is a measure of the amount of error in the prediction of the volume measurement for an RV. The distance transform technique had the lowest standard error. Both statistics indicated that the 3-D approaches, the bilinear surface interpolation and the distance transform, provided a better volume measurement than the 2-D projection method. Accuracy was also compared using the average percentage error from measuring all 30 oysters. This was done by first calculating the slope (m) and intercept (c) of the linear regression equation for each method. The VM in number of pixels was then converted to the estimated volume (EV) in cubic centimeters using the equation EV = m × VM + c for all three techniques. The percentage error between EV and RV for each oyster was calculated as

Percentage error = |EV − RV| / RV.   (4)

The average of the percentage error for all 30 samples was then calculated and is shown in Table 1. The distance transform technique had the lowest average percentage error, 2.9%, which is a 68% improvement over the 2-D area projection method (8.4%). This shows that the 3-D approaches reduce the measurement error by 55 and 68% compared with the 2-D area projection method for the bilinear interpolation and the distance transform, respectively. Also, at a 95% confidence level, the 2-D approach yielded an average margin of error of ±2.55 cm³ and the bilinear interpolation yielded an average margin of error of ±1.40 cm³, whereas the distance transform technique yielded the lowest margin of error (±1.16 cm³). A reduction of 55% in the margin of error over the 2-D area projection method was observed when estimating the oyster volume with the linear model using data from the distance transform technique developed in this research. Table 1 summarizes these comparisons and clearly shows that the distance transform technique outperformed the other two methods. Figure 13 shows the measurement data of all three techniques plotted against the real volume obtained using the water displacement method.

4.3 Processing Speed
Besides its superior measurement accuracy, the distance transform (DT) surface approximation approach also provides a processing speed advantage that is critical for industrial applications. Table 2 shows the computation time of each processing step. The benchmarks were obtained using a PC with a single 1.6-GHz Pentium IV processor. The software was developed in the Microsoft Visual C++ development environment with a few key functions optimized using Intel's MMX technology. All numbers shown in Table 2 are in milliseconds. The 2-D projection requires the fewest processing steps and takes only 37.7 ms to calculate the 2-D area. The DT approach is
Table 1 Accuracy comparisons for predicted oyster volumes.

                                     2-D Projection   Bilinear Surface   Distance
                                     Area             Interpolation      Transform
  Squared correlation coefficient    0.85             0.967              0.987
  Standard error (cm³)               1.21             0.565              0.46
  Average percent error              8.4%             3.6%               2.9%
  Percent of improvement over 2-D    —                55%                68%
  Average margin of error (cm³)      ±2.55            ±1.40              ±1.16
  Percent of improvement over 2-D    —                45%                55%
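The accuracy figures above follow from fitting the linear model EV = m × VM + c and applying Eq. (4). A minimal sketch with made-up sample values (the actual study used 30 oysters; the function name and data are ours):

```python
import numpy as np

def percentage_errors(vm_pixels, rv_cm3):
    """Fit the linear model EV = m*VM + c by least squares, then return
    the per-sample percentage error |EV - RV| / RV of Eq. (4)."""
    vm = np.asarray(vm_pixels, dtype=float)
    rv = np.asarray(rv_cm3, dtype=float)
    m, c = np.polyfit(vm, rv, 1)  # slope and intercept
    ev = m * vm + c               # estimated volumes (cm^3)
    return np.abs(ev - rv) / rv

# Hypothetical pixel counts and ground-truth (water displacement) volumes:
errs = percentage_errors([1200, 2100, 3050, 4000], [5.1, 9.0, 12.9, 17.2])
mean_percent_error = 100 * errs.mean()
```

The average of these per-sample errors is the "average percent error" row of Table 1, and the reported margins of error come from the same residuals at a 95% confidence level.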
Table 2 Computation time (ms) for each processing step.

                                   2-D Projection   Bilinear Surface
                                   Area             Interpolation      DT
  Binarization and blob analysis   37.7             37.7               37.7
  Contour tracing                  —                0.1                0.1
  Laser lines detection            —                1.9                1.9
  3-D wire-frame model             —                16.8               16.8
  Bilinear surface interpolation   —                7666.1             —
  DT                               —                —                  7.6
  3-D map generation               —                —                  130.8
  Volume calculation               —                0.4                0.4
  Total time (ms)                  37.7             7723               195.3
approximately 40 times faster than the bilinear interpolation, which is equivalent to five pieces of oyster meat per second. This is well above the processing speed requirement for the industry. The 7.7-s/piece processing time of the bilinear interpolation is not acceptable.

5 Conclusions
A fast, nondestructive volume and surface area measurement technique was developed and presented. The DT was used to achieve fast processing and improved 3-D volume measurement accuracy. An advantage of this 3-D volume measurement is the simplified lighting and camera adjustments for imaging the objects. With the 2-D area projection measurement, the lighting adjustments and camera calibration must be precise for a range of gray levels in the object. Inevitably, parts of objects will fall outside the detectable gray-scale range. Also, the volume calculation equation was modeled using a two-parameter linear regression model relating the measured area to the volume. With the new 3-D volume measurement technique, the height of each point on the object surface where the laser lines strike was measured. These continuous line height measurements gave a more accurate volume estimation model than the model built using only the 2-D area information. Data analysis showed that the new technique presented in this paper outperformed the bilinear surface interpolation and the simple 2-D projection techniques. Although the standard error of the DT method shows only slight improvement over the bilinear surface interpolation method, the significant processing speed improvement (approximately 40 times faster using a 1.6-GHz Pentium IV processor) makes the DT an excellent approach for real-time applications. This new technique has great potential in commercial applications for measuring object volume and surface area at high speeds. Future work includes investigation of the optimal number of laser lines for better accuracy and of other surface approximation techniques for comparison.
Acknowledgment

This research was partially funded through the Cooperative State Research, Education, and Extension Service of the U.S. Department of Agriculture, Special Research Grants Program, Food Safety (Project No. 99-34382-8463).

2954 Optical Engineering, Vol. 42 No. 10, October 2003
D. J. Lee received his BSEE degree from the National Taiwan University of Science and Technology in 1984, his MS and PhD degrees in electrical engineering from Texas Tech University in 1987 and 1990, respectively, and his MBA degree from Shenandoah University in 1999. Dr. Lee is currently an associate professor with the Department of Electrical and Computer Engineering at Brigham Young University (BYU), Provo, Utah. He served in the machine vision industry for 12 years prior to joining BYU in 2001. He was a staff scientist with Innovision Corporation, a senior system engineer with Texas Instruments, R&D manager and vice president of R&D with AGRI-TECH, and the director of vision technology with Robotic Vision System, Inc. He has designed and built real-time machine vision systems for various industries, including automotive, pharmaceutical, semiconductor, surveillance, and military. His current research is on 3-D reconstruction, medical image analysis, object tracking, shape analysis, and shape-based pattern recognition.

Joseph Eifert is currently an assistant professor and extension specialist with the Department of Food Science and Technology of Virginia Tech. His research and extension programs emphasize microbiological safety and quality issues for food processors and food safety education. He also teaches a graduate course on food regulatory affairs. Dr. Eifert received his graduate degrees in food science and technology from Virginia Tech, and his BS degree in biology from Loyola Marymount University. He was previously a laboratory manager for the Nestlé USA Quality Assurance Laboratory in Columbus, Ohio, and an analytical chemist for the U.S. Food and Drug Administration in Los Angeles, California.
Pengcheng Zhan received his BS degree in electrical engineering and applied electronics from Tsinghua University, Beijing, China, in 2002. His research interests are signal and image processing and computer vision. He is currently a PhD student with the Department of Electrical and Computer Engineering, Brigham Young University (BYU), Provo, Utah.
Benjamin Westover is a recent graduate of Brigham Young University with a BS degree in computer engineering. He has done research work in computer vision with Dr. D. J. Lee at Brigham Young University and Dr. Robert Pless at Washington University in St. Louis. He is currently a doctoral student in computer science at Washington University in St. Louis, and is working on problems related to computational biology with his advisor Dr. Jeremy Buhler.