Proceedings of the International Conference on Innovations in Food Processing Technology and Engineering (ICFPTE'08), Asian Institute of Technology, Pathumthani, Thailand, 19-20 January 2009

Development of a Machine Vision System for Modeling Surface Area and Volume of Orange

M. Omid*, M. Khojastehnazhand and A. Tabatabaeefar
Faculty of Biosystems Engineering, University of Tehran, Karaj, Iran.
* Corresponding Author's E-mail: [email protected]

Abstract. In this paper, a machine vision system for determination of the surface area and volume of oranges is presented. The proposed machine vision system consists of two CCD cameras, an appropriate lighting system and a personal computer. The cameras are placed at right angles to each other in order to capture two perpendicular views of the orange. Initially, the algorithm removes the background by a subtraction technique. The image of the produce is then divided into a number of frustums of right elliptical cones. The surface area and volume of each frustum are computed by the segmentation method, and the total surface area and volume of the orange are approximated as the sum of all elementary frustums. The differences between the surface areas and volumes computed by the proposed machine vision system and those measured by classical methods (tape method and water displacement, respectively) were not statistically significant at the 5% level. The Bland–Altman results indicated that orange size has no effect on the accuracy of the surface area and volume estimated by the image processing technique. The characterization results showed that the computed volume and measured mass are highly correlated, with a coefficient of determination of 0.93. Therefore, the same algorithm may be used to grade oranges by mass using the volume estimate already computed by the segmentation method. The developed algorithm is quite general and may be readily extended to surface area and volume computation of other axi-symmetric fruits such as melon, kiwifruit, pomegranate, and pear. The proposed method is rotationally invariant, does not require fruit alignment on the conveyor, and may easily be integrated in a multi-product sorting system for grading citrus.

Keywords: Orange; Surface area; Volume; Machine vision; Segmentation method

Introduction

Physical attributes of fresh produce, such as density, weight, surface area and volume, have often been used to calculate water loss, heat transfer, quantity of pesticide applications, respiration rates, fruit growth and quality, ripeness indices to forecast optimum harvest time, grading, and so on (Hahn and Sanchez, 2000; Topuz et al., 2005; Wilhelm et al., 2005; Lee et al., 2006; Lorestani and Tabatabaeefar, 2006). Among these attributes, surface area is one of the most important factors in all of these application fields. Research has been done to determine the relationship between surface area and more easily measured attributes such as weight, volume and 2-D measures (Forbes and Tattersfield, 1999; Hahn and Sanchez, 2000; Sabliov et al., 2002; Lee et al., 2006; Wang and Nguang, 2007). In recent years, the search for rapid and nondestructive techniques for measuring these physical attributes for size sorting, quality grading, etc. has attracted many researchers. Different mathematical models and numerical methods have been applied to obtain representations of surface area and volume. Machine vision and image processing techniques have been found increasingly useful in the fruit industry, especially for applications in quality inspection and shape sorting. Research in this area indicates the feasibility of using such systems to improve product quality while freeing people from the traditional hand sorting of agricultural materials. Currently, machine vision is the most effective tool for external feature measurements such as color intensity, color homogeneity, bruises, size, shape and stem identification (Forbes and Tattersfield, 1999; Lee et al., 1999; Sabliov et al., 2002; Jafari et al., 2006; Lorestani et al., 2006).


The use of machine vision is gaining interest for the determination of the physical attributes of fruits and irregular-shaped objects, because it is a non-destructive method based on image analysis and image processing operations. Forbes and Tattersfield (1999) developed a combined machine vision and artificial neural network technique for the estimation of pear volume from 2-D digital images. The RMS percentage error using a single digital image was 3%; this error was reduced to 1.9% when the volume was estimated from sets of four images. Lorestani et al. (2006) developed a fuzzy logic based algorithm for sorting Golden Delicious apples. Features such as color and size were measured through a data acquisition system consisting of an apple sorter, an illumination chamber, a webcam and a PC. The grading results showed 91.2% and 95.2% agreement with a human expert for the off-line and online cases, respectively. Hahn and Sanchez (2000) developed an imaging algorithm to measure the volume of non-circular agricultural produce such as carrots. Both Sabliov et al. (2002) and Wang and Nguang (2007) used image processing techniques to compute the volume and surface area of axi-symmetric agricultural products. Bailey et al. (2004) demonstrated an image processing approach that estimated the weight of agricultural products rapidly and accurately. Koc (2007) determined the volume of watermelon using ellipsoid approximation and image processing, and compared the results with the water displacement method to determine overall system accuracy. The objective of this study is to develop an efficient algorithm for accurate computation of the volume and surface area of oranges based on machine vision.

Materials and Methods

A total of 50 randomly selected oranges of various sizes were purchased from a local market. The mass of each orange was measured by a digital balance with an accuracy of ±0.01 g; the minimum and maximum masses were 136 and 237 g, respectively. In order to determine the volume and surface area of the oranges, a machine vision system was designed, developed and tested. The design of this system was divided into the following sequential stages: image acquisition, image processing, volume and surface area computation with the image processing method, and system validation. The proposed system (Fig. 1) consists of two CCD cameras, two capture cards, an appropriate lighting system and a personal computer (PC). The cameras had a CS lens mount, a focal length of 3.5-8 mm, a resolution of 510 × 492 pixels, and provided 480 vertical TV lines. The cameras were mounted about 25 cm above the belt and powered by a 24 V power supply. To provide uniform lighting, four fluorescent tubes placed above the conveyor were used; the position of the lighting tubes was adjusted to provide uniform, well-illuminated and shadow-free images of the fruit. The light source and cameras were mounted on a frame attached to the measurement table. A white cardboard was used as the background surface to facilitate and simplify the segmentation task. Signals from the orange samples were captured by the cameras, transferred to the PC through the video capture cards, digitized, and stored on the PC in RGB (red, green, blue) color space. A program was developed to capture and record the surface images of the orange; the algorithms were implemented in the VB 6.0 programming language.
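For illustration only, the sketch below shows how two perpendicular views could be grabbed from a generic two-camera setup using OpenCV in Python; the device indices and file names are assumptions, and the original system used dedicated capture cards driven from VB 6.0.

```python
# Illustrative sketch only: grab one frame from each of two perpendicular
# cameras using OpenCV. Device indices (0, 1) and output names are assumptions;
# the original system used dedicated capture cards and VB 6.0.
import cv2

def capture_views(top_index=0, side_index=1):
    """Return a (top_view, side_view) pair of BGR images, or raise on failure."""
    frames = []
    for index in (top_index, side_index):
        cam = cv2.VideoCapture(index)
        ok, frame = cam.read()          # single frame in BGR colour space
        cam.release()
        if not ok:
            raise RuntimeError(f"camera {index} returned no frame")
        frames.append(frame)
    return tuple(frames)

if __name__ == "__main__":
    top, side = capture_views()
    cv2.imwrite("orange_top.png", top)
    cv2.imwrite("orange_side.png", side)
```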


Fig. 1 The proposed machine vision system

Fig. 2 The axi-symmetric geometry: (a) 3-D view of an orange, (b) segmentation into a number of elliptical cones, (c) extracted diameters from the two cameras, and (d) thickness of a frustum and its surface areas

The image processing part of this study comprises three steps: background segmentation, image enhancement (removal of background and shadow in the image) and dimensional calibration. In order to remove the background from the fruit images, an image of the background is first captured. From the R, G and B values of all background pixels, the standard deviation of the background image is calculated and stored in the database. Once the image of the fruit is captured, the RGB value of any pixel in the image is computed by Eq. (1):

$P(X, Y) = B \times 2^{16} + G \times 2^{8} + R$    (1)

The RGB values in the fruit image are then compared with the stored information on the background image available in the database. If the difference between the two RGB values is less than 3σ, the pixel is regarded as background; otherwise it belongs to the fruit:

IF $\lvert P_B - P_F \rvert < 3\sigma$ THEN Pixel = Background    (2)

where PB is the RGB value of the background image pixel, PF is the RGB value of the fruit image pixel and σ is the standard deviation of the background image. This method of background segmentation produces slight shadows underneath the fruit image. To correct for this overestimation, we used information from the shadow histogram by determining upper and lower threshold limits. This simple technique worked satisfactorily for oranges as well as the other citrus fruits tested. The number of pixels in the foreground (fruit) has to be scaled in order to convert (map) the total number of pixels into a real area/volume value. The constants Csa (scale factor of surface area) and Cvol (scale factor of volume), Csa = 0.487 × 10⁻² and Cvol = 0.295 × 10⁻³, were obtained by rationing the real and measured area/volume of a perfect sphere, respectively. Csa and Cvol are eventually used to convert units of measurement from pixels to cm² and cm³, respectively.

The volume and surface area of a conical frustum can be calculated using equations that are commonly found in mathematics handbooks (Szirtes, 2006). Consider the 3-D representation of the produce (Fig. 2). Conceptually, we divide the image of the orange into a number of frustums of right elliptical cones (Fig. 2b). These elementary frustums are assumed to be of equal pixel height, δ, as shown in Fig. 2d. The volume of the orange may then be estimated by summing the elementary volumes of the individual frustums.
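As a concrete illustration of the background-subtraction rule of Eqs. (1) and (2) and of the pixel-to-unit scale factors, a minimal sketch is given below (Python with NumPy; the authors' implementation was in VB 6.0, so the array names, RGB channel order and interface here are assumptions).

```python
# Minimal illustrative sketch of the 3-sigma background-subtraction rule of
# Eqs. (1)-(2) and of the pixel-to-unit scale factors. The scale-factor values
# are taken from the text; everything else is an assumption, not the authors'
# VB 6.0 code.
import numpy as np

C_SA = 0.487e-2    # scale factor of surface area (pixels -> cm^2, from the text)
C_VOL = 0.295e-3   # scale factor of volume (pixels -> cm^3, from the text)

def pack_rgb(image):
    """Eq. (1): P(X, Y) = B * 2**16 + G * 2**8 + R for an RGB image array."""
    r = image[..., 0].astype(np.int64)
    g = image[..., 1].astype(np.int64)
    b = image[..., 2].astype(np.int64)
    return b * 2**16 + g * 2**8 + r

def fruit_mask(fruit_rgb, background_rgb):
    """Eq. (2): a pixel is background when |P_B - P_F| < 3*sigma, where sigma
    is the standard deviation of the packed background image; otherwise it is
    treated as fruit. Returns a boolean mask that is True on fruit pixels."""
    p_f = pack_rgb(fruit_rgb)
    p_b = pack_rgb(background_rgb)
    sigma = p_b.std()
    return np.abs(p_b - p_f) >= 3.0 * sigma

# The foreground pixel counts are later mapped to physical units with C_SA and
# C_VOL (cm^2 and cm^3, respectively), as described in the text.
```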


The required dimensional attributes are the top and bottom diameters and the height of each frustum, as shown in Fig. 2c and Fig. 2d. The area of each face of a frustum can be calculated using the two perpendicular diameters (Fig. 2c) obtained by the cameras; each cross-section is assumed to be elliptical to increase the system accuracy. For the fruit to be axi-symmetric in this context, all the cross-sections in the x–y plane should be elliptical. The cross-sectional area of each frustum, Ai (i = 1, 2, …, n), is given by:

$A_i = \pi \cdot \frac{d_{i1}}{2} \cdot \frac{d_{i2}}{2}$    (3)

where di1 and di2 are the two perpendicular diameters of the cross-section (Fig. 2c). The accuracy of the estimated surface area depends on the positions of the minimum and maximum diameters on the orange's surface. The volume of each frustum, Vi (i = 1, 2, …, n), is calculated by (Fig. 2d):

$V_i = \frac{A_i + A_{i+1}}{2}\,\delta$    (4)

where Ai and Ai+1 are the cross-sectional areas of the i-th and (i+1)-th segments, respectively, and δ is the frustum pixel height (all frustums are assumed to have equal thickness). Once the volumes of the individual frustums are obtained, the total volume of the orange is readily calculated by adding them up:

$V_{IP} = \sum_{i=1}^{n} V_i$    (5)

where VIP is the total volume of the orange calculated by the image processing (IP) method. The lateral surface area of each frustum, Si (i = 1, 2, …, n), is calculated from the following expression:

$S_i = \frac{\pi}{2}\left(\frac{d_{i1}+d_{i2}}{2} + \frac{d_{(i+1)1}+d_{(i+1)2}}{2}\right)\sqrt{\delta^2 + \left(\frac{d_{i1}+d_{i2}}{4} - \frac{d_{(i+1)1}+d_{(i+1)2}}{4}\right)^2}$    (6)

The total surface area of the orange using the IP method is then determined by adding up the lateral areas of all frustums:

$S_{IP} = \sum_{i=1}^{n} S_i$    (7)
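The slice-wise computation of Eqs. (3)-(7) is straightforward to implement. The following sketch (Python with NumPy, whereas the authors' implementation was in VB 6.0) illustrates the calculation under the assumption that the two calibrated perpendicular diameters of every slice are already available; the function and variable names are illustrative only.

```python
# Illustrative sketch of Eqs. (3)-(7): total volume and lateral surface area of
# a fruit modelled as a stack of elliptical frustums of equal thickness delta.
# Inputs d1, d2 are the two perpendicular diameters of each slice (same units
# as delta); names and interface are assumptions, not the authors' code.
import numpy as np

def frustum_volume_and_surface(d1, d2, delta):
    d1 = np.asarray(d1, dtype=float)
    d2 = np.asarray(d2, dtype=float)

    # Eq. (3): elliptical cross-sectional area of each slice.
    area = np.pi * (d1 / 2.0) * (d2 / 2.0)

    # Eqs. (4)-(5): volume of each frustum as the mean of adjacent cross-sectional
    # areas times the slice thickness, summed over all frustums.
    volume = np.sum((area[:-1] + area[1:]) / 2.0 * delta)

    # Eqs. (6)-(7): lateral surface area of each frustum from the mean radii of
    # adjacent slices and the slant height, summed over all frustums.
    r = (d1 + d2) / 4.0                        # mean radius of each slice
    slant = np.sqrt(delta**2 + (r[:-1] - r[1:])**2)
    surface = np.sum(np.pi * (r[:-1] + r[1:]) * slant)

    return volume, surface
```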

The actual volume of the oranges was measured using the water displacement method (VWDM). In this method, the object is completely submerged in water and the weight of the displaced water is measured (Mohsenin, 1970). Even though this method is quite accurate, it is not ideal for objects that absorb water, and for some products this approach might be considered intrusive or destructive. The actual surface area of the oranges was measured using the tape method (STM). In this method, tape is cut into small strips that fully cover the surface of the object; these strips are then peeled off and their total area is measured either by hand or by an area meter (Mohsenin, 1970). The accuracy of this method depends heavily on how precisely the object can be covered with tape strips and on how exactly the area of the tape pieces can be measured; the TM has been found time-consuming, labor-intensive and prone to human error (Mohsenin, 1970; Sabliov et al., 2002). The paired t-test and the mean-difference confidence interval approach are used to compare the volume and surface area determined by the image processing techniques (VIP and SIP) with the actual values (VWDM and STM). In addition, the Bland–Altman approach is used to plot the agreement between the calculated and measured orange volumes and surface areas (Bland and Altman, 1999). The statistical analyses were performed using Excel's Analysis ToolPak (MS Corporation, Redmond, WA).
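As an illustration of the validation statistics described above, the paired t-test and the Bland–Altman 95% limits of agreement can be computed as in the following sketch (Python with NumPy/SciPy rather than the Excel tools used in the study; the input arrays are hypothetical).

```python
# Illustrative sketch of the validation statistics: paired t-test and
# Bland-Altman 95% limits of agreement. Input arrays are hypothetical;
# the study itself used Excel's analysis tools.
import numpy as np
from scipy import stats

def compare_methods(measured, computed):
    """Return the paired t-test p-value, the mean difference and the
    Bland-Altman 95% limits of agreement (mean difference +/- 1.96 SD)."""
    measured = np.asarray(measured, dtype=float)
    computed = np.asarray(computed, dtype=float)

    t_stat, p_value = stats.ttest_rel(computed, measured)

    diff = computed - measured
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return p_value, mean_diff, limits
```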

Results and discussion

The volume determined by the image processing (IP) technique was compared with the volume measured by the water displacement method (WDM). The predicted (IP) and experimental (WDM) values were correlated with R² = 0.9852, as shown in Fig. 3. The mean volume difference between IP and WDM was d1 = −0.15 cm³ (95% confidence interval: −1.12 to 0.82 cm³), and the standard deviation of the volume differences was sd1 = 3.41 cm³. The paired t-test results showed that the volume computed with the IP method was not significantly different from the volume measured with WDM (P = 0.7540) (Table 1). The volume differences between the computed and experimental values were normally distributed, and 95% of the volume differences are expected to lie between d1 − 1.96·sd1 and d1 + 1.96·sd1 (known as the 95% limits of agreement; Bland and Altman, 1999). The 95% limits of agreement for the comparison of volumes computed with the IP method and measured with WDM were −6.67 and 6.98 cm³ (Fig. 4). Also, from the results shown in Fig. 4, we can conclude that orange size has no effect on the accuracy of the estimated volume. Recently, Koc (2007) determined the volume of watermelon using an IP technique; however, he concluded that as the size of the watermelon increases, the IP method overestimates the volume. The improvement achieved in the present study may be attributed to the more sophisticated image processing approach used here, i.e., two cameras instead of one and more accurate algorithms.

Table 1. t-test analyses comparing the volume and surface area measurement methods

Comparison                   Paired t-test (P value)   95% confidence interval for the mean difference
Volume* (VWDM & VIP)         0.7540                    -1.12; 0.82
Surface area* (STM & SIP)    0.8371                    -0.97; 1.19

* WDM: water displacement method; IP: image processing method; TM: tape method

Fig. 3 Comparison of computed and measured volume of oranges with image processing (IP) method and water displacement method (WDM)

Fig. 4 Bland–Altman plot for the comparison of orange volumes computed with the image processing (IP) method and measured with water displacement (WDM); outer lines indicate the 95% limits of agreement (−6.67; 6.98 cm³) and center line shows the average difference (0.15 cm³)


The comparison between predicted (IP) and measured (TM) values of surface area, with R² = 0.9296, is shown in Fig. 5. The mean surface area difference between the two methods was d2 = 0.11 cm² (95% confidence interval: −0.97 to 1.19 cm²), and the standard deviation of the surface area differences was sd2 = 3.80 cm². The paired t-test results showed that the surface area obtained with the IP method is not significantly different from the actual surface area measured by TM (P = 0.8371) (Table 1). The surface area differences between the IP method and the TM were also normally distributed, and the 95% limits of agreement for these two methods were calculated to be −7.70 and 7.48 cm² (Fig. 6). Fig. 6 shows that orange size has no effect on the accuracy of the estimated surface area. The mass of agricultural produce can be estimated from its volume if the density of the produce is known. The characterization results of the oranges showed that the volume and mass are highly correlated. The correlation formula derived from the collected data is:

$M = 0.68\,V_{IP} + 44.6, \qquad R^2 = 0.93$    (8)

where M and VIP are the estimated mass (g) and the computed volume (cm³) obtained with the IP technique (Eq. 5), respectively. Hence, this simple formula may be used to grade oranges by mass using the volume estimate already computed by the IP technique.
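A minimal sketch of how such a mass-volume calibration can be fitted and applied is given below (Python/NumPy; the data arrays are placeholders, and only the reported coefficients 0.68 and 44.6 come from the paper).

```python
# Minimal sketch: fit the linear mass-volume calibration of Eq. (8),
# M = a * V_IP + b, and report the coefficient of determination R^2.
# Input arrays are placeholders; the paper's fit (a = 0.68, b = 44.6,
# R^2 = 0.93) came from the 50 measured oranges.
import numpy as np

def fit_mass_from_volume(volumes_cm3, masses_g):
    v = np.asarray(volumes_cm3, dtype=float)
    m = np.asarray(masses_g, dtype=float)

    a, b = np.polyfit(v, m, 1)                 # least-squares line M = a*V + b
    predicted = a * v + b
    ss_res = np.sum((m - predicted) ** 2)
    ss_tot = np.sum((m - m.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return a, b, r_squared

def estimate_mass(volume_cm3, a=0.68, b=44.6):
    """Apply the paper's reported calibration (Eq. 8) to an IP volume."""
    return a * volume_cm3 + b
```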

Fig. 5 Comparison of computed and measured surface areas of oranges with image processing (IP) method and tape method (TM)


Fig. 6 Bland–Altman plot for the comparison of orange surface areas computed with the image processing (IP) method and measured with the tape method (TM); outer lines indicate the 95% limits of agreement (−7.70; 7.48 cm²) and center line shows the average difference (−0.11 cm²)

Conclusion

Based on the results presented in this paper, we can state that the image processing method can accurately compute the volume and surface area of oranges. The differences between the volumes (and surface areas) computed using the proposed method and those obtained by the water displacement (and tape) method are not statistically significant at the 5% level. The Bland–Altman analysis shows that orange size has no effect on the accuracy of the estimated volume and surface area. The developed algorithms are quite general and may be readily applied to the volume and surface area computation of other axi-symmetric fruits such as melon, kiwifruit, pomegranate, and pear. It should be noted that the proposed method is rotationally invariant and does not require fruit alignment on the conveyor. The method used here for background segmentation is not based on fixed threshold values and can therefore be used with other fruits, i.e., irrespective of fruit type. Hence, the method may easily be integrated in an online multi-product sorting system for grading citrus.

Acknowledgment. The financial support provided by the University of Tehran, Iran, is gratefully acknowledged.

References

Bailey D.G., K.A. Mercer, C. Plaw, R. Ball and H. Barraclough, 2004. High speed weight estimation by image analysis. In: Mukhopadhyay, S.C., Browne, R.F., Gupta, G.S. (Eds.), Proc. Nat. Conf. Non Destructive Testing, Palmerston North, New Zealand, July 27–29, pp. 89–96.
Bland J.M. and D.G. Altman, 1999. Measuring agreement in method comparison studies. Stat. Methods Med. Res. 8, 135–160.
Forbes K.A. and G.M. Tattersfield, 1999. Estimating fruit volume from digital images. Africon, IEEE, 1, 107–112.
Hahn F. and S. Sanchez, 2000. Carrot volume evaluation using imaging algorithms. J. Agric. Eng. Res. 75, 243–249.
Jafari A.A., S.S. Mohtasebi, H. Eghbali-Jahromi and M. Omid, 2006. Weed detection in sugar beet fields using machine vision. Int. J. Agri. Biol. 8(5), 602–605.
Koc A.B., 2007. Determination of watermelon volume using ellipsoid approximation and image processing. Postharvest Biol. Technol. 45(3), 366–371.
Lee W.S., D.C. Slaughter and D.K. Giles, 1999. Robotic weed control system for tomatoes. Precision Agriculture, 1, 95–113.
Lee D.J., X. Xu, J. Eifert and P. Zhan, 2006. Area and volume measurements of objects with irregular shapes using multiple silhouettes. Optical Engineering, 45(2), 027202:1–10.
Lorestani A.N. and A. Tabatabaeefar, 2006. Modeling the mass of kiwi fruit by geometrical attributes. Int. Agrophysics, 20, 135–139.
Lorestani A.N., M. Omid, S. Bagheri-Shooraki, A.M. Borghei and A. Tabatabaeefar, 2006. Design and evaluation of a fuzzy logic based decision support system for grading of Golden Delicious apples. Int. J. Agri. Biol. 8(4), 440–444.
Mohsenin N.N., 1970. Physical Properties of Plant and Animal Materials. Gordon and Breach Science Publishers, New York, NY.
Sabliov C.M., D. Boldor, K.M. Keener and B.E. Farkas, 2002. Image processing method to determine surface area and volume of axi-symmetric agricultural products. Int. J. Food Prop. 5, 641–653.
Szirtes T., 2006. Applied Dimensional Analysis and Modeling, 2nd Ed. Elsevier Science & Technology Books, Butterworth Heinemann, 856 pages.
Topuz A., M. Topakci, M. Canakci, I. Akinci and F. Ozdemir, 2005. Physical and nutritional properties of four orange varieties. J. Food Eng. Res., 66, 519–523.
Wang T.Y. and S.K. Nguang, 2007. Low cost sensor for volume and surface area computation of axi-symmetric agricultural products. J. Food Eng. 79, 870–877.


Wilhelm L.R., D.A. Suter and G.H. Brusewitz, 2005. Physical Properties of Food Materials. Food and Process Eng. Techno., ASAE, Michigan, USA, 23–52.