MACHINE VISION SYSTEM FOR DETERMINING CITRUS COUNT AND SIZE ON A CANOPY SHAKE AND CATCH HARVESTER

R. Chinchuluun, W. S. Lee, R. Ehsani

ABSTRACT. A machine vision-based citrus fruit counting system was developed for a continuous canopy shake and catch harvester. The system consisted of a 3CCD camera, four halogen lamps, an encoder, and a laptop computer. A total of 719 images were taken during an experiment on a test bench at the Citrus Research and Education Center (Lake Alfred, Fla.) and were used for analysis. The system was also tested on a canopy shake and catch harvester at a grove located in Fort Basinger, Florida, where a total of 773 images were acquired and 60 images were used for validation. Fruit weight was measured during image acquisition in 14 test bench experiments and in two field tests with a commercial canopy shake and catch harvester. An image processing algorithm that could identify fruit and measure its size was developed using a Bayesian classifier, morphological operations, and watershed segmentation. From the sets of color images, the number of fruit and total fruit areas were measured using the developed algorithm. Finally, the number of citrus fruits that were identified by the image processing algorithm during the test bench experiment was compared against actual fruit weight. The coefficient of determination (R2) between them was 0.962. To validate the canopy shake and catch harvester experiment, the number of fruit was counted manually from a total of 60 images. Density clustering was used to enhance the result of the Bayesian classifier. The manual count was compared with the image processing algorithm count. The R2 value was 0.891 between the actual and estimated number of fruit.

Keywords. Canopy shake and catch harvester, Citrus, Machine vision, Yield mapping.
Florida provides about 80% of U.S. citrus produce. The majority of harvested Florida citrus goes into processing for making juice and by-products. As grove sizes increase, Florida citrus growers must contend with in-grove spatial variability in tree size, soil type, soil fertility, water content, and many other factors affecting crop production. To manage such spatial variability, yield mapping can be used as a first step so that growers can begin to implement site-specific management. A number of yield monitoring and mapping systems have been studied and developed for various fruits and crops (Schueller and Bae, 1987; Campbell et al., 1994; Vellidis et al., 2001; Lee et al., 2005; Perry et al., 2005). Currently, Goat (previously manufactured by GeoFocus, LLC, Gainesville, Fla.) and CitriTrack (GeoAg Solutions Inc., Lehigh Acres, Fla.) are two citrus yield mapping systems used in Florida. Although the Goat system (Schueller et al., 1999) is capable of producing yield maps for citrus, it requires hand harvesting beforehand to create a map. In this system, fruits are required to be harvested and collected in
Submitted for review in February 2008 as manuscript number PM 7394; approved for publication by the Power & Machinery Division of ASABE in April 2009. Presented at the 2007 ASABE Annual Meeting as Paper No. 073050. The authors are Radnaabazar Chinchuluun, Graduate Research Assistant, Won Suk Lee, ASABE Member Engineer, Associate Professor, Department of Agricultural and Biological Engineering, University of Florida, Gainesville, Florida; and Reza Ehsani, ASABE Member Engineer, Assistant Professor, Citrus Research and Education Center, University of Florida, Lake Alfred, Florida. Corresponding author: Won Suk Lee, Department of Agricultural and Biological Engineering, University of Florida, P.O. Box 110570, Gainesville, FL 32611-0570; phone: 352-392-1864, ext. 227; fax: 352-392-4092; e-mail: [email protected].
tubs. When the tubs are picked up, the Goat truck driver is required to push a button to record the location of every tub, which drivers sometimes forget to do. As a result, information needed to create a yield map is lost. Later, Whitney et al. (2001) improved the Goat system by adding a differential GPS receiver and weighing systems to eliminate post-processing and increase accuracy, since the Goat system defines yield as a number of tubs under the assumption that all tubs are fully loaded. The other system, CitriTrack, transfers instantaneous grove data to the main office through wireless communication for payroll management in addition to creating a yield map. However, this yield mapping system still requires hand harvesting. Manual citrus harvesting has been used for many years. However, due to labor shortages and the increasing cost of harvesting operations, the use of mechanical harvesting systems has been increasing in the last several years. At the same time, significant efforts have been devoted to improving productivity and mechanizing the harvesting of the Florida citrus crop (Futch et al., 2005). Today, two types of mechanical harvesters are being used in Florida (Futch and Roka, 2005). One is the canopy shake and catch harvester. It shakes tree canopies, causing fruit to fall onto a catch frame; the fruit is then carried through a conveyor system to the "goat" trucks. It can harvest 200 to 400 trees per hour. The second type is the trunk shake harvester, which shakes tree trunks, causing fruit to fall to the ground, where a picking crew collects the fruit manually. One issue with manual citrus harvesting is that it allows growers to know yield from only several trees at a time, since harvested fruit from several trees are put in the same tub and dumped into a truck. By adding a vision system to the mechanical harvesters, yield per single tree and fruit quality
Applied Engineering in Agriculture Vol. 25(4): 451‐458
E 2009 American Society of Agricultural and Biological Engineers ISSN 0883-8542
information can be obtained so that growers could manage their groves on a tree-by-tree basis. Numerous machine vision systems have been studied to inspect fruit characteristics and quality and to map yield. Such systems include lentil grading (Shahin and Symons, 2001), citrus grading (Aleixos et al., 2000), apple grading (Leemans et al., 2002), and citrus yield mapping (Annamalai et al., 2004; MacArthur et al., 2006; Grift et al., 2006; Kane and Lee, 2006; Chinchuluun and Lee, 2006). Leemans et al. (2002) used color as a discriminating feature to distinguish apples from the background as well as apples from their stems. They reported correct classification rates of 95% and 90% for Golden Delicious and Jonagold apples, respectively. Annamalai et al. (2004) developed a citrus yield mapping system that could map yield while fruit was still on the trees. They reported a coefficient of determination of 0.53 between their yield prediction model and the number of fruit counted by hand harvesting. Chinchuluun and Lee (2006) improved that yield mapping system by adding a second camera and compensating for uneven illumination. They reported a coefficient of determination of 0.64 between their yield prediction model and the number of fruit counted by hand harvesting. A variety of image segmentation methods have been used to identify fruit from the background in image processing and machine vision applications. Classification methods include neural networks, Bayesian classifiers, and discriminant analysis based on different features of fruit surfaces. Slaughter and Harrell (1989) used a Bayesian classification model for discriminating oranges from the natural background of an orange grove using color information. Marchant and Onyango (2003) compared a multi-layer feed-forward neural network classifier with a Bayesian classifier for classifying color image pixels into plant, weed, and soil.
They found that the Bayesian classifier outperformed the neural network in terms of total misclassification error. However, if the number of features increased to more than about five, Bayesian classification was not feasible. Thus, they recommended using the Bayesian classifier over a feed-forward neural network when the number of features was fewer than five. The long-term objective of this study was to build a yield mapping system for citrus mechanical harvesting machines. The immediate objective was to investigate the possibility of using a computer imaging technique for quantifying the citrus mass flow rate through a typical citrus mechanical harvesting machine. Specific objectives were:
- to build hardware components to count the number of fruit and measure their size as they are carried on a citrus mechanical harvester's fruit delivery conveyor belt,
- to develop an image processing algorithm to recognize individual fruit and measure fruit size, and
- to test the complete system in a commercial citrus grove.
MATERIALS AND METHODS

HARDWARE DESIGN OF A MACHINE VISION SYSTEM

A hardware system to acquire high quality images for a machine vision-based citrus yield monitoring system was designed and built. The system consisted of a 3CCD progressive scan digital color camera (HV-F31, Hitachi Kokusai Electric Inc., Woodbury, N.Y.), four halogen lamps
(Master Line Plus 50W GU5.3 12V 38D, Philips Electronics, Atlanta, Ga.), a laptop (CF-51, Panasonic, Secaucus, N.J.), a Hall-effect encoder, an IEEE 1394 card (AFW-1430V, Adaptec Inc., Milpitas, Calif.) for image acquisition, and a data acquisition card (DAQCard-6036E, National Instruments, Austin, Tex.). The camera acquired 24-bit 800- × 600-pixel images in red, green, and blue color mode at 15 frames per second with a shutter speed of 1/2200 s and had an automatic aperture setting. Each color component was 8 bits. The actual width of one frame was 68.6 cm. Polarizing filters (25CP, Tiffen Co. LLC, Hauppauge, N.Y.) were placed in front of each lamp and the camera to remove glare from the lamps. Multithreading system software was developed under Microsoft Visual C++ 6.0 MFC/COM and ImageWarp (BitFlow Inc., Woburn, Mass.). A housing (0.99 m long × 0.41 m wide × 0.97 m high) for the camera and lamps was built with an aluminum sheet (6.4 mm thick) to hold the lamps and the camera for image acquisition. Figure 1a shows the camera and illumination setup inside the housing. The housing was also used to keep sunlight from reaching the conveyor belt (fig. 1b). The appropriate height of the camera and lamps was determined such that high quality images could be acquired with uniform illumination. The encoder (fig. 1c) was used to synchronize the speed of the conveyor belt with image acquisition. The housing for the camera and illumination was installed on a test bench at the Citrus Research and Education Center (CREC), Lake Alfred, Florida, and tested in February and March of 2007. The test bench was built to the exact same dimensions as the conveyor system in commercial canopy shake and catch harvesters, according to the specifications shown in table 1. The conveyor speed was constant at 167.6 cm/s. For fruit size measurements, the imaging system was calibrated with objects of known diameters to determine a relationship between their diameters and number of pixels.
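Given the 68.6 cm frame width and 800-pixel image width stated above, such a calibration reduces to a simple scale factor. The following sketch illustrates the idea; the function names are our own and this is not the authors' code:

```python
# Sketch (not the authors' code): converting fruit diameters from pixels to
# centimeters using the known frame width. The 68.6 cm frame width and the
# 800-pixel image width are taken from the paper; everything else is ours.

FRAME_WIDTH_CM = 68.6   # actual belt width imaged by one frame
IMAGE_WIDTH_PX = 800    # horizontal resolution of the camera

def cm_per_pixel(frame_width_cm: float = FRAME_WIDTH_CM,
                 image_width_px: int = IMAGE_WIDTH_PX) -> float:
    """Scale factor relating pixel measurements to real-world size."""
    return frame_width_cm / image_width_px

def diameter_cm(major_axis_px: float, minor_axis_px: float) -> float:
    """Fruit diameter, defined (as in the paper) as the average of the
    major and minor axes, converted to centimeters."""
    return 0.5 * (major_axis_px + minor_axis_px) * cm_per_pixel()

print(round(cm_per_pixel(), 5))       # 0.08575 cm per pixel
print(round(diameter_cm(90, 82), 2))  # an 86-pixel blob is about 7.37 cm
```

In practice the calibration objects of known diameter would be used to verify this scale factor rather than derive it purely from the camera geometry.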
As shown in figure 2a, the machine vision system was also tested on a commercial canopy shake and catch harvester (Freedom Series 3220, Oxbo International Corp., Clear Lake, Wis.) in March 2007. The housing was installed on the conveyor system of the harvester as shown in figure 2b. Images of individual fruit were acquired as citrus fruit passed through the conveyor system.

IMAGE ACQUISITION

An algorithm for counting the number of fruit and measuring fruit size was developed and tested on the test bench at the CREC. Different amounts of pre-harvested and unwashed Valencia oranges were manually fed through the conveyor system of the test bench 14 times. The orange samples were randomly selected; they were not pre-sized or washed and had not been stored for more than two days. The width of the housing was 41 cm, which required the camera to capture every 41 cm length of the conveyor, since the housing was installed on the conveyor system. As a result of the trials, the 41 cm distance

Table 1. Conveyor belt specifications.
    Specification                   Scale
    Width of the conveyor belt      86.3 cm
    Speed of the shaft              200 rpm
    Diameter of gear wheel          16 cm
    Speed of the conveyor belt      167.6 cm/s
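A quick check (our own arithmetic, not the authors' code) confirms that the Table 1 values are internally consistent and shows the capture rate implied by the 41 cm frame length used for synchronization:

```python
import math

# Check of the Table 1 specifications; variable names are ours.
shaft_rpm = 200          # speed of the shaft (Table 1)
wheel_diameter_cm = 16   # diameter of the gear wheel (Table 1)
housing_length_cm = 41   # belt length imaged per frame (from the text)
teeth_per_frame = 30     # encoder teeth per image trigger (from the text)

# Belt speed implied by the shaft speed and wheel circumference:
belt_speed = math.pi * wheel_diameter_cm * shaft_rpm / 60  # cm/s
print(round(belt_speed, 1))  # 167.6 cm/s, matching Table 1

# Belt travel per encoder tooth and the implied capture rate:
cm_per_tooth = housing_length_cm / teeth_per_frame
frames_per_second = belt_speed / housing_length_cm
print(round(cm_per_tooth, 2))       # about 1.37 cm per tooth
print(round(frames_per_second, 2))  # about 4.09 frames per second
```

At roughly four captures per second, the camera's 15 fps acquisition rate has ample headroom to keep up with the belt.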
Figure 1. Machine vision-based citrus yield mapping system components: (a) camera and lamps, (b) housing installed on a test bench, and (c) Hall-effect encoder.

Figure 2. Field testing setup with the machine vision system installed on a canopy shake and catch harvester: (a) canopy shake and catch harvester and (b) housing installed on the conveyor system of the harvester.
of conveyor movement was equal to 30 teeth of the encoder wheel. Therefore, image acquisition was synchronized with the conveyor belt using 30 teeth of the encoder to avoid any
skip or overlap between successive images. The movement of the conveyor belt per tooth was calculated as 41 cm/30 teeth ≈ 1.37 cm/tooth. A total of 719 non-overlapping images were acquired by synchronizing the speed of the conveyor belt using the Hall-effect encoder. Among these, 110 images were used for developing classification algorithms. For each test, total fruit weight was measured with a balance (PS60 Parcel Scale, Mettler Toledo, Columbus, Ohio) with an accuracy of 0.02 kg, and a sequence of fruit images was acquired. For each image, parameters such as the number of fruit, fruit diameter (defined as the average of the major and minor axes), and fruit area were extracted during image analysis. These parameters were then summed over the images in every test for comparison with the actual number of fruit and fruit weights. In addition, the system was tested on a commercial canopy shake and catch harvester at a commercial citrus grove (Lykes Bros. Inc., Lake Placid, Fla.) located in Fort Basinger, Florida, during two trials in March 2007. A total of 773 images were taken. The experiments were conducted by feeding previously harvested fruit to the conveyor system of the harvester while the harvester was
stationary. For each test, fruit weight was measured using a balance (RW-05S, CAS, Korea) with an accuracy of 0.23 kg.

IMAGE PROCESSING ALGORITHM

First, image analysis was done on the images taken from the test bench. A total of 60 training images were used for fruit and background pixel sampling. The system software could be run in two modes: real-time and post-processing. In real-time mode, the software acquired images, processed them, and then saved both the original and the analyzed image to the hard drive. In post-processing mode, the software only saved the acquired images to the hard drive for later analysis, which allowed the highest acquisition speed. The software was mostly run in post-processing mode. After the images were acquired, they were converted to the hue, saturation, and intensity (HSI) and luma and chrominance (YIQ) color models (Gonzalez and Woods, 1992). These color models were used in color classification because they separate intensity components from the images. A color classification algorithm was then used to classify fruit and background pixels in the images. Using the sampled pixel values, histograms were plotted for the fruit and background classes for the 'I' component of the YIQ model and the 'hue' and 'saturation' components of the HSI color model. In the histograms (fig. 3), a solid line denotes the 'fruit' color distribution, while a dotted line indicates the 'background' color distribution. The histogram of the 'I'
component shows very good separation between the two classes. Thus, in the segmentation process, the 'hue' component and the 'I' component were chosen together, since the combination gave better discrimination between fruit and background than other color components. The peaks in the background histogram in figure 3b were from metal bars in the background of the images. Morphological operations, including erosion, dilation, and filling gaps, were applied to the color segmented images to correct color segmentation errors. A Watershed transform followed to separate touching fruits (Lee and Slaughter, 2004).

BAYESIAN CLASSIFIER

For the color segmentation, a Bayesian classifier with normal distribution was developed because it is known to be robust. Color segmentation classifies each pixel as 'fruit' or 'background' based on the pixel values. Choosing the prior probabilities presented a problem during Bayesian classification because the amount of fruit coming on the conveyor was random: the coverage of fruit pixels changes from image to image, so the priors were not really known for any individual test. Marchant and Onyango (2003) suggested choosing equal prior probabilities in such situations. However, among randomly sampled images in the tests, some images had no fruit while others had many. Therefore, the prior probabilities in this study were defined as follows:

    P(fruit) = (number of sampled fruit pixels) / (total number of pixels)
    P(background) = 1 - P(fruit)                                        (1)
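Equation 1 amounts to counting labeled pixels. A minimal sketch (our code, with an illustrative toy mask, not the authors' implementation):

```python
import numpy as np

# Sketch of estimating the class priors from labeled training pixels,
# following equation 1: P(fruit) is the fraction of sampled pixels labeled
# 'fruit', and P(background) = 1 - P(fruit). The mask below is a toy example.

def estimate_priors(fruit_mask: np.ndarray) -> tuple[float, float]:
    """fruit_mask: boolean array, True where a pixel was labeled 'fruit'."""
    p_fruit = float(fruit_mask.sum()) / fruit_mask.size
    return p_fruit, 1.0 - p_fruit

# Toy example: a 10 x 10 labeled patch with 22 fruit pixels.
mask = np.zeros((10, 10), dtype=bool)
mask.flat[:22] = True
print(estimate_priors(mask))  # (0.22, 0.78)
```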
Thus, using the 60 calibration images from the test bench trials, P(fruit) was found to be 0.22 and P(background) 0.78. The discriminant function (F) for the field trial was found on the 'I' and 'hue' components and is shown in equation 2:

    F = 0.318exp(-2x1^2) + 0.118x1 - 4.0564 + 0.692exp(-3x1x2)
        - 0.585exp(-2x2) + 0.808exp(-4x2^2)                             (2)

where
    x1 = mean of I and hue components for fruit
    x2 = mean of I and hue components for background.
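The fitted coefficients in equation 2 are specific to the authors' training data. As a generic illustration of the underlying technique, a two-class normal-distribution Bayesian pixel classifier over the two color features might look like the following sketch; the feature statistics, function names, and toy pixels are our own assumptions, not values from the paper:

```python
import numpy as np

# Generic sketch of a two-class Bayesian classifier with univariate-normal
# likelihoods on each feature ('I' from YIQ and 'hue' from HSI), combined
# naively, with priors estimated from training pixels. Not the fitted model
# of equation 2; all statistics below are illustrative.

def gaussian_logpdf(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify_pixels(features, fruit_stats, bg_stats, p_fruit):
    """features: (N, 2) array of (I, hue) values per pixel.
    fruit_stats / bg_stats: [(mean, var), (mean, var)], one pair per feature.
    Returns a boolean array, True where the pixel is classified as fruit."""
    log_fruit = np.log(p_fruit)
    log_bg = np.log(1.0 - p_fruit)
    for j, ((mf, vf), (mb, vb)) in enumerate(zip(fruit_stats, bg_stats)):
        log_fruit = log_fruit + gaussian_logpdf(features[:, j], mf, vf)
        log_bg = log_bg + gaussian_logpdf(features[:, j], mb, vb)
    return log_fruit > log_bg

# Toy statistics: fruit is bright in 'I' with orange hue; background is not.
fruit_stats = [(0.6, 0.01), (0.08, 0.001)]   # (mean, var) for I, hue
bg_stats = [(0.1, 0.02), (0.35, 0.02)]
pixels = np.array([[0.55, 0.07],   # bright, orange  -> fruit
                   [0.05, 0.40]])  # dark, greenish  -> background
print(classify_pixels(pixels, fruit_stats, bg_stats, p_fruit=0.22))
```

The classifier picks whichever class has the larger posterior log-probability, which is the same decision a discriminant function such as equation 2 encodes in closed form.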
RESULTS AND DISCUSSION
Figure 3. Histograms of fruit and background pixels: (a) 'I' component and (b) 'hue' component.
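For reference, the two color components compared in these histograms can be computed from normalized RGB with the standard conversion formulas (a sketch of our own, following the textbook definitions of the YIQ 'I' component and HSI hue; not the authors' code):

```python
import math

# Standard-definition sketch of the two color features discussed above:
# the 'I' chrominance component of the NTSC YIQ model and the 'hue'
# component of the HSI model, computed from RGB values in [0, 1].

def yiq_i(r: float, g: float, b: float) -> float:
    """'I' (in-phase chrominance) component of the YIQ color model."""
    return 0.596 * r - 0.274 * g - 0.322 * b

def hsi_hue(r: float, g: float, b: float) -> float:
    """Hue of the HSI color model, in degrees [0, 360)."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                      # achromatic pixel: hue is undefined
        return 0.0
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    return theta if b <= g else 360.0 - theta

# A bright orange pixel scores high on 'I' and has a hue near 30 degrees:
print(round(yiq_i(1.0, 0.5, 0.0), 3))    # 0.459
print(round(hsi_hue(1.0, 0.5, 0.0), 1))  # 30.0
```

Orange fruit pixels cluster at high 'I' and low hue, which is why these two components separate fruit from background so well in figure 3.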
COLOR SEGMENTATION RESULT OF THE TEST BENCH

For this experiment, the image processing algorithm was applied to the 609 validation images taken on the test bench at the CREC. Color segmentation began with the acquired image (fig. 4a), followed by Bayesian classification (fig. 4b). After the Bayesian classification, morphological operations removed non-fruit pixels that had intruded into fruit regions, and only fruit regions were extracted (fig. 4c). Since some neighboring fruit regions joined each other, the Watershed transform separated them into individual fruit (fig. 4d). The Watershed separation
Figure 4. Image segmentation result: (a) original image, (b) Bayesian classifier result, (c) morphological operation result, and (d) Watershed transform result.
generated incorrect results at times when multiple fruit were touching each other. Table 2 summarizes the image analysis results of the test bench experiment. In every trial, actual fruit weight was measured, and regression analyses were conducted on the fruit parameters found by the image analysis. A comparison between actual weight and the sum of fruit diameters produced the highest coefficient of determination (R2), 0.963 (fig. 5). The root mean squared error (RMSE) between the actual weight and the weight estimated from the regression analysis was 6.28 kg for the test bench testing, which equates to a 10.7% average error per test. The R2 values between actual weight and the sum of fruit areas, and between actual weight and the number of fruit counted by the algorithm, were 0.962 and 0.892, respectively. Thus, fruit area and fruit diameter yielded better relationships to the actual weight than the number of fruit counted by the algorithm. It seemed that a small portion of fruit near the edge of the images (such fruit was counted twice in two successive images when it was separated by the edge of the
Table 2. Summary of the test bench experiment results.
(The last three columns are the image analysis findings.)

        Total No.   Actual Fruit   Actual No.   Sum of Fruit    No. of Fruit Counted   Sum of Fruit
Test    of Images   Weight (kg)    of Fruit     Area (pixels)   by the Algorithm       Diameter (pixels)
  1        37          22.6            66           208915               55                  3488
  2        27          25.9            85           338612               82                  5784
  3        28          31.2           149           583687              155                 10272
  4        41          46.1           173           660859              169                 11544
  5        29          49.3           218           867118              305                 15365
  6        23          49.0           209           884879              252                 15281
  7        46          75.3           312          1245102              336                 21636
  8        40          71.0           300          1151251              331                 19601
  9        51          97.6           410          1677222              518                 28890
 10        55          93.9           375          1524808              415                 25891
 11        51          91.6           384          1572474              547                 27323
 12        60         114.3           496          2073615              758                 36345
 13        60         111.2           513          2202060              836                 37738
 14        61         120.3           456          1847159              623                 32469
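The regression between actual weight and the sum of fruit diameters can be reproduced directly from the Table 2 values. The sketch below is our own check, not the authors' analysis, and small differences from the reported statistics are possible depending on how the regression was computed:

```python
import numpy as np

# Reproducing the test-bench regression from Table 2 (our sketch):
# actual fruit weight against the sum of fruit diameters per test.
diam = np.array([3488, 5784, 10272, 11544, 15365, 15281, 21636,
                 19601, 28890, 25891, 27323, 36345, 37738, 32469])
weight = np.array([22.6, 25.9, 31.2, 46.1, 49.3, 49.0, 75.3,
                   71.0, 97.6, 93.9, 91.6, 114.3, 111.2, 120.3])

slope, intercept = np.polyfit(diam, weight, 1)  # least-squares line
pred = slope * diam + intercept
ss_res = ((weight - pred) ** 2).sum()
ss_tot = ((weight - weight.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / len(weight))
print(round(r2, 3), round(rmse, 2))  # close to the reported R2 = 0.963
```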
Figure 5. Regression analysis between the sum of fruit diameter and actual fruit weight.
image) and incorrect Watershed segmentation caused a lower coefficient of determination when the number of fruit counted by the algorithm was used.

COLOR SEGMENTATION RESULTS FROM THE FIELD TEST

Figure 6 shows the results of color segmentation of a field trial image. Figure 6a shows a typical color image containing non-uniform illumination due to sunlight entering from the bottom of the conveyor system. The bottom of the conveyor was intentionally designed to stay open so that other materials (such as leaves and small branches) could fall to the ground by gravity through the opening. However, the Bayesian classifier (fig. 6b) worked relatively well on the non-uniformly illuminated images. As a result of morphological operations, some fruit regions were incorrectly reduced due to the non-uniform illumination. Consequently, the Watershed transform generated incorrect segmentation, which made individual fruit extraction difficult (fig. 6d). Breunig et al. (2000) investigated detecting local outliers in large data sets based on local density. This method can be used to enhance the binary image from the Bayesian classifier. A local outlier is defined as an isolated object with respect to the surrounding neighborhood. The decision rule for detecting local outlier objects was:

    object,         if density of the object within a circle > threshold
    local outlier,  otherwise                                        (6)
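A minimal sketch of this rule follows (our implementation, not the authors' code; a square window approximates the radius-2 circle, and the mask is a toy example):

```python
import numpy as np

# Sketch of the density-based clean-up described above: a foreground pixel
# is kept only if enough other foreground pixels fall within a small window
# around it; isolated pixels are treated as local outliers and removed.

def remove_local_outliers(binary: np.ndarray, radius: int = 2,
                          threshold: int = 5) -> np.ndarray:
    """binary: 2-D boolean mask from the Bayesian classifier."""
    h, w = binary.shape
    out = np.zeros_like(binary)
    ys, xs = np.nonzero(binary)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        # count foreground pixels in the (2r+1) x (2r+1) window (incl. self)
        if binary[y0:y1, x0:x1].sum() > threshold:
            out[y, x] = True
    return out

mask = np.zeros((9, 9), dtype=bool)
mask[2:6, 2:6] = True   # a dense fruit blob survives
mask[0, 8] = True       # an isolated noise pixel is removed
cleaned = remove_local_outliers(mask)
print(cleaned.sum())  # 16: the blob is kept, the outlier is dropped
```

The radius (2 pixels) and threshold (5 pixels) mirror the values chosen by trial in the study.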
Using this density clustering approach, fruit pixels from the binary image were clustered to enhance the segmentation result given by the Bayesian classifier. The circle radius was chosen as two pixels based on several trials; the density threshold value (5 pixels) was chosen the same way. The result of density clustering for figure 6b is shown in figure 7. Although fruit weight was measured twice, two measurements were not enough to conduct a regression analysis. Thus, the number of fruit in 60 images was counted manually for comparison with the result of the image processing algorithm. A regression analysis was conducted on the number of fruit obtained by manual counting and by algorithm counting. The coefficient of determination between the actual number of fruit and the number of fruit counted by the algorithm was 0.891 (fig. 8). For the overestimated results, incorrect Watershed segmentation caused the difference in
Figure 6. Color segmentation for the field experiment: (a) original image, (b) result of Bayesian classifier, (c) result of morphological operation, and (d) result of the Watershed transform.
the number of fruit. Overlapped fruits in the images caused underestimation of the number of fruit. These results yielded a lower R2 than the test bench results; however, the algorithm seemed to have performed well considering the non-uniform illumination conditions. The machine vision system has not yet been tested in a citrus grove while the continuous canopy shake and catch system was actually harvesting, and several obstacles are expected in such testing. The canopy shake and catch harvester may create vibrations that could cause machine vision system components, such as wires and interconnections, to malfunction by coming loose. Harvesting may also create dust, which could reduce image quality during acquisition. Another potential problem is that foreign materials, such as leaves and branches, will be harvested along with the fruit and may negatively affect image segmentation. Pesticide residues, such as copper (greenish in color), or diseases, such as rust mites, could also interfere with color segmentation. These problems could be addressed using color and/or shape information in subsequent image processing steps, since leaves and branches differ from fruit in color and shape, and pesticide residues and diseases would exhibit colors distinct from those of mature fruit. These potential obstacles need to be resolved in future work for more accurate yield estimation.

Figure 7. Density clustering result for field experiment.

CONCLUSION
A citrus fruit counting and size measurement system for a canopy shake and catch harvester was successfully developed. The system was tested on a test bench as well as on a commercial canopy shake and catch harvester. The sum of fruit areas, the number of fruit, and the sum of fruit diameters were extracted by image analysis from the test bench images. The coefficients of determination of the sum of areas, the number of fruit, and the sum of fruit diameters against actual fruit weight were 0.962, 0.892, and 0.963, respectively. The RMSE was 6.28 kg, which could be improved. For the commercial harvester trial, the coefficient of determination between the number of fruit counted by the image processing algorithm and by human counting was 0.891. The test bench experiments showed promising results for estimating citrus yield at harvest, although the field trial demonstrated the need for system improvement.

ACKNOWLEDGEMENTS

The authors would like to thank Dr. Ganesh Bora and Dr. Kyeong Hwan Lee at the Citrus Research and Education Center, Lake Alfred, Florida; Mr. Michael Zingaro in the Agricultural and Biological Engineering Department at the University of Florida; Mr. Esa Ontermaa at Lykes Bros. Inc., Lake Placid, Florida; and Mr. James Colee, UF/IFAS statistical consultant, for their assistance.
REFERENCES

Aleixos, N., J. Blasco, E. Molto, and F. Navarron. 2000. Assessment of citrus fruit quality using a real-time machine vision system. Proc. 15th Intl. Conf. on Pattern Recognition 15(1): 482-485. Barcelona, Spain: IEEE.
Annamalai, P., W. S. Lee, and T. Burks. 2004. Color vision system for estimating citrus yield in real-time. ASAE Paper No. 043054. St. Joseph, Mich.: ASAE.
Breunig, M., H. P. Kriegel, R. Ng, and J. Sander. 2000. LOF: Identifying density-based local outliers. ACM SIGMOD Record 29(2): 93-104.
Figure 8. Comparison between manual counting and algorithm counting for field tests (R2 = 0.891).
Campbell, R. H., S. L. Rawlins, and S. Han. 1994. Monitoring methods for potato yield mapping. ASAE Paper No. 941584. St. Joseph, Mich.: ASAE.
Chinchuluun, R., and W. S. Lee. 2006. Citrus yield mapping system in natural outdoor scenes using the Watershed transform. ASABE Paper No. 063010. St. Joseph, Mich.: ASABE.
Futch, S. H., and F. M. Roka. 2005. Continuous canopy shake mechanical harvesting systems. EDIS. Gainesville, Fla.: UF/IFAS.
Futch, S. H., J. D. Whitney, J. K. Burns, and F. M. Roka. 2005. Harvesting: From manual to mechanical. EDIS. Gainesville, Fla.: UF/IFAS.
Gonzalez, R. C., and R. E. Woods. 1992. Digital Image Processing. Reading, Mass.: Addison-Wesley.
Grift, T., R. Ehsani, K. Nishiwaki, C. Crespi, and M. Min. 2006. Development of a yield monitor for citrus fruits. ASABE Paper No. 061192. St. Joseph, Mich.: ASABE.
Kane, K. E., and W. S. Lee. 2006. Spectral sensing of different citrus varieties for precision agriculture. ASABE Paper No. 061065. St. Joseph, Mich.: ASABE.
Lee, W. S., and D. C. Slaughter. 2004. Recognition of partially occluded plant leaves using a modified watershed algorithm. Trans. ASABE 47(4): 1269-1280.
Lee, W. S., J. K. Schueller, and T. F. Burks. 2005. Wagon-based silage yield mapping system. Agric. Eng. Intl.: The CIGR E-Journal Vol. VII. Manuscript IT 05 003.
Leemans, V., H. Magein, and M. F. Destain. 2002. On-line fruit grading according to their external quality using machine vision. Biosystems Eng. 83(4): 394-404.
MacArthur, D., J. K. Schueller, and W. S. Lee. 2006. Remotely-piloted helicopter citrus yield map estimation. ASABE Paper No. 063096. St. Joseph, Mich.: ASABE.
Marchant, J. A., and C. M. Onyango. 2003. Comparison of a Bayesian classifier with a multilayer feed-forward neural network using the example of plant/weed/soil discrimination. Computers and Electronics in Agric. 39(1): 3-22.
Perry, C. D., G. Vellidis, N. Wells, R. Hill, A. Knowlton, E. Hart, and D. Dales. 2005. Instantaneous accuracy of cotton yield monitors. In Proc. Beltwide Cotton Conf., 486-504. New Orleans, La. Memphis, Tenn.: National Cotton Council.
Schueller, J. K., and Y. H. Bae. 1987. Spatially attributed automatic combine data acquisition. Computers and Electronics in Agric. 2(2): 119-127.
Schueller, J. K., J. D. Whitney, T. A. Wheaton, W. M. Miller, and A. E. Turner. 1999. Low-cost automatic yield mapping in hand-harvested citrus. Computers and Electronics in Agric. 23(2): 145-153.
Shahin, M. A., and S. J. Symons. 2001. A machine vision system for grading lentils. Canadian Biosystems Eng. 43: 7.7-7.14.
Slaughter, D. C., and R. Harrell. 1989. Discriminating fruit for robotic harvest using color in natural outdoor scenes. Trans. ASAE 32(2): 757-763.
Vellidis, G., C. D. Perry, J. S. Durrence, D. L. Thomas, R. W. Hill, C. K. Kvien, T. K. Hamrita, and G. C. Rains. 2001. The peanut yield monitoring system. Trans. ASAE 44(4): 775-785.
Whitney, J. D., Q. Ling, W. M. Miller, and T. A. Wheaton. 2001. A DGPS yield monitoring system for Florida citrus. Applied Eng. in Agric. 17(2): 115-119.