
Machine Vision System for Detecting Fluorescent Area of Citrus Using Fluorescence Image

Md. Abdul MOMIN*, Naoshi KONDO**, Yuichi OGAWA***, Tomoo SHIIGI****, Mitsutaka KURITA*****, Kazunori NINOMIYA******

*PhD student, Graduate School of Agriculture, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan (Tel: 080-4025-6294; e-mail: mamfpm@yahoo.com)
**Professor, Graduate School of Agriculture, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan (e-mail: [email protected])
***Associate Professor, Graduate School of Agriculture, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan (e-mail: [email protected])
****PhD student, Graduate School of Agriculture, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan (e-mail: [email protected])
*****System Engineer, SI-Seiko Co. Ltd, 66 Takaoka-cho, Matsuyama, Ehime 791-8036, Japan (e-mail: [email protected])
******Section Manager, SI-Seiko Co. Ltd, 66 Takaoka-cho, Matsuyama, Ehime 791-8036, Japan (e-mail: [email protected])

Abstract: This research was carried out to develop a machine vision system that can identify fluorescent areas on injured or defective citrus surfaces. Images of target objects whose surfaces were injured by needle insertions were acquired with a VGA camera under UV lamps (radiating blacklight and UV-B wavelengths) and white LEDs. Because damaged citrus peel contains fluorescent substances, it was easy to discriminate fluorescent parts from healthy parts. The results showed that the blacklighting system is practical and feasible, and that the proposed fluorescence area detection algorithm is effective for several varieties of citrus.

Keywords: Fluorescent area, citrus defect, machine vision, image processing, fluorescent substance

1. INTRODUCTION

With rising public awareness, maintaining food safety from the field to the dinner table has become a great concern for every government. In light of growing consumerism, customers have become more discerning about quality and more demanding of it. Fruit and vegetable vendors are therefore under pressure to deliver consistent quality within their pool of produce. It is not just a matter of separating good from bad, but also of grading the produce so that top-quality items are sold at premium prices. To protect the reputation of farmers' brands and to supply quality fruits to consumers, fruits are graded automatically in many countries. Algorithms for detecting fruit size (Brodie et al. 1994), colour, shape or ripeness (Díaz et al. 2000; Hahn 2002), sugar content (Steinmetz et al. 1999), etc. have been successfully applied in such grading systems. However, due to variation in defect pattern, size, position and colour, and to low camera resolution, successful algorithms have not yet been developed for detecting rotten, sunburnt or injured parts whose colours differ only slightly from healthy tissue, or small dots of around 0.1 mm. These defects are still detected manually alongside the other mechanized and automated components of the grading system. Many workers are therefore still needed in fruit grading facilities to discard defective fruit on sorting conveyors before and after grading operations.

Attention should be paid to the development of fruit defect detection algorithms, because defects are among the most influential factors in the quality and price of fresh fruits. Researchers worldwide are exploring suitable algorithms for these problems using different sensing techniques such as X-ray imaging, magnetic resonance imaging, colour TV cameras, NIR, etc. Given the importance of defect detection, much research based on machine vision systems has addressed defects in citrus (Kondo et al. 2009; Kurita et al. 2009; Blasco et al. 2007; Blasco et al. 2007), apple (Kleynen et al. 2005; Leemans and Destain 2004; Qingzhong et al. 2002; Li et al. 2002), tomato (Kurita et al. 2006), olive (Diaz et al. 2004), potato (Lefebvre et al. 1994; Muir et al. 1982), bell pepper (Shearer and Payne 1990), etc.

Surface defects of a wide variety of citrus, such as rotten or injured parts, can be detected using fluorescence imaging. Uozumi et al. (1987) reported that rotten or defective citrus peel contains fluorescent elements. A recent study identified flavonoid methyl as one of the major fluorescent substances in citrus fruits, with excitation and fluorescence wavelengths of about 360 to 375 nm and 530 to 550 nm, respectively (Kondo et al. 2009). Since the fluorescent light from rotten or injured parts is greenish, a colour TV camera used for colour imaging can also be used for fluorescence imaging. Recent developments in microcomputer-based image processing systems have boosted the application of machine vision techniques in postharvest food processing. VGA, XGA, SXGA and UXGA colour cameras with 8 or 10 bit gray levels are coming into use. With a suitable algorithm, the high-resolution fluorescence images from these cameras have the potential to detect defective parts of citrus with performance approaching that of the human eye.

The goal of this work was to develop an algorithm that can detect the fluorescent regions on the injured or defective surface of citrus. The first objective towards this goal is to identify the combination of camera and illumination system that works most efficiently for acquiring fluorescence images.

2. MATERIALS AND METHODS

2.1 Camera selection

Machine vision systems utilize imaging cameras ranging from monochrome cameras for simple shape and size recognition tasks to common-aperture multispectral cameras for detecting surface defects and diseases on meat, grains, fruits, and vegetables (Chen et al. 2002). High-resolution cameras with 8, 10 or 12 bit gray levels have recently become available on the market. Images of target objects whose surfaces were injured or rotten were first acquired with a VGA camera of 8 bit gray levels and later with an XGA camera of 10 bit gray levels.

2.2 Selection of lighting device

A recent study at Kyoto University developed a double image acquisition system with visible and UV LEDs with a peak wavelength of 365 nm for detecting rotten parts of citrus (Kurita et al. 2009). In this experiment, therefore, the fluorescence images were acquired using UV lamp (blacklight blue and UV-B lamp) lighting systems. The blacklighting system, composed of four fluorescent lamps (FL6BLB, Sankyo Denki Company), emitted rays at 315-400 nm with a peak at 350 nm, while the UV-B system, also composed of four fluorescent lamps (G15T8E, Sankyo Denki Company), emitted rays at 280-360 nm with a peak at 306 nm. The colour images were acquired using three sets of 59 mm x 59 mm white LED panels (NSPW310BS, Nichia Corporation), each composed of 120 LEDs, with 67 mm diameter polarizing filters to remove halation from the scene by cross-polarization.

2.3 Image acquisition system design

The image acquisition device was composed of a VGA camera, UV lamp (blacklight blue and UV-B) lighting panels for fluorescence images, and white LEDs for colour images, for detecting fluorescence regions of citrus. The layout of the proposed machine vision systems is shown in Fig. 1. For uniform illumination, the illumination angle was maintained between 45° and 55°. The CCD camera (VGA format), fitted with a C-mount lens, was placed 270 mm (UV lamp acquisition system) or 220 mm (LED acquisition system) above the scene. The distances between the center of a fruit object and the blacklight blue lamps, UV-B lamps, and white LEDs were about 230, 290, and 200 mm, respectively. As the fluorescence intensity of rotten or injured regions of citrus is much lower than the colour signals of fresh parts, it is necessary to adjust camera operation parameters such as shutter speed, gain, and gamma correction to obtain a good-quality image. Table 1 shows the adjusted camera parameters used for the different illumination systems. The target objects were oriented manually towards the camera, and images of 512 x 480 pixels were captured using the camera and a capture board (MTPCI-TL2, MicroTechnica Co., Ltd.) plugged into a PC. The image data were then saved in BMP format.

Fig. 1. Blacklighting, UV-B and LED image acquisition systems

Table 1. Capturing parameters for each lighting condition

Parameter             Blacklight   UV-B     White LED
Iris                  1.4          1.4      1.4
Shutter speed (sec)   1/60         1/60     1/1000
Gain (dB)             12           12       0
Gamma correction      0.45         0.45     1
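For reference, the settings of Table 1 can be expressed as a small configuration table. This is an illustrative sketch only; the dictionary keys and structure are ours, not part of the authors' capture software.

```python
# Camera capture parameters from Table 1, one entry per lighting
# condition. Shutter speed is in seconds, gain in dB.
CAPTURE_PARAMS = {
    "blacklight": {"iris": 1.4, "shutter_s": 1 / 60,   "gain_db": 12, "gamma": 0.45},
    "uv_b":       {"iris": 1.4, "shutter_s": 1 / 60,   "gain_db": 12, "gamma": 0.45},
    "white_led":  {"iris": 1.4, "shutter_s": 1 / 1000, "gain_db": 0,  "gamma": 1.0},
}

# The two UV conditions use a slow shutter and high gain because the
# fluorescence signal is much weaker than reflected white-LED light.
```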

2.4 Other tools

Post-processing of the resulting BMP images was done using Visual C++ and the OpenCV (Open Source Computer Vision) library under Windows XP. In addition, WinROOF version 5.6 (image processing software developed by Mitani Corporation, Japan) was used as an auxiliary tool to verify the image processing data.

2.5 Citrus test sets

To develop the algorithm for detecting the fluorescence regions of injured parts, experiments were carried out with common citrus varieties available in the local market: citron, dekopon, amanatsu kan, hassaku, harumi, iyokan, kiyomi, setoka, and mikan. The fluorescence phenomenon is induced in the essential oil of the fruit peel when the oil is released by a defect (Latz and Ernes 1978). Therefore, in this research a 1 sq. cm area of each fruit's skin was injured intentionally with a 0.5 mm diameter needle.

3. RESULTS AND DISCUSSION

The colour images captured under white LED illumination and the fluorescence images captured under blacklight illumination for some of the citrus varieties are shown in Fig. 2 and Fig. 3. The fluorescence images captured with the other illumination system (UV-B lamps) did not fluoresce properly because of the influence of visible radiation. To avoid this visible-light contamination, it is necessary to use a 400 nm cut-band filter. Hence, at this stage only the blacklighting-system fluorescence images were used to develop the algorithm. Fig. 3 reveals that among these varieties, some (dekopon, amanatsu, hassaku, mikan) show strong fluorescence.

Fig. 2. Colour images acquired under white LED illumination

Fig. 3. Fluorescence images acquired under blacklight blue lamps

3.1 Fluorescence image processing

To detect the fluorescence regions, the different colour spaces of the images, such as RGB and HSI, were first studied and analyzed. The RGB (fundamental colour component) values of the fluorescence pixels (greenish), halation pixels (purple), and normal parts of the fruit surface were measured. The RGB values were then converted to HSI using the colour-space conversion equations described in Kondo et al. (2004). The relationships among the different colour values are presented in Fig. 4. After determining the colour-space values, the original fluorescence image was split into its constituent RGB channels, and the most suitable channel was selected for further processing based on the colour information. Fig. 4 reveals that the surface colour is strongest in the G component, so it is reasonable to use this component in this phase.

Fig. 4. Relationship between different colour spaces

3.2 Fluorescence area segmentation

After channel selection, the G component image was converted to a binary image using the global threshold segmentation method (Gonzalez and Woods 1992). The aim of this segmentation was to identify the fluorescence areas, which have a greenish colour in the original image. The resulting binary image is treated as a marker image. The threshold was chosen as the minimum gray value found among several sampled points in the fluorescence regions. Some results are shown in Fig. 5, where the segmented fluorescence parts appear in green. The distinction between normal and fluorescence parts is very clear, and the segmentation algorithm detects the fluorescence regions of these varieties, as shown in Fig. 5.

Fig. 5. Examples of processed images

3.3 Further research

As noted above, the UV-B illumination system did not work properly; it will be reconstructed with a visible cut-band filter and used for other varieties. In addition, the VGA camera will be replaced by an XGA (2CCD) camera, the whole procedure repeated, and the results compared. Finally, the system will be tested under real conditions.
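The colour-space analysis and global-threshold segmentation steps described above can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' Visual C++/OpenCV implementation: the HSI conversion uses the standard geometric equations (cf. Gonzalez and Woods) rather than the exact formulas of Kondo et al. (2004), and the threshold value is a placeholder for the minimum gray value sampled from the fluorescence regions.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to HSI using the
    standard geometric conversion. Returns an array of the same
    shape with channels (H in degrees 0-360, S in [0, 1], I in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    i = (r + g + b) / 3.0
    # Saturation is zero for black pixels (intensity ~ 0).
    s = np.where(i > eps,
                 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps),
                 0.0)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - h, h)  # reflect hue when blue dominates green
    return np.stack([h, s, i], axis=-1)

def segment_fluorescence(rgb_u8, g_threshold=80):
    """Binary marker image from the G channel of an 8-bit RGB image:
    1 where the green value exceeds a global threshold, 0 elsewhere.
    The default threshold of 80 is illustrative only."""
    return (rgb_u8[..., 1] > g_threshold).astype(np.uint8)
```

Under this sketch, a greenish fluorescence-like pixel such as (40, 200, 60) falls in the green hue band (roughly 60-180°) and passes the G-channel threshold, while the dark background and healthy peel pixels do not.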

4. CONCLUSIONS

Based on the acquired images, a blacklighting system is suitable for recognizing injured areas on citrus, especially for the dekopon, amanatsu, hassaku, and mikan varieties. An algorithm for detecting fluorescent areas on injured citrus peel using machine vision was developed, and it is able to detect the fluorescence of injured parts in several varieties. The algorithm should become more robust after further development.

ACKNOWLEDGEMENT

We thank Professor John K. Schueller, a visiting professor from the University of Florida, for proofreading this paper.

REFERENCES

Blasco, J., Aleixos, N., Gomez, J., and Molto, E. (2007). Citrus sorting by identification of the most common defects using multispectral computer vision algorithms. Journal of Food Engineering, 83, 384-391.
Blasco, J., Aleixos, N., and Molto, E. (2007). Computer vision detection of peel defects in citrus by means of a region oriented segmentation algorithm. Journal of Food Engineering, 81, 535-543.
Brodie, J. R., Hansen, A. C., and Reid, J. F. (1994). Size assessment of stacked logs via the Hough transform. Transactions of the ASAE, 37(1), 303-310.
Chen, Y. R., Chao, K., and Kim, M. S. (2002). Machine vision technology for agricultural applications. Computers and Electronics in Agriculture, 36, 173-191.
Diaz, R., Gil, L., Serrano, C., Blasco, M., Molto, E., and Blasco, J. (2004). Comparison of three algorithms in the classification of table olives by means of computer vision. Journal of Food Engineering, 61, 101-107.
Díaz, R., Faus, G., Blasco, M., Blasco, J., and Moltó, E. (2000). The application of a fast algorithm for the classification of olives by machine vision. Food Research International, 33, 305-309.
Gonzalez, R. C. and Woods, R. E. (1992). Digital Image Processing, 443-460. Addison-Wesley, USA.
Hahn, F. (2002). Multi-spectral prediction of unripe tomatoes. Biosystems Engineering, 81(2), 147-155.
Kleynen, O., Leemans, V., and Destain, M. F. (2005). Development of a multispectral vision system for the detection of defects on apples. Journal of Food Engineering, 69(1), 41-49.
Kondo, N., Kuramoto, M., Shimizu, H., Ogawa, Y., Kurita, M., Nishizu, T., Chong, V. K., and Yamamoto, K. (2009). Identification of fluorescent substance in mandarin orange skin for machine vision system to detect rotten citrus fruits. Journal of Engineering in Agriculture, Environment and Food, 2(2), 54-59.
Kondo, N., Monta, M., and Noguchi, S. (2004). AgriRobot (1) - Fundamentals and Theory (in Japanese), 41-47. Corona Sha, Tokyo, Japan.
Kurita, M., Kondo, N., Shimizu, H., Ling, P., Falzea, P. D., Shiigi, T., Ninomiya, K., Nishizu, T., and Yamamoto, K. (2009). A double image acquisition system with visible and UV LEDs for citrus fruit. Journal of Robotics and Mechatronics, 21(4), 533-540.
Kurita, M., Kondo, N., and Ninomiya, K. (2006). Defect detection for tomato grading by use of six color CCD cameras. Journal of Science and High Technology in Agriculture, 18(2), 135-144.
Latz, H. W., and Ernes, D. A. (1978). Selective fluorescence detection of citrus oil components separated by high-pressure liquid chromatography. Journal of Chromatography, 166, 189-199.
Leemans, V., and Destain, M. F. (2004). A real-time grading method of apples based on features extracted from defects. Journal of Food Engineering, 61(1), 83-89.
Lefebvre, M., Zimmerman, T., Baur, C., Gugerli, P., and Pun, T. (1994). Potato operation: automatic detection of potato diseases. Proceedings of SPIE, 2345, 2-9.
Li, Q., Wang, M., and Gu, W. (2002). Computer vision based system for apple surface defect detection. Computers and Electronics in Agriculture, 36(2), 215-223.
Muir, A. Y., Porteus, R. L., and Wastie, R. L. (1982). Experiments in the detection of incipient diseases in potato tubers by optical methods. Journal of Agricultural Engineering Research, 27, 131-138.
Qingzhong, L., et al. (2002). Computer vision based system for apple surface defect detection. Computers and Electronics in Agriculture, 36, 215-223.
Shearer, S. A., and Payne, F. A. (1990). Color and defect sorting of bell peppers using machine vision. Transactions of the ASAE, 33(6), 2045-2050.
Steinmetz, V., Roger, J. M., Moltó, E., and Blasco, J. (1999). On-line fusion of colour camera and spectrophotometer for sugar content prediction of apples. Journal of Agricultural Engineering Research, 73, 207-216.
Uozumi, J., Kohno, Iwamoto, and Nishinari. (1987). Spectrophotometric system for the quality evaluation of unevenly colored food. Journal of the JSFST, 34(3), 163-170.
