Building Automation Tools to Calculate Trichloroethylene Level in Human Liver - Case Study: Images of White Mouse Liver

A. Febrian¹, Elly Matul I.², I Md. Agus S.³, M. Fajar¹, W. Jatmiko¹, D.H. Ramdhan⁴, A. Bowolaksono⁵, and P. Mursanto¹

¹Faculty of Computer Science, ²Mathematics Department, ³Computer Science Department, ⁴Faculty of Public Health, ⁵Faculty of Mathematics and Natural Sciences
¹,⁴,⁵Universitas Indonesia, ²Universitas Negeri Surabaya, ³Universitas Udayana
¹,⁴,⁵Depok, West Java; ²Surabaya, East Java; ³Denpasar, Bali; ¹,²,³,⁴,⁵Indonesia

Abstract: Trichloroethylene (TRI) is a chlorinated solvent that has been used in various industrial and everyday products, such as dry-cleaning agents, ink, and medicine ingredients. It is known that TRI can penetrate the human liver, and prolonged exposure can cause permanent liver damage or cancer. In this research, we create an automation tool that helps analyze and predict the TRI level in the human liver. The prediction is based on liver images analyzed using FCM, BPNN, FLVQ, or FLVQ-PSO. Images of white mouse livers that have been exposed to TRI are used. Our experiments show that the best accuracy is achieved by BPNN with 45 features from images processed with KPCA; this combination reaches an accuracy of 99.12%.

1. INTRODUCTION

Trichloroethylene (TRI) is an organic, chlorinated solvent that is carcinogenic and able to pollute the environment [1]. This solvent has been widely used in dry-cleaning, leather tanning, ink, and medicine ingredients. It is widely known that TRI can be found not only in groundwater [2], but also in drinking water and surface water [3]. Fortunately, the use of TRI has been declining worldwide, especially in fully industrialized countries [4]. However, this trend does not apply in newly industrialized countries, such as certain Asian countries [5]. A study conducted by Ramdhan D.H., et al. [1] shows that TRI can penetrate the human liver. Long exposure to this chemical at high concentrations has been found to affect the immune system [6] and to trigger an immune response that leads to hepatitis [7]. A patient in [3] stated that he suspects that, due to long TRI exposure, he and some of his friends developed Parkinson's disease. Studies conducted on animals report that TRI can cause renal damage [8] and impaired reproductive function in male mice [9]. All of these studies show that TRI causes various kinds of damage in humans and can no longer be ignored. Moreover, the fact that TRI can easily be found in everyday life increases the possibility of being harmed by it.

978-1-4577-1362-0/11/$26.00 ©2011 IEEE


Our target in this research is to build an automation tool that can help predict the TRI concentration in the human liver. The tool's development consists of three major stages: data acquisition, pre-processing, and clustering. The data used in this research were acquired by exposing white mice to TRI for a certain time; it is widely known that the structure of the white mouse liver is the closest to that of the human liver. In this paper, we show our approach to enabling the tool to predict the TRI degree in the liver by analyzing white mouse liver images. We tried four clustering approaches: Fuzzy C-Means (FCM), Back-Propagation Neural Networks (BPNN), Fuzzy-Neuro Learning Vector Quantization (FLVQ), and Fuzzy-Neuro Learning Vector Quantization with Particle Swarm Optimization (FLVQ-PSO). Our goal is to build a classification tool that is effective and efficient, so that we can implement our approach on a Field Programmable Gate Array (FPGA) device, the Xilinx Spartan 3AN.

There has been much research on classification and clustering approaches for images, including images of the human liver. Mittal et al. [10], for example, use neural networks to identify focal liver lesions in ultrasound images. Yali Huang et al. [11] differentiate normal liver from fatty liver using the Wavelet Transform and a Probabilistic Neural Network, applied to ultrasonic images. Another example is the research of Cruz-Ramirez et al. [12], which focuses on creating tools that can help match donor and recipient livers in liver transplantation. Although these examples also use liver images, their data and research focus are quite different from ours: we use RGB images acquired with a fluorescence BZ-8000 microscope. Until recently, research associated with TRI has focused on manual classification of the TRI level or on analyzing the effect of different TRI concentrations on living beings.
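To make the overall approach concrete, the following is a minimal sketch of one of the evaluated combinations (the KPCA + BPNN pipeline that the abstract reports as best) using scikit-learn stand-ins: KernelPCA for the KPCA step and MLPClassifier as a generic back-propagation network. The synthetic data, kernel choice, and network parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for flattened liver-image feature vectors:
# 60 samples, 3 exposure classes (none / 1000 ppm / 2000 ppm).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))
y = np.repeat([0, 1, 2], 20)

# KPCA reduces the features to 45 components (the best-performing
# count reported in the abstract), then a back-propagation network
# classifies the exposure level.
pipeline = make_pipeline(
    KernelPCA(n_components=45, kernel="rbf"),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0),
)
pipeline.fit(X, y)
predictions = pipeline.predict(X)
print(predictions.shape)  # one predicted exposure class per sample
```

On the real data, the fitted pipeline would be evaluated on held-out images rather than the training set; this sketch only shows how the two stages compose.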

2. DATA ACQUISITION AND FEATURE EXTRACTION

The data required for this research are human liver images that have been infected by TRI. However, such data are difficult to find, and it would be unethical to conduct an experiment to produce them, so appropriate substitutes are needed. White mouse organ structure and metabolism are highly homologous with those of humans, as stated in the research of Bernhagen et al. [13] and Chang Kee Lim et al. [14]. Based on these studies, we exposed 18 white mice to 99% TRI through inhalation. The experiment was conducted inside a sealed chamber for seven days, eight hours per day; the chamber has an automatic mechanism that checks the TRI concentration every ten seconds. At the end of day seven, the mice were anesthetized so that their livers could be obtained and analyzed. The composition of the exposure was as follows: six mice without exposure, six mice exposed to 1000 ppm of TRI (low exposure), and six mice exposed to 2000 ppm of TRI (high exposure). This experiment complies with the Animal Experimental Guidelines of the Graduate School of Medicine, Nagoya University. After the dehydration, infiltration, embedding, sectioning, deparaffinization, staining, mounting, labeling, and image-acquisition processes, we obtained 148 RGB images of size 1360*1024 pixels. A few samples of these images can be seen in Figure 1. Based on the image size and encoding, each image yields 1360*1024*3 features in total, which is too many to compute on a Xilinx Spartan 3AN or a standard computer. This means that feature-selection algorithms are in order.
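The raw feature count above can be checked with a few lines of NumPy; the zero-filled array simply stands in for one acquired RGB image.

```python
import numpy as np

# Each acquired image is 1360x1024 RGB, so flattening it directly
# yields 1360 * 1024 * 3 = 4,177,920 raw features per image --
# far too many for a Spartan 3AN FPGA or a standard computer.
height, width, channels = 1024, 1360, 3
image = np.zeros((height, width, channels), dtype=np.uint8)
raw_features = image.reshape(-1)
print(raw_features.size)  # 4177920
```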

2.1. Cropping Image

Cropping also introduces unnecessary and hard-to-predict problems. Moreover, cropping limits our automation tool's ability, so that it can only work with images that fulfill certain properties; e.g., the CV must be located in the center of the image.
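The centering constraint described above is easy to see in code. Below is a dependency-light sketch of a naive center crop; the function name and crop sizes are illustrative, not taken from the paper. Any image whose region of interest is off-center would be cut incorrectly by this operation.

```python
import numpy as np

def center_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Return the central crop_h x crop_w window of an H x W x C image.

    This only behaves sensibly when the region of interest actually
    lies at the center of the image -- the limitation noted above.
    """
    h, w = image.shape[:2]
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return image[top:top + crop_h, left:left + crop_w]

img = np.zeros((1024, 1360, 3), dtype=np.uint8)  # one acquired image
cropped = center_crop(img, 256, 256)
print(cropped.shape)  # (256, 256, 3)
```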

2.2. Scaling Image

Another approach that we can use to reduce the number of features in our computation is scaling. Scaling automatically reduces the detail of the image, but does not necessarily remove potentially important features. In this research, we reduce the images to 50% and 12.5% of their original size. However, the 50%-size images are still too large, so any computation on them remains troublesome. Fortunately, this is not the case for the 12.5%-size images.
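A rough sense of the reduction can be sketched with simple pixel subsampling; note that this crude decimation is only a dependency-free stand-in, and the paper does not specify which interpolation method was actually used.

```python
import numpy as np

def scale_by_subsampling(image: np.ndarray, factor: int) -> np.ndarray:
    """Crudely rescale an image by keeping every `factor`-th pixel.

    Scaling to 12.5% of the original edge length corresponds to
    factor = 8; a real pipeline would use proper interpolation
    (e.g. bilinear) instead of plain decimation.
    """
    return image[::factor, ::factor]

img = np.zeros((1024, 1360, 3), dtype=np.uint8)
small = scale_by_subsampling(img, 8)  # 12.5% per edge
print(small.shape)  # (128, 170, 3)
print(small.size)   # 65280 features, down from 4,177,920
```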

2.3. Kernel Principal Component Analysis (KPCA)

Kernel Principal Component Analysis (KPCA) is used to efficiently compute principal components in high-dimensional feature spaces using integral operators and nonlinear kernel functions [15]. KPCA maps the data into a feature space, vectors of
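As an illustration of the KPCA step, the following sketch projects feature vectors onto 45 nonlinear principal components, the feature count that the abstract reports as best. The kernel choice (RBF), its gamma, and the synthetic data are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# 148 samples (one per acquired image) with a stand-in feature
# length of 500; the real vectors come from the scaled images.
rng = np.random.default_rng(1)
X = rng.normal(size=(148, 500))

# Project onto 45 kernel principal components.
kpca = KernelPCA(n_components=45, kernel="rbf", gamma=1e-3)
X_reduced = kpca.fit_transform(X)
print(X_reduced.shape)  # (148, 45)
```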