International Symposium on Instrumentation & Measurement, Sensor Network and Automation (IMSNA)

K-nn Algorithm for Fast Infant Pain Detection Muhammad Naufal Mansor, Shahryull Hi-Fi Syam Mohd Jamil, Mohd Nazri Rejab, Addzrull Hi-Fi Syam Mohd Jamil

Abstract—In this paper, pain assessment is explained and reviewed for detecting facial changes of patients in a hospital Neonatal Intensive Care Unit (NICU). The facial changes are most widely represented by eye and mouth movements. The proposed system uses color images and consists of three modules. The first module implements skin detection to detect the face. The second extracts facial features by processing the image and measuring certain face regions based on the FFT. Finally, a k-NN classifier is used to classify the movements. From the experiments, it is found that the identification rate reaches 90.12%.
Keywords—detection of facial changes; NICU patient; Fast Fourier Transform; k-NN classifier

I. INTRODUCTION
The classical International Association for the Study of Pain definition of pain [1], as a subjective, emotional experience described in terms of tissue damage, depends on the sufferer being able to self-report pain, which is of little use in diagnosing and treating pain in babies. More significant are non-verbal responses, of which there are two kinds: gross physical movements and physiological response measurements. The former require only direct observation, while the latter require specific equipment to monitor blood pressure and stress hormone levels. The cry response is increasingly important, as researchers are now able to differentiate between different kinds of cry, classed as "hungry", "angry", and "fearful or in pain" [2]. Interpretation is difficult, however: it depends on the sensitivity of the listener and varies significantly between observers [3]. Studies have therefore sought additional, visible and easily definable indicators of pain. Combinations of crying with facial expressions, posture and movements, aided by physiological measurements, have been tested and found to be reliable indicators, and a number of such observational scales have been published and verified [4]. Despite the importance of pain recognition, most neonatal intensive care units and nurseries have limited resources for

Manuscript received February 12, 2012. This work was supported in part by the Fundamental Research Grant Scheme (FRGS) of the Ministry of Higher Education Malaysia.
Muhammad Naufal Mansor was with the Intelligent Signal Processing Group (ISP), 01000, Blok A, Taman Pertiwi Indah, Seriab, Universiti Malaysia Perlis, Malaysia (e-mail: [email protected]).
Shahryull Hi-Fi Ahmad Jamil, Nazri Rejab, and Addzrull Hi-Fi Ahmad Jamil are with the Department of Electrical, Politeknik Tuanku Syed Sirajuddin, 02600 Pauh Putra (e-mail: [email protected], [email protected], [email protected]).

© IEEE

pain identification. Neonatal pain episodes are often brief and may go unrecognized, since nurses and physicians cannot provide continuous surveillance of all infants at risk. These factors illustrate the clear need for improved pain-surveillance methods that supplement direct observation by nurses and physicians, and that are practical and economically feasible.

II. SYSTEM OVERVIEW
The long-term goal of the study outlined in this paper is the development of a stand-alone automated system that could be used as a supplement in the neonatal intensive care unit to: (1) provide 24-hour-a-day non-invasive monitoring of infants at risk for pain, and (2) facilitate the analysis and characterization of videotaped neonatal pain episodes by physicians during retrospective review. The development of such a system requires automated procedures for extracting quantitative motion information from video recordings of infants monitored for pain, as shown in Figure 1. The study described in this paper used short video recordings of neonatal pain drawn from the database in [17].

[Figure 1: Image Capturing → Data Collection → Face Detection → Feature Extraction → k-NN Classifier]

Figure 1. General block diagram of pain detection.
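The stage ordering in Figure 1 can be sketched as a minimal end-to-end pipeline. Everything below — the function names, the synthetic frame, and the toy training set — is an illustrative assumption, not the authors' implementation; it only mirrors the capture → detect → extract → classify flow of the diagram.

```python
import numpy as np

def capture_frame(h=32, w=32, seed=0):
    """Stand-in for the image-capturing stage: a synthetic RGB frame."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

def detect_face(frame):
    """Placeholder face detector: here, simply the whole frame."""
    h, w, _ = frame.shape
    return (0, 0, w, h)  # (x, y, width, height)

def extract_features(frame, box):
    """Scaled FFT-magnitude features of the grayscale face region."""
    x, y, w, h = box
    gray = frame[y:y + h, x:x + w].mean(axis=2)
    return (np.abs(np.fft.fft2(gray)) / (h * w)).ravel()

def knn_classify(feat, train_X, train_y, k=3):
    """k-NN with Euclidean distance and majority voting."""
    dists = np.linalg.norm(train_X - feat, axis=1)
    nearest = np.asarray(train_y)[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

frame = capture_frame()
feat = extract_features(frame, detect_face(frame))
# Toy training set: two 'pain' examples near feat, one 'no_pain' far away.
train_X = np.stack([feat, feat + 0.01, feat + 100.0])
train_y = ["pain", "pain", "no_pain"]
print(knn_classify(feat, train_X, train_y, k=3))  # → pain
```

In a real deployment each stage would be replaced by the corresponding module from Sections III–IV; the wiring between stages stays the same.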

III. FACE DETECTION
A good deal of research has been documented in the area of human face detection [5-9]. The authors use a skin-filter method to detect the face [9-13]. Face detection is performed in three steps. The first step is to classify whether each pixel in the given image is a skin pixel. The second step is to identify the distinct skin regions in the skin-detected image using connectivity analysis [9-13]. The last step is to decide whether each identified skin region is a face, based on two parameters: the height-to-width ratio of the skin region, and the percentage of skin inside the rectangle defined by that height and width.
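The three steps above can be sketched as follows. The RGB skin rule and the ratio/coverage bounds are common illustrative values, not the thresholds used by the authors:

```python
import numpy as np

def skin_mask(img):
    """Step 1: per-pixel skin test with a simple RGB rule (illustrative)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (r - np.minimum(g, b) > 15))

def largest_region_bbox(mask):
    """Step 2: connectivity analysis via flood fill; bounding box
    (x, y, w, h) of the largest 4-connected skin region."""
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    best = None
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                stack, pts = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    pts.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if best is None or len(pts) > best[0]:
                    ys = [p[0] for p in pts]
                    xs = [p[1] for p in pts]
                    best = (len(pts), (min(xs), min(ys),
                                       max(xs) - min(xs) + 1,
                                       max(ys) - min(ys) + 1))
    return best[1] if best else None

def is_face(mask, box):
    """Step 3: accept a region as a face if its height/width ratio and
    its skin coverage inside the box fall in plausible ranges."""
    x, y, w, h = box
    ratio = h / w
    coverage = mask[y:y + h, x:x + w].mean()
    return 0.8 <= ratio <= 2.0 and coverage >= 0.5

# Tiny synthetic test image: a skin-colored 12x8 block on black.
img = np.zeros((20, 20, 3), dtype=np.uint8)
img[4:16, 6:14] = (200, 120, 90)
m = skin_mask(img)
box = largest_region_bbox(m)      # (6, 4, 8, 12)
print(box, bool(is_face(m, box)))
```

A production implementation would typically use a library connected-components routine (e.g. from an image-processing package) instead of the explicit flood fill, but the logic is the same.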




Figure 2. Face detection method.

IV. FACIAL DETECTION AND VALIDATION MODULE
The facial detection and validation module is based on estimating the facial features of the eyes and mouth. Various approaches to and discussions of facial estimation can be found in [13-15]. Rather than processing every frame, the system observes only selected frames, since successive frames carry largely the same information. Feature extraction is performed only on the cropped eye and mouth regions, and a k-NN classifier is used to decide whether the infant is in pain: the Fast Fourier Transform output is given as input to the k-NN module, which calculates the motion probability for the system.

A. Fast Fourier Transform
The Fourier Transform decomposes an image into its real and imaginary components, giving a representation of the image in the frequency domain. If the input signal is an image, the number of frequencies in the frequency domain equals the number of pixels in the image (spatial) domain, and the inverse transform maps the frequencies back to the image in the spatial domain. Written in one dimension (the 2-D transform applies the same sums along each axis), the FFT and its inverse are:

F(x) = \sum_{n=0}^{N-1} f(n) e^{-j 2\pi x n / N}    (1)

f(n) = \frac{1}{N} \sum_{x=0}^{N-1} F(x) e^{j 2\pi x n / N}    (2)

The FFT implemented in the application here requires the dimensions of the image to be a power of two. Another interesting property of the FFT is that a transform of N points can be rewritten as the sum of two N/2-point transforms (divide and conquer). This is important because some of the computations can be reused, eliminating expensive operations. The output of the Fourier Transform is complex-valued and has a much greater range than the image in the spatial domain; to represent these values accurately, they are stored as floats. Furthermore, the dynamic range of the Fourier coefficients is too large to be displayed on screen, so the values are scaled (usually by dividing by the height × width of the image) to bring them within a displayable range [16].

E. k-NN Classifier
K-nearest neighbor (k-NN) is a simple classification model that exploits lazy learning [13]. It is a supervised learning algorithm that classifies a new query instance by the majority category among its k nearest neighbors: the distances between the query instance and the training samples are calculated to find those neighbors, and the prediction is obtained by majority voting over their categories. Since each query instance (test signal) must be compared against every training signal, k-NN incurs a high response time [13]. In this work, for each test signal to be predicted, the distance from the test signal to each member of the training set is calculated to locate its k nearest neighbors in the training data. The Euclidean distance is used to measure how close each member of the training set is to the test instance under examination:

d_E(x, y) = \sqrt{\sum_{i=1}^{N} (x_i - y_i)^2}    (4)

From this k-NN neighborhood, the class label of the test signal is determined by majority voting.

V. EXPERIMENTAL RESULT
The proposed algorithm was evaluated on one hundred and twenty subjects of different races, genders and ages. The average size of each image is 400-500 pixels. All subjects were tested over ten trials, with ten images taken per subject. The average accuracy over all subjects for each trial is shown in Table 1.

Table 1. Average accuracy over 10 trials.

Trial   Accuracy (%)
1       89.10
2       90.12
3       83.99
4       82.88
5       81.25
6       81.21
7       80.12
8       80.33
9       81.44
10      81.76
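As a quick numerical check of the transform pair (1)-(2) and the distance measure (4), the sketch below uses NumPy's FFT, which follows the same conventions (the 1/N factor sits on the inverse transform). This is an illustration of the formulas only, not the authors' code:

```python
import numpy as np

# Eq. (1)/(2): forward and inverse DFT of a length-N signal.
N = 8                       # N a power of two, as the FFT here requires
f = np.arange(N, dtype=float)
F = np.fft.fft(f)           # F(x) = sum_n f(n) e^{-j 2*pi*x*n/N}
f_back = np.fft.ifft(F)     # f(n) = (1/N) sum_x F(x) e^{j 2*pi*x*n/N}
print(np.allclose(f_back.real, f))  # round trip recovers the signal: True

# Eq. (4): Euclidean distance between two feature vectors.
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 0.0, 3.0])
d = np.sqrt(np.sum((x - y) ** 2))
print(d)  # 2.0
```

The same `np.fft.fft2`/`np.fft.ifft2` calls extend this to the 2-D image case used in Section IV.A.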

VI. CONCLUSION
In this paper, an infant pain monitoring system is proposed and designed that detects the infant's condition through continuous monitoring. The method is based on skin color detection that is robust across different skin types. Because facial appearance in the pain condition differs significantly from the normal condition, pain can be detected easily. The system is simple, fast and effective at alerting the medical staff.

REFERENCES
[1] H. Merskey, "An unpleasant experience that we primarily associate with tissue damage or describe in terms of tissue damage, or both," 1964.
[2] J. Koeslag, "The Human Lifecycle, Part 19: Development of Communication," Division of Medical Physiology, Department of Biomedical Sciences, University of Stellenbosch. Online version.
[3] P. S. Zeskind, "Cross-cultural differences in maternal perceptions of cries of low- and high-risk infants," Child Development, vol. 54, no. 5, pp. 1119-1128, 1983. doi:10.2307/1129668.
[4] M. Yang, D. J. Kriegman, and N. Ahuja, "Detecting faces in images: A survey," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 34-58, 2002.
[5] S. Singh, D. S. Chauhan, V. Mayank, and R. Singh, "A robust skin color based face detection algorithm," Tamkang Journal of Science and Engineering, vol. 6, no. 4, pp. 227-234, 2003.
[6] R. L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, "Face detection in color images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 696-706, 2002.
[7] J. G. Chase et al., "Quantifying agitation in sedated ICU patients using digital imaging," Computer Methods and Programs in Biomedicine, vol. 76, pp. 131-141, 2004.
[8] B. Gholami et al., "Agitation and pain assessment using digital imaging," Pain Management Nursing, vol. 7, pp. 44-52, 2006.
[9] American Society of Anesthesiologists, "Standards for basic anesthetic monitoring," 1993; J. H. Tinker, D. L. Dull, R. A. Caplan, et al., "Role of monitoring devices in prevention of anesthetic mishaps: a closed claims analysis," Anesthesiology, vol. 71, pp. 541-546, 1989.
[10] S. Gong, S. J. McKenna, and A. Psarrou, Dynamic Vision: From Images to Face Recognition, Imperial College Press, 2000.
[11] I. Maglogiannis et al., "Face detection and recognition of natural human emotion using Markov random fields," Personal and Ubiquitous Computing, Springer London, 2007.
[12] M. N. Mansor, S. Yaacob, R. Nagarajan, and H. Muthusamy, "Coma patients expression analysis under different lighting using k-NN and LDA," International Journal of Signal and Image Processing (IJSIP), vol. 1, no. 4, pp. 249-254, Hyper Sciences, 2010.
[13] M. N. Mansor, S. Yaacob, R. Nagarajan, and H. Muthusamy, "Patients tremble analysis under different camera placement in critical care," International Journal of Research and Reviews in Soft and Intelligent Computing (IJRRSIC), vol. 1, no. 1, pp. 1-4, Science Academy Publisher, 2011.
[14] M. N. Mansor, S. Yaacob, R. Nagarajan, and H. Muthusamy, "Patients facial expressions analysis under different angle in intensive care unit," Signal & Image Processing: An International Journal (SIPIJ), NnN Net Solution Private Ltd, 2011.
[15] M. N. Mansor, S. Yaacob, R. Nagarajan, and M. Hariharan, "Patient monitoring for hospital ICU patients under unstructured lighting," in IEEE Symposium on Industrial Electronics & Applications (ISIEA 2010), Penang, Malaysia, October 2010.
[16] J. L. Mitchell, M. Y. Ansari, and E. Hart, "Advanced image processing with DirectX 9 pixel shaders," in ShaderX2: Shader Programming Tips and Tricks with DirectX 9, 2003.
[17] http://www.google.com//photos-pain images/neonatal.html