Robust Skin Segmentation using Color Space Switching

Ankur Gupta
Dept. of Computer Science, BITS Pilani, RJ, India
[email protected]

Ankit Chaudhary
Dept. of Computer Science, Truman State University, MO, USA
[email protected]

Abstract - Skin detection is very popular and has vast applications among researchers in computer vision and human computer interaction. Skin color changes beyond comparable limits with considerable change in the nature of the light source. Different properties are captured when colors are represented in different color spaces; however, no single color space has yet been found that can adjust to all the illumination changes that can occur to practically similar objects. Therefore a dynamic skin color model must be constructed for robust skin pixel detection, one which can cope with natural changes in illumination. This paper proposes that skin detection in a digital color image can be significantly improved by employing automated color space switching. A system with three robust algorithms, based on different color spaces, has been built for automatic skin classification in a 2D image. These algorithms are based on the statistical mean value of the skin pixels in the image. We also take Bayesian approaches to discriminate between skin-like and non-skin pixels to avoid noise. This work is tested on a set of images captured in varying light conditions, from highly illuminated to almost dark.

Keywords – Skin Segmentation, Color Space Switching, Light Invariant Detection, Neural Networks, Automated Skin Detection

I. INTRODUCTION

The wide use of skin segmentation techniques in image processing and its applications has made skin segmentation increasingly important. Image segmentation is usually employed at the pre-processing stage to reduce the region of interest to a subspace of interest. It is a crucial step for object recognition, image registration, compression, etc. where a human is in the scene. Skin segmentation is commonly used in algorithms for face detection [1-4], hand gesture recognition [5-7] and objectionable image filtering [8]. In these applications, the search space for the region of interest (such as faces or hands) can be reduced through the detection of skin regions. At this stage of research, skin segmentation is very effective because it usually involves a small amount of computation and can be done regardless of pose [5]. In color images, the additional color information available can be utilized. The HSV model closely resembles the human visual perception system and employs a polar co-ordinate space, while the RGB model is based on a physical interpretation of color. Various other models such as YCbCr and CMY are also popular among researchers. Most existing skin segmentation techniques classify individual image pixels into skin and non-skin categories on the basis of pixel color. The rationale behind this approach is that human skin has a very consistent color which is distinct from the colors of many other objects. Color is a powerful cue that can be used as a first step in skin detection because of its advantages: low computational cost and robustness against illumination change and geometrical transformation. However, skin color changes with change in illumination [9-10] and failures are certain if a fixed skin model is used. One of the solutions is color model adaptation, as discussed in [11]. That model was used for tracking skin color under varying illumination, viewing geometry and camera parameters; observed log-likelihood measurements were used to perform selective adaptation. The object's color distributions were modeled using Bayesian mixture models [12] in hue-saturation space and an adaptive learning algorithm was used to update these models [13]. This work proposes three different algorithms for robust skin segmentation from a color image; for simplicity we name them algorithm1, algorithm2 and algorithm3. In algorithm1, a suitable color space is chosen to detect skin-like pixels in the given image, considering only the RGB, HSV and YCbCr color spaces. An artificial neural network is constructed to find the suitable target color space in which the skin color, under the ambient conditions prevalent when the picture in question was taken, is calculated. The details are given in section III-A.

In algorithm2, the Bayesian routine is applied in the three color spaces and the number of pixels in the largest blob of the output from each color space is calculated. The color space in which the largest of these blobs is found is taken as the most suitable color space for detecting skin-like colors. The assumption is that the human is the closest object to the camera and therefore forms the largest connected convex hull. This algorithm focuses on the single largest skin-like object; it is robust to changes in image temperature and ambient lighting conditions. Details are given in section III-B. In algorithm3, we apply the Bayesian routine in all color spaces to the given input image and, with uniform weights, add all the logical images output by the Bayesian routine. Accounting in this way for the effects of the light source and the high-brightness regions in color images is the assumptive method used by [14] to compensate images with light interference. Also, using the dual-color reflection model [15] combined with reflection theory, the determination of whether an image has light interference has been made in [16]. If the image has light interference, light compensation should be adopted to compensate for varying illumination colors [17]. The detailed algorithm is given in section III-C.

II. RELATED WORK

In the past few years, a large number of comparative studies of color space selection have been reported. Among the various types of skin detection methods, those that make use of skin color as a tool for detection are considered the most effective and have been studied extensively [18-19]. Zarit [20] compares five of the most prominent techniques of skin segmentation based on skin color. Human skin has a characteristic color and it is commonly accepted to design a method based on skin color identification [21-22]. Although robust skin segmentation is possible with Kinect under certain constraints [23], here we are trying to work in 2D space with a simple camera to make the system affordable. The Ant Colony-Fuzzy C-means Hybrid Algorithm (AFHA) adaptively clusters image pixels, viewed as three dimensional data pieces, in the RGB color space [24] and initializes the cluster centroid distribution and centroid numbers. AFHA can be a feasible preprocessing approach for operations such as image semantics and pattern recognition. Wang [25] uses pixel-wise SVM and FCM classification for image segmentation, trained on pixel-level color features and texture features using Gabor filters; however, this algorithm lacks enough robustness to noise. An approach for hand gesture recognition uses dominant-points based finger counting with skin color segmentation [26]. In this method, skin segmentation based on HSV was used as a compromise between effectiveness of segmentation and computational complexity. Lee [27] proposes a combination of YCbCr and GA for skin segmentation on low-resolution images and region estimation with morphological operations and blob analysis. This setup detects hand regions under various illumination conditions and extracts finger information with size and rotation invariance. Yogarajah [28] applied fixed decision boundary classification to segment human skin. This method is robust to imaging conditions and is not biased by human ethnicity; however, if no eyes are detected in the image then this method cannot be applied to skin segmentation problems. A color pixel clustering model for skin segmentation is used to reduce the sensitivity to variations in lighting conditions and complex backgrounds [29]. Skin regions are extracted using four skin color clustering models: (1) standard-skin, (2) shadow-skin, (3) light-skin and (4) high-red-skin. Multi-skin color clustering may be used for skin color segmentation in uncontrolled conditions. It also overcomes the limitations of pixel-based segmentation by combining it with region-based segmentation, iteratively considering neighborhood pixels. Nine skin modeling approaches, which include AdaBoost, Bayesian network, J48, Multilayer Perceptron, Naive Bayesian, Random Forest, RBF network, SVM and a six-color-space histogram approach, with the presence or absence of the luminance component, are compared in [30]. The authors conclude that the selection of an appropriate skin color model is important for robust skin segmentation. The main problem comes from the different colors of human skin found in different parts of the world. A number of published works describe various skin models and detection techniques, such as [20][31]. Even though Naji [29] employs a very sophisticated method of dynamic thresholding and achieves very good results under some constraints, none came up with complete accuracy. Huang [32] used ASSM (Adaptive Skin-Color Model Switching Method), which is constructed using the possible combinations of three skin color models, i.e. the YCbCr model, Soriano's [33] model and a Bayesian mixture model. Perumal [1] used different probabilistic filter models in six color spaces and proposed ranges of values in which the 'relevant' planes of the model should lie for detecting skin pixels [34].

III. SWITCHING MODEL

An important issue in skin color segmentation is the building of a model that can cover different skin appearances under different light conditions. If the skin model is too general, it may yield a large number of false positives; on the other hand, if the skin model is too tight, it may yield numerous false negatives. This section describes the three algorithms that are used to switch between color spaces. These algorithms depend largely on the properties of the color spaces HSV, RGB and YCbCr and their functions. The Bayesian routine [35] is a prerequisite to each of these algorithms. A naive Bayesian model is constructed at the initial stage of each of the algorithms. An image is taken as input and converted to the corresponding image map. Individual pixels are then picked and analyzed to fit in the test range. Newton's S-R method is employed to reach an optimal range for the color space in consideration. The ranges obtained for the color spaces will be referred to as FilterRGB, FilterHSV and FilterYCbCr respectively. The range filter obtained is applied to each pixel in a plane and its corresponding pixels in the other two color planes. The pixels that pass the filter are set and all others are reset. Thus a logical image is formed whose white portion roughly maps to the skin portion of the input image.

TABLE 1. PARAMETERS OF THE BAYESIAN ROUTINE

Color Space    Segmentation Cut-off
RGB            R: 95-255, G: 40-255, B: 20-255
HSV            H: 0.04-0.0882, S: 0.11-0.68, V: 0.38-0.112
YCbCr          Cb: 100-125, Cr: 135-170
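As an illustration, the following is a minimal sketch of how the per-pixel cut-offs of Table 1 could be applied to produce the logical (binary) image described above. It assumes an RGB image held as a NumPy array; only the RGB and YCbCr filters are shown, and the function names and the use of OpenCV for color conversion are illustrative choices, not part of the paper.

```python
import cv2
import numpy as np

def filter_rgb(img_rgb):
    """Set pixels whose R, G, B values fall inside the Table 1 cut-offs."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return (r >= 95) & (g >= 40) & (b >= 20)   # upper bound of 255 is implicit for uint8 data

def filter_ycbcr(img_rgb):
    """Set pixels whose Cb and Cr values fall inside the Table 1 cut-offs."""
    ycrcb = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2YCrCb)   # OpenCV orders channels as Y, Cr, Cb
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return (cb >= 100) & (cb <= 125) & (cr >= 135) & (cr <= 170)

# Example usage: build the logical image for one color space
img = cv2.cvtColor(cv2.imread("hand.jpg"), cv2.COLOR_BGR2RGB)
logical = filter_rgb(img)   # True where the pixel passes the filter, False otherwise
```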

Salt-and-pepper filtering is then applied to the logical image to reduce noise. The image is eroded, bridged and then dilated. The connected components in the image are labeled and any unlabeled component is deleted. This leaves a set of sub-matrices of pixels which are fully connected. The label of the sub-matrix mapping to the largest connected matrix is hereby called MaxConnected. All the pixels in the original logical image are assigned a value of 0 except those that belong to MaxConnected. The probability ranges employed in the Bayesian routine were fixed after extensive experiments and careful analysis of a large dataset of images; they are shown in Table 1 for the different color spaces. An abstraction of the Bayesian routine for the three color spaces is given in Figure 1.

Figure 1. Abstraction of the Bayesian routine: the input image is passed through the RGB, HSV and YCbCr filters, producing Filter Output[1], Filter Output[2] and Filter Output[3] respectively.
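The post-processing and largest-component step could look roughly like the sketch below. It uses SciPy's morphology and labeling routines as stand-ins for the MATLAB operations named in the paper (median filtering for salt-and-pepper noise, erosion, dilation, connected-component labeling); the "bridge" operation has no direct SciPy equivalent and is omitted here.

```python
import numpy as np
from scipy import ndimage

def max_connected(logical):
    """Denoise a binary skin mask and keep only its largest connected component."""
    mask = ndimage.median_filter(logical.astype(np.uint8), size=3)  # remove salt-and-pepper noise
    mask = ndimage.binary_erosion(mask)
    mask = ndimage.binary_dilation(mask)
    labels, n = ndimage.label(mask)              # label the fully connected sub-matrices
    if n == 0:
        return np.zeros(logical.shape, dtype=bool)
    sizes = np.bincount(labels.ravel())          # pixel count per label (label 0 is background)
    sizes[0] = 0
    return labels == sizes.argmax()              # the MaxConnected region
```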

A. Algorithm1

A large image database of human hands was collected and the filter was applied for all the color spaces. The best suited color space was inspected with human supervision and stored as the target color space for the corresponding image. A neural network is constructed whose input is a function of the variables

f(Hi, Si, Vi, Ri, Gi, Bi, Yi, Cbi, Cri),  i = 1, ..., N,

where N is the size of the training dataset, and whose target variable is the best suited color space. Multilayer networks solve the classification problem for non-linear sets by employing hidden layers, whose neurons are not directly connected to the output. The additional hidden layers can be interpreted geometrically as additional hyper-planes, which enhance the separation capacity of the network. In a typical multilayer network architecture, to calculate the weight changes in the hidden layers, the error in the output layer is back-propagated to these layers according to the connecting weights. This process is repeated for each sample in the training set. One cycle through the training set is called an epoch. The number of epochs needed to train the network depends on various parameters, especially on the error calculated in the output layer. The output is calculated as [36]:

y_k(x, w) = σ( Σ_{j=1..D} w_kj h( Σ_{i=1..9} w_ji x_i + w_j0 ) + w_k0 )        (1)

where h is the hidden-layer activation, σ is the output activation, and the set of all weight and bias parameters has been grouped together into a vector w. In our case D = 9 is the number of neurons in the hidden layer and M = 1 is the target variable. The biases w_j0 and w_k0 are treated as the weights of an imaginary bias neuron in each layer. Thus the neural network model is simply a nonlinear function from a set of input variables {x_i} to a set of output variables {y_k}, controlled by the vector w of adjustable parameters. The activation function used at the output layer is the softmax function, which classifies the given image into one of the three classes taken into account. The softmax function is defined as [36]:

P_i = exp(q_i) / Σ_{j=1}^{n} exp(q_j)        (2)

where P_i is the value of the ith output node, q_i is the net input to that output node and n is the number of output nodes. The output is assigned the class with maximum probability given the input. The algorithm for classifying a given image into a suitable color space for segmenting skin-class pixels is given below; a code sketch follows the steps.
1. A test image is input and f(Hi, Si, Vi, Ri, Gi, Bi, Yi, Cbi, Cri) is computed.
2. The weights obtained by the neural network trained on the database described above are matrix-multiplied with f.
3. The result is the class label, i.e. the suitable color space for this image, and skin segmentation in this color space is applied.
4. The output is hereby called output[0].
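A minimal sketch of this classification step is given below. It assumes the trained parameters in the spirit of Tables 2-4 have been arranged into an input-to-hidden weight matrix W1 with bias vector b1 and a hidden-to-output weight matrix W2 with bias vector b2; these array names, the tanh hidden activation and the three-node output layer are illustrative assumptions, not details fixed by the paper.

```python
import numpy as np

COLOR_SPACES = ["RGB", "HSV", "YCbCr"]

def softmax(q):
    """Equation (2): normalized exponentials of the net inputs."""
    e = np.exp(q - np.max(q))        # subtract the max for numerical stability
    return e / e.sum()

def choose_color_space(f, W1, b1, W2, b2):
    """Classify the 9-element feature vector f into one of the three color spaces.

    W1: (9, 9) input-to-hidden weights, b1: (9,) hidden biases (assumed shapes),
    W2: (3, 9) hidden-to-output weights, b2: (3,) output biases (assumed shapes).
    """
    hidden = np.tanh(W1 @ f + b1)    # hidden-layer activations, inner sum of eq. (1)
    probs = softmax(W2 @ hidden + b2)
    return COLOR_SPACES[int(np.argmax(probs))]
```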

Tables 2, 3 and 4 provide information about the ANN architecture used in Algorithm1. Table 2 shows the weights of the bias neuron in the input layer; the ith row gives the weight of the bias feeding the ith neuron of the hidden layer. Table 3 shows the matrix of weights originating from the hidden layer of the neural network, where each row represents a neuron. Table 4 shows a similar set of weight values originating from the input layer of the ANN. The weight of the bias neuron in the hidden layer is calculated to be approximately 3.65031x10^-1.

TABLE 2. WEIGHTS OF BIAS NEURON IN INPUT LAYER OF ANN
Neuron#   Bias
1         2.66851896481526
2         2.53485229118306
3         1.86154206393997
4         -1.01662264490851
5         1.40914165639875
6         -1.81171451262425
7         0.479291597566260
8         -0.609967447485691
9         0.876402575028212

TABLE 3. WEIGHTS FOR NEURONS IN THE HIDDEN LAYER OF ANN
Neuron#   Weight
1         -0.224382870313882
2         -0.224382870313882
3         -0.224382870313882
4         -0.224382870313882
5         -0.224382870313882
6         -0.224382870313882
7         -0.224382870313882
8         -0.224382870313882
9         -0.224382870313882

TABLE 4. WEIGHTS AT THE INPUT LAYER

Neuron#   Weight# 1   2       3       4       5       6       7       8       9
1         -0.357      0.330   -0.182  -1.293  0.468   1.088   1.182   -0.444  -2.351
2         -0.667      -3.582  -0.758  1.052   0.925   1.220   2.195   -0.691  1.151
3         1.123       -1.113  0.210   2.001   -1.402  1.872   1.659   2.596   1.319
4         -0.288      -0.897  -0.005  -1.830  0.740   1.126   0.581   0.583   -0.867
5         -0.504      -0.237  -0.026  0.064   -1.534  0.009   0.298   0.386   1.298
6         -1.042      1.441   -1.035  -0.088  -0.659  -0.533  -0.113  0.268   0.326
7         -0.673      -0.778  0.858   -0.769  -1.173  0.674   0.733   -0.878  -1.630
8         -0.471      -1.788  -0.028  -0.199  0.501   -0.249  0.198   0.392   -1.063
9         -0.743      -1.582  0.872   -0.014  -0.222  0.536   1.226   1.555   0.580

TABLE 5. INPUT PARAMETERS FOR SOME OF THE TRAINING DATA POINTS
Sample#   Mean Hue   Mean Saturation   Mean Value   Mean Y      Mean Cb     Mean Cr     Mean Red    Mean Green   Mean Blue
1         0.162068   0.340032          0.372549     110.34945   117.01452   139.58895   128.38988   104.7        87.5811
2         0.575144   0.244795          0.403258     73.01415    126.95497   133.14684   74.659666   62.58538     64.266852
3         0.241863   0.578858          121.31361    132.11358   125.73741   118.97631   122.86689   130.94496    109.23422
4         0.493929   0.241863          0.578858     121.31361   132.11358   125.73741   118.97631   122.86689    130.94496

B. Algorithm2

Algorithm2 extracts the sub-image after applying the Bayesian filter in each color space and then chooses the maximum connected component of them all. Algorithm2 works robustly in a special scenario: when the skin-like object is the closest object to the image receptor and there is only one human-like object in the frame. Although this assumption seems restrictive at first, it corresponds to a very common scenario where the receptor is installed on a personal computer or on a hand-held device whose primary purpose is to capture human imagery. Working under this assumption, it can safely be assumed that, if the algorithm does not throw false negatives, our region of interest has to lie in the consecutive planes nearest to the first true-positive detection point in 3D space. This implies that, in the 2D logical image, human skin has to lie inside the convex hull produced by the maximally connected sub-matrix. The algorithm is given below, followed by a code sketch.
1. The BayesianRoutine given above is applied for each of the three color spaces.
2. MaxConnected[i] is extracted as output, where i = 0, 1, 2 and {0->RGB, 1->HSV, 2->YCbCr}.
3. For exactly one i in {0, 1, 2}, i_mPlexed = i, multiplexed such that MaxConnected[i].NoOfPixels is the maximum over all the MaxConnected.
4. Output = BayesianRoutine in the color space mapped by i, where the mapping is as given in step 2.
5. output[1] = Output.
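A compact sketch of this selection is given below. It reuses the hypothetical max_connected helper from the earlier sketch and assumes the three logical images (from FilterRGB, FilterHSV and FilterYCbCr) have already been computed; the names and return values are illustrative, not from the paper.

```python
import numpy as np

COLOR_SPACES = ["RGB", "HSV", "YCbCr"]

def algorithm2(masks):
    """Given one logical image per color space (RGB, HSV, YCbCr order),
    pick the space whose largest connected blob contains the most pixels."""
    blobs = [max_connected(m) for m in masks]          # MaxConnected[i] for i = 0, 1, 2
    best = int(np.argmax([b.sum() for b in blobs]))    # index with the maximum pixel count
    return COLOR_SPACES[best], blobs[best]             # chosen color space and output[1]
```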

C. Algorithm3

In this algorithm, the Bayesian routine is applied to each of the three color spaces, the sum of all the logical sub-images is calculated and a logical matrix is obtained. This logical image is masked with the original image. Algorithm3 works under the quantitative assumption that there are multiple skin-like objects, and it considers all of them at once. It leverages the fact that the three filters (RGB-Filter, HSV-Filter and YCbCr-Filter) rarely throw false negatives and therefore minimizes false error during the weighted addition of the results from each one of them. The algorithm is given below, followed by a code sketch.
1. The BayesianRoutine given above is applied for each of the three color spaces.
2. For each i in {0, 1, 2}: MaxConnected_Master = ΣMaxConnected[i].
3. MaxColor = MaxConnected_Master mapped back onto the original colored image map.
4. return output[2] = MaxColor.
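Under the same assumptions, the uniform-weight combination could be sketched as below; keeping any pixel set in at least one color space is one interpretation of the logical sum, not a detail stated by the paper.

```python
import numpy as np

def algorithm3(img_rgb, masks):
    """Uniform-weight sum of the per-color-space masks, then mask the original image."""
    master = np.zeros(img_rgb.shape[:2], dtype=int)
    for m in masks:                                # MaxConnected_Master = sum of MaxConnected[i]
        master += max_connected(m).astype(int)
    skin = master >= 1                             # assumed threshold: any filter accepted the pixel
    return img_rgb * skin[..., None].astype(img_rgb.dtype)   # output[2]: MaxColor
```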

IV. EXPERIMENTAL RESULTS

All algorithms and components were implemented in MATLAB®. To test the aforementioned techniques on live data, a standard webcam installed on a laptop was used, with an initiation time of 10 µs so that the camera aperture could adjust to the ambient lighting conditions. The ANN was applied to the training dataset with the following input parameters, an ordered tuple of means over all the skin pixels of an image: {Mean Hue, Mean Saturation, Mean Value, Mean Y, Mean Cb, Mean Cr, Mean Red, Mean Green, Mean Blue}.
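For reference, this feature tuple could be computed along the lines of the sketch below, given an RGB image and a boolean skin mask; the helper name and the OpenCV color conversions are illustrative choices.

```python
import cv2
import numpy as np

def feature_tuple(img_rgb, skin_mask):
    """Mean H, S, V, Y, Cb, Cr, R, G, B over the pixels flagged as skin."""
    hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)
    ycrcb = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2YCrCb)
    planes = [hsv[..., 0], hsv[..., 1], hsv[..., 2],
              ycrcb[..., 0], ycrcb[..., 2], ycrcb[..., 1],   # reorder OpenCV's Y, Cr, Cb to Y, Cb, Cr
              img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]]
    return np.array([p[skin_mask].mean() for p in planes])
```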

A database of 6500 images was used to create the Bayesian routine and to feed the mentioned input parameters into the three algorithms. The input parameters for some of these images are given in Table 5. Apart from these, around 500 images were manually captured in varied lighting conditions with a web camera and a standard digital camera.

TABLE 6. CONFUSION MATRIX FOR THE TEST CASES OF ALGORITHM1
                        Target Class
Output Class    RGB      HSV      YCbCr
RGB             33.0%    5.5%     4.4%
HSV             3.3%     24.0%    4.4%
YCbCr           5.5%     1.1%     16.5%

TABLE 7. CONFUSION MATRIX FOR THE TEST CASES OF ALGORITHM2
                        Target Class
Output Class    RGB      HSV      YCbCr
RGB             15.3%    18.5%    3.4%
HSV             18.7%    12.6%    5.4%
YCbCr           10.6%    7.0%     8.5%

A subset of the dataset was used for training the algorithms with 0.632 bootstrapping and the rest was used for validating the model. The three algorithms were applied to the test set and observations were recorded. Tables 6 and 7 show the confusion matrices of the tests with the ANN, where each cell gives the percentage of inputs assigned to that output class. The output for some of our test cases is shown in Figure 2. Figure 2(a) shows one of the images given as input to all the algorithms. Figure 2(b) shows the output image when the input was given to algorithm1; the inset images in Figure 2(b) represent the binary image given by the Bayesian routine on the color space chosen by the ANN, after noise reduction, and the corresponding color image respectively. Figure 2(c) depicts the output when algorithm2 is applied to the input image, and Figure 2(d) shows the algorithm3 output. Upon analysis, and as confirmed by the sample results, it can be seen that all algorithms perform as expected. In Figure 2, since the skin object is only one hand with arm, algorithm1 and algorithm2 perform well. In the case of Figure 2, however, it is seen that algorithm3 outperforms the other two algorithms and detects both subjects considerably well. More results for different images are shown in Figure 3.

Fig 2. Outputs after applying all the Algorithms: (a) Input Image, (b) Neural Network, (c) Algorithm2, (d) Algorithm3: ΣMaxConnected[i]

V. CONCLUSIONS

This paper proposes a robust method for skin pixel identification and classification that overcomes the challenges of lighting changes. A preliminary part of this work was presented in [37]. Three different techniques were employed to adaptively choose the skin color model. Algorithm2, which was conceived earliest, extracts all the pixels that were found to be in the skin class by the Bayesian routine and extracts the maximum connected area among the different and unique convex hulls of sets and resets in the logical representation of an input image. Algorithm3, a logical alternative to the earlier algorithm, employs a uniformly weighted addition of the maximum connected sub-images over all three color spaces as provided by MaxConnected. Eventually Algorithm1, with the help of the ANN, performs the best, as shown by Tables 6 and 7. Therefore, based on the experiments, it can be firmly said that the neural network technique outperformed the MaxConnected matrix approach in accomplishing the task of adaptive color space switching for skin segmentation. In the future, a computationally low-cost version of this model can be employed on a standard hand-held commercial digital camera, enhancing its human-detection features in varied lighting conditions among camouflaging backgrounds, towards better segmentation and further image processing as required by the user.

ACKNOWLEDGEMENT

The authors would like to express their sincere thanks to Dr. Prag Sharma, Clique Cluster Manager, CASL UCD, Dublin, Ireland for providing the face dataset. The authors would also like to acknowledge the support provided by Microsoft® Research India to carry out this study.

REFERENCES
1. K. N. Perumal et al., "Skin detection using color pixel classification with application to face detection: a comparative study," Proc. Int. Conf. on Computational Intelligence and Multimedia Applications, 2007, pp. 436-441.
2. B. Wang, C. Xiuying, and L. Cuixiang, "A robust method for skin detection and segmentation of human face," Proc. 2nd Int. Conf. on Intelligent Networks and Intelligent Systems, 2009, pp. 290-293.
3. C. Habis and F. Krsmanovic, "Explicit image filter," CS229 Final Project, Stanford University, 2005.
4. Y. C. Chang, C. T. Yung, and H. C. Hong, "Adaptive color space switching based approach for face tracking," Neural Information Processing, 2006, pp. 244-252.
5. A. Chaudhary, J. L. Raheja, and K. Das, "A vision based real time system to control remote robotic hand fingers," Proc. IEEE Int. Conf. on Computer Control and Automation, South Korea, 1-3 May 2011, pp. 118-122.
6. S. J. McKenna and M. Kenny, "A comparison of skin history and trajectory-based representation schemes for the recognition of user-specified gestures," Pattern Recognition, vol. 37, no. 5, 2004, pp. 999-1009.
7. A. Chaudhary, J. L. Raheja, K. Singal, and S. Raheja, "An ANN based approach to calculate robotic fingers positions," in Advances in Computing and Communications, CCIS, vol. 192, Springer Berlin Heidelberg, 2011, pp. 488-496.
8. M. J. Jones and J. M. Rehg, "Statistical color models with application to skin detection," IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 1999.
9. A. Chaudhary, J. L. Raheja, K. Das, and S. Raheja, "A survey on hand gesture recognition in context of soft computing," in Advanced Computing, CCIS, vol. 133, Springer Berlin Heidelberg, 2011, pp. 46-55.
10. A. Chaudhary, Manasa M. B. L., and J. L. Raheja, "Light invariant neuro-vision system for elder/sick people to express their needs into lingual description," Microsoft Research Annual Research Symposium TechVista, Jan 2012.
11. S. J. McKenna, R. Yogesh, and G. Shaogang, "Tracking colour objects using adaptive mixture models," Image and Vision Computing, vol. 17, no. 3, 1999, pp. 225-231.
12. D. Chai and B. Abdesselam, "A Bayesian approach to skin color classification in YCbCr color space," Proc. TENCON 2000, pp. 421-424.
13. S. S. Kamath and J. R. Joel, "Color image segmentation in RGB using vector angle and absolute difference measures," Proc. 14th European Signal Processing Conf., Florence, Italy, 2006.
14. W. Jianguo, W. Jiangtao, and Y. Jingyu, "Rotation-invariant face detection in color images with complex background," Computer Engineering, vol. 34, 2008, pp. 210-212.
15. Z. Shuzhen, S. Hailong, and X. Xiaoyan, "Face detection based on skin segmentation and features location," Computer Engineering and Applications, vol. 14, 2008, pp. 82-84.
16. C. C. Chiang and C. J. Huang, "A robust method for detecting arbitrarily tilted human faces in color images," Science, vol. 26, 2005, pp. 2518-2536.
17. C. Maoyuan, "Design of color image skin area segmentation system in the Matlab environment," Computer Applications, vol. 4, Nov 2007, pp. 128-130.
18. J. L. Raheja, M. B. L. Manasa, A. Chaudhary, and S. Raheja, "ABHIVYAKTI: Hand gesture recognition using orientation histogram in different light conditions," Proc. 5th Indian Int. Conf. on Artificial Intelligence, India, 2011, pp. 1687-1698.
19. M. R. Tabassum et al., "Comparative study of statistical skin detection algorithms for sub-continental human images," Information Technology Journal, vol. 9, no. 4, 2010, pp. 811-817.
20. B. D. Zarit, J. S. Boaz, and F. K. H. Quek, "Comparison of five color models in skin pixel classification," Int. Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 1999, pp. 58-63.
21. J. Brand and J. S. Mason, "A comparative assessment of three approaches to pixel-level human skin detection," Proc. 15th Int. Conf. on Pattern Recognition, 2000, pp. 1056-1059.
22. M. H. Yang and N. Ahuja, "Gaussian mixture model for human skin color and its application in image and video databases," Proc. SPIE: Storage and Retrieval for Image and Video Databases VII, vol. 3656, 1999, pp. 458-466.
23. J. L. Raheja, A. Chaudhary, and K. Singal, "Tracking of fingertips and centre of palm using KINECT," Proc. 3rd IEEE Int. Conf. on Computational Intelligence, Modelling and Simulation, Malaysia, 20-22 Sep 2011, pp. 248-252.
24. Z. Yu, O. C. Au, R. Zou, W. Yu, and J. Tian, "An adaptive unsupervised approach toward pixel clustering and color image segmentation," Pattern Recognition, vol. 43, no. 5, May 2010, pp. 1889-1906.
25. X.-Y. Wang, T. Wang, and J. Bu, "Color image segmentation using pixel wise support vector machine classification," Pattern Recognition, vol. 44, no. 4, 2011, pp. 777-787.
26. Z. Meng, J.-S. Pan, K.-K. Tseng, and W. Zheng, "Dominant points based hand finger counting for recognition under skin color extraction in hand gesture control system," Proc. 6th Int. Conf. on Genetic and Evolutionary Computing (ICGEC), IEEE, 2012, pp. 364-367.
27. L.-K. Lee, S.-Y. An, and S.-Y. Oh, "Robust fingertip extraction with improved skin color segmentation for finger gesture recognition in human-robot interaction," Evolutionary Computation (CEC), 2012, pp. 1-7.
28. P. Yogarajah, J. Condell, K. Curran, A. Cheddad, and P. McKevitt, "A dynamic threshold approach for skin segmentation in color images," Proc. 17th IEEE Int. Conf. on Image Processing, 2010, pp. 2225-2228.
29. S. A. Naji, R. Zainuddin, and H. A. Jalab, "Skin segmentation based on multi pixel color clustering models," Digital Signal Processing, 2012.
30. R. Khan, A. Hanbury, J. Stöttinger, and A. Bais, "Color based skin classification," Pattern Recognition Letters, vol. 33, no. 2, 2012, pp. 157-163.
31. J. C. Terrillon, N. S. Mahdad, H. Fukamachi, and S. Akamatsu, "Comparative performance of different skin chrominance models and chrominance spaces for the automatic detection of human faces in color images," Proc. 4th IEEE Int. Conf. on Automatic Face and Gesture Recognition, 2000, pp. 54-61.
32. D. Y. Huang, W. C. Hu, and S. H. Chang, "Gabor filter-based hand-pose angle estimation for hand gesture recognition under varying illumination," Expert Systems with Applications, vol. 38, no. 5, May 2011, pp. 6031-6042.
33. M. Soriano, M. Birgitta, H. Sami, and L. Mika, "Skin detection in video under changing illumination conditions," Proc. 15th Int. Conf. on Pattern Recognition, 2000, pp. 839-842.
34. J. Brand and J. Mason, "A comparative assessment of three approaches to pixel-level human skin detection," Proc. 15th Int. Conf. on Pattern Recognition, 2000, pp. 1056-1059.
35. S. L. Phung, A. Bouzerdoum, and D. Chai, "Skin segmentation using color pixel classification: analysis and comparison," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 1, 2005, pp. 148-154.
36. C. M. Bishop, Pattern Recognition and Machine Learning, New York: Springer Science and Business Media, 2006, pp. 225-239.
37. A. Chaudhary and A. Gupta, "Automated switching system for skin pixel segmentation in varied lighting," Proc. 19th IEEE Int. Conf. on Mechatronics and Machine Vision in Practice, Auckland, New Zealand, 28-30 Nov 2012, pp. 26-31.
38. J. Brand and J. Mason, "A comparative assessment of three approaches to pixel-level human skin detection," Proc. 15th Int. Conf. on Pattern Recognition, 2000, pp. 1056-1059.
39. Q. Liu and G.-Z. Peng, "A robust skin color based face detection algorithm," Proc. 2nd Int. Asia Conf. on Informatics in Control, Automation and Robotics (CAR), vol. 2, IEEE, 2010, pp. 525-528.

Fig 3: Outputs after applying all the Algorithms: (a) Input Image, (b) Neural Network, (c) Algorithm2, (d) Algorithm3: ΣMaxConnected[i]

Ankur Gupta received his bachelor's degree in Computer Science from BITS Pilani and has worked with VeriSign in the past. Currently he is with Tondo Imaging, Bengaluru. His research interests are vision based robotics, light invariant color segmentation and steganography. He also holds two US patents.

Dr. Chaudhary majored in Computer Engineering and received his PhD in Computer Vision. Currently he is an Assistant Professor in the Dept. of Computer Science, Truman State University, USA. His current research interests are vision based applications, intelligent systems and graph algorithms. He has more than fifty publications, has authored one book and has been a guest editor for CAEE, Elsevier. He is on the editorial board of several international journals and serves as Program Chair/TPC member for many conferences. He is also a reviewer for journals including IEEE Transactions. In the past, he has been associated with BITS Pilani and the University of Iowa, and has been a visiting faculty member/researcher at many universities. He has also worked with CITRIX R&D and AVAYA Inc. as a system programmer.
