Face Detection Using Color Based Segmentation and Morphological Processing – A Case Study

Dr. Arti Khaparde*, Sowmya Reddy Y., Swetha Ravipudi
*Professor of ECE, Bharath Institute of Science and Technology; Aurora's Technological and Research Institute, Hyderabad

Abstract

The face is our primary focus of attention for conveying identity, and the detection of human faces by computer systems has become a major field of interest. Detection of faces in a digital image has gained much importance in the last decade, with applications in many fields, and automating the process requires the use of various image processing techniques. In the present paper, a face detection method based on color based segmentation and morphological processing is proposed, with detailed experimental results. The algorithm uses color based segmentation followed by morphological operations such as opening, closing, and connected-component analysis. The color based segmentation takes advantage of patterns developed in only two color spaces, HSI and YCbCr, instead of three, and is followed by the morphological operations and template matching. The algorithm is applied to Indian faces. Experimental results show that the proposed algorithm detects human faces captured through video with an accuracy of 90% and recognizes the detected faces with an accuracy of 86-90%.

Keywords: face recognition, PCA, connected components, opening, template matching

1. Introduction

Computer vision is one of many areas that seek to understand the processes underlying human functionality and to replicate them, with the intention of complementing human life with intelligent machines. Face detection and localization is usually the first step in many biometric applications such as face recognition, video surveillance, and human-computer interfaces. Face detection, in general terms, means isolating human faces from their background and locating their exact position in an image. Algorithms based on approaches such as knowledge-based methods [5], template matching [6, 7], neural networks [8], eigenface decomposition [9], support vector machines [10], and pattern recognition [10] have been applied effectively to the detection of faces. Each algorithm has its own advantages and disadvantages in terms of accuracy, speed, and complexity, depending on the prior information and knowledge it uses.

The present paper uses a mixed approach: color based segmentation followed by template matching, which relies on spatial coordinates rather than shape [6] or local facial features such as the eyes and nose [10]. Color based approaches usually rely on either RGB, HSI, or YCbCr values [10, 11], but this algorithm takes four different values from two color spaces: H from the HSI model and Y, Cb, and Cr from the YCbCr model. To assess the accuracy of the detected faces, face recognition was performed on the cropped images output by the detection algorithm, using simple Principal Component Analysis (PCA). The algorithm was found to be reasonably fast, taking on average 25 to 40 seconds (including image capture from video) to run, with an accuracy of nearly 90%. The paper is organized as follows: Section 2 deals with color based segmentation, where the HSI and YCbCr color spaces are used to determine skin and non-skin areas. Section 3 deals with the morphological operation, i.e. opening, applied before connected component analysis. Section 4 deals with the design of the template and its matching. Section 5 gives an overview of the algorithm with results, and Section 6 concludes the paper with the future scope.

2. Color based segmentation

Segmentation is the subdivision of an image into its constituent regions or objects. The level to which the subdivision is carried out depends on the problem being solved. While different ethnic groups have different levels of melanin and pigmentation, the range of colors that human facial skin takes on is clearly a subspace of the total color space, assuming that the framed person's face does not have any unnatural color. The algorithm takes advantage of this face color correlation to limit the face search to areas of the input image that have at least the correct color components. The literature [10, 11] contains many color based face detection algorithms, but the proposed algorithm uses only two color spaces, namely HSI and YCbCr. The bounding ranges calculated for the values of H, Y, Cb, and Cr were used to generate the binary images.

2.1 HSI color space

While RGB may be the most commonly used basis for color description, it has the drawback that each of its coordinates is subject to luminance effects. Moreover, the red, green, and blue components are highly correlated and hence cannot by themselves provide relevant information about whether a particular pixel is skin or not. Since hue, saturation, and intensity can effectively describe color, the HSI color space is much more intuitive and provides color information in a manner closer to how humans think of colors. The algorithm uses a range of hue for skin detection. Using reference images (taken through a camera interfaced to the computer system), plots of the H, S, and I values for face and non-face pixels were drawn and studied to find the range of H for face pixels. The most effective range, used by the algorithm to classify a pixel as skin, is

0.05 < H < 0.07    (1)

otherwise the pixel is treated as non-skin. By applying a mask based on this rule, HSI segmentation was achieved (Fig. 4-Fig. 7).
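As an illustration only (not the authors' original code), a minimal MATLAB sketch of the hue rule in Eq. (1) could look as follows, assuming an RGB test image stored in a file here called test.jpg:

    rgb = imread('test.jpg');            % assumed input image from the camera interface
    hsv = rgb2hsv(rgb);                  % H, S, V planes, each in [0, 1]
    H   = hsv(:, :, 1);
    hueMask = (H > 0.05) & (H < 0.07);   % Eq. (1): pixel is skin if 0.05 < H < 0.07
    imshow(hueMask);                     % binary mask from the hue rule alone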

2.2 YCbCr color space

The YCbCr space separates the image into a luminosity component and chrominance components. Its main advantage is that the influence of luminosity can be removed while processing an image. Using the reference images, plots of the Y, Cb, and Cr values for face and non-face pixels were drawn and studied to find the ranges of Y, Cb, and Cr for face pixels. After experimenting with various thresholds, the best results were found by using the following rule for detecting skin pixels:

135 < Y < 145, 100 < Cb < 110, 140 < Cr < 150    (2)

The results are shown in Fig. 8-Fig. 11. Fig. 12 shows the final binary segmented output obtained using all the conditions on H, Y, Cb, and Cr.
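Continuing the sketch above, and assuming the four conditions are combined with a logical AND (the paper states that the final binary image uses all of them), the YCbCr rule of Eq. (2) could be applied as follows:

    ycc = rgb2ycbcr(rgb);                 % uint8 Y, Cb, Cr planes on a 0-255 scale
    Y   = double(ycc(:, :, 1));
    Cb  = double(ycc(:, :, 2));
    Cr  = double(ycc(:, :, 3));
    yccMask  = (Y > 135 & Y < 145) & (Cb > 100 & Cb < 110) & (Cr > 140 & Cr < 150);   % Eq. (2)
    skinMask = hueMask & yccMask;         % keep a pixel only if all four conditions hold
    imshow(skinMask);                     % corresponds to the binary image in Fig. 12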

3. Morphological Operations

The color segmentation generates a binary mask of the same size as the original image. However, some regions with skin-like colors also appear white: pseudo-skin pixels belonging to clothes, floors, buildings, etc. The goal of the connected component analysis is to exploit the connectivity of skin regions and identify the faces, which are then described by rectangular boxes. Ideally, each face is a connected region separated from the others, as shown in Fig. 14-Fig. 17. However, in some circumstances two or even three faces can be connected by ears or high-luminance hair. In addition, pseudo-skin pixels are scattered and generate hundreds of connected components, which cost unnecessary computation if they are identified as face candidates. Such connections are thin compared with the interior regions of a face and can be broken by morphological operations. After color segmentation, the leftover noise in the background can be smoothed using morphological processing; removing the unwanted specks also speeds up later processing. Hence the open operation (erosion followed by dilation) was performed using a structuring element. Erosion shrinks the selected area and expands the background, whereas dilation does the reverse. It was observed that the open operation results in a huge reduction in the number of small noisy specks. In particular, one row-direction and one column-direction erosion are applied so that more pixels are eroded along the columns; this is based on the observation that faces are usually connected more in the horizontal direction. In addition, within a face the connection between the parts above and below the eyes is fragile, and it is desirable not to erode it. The erosion operation acts similarly to a median filter and can remove pseudo-skin pixels because of their scattered and weak connectivity. Between the first and second erosions, holes are filled so that later erosions act only at the edges of the connected components and do not cause regions inside the face to fall apart. The final output of the opening is shown in Fig. 13.
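A simplified MATLAB sketch of the opening and labelling steps is given below (not the authors' code; the structuring-element shape and size are assumptions chosen only so that erosion is stronger along columns than along rows, and the intermediate directional erosions and hole filling described above are omitted for brevity):

    se = strel('rectangle', [5 3]);            % taller than wide, so columns erode more
    openMask = imopen(skinMask, se);           % open = erode then dilate (cf. Fig. 13)
    [labels, numRegions] = bwlabel(openMask);  % label the remaining connected components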

4. Template matching

Once the face candidates are separated, template matching is used not only as a final detection scheme, but also for locating the centroid of the faces. The idea of template matching is to perform cross-covariance between the given image and a template that is representative of the image. In face detection, the template should therefore be a representative face, either an average face computed over all faces in the training images or an eigenface. The template matching measures the level of similarity and concludes whether a region is a human face or not. The algorithm presented here uses a template face to test each segmented region that has at least one hole and a height-to-width ratio in the range of 0.6 to 1.2. These parameters are calculated as follows. Using the binary image, with white regions showing skin and black areas showing non-skin, each skin region is labeled. A segmented skin region is then evaluated under the assumption that a face contains at least one hole, since it contains eyes, a nose, and a mouth. If no hole is present, the region is treated as pseudo-skin.

The number of holes in a region is determined using the following equation:

E = C - H    (3)

where E is the Euler number (determined by the bweuler function in MATLAB), C is the number of connected components, whose value is equal to 1 since only one segmented region is considered at a time, and H is the number of holes.

If a hole is detected, the height and width of the segmented region are calculated, using the fact that the ratio of height to width for a face is usually about 1. The present algorithm considers a component to be a face if this ratio lies between 0.6 and 1.2. Once the template face is placed inside the segmented region, it is necessary to see how well the template fits; for this, two-dimensional correlation is used. After repetitive testing, a good correlation threshold was found to be 0.6: if the correlation between the template face matrix and the segmented region is 0.6 or higher, the region is treated as a face.
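A minimal MATLAB sketch of this per-region test, continuing from the sketches above, might look as follows. The average-face template avgFace (a grayscale image) and its handling are assumptions; the paper does not give the template itself.

    gray  = rgb2gray(rgb);
    stats = regionprops(labels, 'BoundingBox', 'Image');
    for i = 1:numRegions
        regionMask = stats(i).Image;                    % binary patch of region i
        holes = 1 - bweuler(regionMask);                % Eq. (3): H = C - E with C = 1
        bb    = stats(i).BoundingBox;                   % [x y width height]
        ratio = bb(4) / bb(3);                          % height-to-width ratio
        if holes >= 1 && ratio >= 0.6 && ratio <= 1.2
            patch = imcrop(gray, bb);                   % candidate face region
            tmpl  = imresize(avgFace, size(patch));     % fit the assumed template to the region
            if corr2(double(patch), double(tmpl)) >= 0.6
                fprintf('Region %d accepted as a face\n', i);
            end
        end
    end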

5. Design approach and results

The basic principle behind the design is to determine whether a particular pixel falls inside or outside the skin ranges of hue (H), luminance (Y), the difference between the blue component and a reference value (Cb), and the difference between the red component and a reference value (Cr). Fig. 1 gives the steps involved, from acquiring the image up to its recognition: the test image is passed through color based segmentation, morphological processing, connected component analysis, and template matching to obtain the detected face, which is then fed to PCA for face recognition.

The flow chart of the algorithm developed for face detection is given in Fig. 2. Fig. 3 to Fig. 17 give the intermediate outputs of the algorithm, and Fig. 18 shows the final face detected output.
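For completeness, a minimal MATLAB sketch of PCA (eigenface) recognition on the cropped detections, as summarized in Fig. 1, is given below. It is an illustration only: trainFaces (a matrix whose columns are vectorized, equal-sized training faces), probe (a cropped and resized detection from the stage above), and the number of retained components are all assumptions, not the authors' settings.

    X     = double(trainFaces);                        % each column is one vectorized training face
    meanF = mean(X, 2);
    A     = X - meanF;                                 % center the training data
    [U, ~, ~] = svd(A, 'econ');                        % columns of U are the eigenfaces
    k     = 20;                                        % number of principal components kept (assumed)
    W     = U(:, 1:k)' * A;                            % training weights in eigenface space
    w     = U(:, 1:k)' * (double(probe(:)) - meanF);   % project the probe face
    [~, bestMatch] = min(vecnorm(W - w));              % nearest training face = recognized identity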

Fig. 1: Block diagram of the overall system (test image, color based segmentation, morphological processing, connected component analysis, template matching, face detection, PCA, face recognition)
Fig. 2: Flow chart of the face detection algorithm
Fig. 3: Input image. Fig. 4: HSV image. Fig. 5: H plane. Fig. 6: S plane. Fig. 7: V plane.
Fig. 8: YCbCr image. Fig. 9: Y plane. Fig. 10: Cb plane. Fig. 11: Cr plane. Fig. 12: Segmented binary image.
Fig. 13: After opening. Fig. 14: First connected component. Fig. 15: Second connected component. Fig. 16: Third connected component. Fig. 17: Fourth connected component. Fig. 18: Face detected output.

6. Conclusion and future scope


The above algorithm was implemented in MATLAB on more than 50 test images, which were given as input to the algorithm directly from the camera interfaced to the system. As an extension of detection, recognition of faces was also carried out on the cropped images from the detected output, using PCA. The efficiency of face detection was found to be about 90%, while that of face recognition on the detected cropped images was between 80% and 90%. The run time for face detection was 35 to 40 seconds, while for face recognition it was between 60 and 90 seconds. Hence it can be concluded that the present algorithm demonstrates good performance with respect to speed, zero repeats, low false positive rate, and high accuracy. One problem faced during the implementation is that when two persons are very close to one another, the combined region is sometimes given a single label. To correct this type of problem, a neural network with rigorous training could be used. The present algorithm works well on frontal views of faces; as future scope, it can be extended to side-face detection as well. This would be relatively simple to achieve: the only extra work would be to create a template for an average side view of a person and repeat the process.

7. References


[1] R. Gonzalez and R. Woods, Digital Image Processing, second edition, Prentice Hall, 2002.
[2] M. Elad et al., "Rejection based classifier for face detection", Pattern Recognition Letters, 23, 2002.
[3] Ming-Hsuan Yang and Narendra Ahuja, "Detecting Human Faces in Color Images", Beckman Institute, University of Illinois, 2002.
[4] L. Sirovich and M. Kirby, "Low dimensional procedure for the characterization of human faces", Journal of the Optical Society of America, vol. 4, 519, 1987.
[5] G. Yang and T. Huang, "Human face detection in complex background", Pattern Recognition, vol. 27, no. 1, pp. 53-63, 1994.
[6] I. Craw, D. Tock, and A. Bennett, "Finding face features", Proc. Second European Conf. Computer Vision, pp. 92-96, 1992.
[7] A. Lanitis, C. J. Taylor, and T. F. Cootes, "An automatic face identification system using flexible appearance models", Image and Vision Computing, vol. 13, no. 5, pp. 393-401, 1995.
[8] H. Rowley, S. Baluja, and T. Kanade, "Neural network based face detection", Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 203-208, 1996.
[9] M. Turk and A. Pentland, "Eigenfaces for recognition", J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[10] Ming-Hsuan Yang et al., "Detecting faces in images: a survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, 2002.
[11] Sanjay Singh et al., "A robust skin color based face detection algorithm", Tamkang Journal of Science and Engineering, vol. 6, no. 4, pp. 227-234, 2003.
