
Fast and Fully Automatic Ear Detection Using Cascaded AdaBoost

S. M. S. Islam, M. Bennamoun and R. Davies
School of Computer Science and Software Engineering, The University of Western Australia
35 Stirling Hwy, Crawley, WA 6009
{shams, bennamou, rowan}@csse.uwa.edu.au

Abstract

Ear detection from a profile face image is an important step in many applications, including biometric recognition. However, accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 × 10^-6) false positive rate. The detector is also very fast and relatively robust to occlusions and to degradation of the ear images (e.g. motion blur). The detection process is fully automatic and requires no manual intervention.

1. Introduction

The use of biometrics for human recognition is rapidly gaining in popularity. The ear is a promising biometric trait: its shape is unique to individuals and generally unaffected by changing facial expressions, the use of cosmetics or eye glasses, and aging between 8 and 70 years [10]. It also requires only low spatial resolution and has a uniform distribution of colour. It can be used separately or in combination with the face for effective recognition or tracking in many applications, including national ID, security, surveillance and law enforcement applications. In any case, a pre-requisite is to detect the ear correctly in the captured image(s).
Detecting 2-D ears in arbitrary side face images is a challenging problem because ear images can vary in appearance under different viewing and illumination conditions. Most of the current research on ear biometrics assumes that the ear has already been correctly detected, and there are only a few reported methods for accurately detecting the ear in side face images [11]. One of the earliest such methods uses Canny edge maps to detect the ear contour [4]. A somewhat different approach was proposed by Hurley et al. [9] using the “force field transformation”.

Another technique, based on a modified active contour algorithm and an ovoid model, was proposed by Alvarez et al. [3]. Yan and Bowyer [23] proposed using a two-line landmark, with one line along the border between the ear and the face and the other from the top of the ear to the bottom, in order to detect and crop the ear. In their recent work [24], they proposed taking a predefined sector from the nose tip to locate the ear region. They cropped the non-ear portion out of that sector by skin detection and detected the ear pit using Gaussian smoothing and curvature estimation. Then, they applied an active contour algorithm to extract the ear contour. The system is automatic but fails if the ear pit is not visible.
Most of the approaches mentioned above localize the ear within a small portion of the side of the face around the ear. But it is often required, particularly for non-intrusive applications, to detect the ear in a whole side face image. Also, the presence of occlusions, due to hair and ear-rings for instance, has not been properly addressed. Moreover, many of the above approaches fail in the presence of poor quality input images. Finally, most of them are not completely automatic and not fast enough to be applied in real-time applications.
To overcome similar problems, the AdaBoost algorithm [8, 18] has successfully been used for face detection. A near real-time implementation of this algorithm with a cascaded structure [22] was found to be a fast and accurate method for face detection. This paved the way for its use in the detection and recognition of various other objects as well. It has been used for detecting or tracking the ball in soccer games [20], places in 2D maps [17, 15], pedestrians [14], eyes [16], mouths [12] and hands [5]. But to the best of our knowledge, no serious attempt has been made to use this algorithm for ear detection. In a paper on ear recognition [7], the authors simply mention that they used the Haar-based object detector provided by the OpenCV library for ear region detection. No further information was provided, including a description of the training set, the feature and parameter selection, or the performance of the detection. Although OpenCV [1] and other available implementations of AdaBoost, such as VIPBase [2], claim that they can be used for objects other than faces, this has not been demonstrated, and our tests on ears did not produce satisfactory performance. Hence, we were motivated to determine the right way to instantiate the general AdaBoost approach with the specifics required to specialize it for ear detection.
In this work, we modified and adapted the cascaded AdaBoost [22] approach to detect the ear in 2D profile face (side view) images. We used Haar-like rectangular features representing grey-level differences as the weak classifiers and computed them efficiently using the Integral Image [22] representation of the input images. AdaBoost is used to select the best weak classifiers and to combine them into strong classifiers. A cascade of classifiers is built as the final detector, where a classifier is applied only when all previous classifiers accept a particular image (or a sub-window of an image). We achieved a 100% detection rate with a False Positive Rate (FPR) of 5 × 10^-6 while testing on a set of 203 profile face images. The system is automatic in that it requires no manual intervention during the detection process. It is also robust to occlusions and degradations to a large extent, and fast enough to be used in real-time applications.
The paper is organized as follows. After the description of the general framework of the proposed ear detection approach in the next section, the implementation and training setup are detailed in Section 3. Results are reported and discussed in Section 4. Section 5 concludes.

2. Proposed Ear Detection Framework

In the field of boosting, a weak classifier is defined as a “rough and moderately inaccurate” predictor, but one that can predict better than chance. When several such classifiers are strategically combined, the resulting more discriminative classifier is called a strong classifier. In this section, we describe these classifiers and how they are formed using the AdaBoost algorithm to build our proposed cascaded ear detector.

2.1. Weak Classifiers

In our proposed ear detection framework, weak classifiers are built from rectangular features. These features are derived from the idea of Haar wavelets, a natural set of basis functions that compute the difference of intensity in neighbouring regions [6]. Various types of rectangular features have been proposed for AdaBoost based face detectors. Considering the difference between the shape of the ear and the face, specifically the curves of the helix and the anti-helix and also the ear pit, we have chosen the eight types of features shown in Figure 1. Each feature consists of a number of horizontally or vertically adjacent rectangular regions of the same size and shape. The features with two rectangles are used to detect horizontal and vertical edges. Similar features with three and four rectangles are used for different types of lines and curves. Finally, the centre-surround feature (F in Figure 1) is used to detect the ear pit.

Figure 1. The features used in training the AdaBoost (eight feature types, labelled A to H).

The chosen feature types are shifted horizontally and vertically along a pre-defined window (to which all the input samples are normalized), and a feature is enumerated for each location and type. Thus, a set of 96,413 features was created in this work for a 24 by 16 window size and a shift of one pixel. The value of a feature is computed by subtracting the sum of the pixels in the grey region(s) from that of the dark region(s), except in the case of C, E and F in Figure 1, where the sum in the dark region is first multiplied by 2, 2 and 8 respectively to make the area of this region equal to that of the grey region(s). To compute the feature values efficiently, all the input images (during training) and the sub-windows (during testing) are represented as “Integral Images” [22], in which each pixel holds the sum of the pixels above and to the left of it. All the features are represented as “box car images” with a value of ‘1’ inside the rectangles of interest and ‘0’ outside. The feature value is then obtained by taking the dot product of these two, following the “boxlets” work described in [19].
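For concreteness, the following minimal Python sketch shows how an integral image reduces any rectangle sum to four array lookups, and how a two-rectangle feature value could then be computed. The function names and the sign convention (which half of the feature is “dark”) are our assumptions, not details given in the paper.

```python
import numpy as np

def integral_image(img):
    # Zero-padded integral image: ii[y, x] holds the sum of img[:y, :x],
    # i.e. all pixels above and to the left.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum over the w-by-h rectangle with top-left corner (x, y): four lookups.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    # A type-A-like feature: left rectangle taken as dark, right as grey;
    # value = dark sum - grey sum.  For the unequal-area types (C, E, F),
    # the dark sum would first be multiplied by 2, 2 or 8 as in the text.
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)

# Example: one feature evaluated on a random 24-by-16 training window.
window = np.random.randint(0, 256, (24, 16))
ii = integral_image(window)
print(two_rect_feature(ii, 0, 0, 8, 24))
```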

2.2. Strong Classifiers

A ‘strong’ classifier is constructed by combining a set of selected ‘weak’ classifiers using the AdaBoost (Adaptive Boosting) algorithm. The algorithm uses supervised learning with the wrapper method of feature selection. At each iteration, it selects the weak classifier that is best with respect to the weighted error over the input samples. Later classifiers are tuned in favour of the samples misclassified by previous classifiers (which is why it is called adaptive): samples that are misclassified in one iteration receive higher weights in the subsequent iteration.
For selecting the best weak classifier, an optimum threshold ($\Theta$) is computed by minimizing the classification error incurred by thresholding a particular feature value $f(s)$, where $s$ is a positive or negative sample, or a sub-window of a larger image. The classification is made according to the following equation:

$$c(s, f, p, \Theta) = \begin{cases} 1 & \text{if } p\, f(s) < p\, \Theta \\ 0 & \text{otherwise} \end{cases}$$

where $p$ is a polarity indicating the direction of the inequality. Finally, the strong classifier is formed by taking a weighted combination of the selected weak classifiers followed by a threshold, as in the following equation:

$$C(s) = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} \alpha_i c_i(s) \ge \frac{1}{2} \sum_{i=1}^{n} \alpha_i \\ 0 & \text{otherwise} \end{cases}$$

where $\frac{1}{2} \sum_{i=1}^{n} \alpha_i$ is the AdaBoost threshold and $\alpha_i$ is computed from the classification error at each of the $n$ boosting rounds.
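A minimal sketch of these two equations in Python might look as follows. The weight formula used here, alpha = log((1 - err) / err), is the standard discrete-AdaBoost choice (alpha = log(1/beta) in [22]) which the text only alludes to.

```python
import math

def weak_classify(f_val, theta, p):
    # c(s, f, p, Theta) = 1 if p*f(s) < p*Theta, else 0.
    return 1 if p * f_val < p * theta else 0

def alpha_from_error(err):
    # Weight of a selected weak classifier from its weighted error
    # (standard discrete AdaBoost; an assumption on our part).
    return math.log((1.0 - err) / err)

def strong_classify(feature_vals, thetas, polarities, alphas):
    # C(s) = 1 iff the weighted vote reaches half the sum of the alphas.
    vote = sum(a * weak_classify(f, t, p)
               for f, t, p, a in zip(feature_vals, thetas, polarities, alphas))
    return 1 if vote >= 0.5 * sum(alphas) else 0
```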

2.3. Cascade of Classifiers

To speed up detection by using only a small number of features to reject most of the negative sub-windows, Viola and Jones proposed a training algorithm for building a cascaded detector [22]. Jointly optimizing the number of stages, the number of features per stage and the threshold of each stage for a given detection and false positive rate is a tremendously difficult problem. However, good results were reported in [22] by simply aiming for a fixed maximum FPR $f_m$ and minimum detection rate $d_{min}$ for each stage. These follow from the overall target FPR $F_t < (f_m)^n$ and detection rate $D > (d_{min})^n$, where $n$ is the number of stages, typically 10 to 50. The performance of the strong classifier being built in a stage is evaluated on a validation set (which includes samples not used in the training) after each successive addition of a pre-defined number of weak classifiers. The process continues until the target rates of that stage are met. The threshold of the stage is adjusted at each step to maintain $d_{min}$. The negative sets for training and validation of the subsequent stages are collected from the false positives produced by the strong classifier of the previous stage.
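As a quick sanity check of these relations, using the parameter values that Section 3.3 reports for our training, the number of stages needed to reach the overall targets can be estimated as follows:

```python
import math

# Per-stage targets: with n stages, the overall rates are bounded by
# F_t < f_m**n and D > d_min**n, hence n >= log(F_t) / log(f_m).
F_t, f_m, d_min = 0.001, 0.7079, 0.99899   # values from Section 3.3

n = math.ceil(math.log(F_t) / math.log(f_m))
print(n)            # about 20 stages to guarantee the target FPR
print(f_m ** n)     # bound on the overall false positive rate
print(d_min ** n)   # bound on the overall detection rate (about 0.98)
```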

2.4. Ear Detection with the Cascaded Classifiers

Once the training was performed, we used the trained classifiers of all the stages in a cascaded manner to build our ear detector. The detector is scanned over the test profile face images at different sizes and locations. A classifier in the cascade is applied to a sub-window of the test image only when that sub-window has been accepted as positive (ear) by the classifiers of all previous stages, and the sub-window is finally accepted only when it passes through all of them. To detect ears of various sizes, instead of resizing the test image we scaled up the detector along with the corresponding features. This is more time-efficient than the conventional pyramid approach, as illustrated in [22], particularly in conjunction with the Integral Image method.
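The scanning loop could be sketched as follows. This is a simplified illustration with our own names; `cascade_accepts` stands in for the stage-by-stage cascade test, and the default scanning parameters mirror those reported later in Section 4.2.

```python
def scan_image(ii, cascade_accepts, base_h=24, base_w=16,
               first_scale=5.0, scale_factor=1.25, step=1.5):
    # Slide the detector window over the image at increasing scales.
    # Rather than building an image pyramid, the window (and, in the real
    # detector, its features) is scaled up, which pairs well with the
    # Integral Image since a rectangle sum costs the same at any scale.
    img_h, img_w = ii.shape[0] - 1, ii.shape[1] - 1
    detections, scale = [], first_scale
    while base_h * scale <= img_h and base_w * scale <= img_w:
        h, w = int(base_h * scale), int(base_w * scale)
        shift = max(1, int(round(step * scale)))  # shift grows with scale
        for y in range(0, img_h - h + 1, shift):
            for x in range(0, img_w - w + 1, shift):
                if cascade_accepts(ii, x, y, scale):
                    detections.append((x, y, w, h))
        scale *= scale_factor
    return detections
```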

The two ears of a subject are essentially bilaterally symmetric (e.g., Yan and Bowyer [23] report 90% accuracy for symmetric ear recognition). Therefore, instead of training the system on both ears, only left ears were used for training; the features constituting the detector can simply be flipped to detect right ears (as sketched below). If the rectangular detector (24 by 16, or its scaled-up size) matches a sub-window of the image, a rectangle is drawn to show the detection. The detected region can then be cropped or extracted for further processing, as appropriate for recognition or other purposes.
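Flipping a trained feature amounts to mirroring each of its rectangles about the vertical centre line of the detection window; a one-line sketch, assuming an (x, y, w, h) rectangle format of our own choosing:

```python
def flip_rect(x, y, w, h, window_w=16):
    # Mirror a feature rectangle about the window's vertical centre line,
    # so a detector trained on left ears responds to right ears instead.
    return window_w - x - w, y, w, h
```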

2.5. Multi-detection Integration

Since the detector scans a region of the test image at different scales and shift sizes, there is the possibility of multiple detections of the ear or of ear-like regions. Such multiple detections are integrated using our clustering algorithm based on the percentage of overlap (Figure 2). In this algorithm, pairs of rectangles representing the detected sub-windows are clustered together if the mutual area shared between them is larger than a pre-defined threshold minOv (0 < minOv < 1). Based on the observation that the number of true detections over the ear region is larger than the number of false detections on ear-like regions (if any), we added an option to the algorithm to avoid such false positives by keeping only the cluster containing the maximum number of rectangles. This is appropriate when only one ear needs to be detected, which is the case for most recognition applications.

0. (Input) Given a set of detected rectangles rects, the minimum percentage of overlap required minOv, and the option for avoiding false detections opt.
1. (Initialize) Set the intermediate rectangle set tempRects empty.
2. (Multi-detection integration procedure)
   2.a. Compute the number of rectangles N in rects.
   2.b. If N > 1:
        i.   Find the areas of intersection of the first rectangle in rects with all rectangles.
        ii.  Find the percentages of the area overlapped, perOv.
        iii. Find the rectangles combRects, and their number intN, for which perOv >= minOv.
        iv.  Find the mean of combRects as mRects.
        v.   Store mRects and intN in tempRects.
        vi.  Remove the rectangles in combRects from rects.
        vii. Go to step 2.a.
        End if
   2.c. If intN > 1 and opt == ‘yes’:
        i.   Find the rectangle fRect in tempRects for which intN is maximum.
        ii.  Remove all the rectangle(s) except fRect from tempRects.
        End if
3. (Output) Output the rectangle(s) in tempRects.

Figure 2. Algorithm for the integration of multiple detections.
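A runnable Python version of the Figure 2 procedure might look like the following; the (x, y, w, h) rectangle format and measuring overlap as a fraction of the seed rectangle's area are our assumptions about details the figure leaves open.

```python
import numpy as np

def integrate_detections(rects, min_ov=0.5, avoid_false=True):
    # rects: list of detections as (x, y, w, h) tuples.
    rects = [tuple(map(float, r)) for r in rects]
    temp = []                                  # (mean rectangle, cluster size)
    while rects:
        sx, sy, sw, sh = rects[0]              # seed: first remaining rectangle
        def overlap(r):
            # Intersection area as a fraction of the seed's area.
            ix = max(0.0, min(sx + sw, r[0] + r[2]) - max(sx, r[0]))
            iy = max(0.0, min(sy + sh, r[1] + r[3]) - max(sy, r[1]))
            return ix * iy / (sw * sh)
        comb = [r for r in rects if overlap(r) >= min_ov]       # step 2.b.iii
        temp.append((tuple(np.mean(comb, axis=0)), len(comb)))  # steps 2.b.iv-v
        rects = [r for r in rects if r not in comb]             # step 2.b.vi
    if avoid_false and temp:
        temp = [max(temp, key=lambda c: c[1])]  # keep the largest cluster (2.c)
    return [c[0] for c in temp]

# Example: three overlapping true detections and one stray false positive;
# the false positive's singleton cluster is discarded by the option.
print(integrate_detections([(10, 10, 16, 24), (12, 11, 16, 24),
                            (11, 12, 16, 24), (200, 50, 16, 24)]))
```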

3. Training

In this section, the training data set used to build the proposed ear detector is described. We also discuss the preprocessing stage, the chosen training parameters and other implementation aspects.


3.1. Data Set


The positive training set is built from 5000 ear images cropped from the profile face images of different databases, covering a wide range of races, sexes, appearances, orientations and illuminations. This set includes 429 images from the University of Notre Dame (UND) biometrics database [21, 24], 659 from the NIST Mugshot Identification Database (MID), 573 from XM2VTSDB [13], 201 from the USTB, 15 from the MIT-CBCL, and 188 from the UMIST database. It also includes around 3000 images synthesized by rotating some images from the USTB, the UND and the XM2VTSDB databases by -15 to +15 degrees.
Our negative training set for the first stage of the cascade includes 10,000 images randomly chosen from a set of around 65,000 non-ear images. These images were mostly cropped from profile face images, excluding the ear area. We also included some images of trees, birds and landscapes randomly downloaded from the web. Examples of the positive and negative image sets are shown in Figure 3. Another set of 6000 such non-ear images was used to find the false positives at the end of each stage of the cascade. A set of 5000 sub-windows was randomly chosen from the available false positives (when more than this number were available) as negative samples for the second and subsequent stages.

The validation set used for computing the detection and false positive rates during the training process includes 5000 positives (cropped and synthesized ear images) and 6000 negatives (non-ear images). The negatives for the first stage are randomly chosen from a set of 12,000 images not included in the training set. For the second and subsequent stages, negatives are randomly chosen from the false positives found by the classifier of the previous stage and not used in the negative training set.

Figure 3. Examples of some ear (top) and non-ear (bottom) images used in the training.

3.2. Preprocessing

As mentioned earlier, the input images are collected from different sources and vary in size and intensity values. Therefore, all the input images are scale normalized to the chosen input pattern size. Viola and Jones reported a square input pattern of size 24 by 24 as the most suitable for detecting frontal faces [22]. Considering the rectangular shape of the ear, we instead used an input pattern of size 24 by 16. The variance of the intensity values of the images is also normalized to minimize the effect of lighting. Similar normalization is performed for each sub-window scanned during testing.
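A minimal sketch of this normalization, assuming 8-bit greyscale input (the helper name is ours):

```python
import numpy as np
from PIL import Image

def normalize_sample(img, out_h=24, out_w=16):
    # Scale-normalize to the 24-by-16 input pattern, then normalize the
    # intensity variance to reduce the effect of lighting.
    small = np.asarray(Image.fromarray(img).resize((out_w, out_h)),
                       dtype=np.float64)
    std = small.std()
    return (small - small.mean()) / (std if std > 0 else 1.0)
```

During scanning, the same effect is usually obtained more cheaply by normalizing feature values with a mean and standard deviation computed from an auxiliary squared integral image, as in [22].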

3.3. Structure of the Cascade

The structure of the cascade, in terms of the number of stages and the number of features (i.e. weak classifiers) per stage, depends on the values of the pre-defined parameters mentioned in Section 2.3. However, one can hand-optimize the critical stages by changing these requirements, ultimately arriving at a cascade of a different size. The parameters chosen in our AdaBoost training are $f_m = 0.7079$, $F_t = 0.001$, $d_{min} = 0.99899$ and $D = 0.95$. However, we constrained the first four stages to be completed with 10, 20, 60 and 80 features respectively, so that most false positives are rejected quickly with a small number of features when testing on profile faces. To make the training faster, validation was performed after every ten features added for the first ten stages, and after every 25 features for the remaining stages. The detection and false positive rates computed during the validation of each stage decreased gradually towards their targets. The training finished at stage 18 with a total of 5035 rectangular features, including 1425 features in the last stage.

3.4. Training Time

The training process involves huge repetitive computations, due to the large training set and the very low target FPR, and would take several weeks on a single PC. To speed up the process, we distributed the job over a network of around 30 PCs using MatlabMPI. As a result, the training time was reduced to the order of days.


4. Performance Evaluation


In this section, the results of the ear classification and detection are reported and discussed. The performance is measured using the detection rate and the corresponding FPR, or False Acceptance Rate (FAR). The Receiver Operating Characteristic (ROC) curve is used to illustrate the relationship between these two rates.


4.1. Classification Performance

The classification performance of the selected strong classifiers of all the stages was evaluated with a set of ear and non-ear images cropped from profile images that were not used in the training. Some images were also synthesized from the cropped images by rotations of -5 to +5 degrees. In total, there were 4423 ear and 5000 non-ear images. The FPR was computed by dividing the number of negative samples detected as positives by the total number of negative samples. The result of the classification is shown in the ROC curve in Figure 4. The curve shows that although the classifiers of the earlier stages allow a number of false positives, almost all of these are eliminated by the classifiers of the later stages. This is because the classifiers of the subsequent stages are trained to correctly classify the samples misclassified by the previous stages. The full cascade of 18 stages achieves a detection rate of 97.11% with no false positives.

Figure 4. The ROC curve (correct detection rate versus false positive rate) showing the classification performance of the trained classifiers.

Figure 5. The relationship between the false positive rate and the number of stages in the cascade.

4.2. Detection Performance

The detection performance of the cascaded detector was tested on 203 profile images from the UND face database, each with a size of 640 by 480 pixels. These images were carefully kept separate from the training and validation sets. The FPR is computed by dividing the number of false detections by the total number of sub-windows scanned in all the test images. The accuracy and speed of the detection are reported and discussed below.

4.2.1 Accuracy

Our proposed cascaded ear detector correctly detected the ears in all the test images taken from the UND database, i.e. the detection rate was 100%. Since the average ear size in this database is much larger than that of our detector, we started the scanning with a scale of 5, a scale factor of 1.25 and a step size of 1.5. The ear detector window scanned a total of 1,308,335 sub-windows in all the test images. After clustering the multi-detections, only seven sub-windows were falsely detected as ears, resulting in an FPR of 5 × 10^-6. Those seven false positives were easily eliminated by using the option of avoiding false detections during the multi-detection integration, as mentioned in Section 2.5. The ROC curve is not very meaningful for the UND test data set, since we achieved a 100% detection rate on it. Instead, the relationship between the FPR and the number of stages in the cascade is shown in Figure 5; the FPR decreases exponentially with an increase in the number of stages. Some results of detection using our detector are shown in Figure 6. Each detection is shown by the rectangles in yellow lines, while in cases of multi-detection the integrated detection is shown in bold dotted cyan lines.
The proposed ear detector also works well in the presence of partial occlusions involving hair and earrings, as shown in Figure 7. Out of 104 selected occluded images (with 990,288 sub-windows) from the XM2VTSDB face database that were not used in the training, 54 were correctly detected (with only six false detections). The detector failed only in the cases where the majority of the ear was occluded, as shown in Figure 8. However, such occluded images are likely not to be as useful for applications such as biometric recognition.

Figure 6. A sample of detections using our proposed ear detector. Single detections are shown on the left and a case of multi-detection with the corresponding integration on the right (best seen in colour).

Figure 7. Detection of occluded ear images by the proposed detector (each inset shows an enlargement of the corresponding detected ear).

The detector is also robust to degradation of images such as the motion blur shown in the bottom right example in Figure 7.

Figure 8. Examples of the test images (mostly occluded) for which the detector failed.

4.2.2 Speed of the Detector

The speed of the detector depends on the step size, the shift and scale factors and the first scale. With an initial step size of 1.5 and a scale of 5 with a scale factor of 1.25, the proposed ear detector scans a 480 by 640 test image in about 26.4 seconds in the current implementation, using MATLAB R2006a on a Pentium IV 2.8 GHz PC. The speed would likely improve considerably by recoding the detection program in C.

5. Conclusion

Our method for ear detection based on the cascaded AdaBoost algorithm is fast, accurate and robust. While the training phase is time consuming due to the large number of samples and features, the resulting detector is very fast. Training with a large variety of ear images from various ear databases made the detector robust to some rotations, occlusions and also to the degradation of image quality to a significant extent. It also does not require any manual intervention. Thus, it is suitable for real-time surveillance and non-intrusive biometric applications. The performance of the detector might further be enhanced by training with samples normalized to an input pattern size common to the target application. This would reduce the amount of re-scaling of the trained features, which affects the generalization performance of AdaBoost. The FPR might be further reduced by adding more negative images to the set used to find false positives during training.

Acknowledgements

The authors would like to thank A. S. Mian, Prof R. Owens and Prof W. Snyder for their helpful discussions, and K. M. Tracey, J. Wan and A. Chew for their technical assistance. They also acknowledge the use of the NIST, the XM2VTSDB, the UMIST, the USTB and the UND face databases. This research is sponsored by ARC grant DP0344338.

References

[1] OpenCV Library. http://sourceforge.net/projects/opencvlibrary/.
[2] VIPBase. http://vipbase.net/.
[3] L. Alvarez, E. Gonzalez, and L. Mazorra. Fitting ear contour using an ovoid model. In Proc. of the Int'l Carnahan Conf. on Security Technology, pages 145–148, Oct. 2005.
[4] M. Burge and W. Burger. Ear biometrics in computer vision. In Proc. of ICPR 2000, pages 822–826, Sept. 2000.
[5] Q. Chen, N. D. Georganas, and E. M. Petriu. Real-time vision-based hand gesture recognition using Haar-like features. In Proc. of the IEEE Instrumentation and Measurement Technology Conf., pages 1–6, May 2007.
[6] C. K. Chui. An Introduction to Wavelets. Academic Press, San Diego, 1992.
[7] A. Fabate, M. Nappi, D. Riccio, and S. Ricciardi. Ear recognition by means of a rotation invariant descriptor. In Proc. of ICPR 2006, Vol. 4:437–440, Aug. 2006.
[8] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Proc. of the 2nd European Conf. on Computational Learning Theory, March 1995.
[9] D. J. Hurley, M. S. Nixon, and J. N. Carter. Force field feature extraction for ear biometrics. Computer Vision and Image Understanding, 98(3):491–512, June 2005.
[10] A. Iannarelli. Ear Identification. Forensic Identification Series. Paramount Publishing Company, Fremont, California, 1989.
[11] S. Islam, M. Bennamoun, R. Owens, and R. Davies. Biometric approaches of 2D-3D ear and face: a survey. In Proc. of the Int'l Conf. on Systems, Computing Sciences and Software Engineering (SCSS 2007), Dec. 2007.
[12] R. Lienhart, L. Liang, and A. Kuranov. A detector tree of boosted classifiers for real-time object detection and tracking. In Proc. of the Int'l Conf. on Multimedia and Expo (ICME '03), Vol. 2:277–280, July 2003.
[13] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre. XM2VTSDB: the extended M2VTS database. In Proc. of the 2nd Conf. on Audio and Video-based Biometric Personal Verification, Springer-Verlag, New York, pages 1–6, May 1999.
[14] G. Monteiro, P. Peixoto, and U. Nunes. Vision-based pedestrian detection using Haar-like features. Robotica 2006 - Scientific Meeting of the 6th Robotics Portuguese Festival, Portugal, April-May 2006.
[15] O. Mozos, C. Stachniss, and W. Burgard. Place classification of indoor environments with mobile robots using boosting. In Proc. of the National Conf. on Artificial Intelligence, pages 1306–1311, 2005.
[16] Z. Niu, S. Shan, S. Yan, X. Chen, and W. Gao. 2D cascaded AdaBoost for eye localization. In Proc. of ICPR 2006, Vol. 2:1216–1219, 2006.
[17] A. Rottmann, O. Mozos, C. Stachniss, and W. Burgard. Supervised learning of places from range data using AdaBoost. In Proc. of the IEEE Int'l Conf. on Robotics and Automation, pages 1742–1747, April 2005.
[18] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, 1999.
[19] P. Simard, L. Bottou, P. Haffner, and Y. LeCun. A fast convolution algorithm for signal processing and neural networks. In M. Kearns, S. Solla, and D. Cohn (eds.), Advances in Neural Information Processing Systems, Vol. 11:571–577, July 1999.
[20] A. Treptow and A. Zell. Real-time object tracking for soccer robots without color information. Robotics and Autonomous Systems, 48(1):41–48, 2004.
[21] University of Notre Dame Biometrics Database. http://www.nd.edu/~cvrl/UNDBiometricsDatabase.html.
[22] P. Viola and M. Jones. Robust real-time face detection. Int'l Journal of Computer Vision, 57(2):137–154, 2004.
[23] P. Yan and K. W. Bowyer. Empirical evaluation of advanced ear biometrics. In Proc. of the Conf. on Empirical Evaluation Methods in Computer Vision, June 2005.
[24] P. Yan and K. W. Bowyer. Biometric recognition using 3D ear shape. IEEE Trans. on Pattern Analysis and Machine Intelligence, 29(8):1297–1308, Aug. 2007.

