Kernel Visual Keyword Description for Object and Place Recognition

Chapter: Kernel Visual Keyword Description for Object and Place Recognition. Abbas M. Ali and Tarik A. Rashid. In: Advances in Signal Processing and Intelligent Recognition Systems, Part I, Volume 425, edited by Sabu M. Thampi, Sanghamitra Bandyopadhyay, Sri Krishnan, Kuan-Ching Li, Sergey Mosin, and Maode Ma. Proceedings of Intelligent Recognition Systems (SIRS-2015), pp. 27-38. Springer International Publishing, December 2015. ISSN: 2194-5357.

Abstract: Visual object and place recognition are among the most important problems in computer vision and mobile robotics. They have been tackled with many techniques in the literature; combining machine learning for object learning with image descriptors that fully describe image content is another important direction in computer vision. In this way the system can learn and describe the structural features of objects or places more effectively, which in turn leads to more correct recognition. This paper introduces a method that naively combines Kernel Principal Component Analysis (KPCA) features with Histogram of Oriented Gradients (HOG) features extracted from the visual scene. In this approach, a set of SURF features and a HOG descriptor are extracted from a given image. The minimum Euclidean distance of every SURF feature from the visual codebook, previously constructed with K-means, is computed, projected with KPCA, and then combined with the HOG features. A Support Vector Machine (SVM) classifier is used for the analysis, and the results indicate that the KPCA-with-HOG method significantly outperforms the bag-of-visual-words (BOW) approach on the Caltech-101 object dataset and the IDOL visual place dataset.

Keywords: SURF, K-means, BOW, KPCA, HOG.

1. Introduction

The visual recognition of objects and places has been central to computer vision and robotics and has attracted many researchers over the last several years. In visual recognition of objects and places, some features are informative, distinctive, and very useful for fully describing the content of a visual object or place, and they ultimately determine the accuracy of the recognition system. SIFT and SURF, as local feature descriptors, have been used by many researchers to describe images as collections of local feature vectors, and these descriptors have been widely used for visual object and place recognition [1, 2]. It is worth noting that local descriptors can yield very large feature representations for an image, which may be difficult to handle with machine learning techniques. For this reason, the bag-of-visual-words (BOW) representation was suggested in [3, 4]. BOW produced reliable performance, which encouraged researchers to build further relationships between different visual keywords for particular goals [5, 6, 7, 8]. In [9], the researchers used the histogram over an unordered set of visual keywords, known as the hard bag-of-features (HBOF) or hard assignment. Under hard assignment, every single feature in the image represents only the winning cluster centroid, i.e. one keyword in the visual codebook, while the other cluster centroids are ignored and not used to describe the image content. The works in [10, 11, 12] produced several new models to improve on the hard bag of visual words; these models use soft assignment, in which all cluster centroids are taken into account to describe the image content better.

Computation time can be improved by reducing the features in a way that remains invariant to various imaging conditions such as translation, scaling, noise, and illumination changes. Feature reduction therefore yields an automated process of learning a low-dimensional model from a set of training objects. In this regard, principal component analysis (PCA) has been used to tackle computer vision problems in applications such as face recognition, object recognition, object detection, and object tracking. PCA has also shown its success in robot localisation [4]. It is considered a suitable model for data generated by a Gaussian distribution, that is, data best described by second-order correlations; however, the distribution of natural images is highly non-Gaussian. Schölkopf introduced Kernel Principal Component Analysis (KPCA) as a generalisation of PCA [13, 4], and the idea has been successfully used in tasks such as image processing, face recognition, image denoising, and texture classification. In [14], linear discriminant analysis was used successfully for classification problems; yet this approach cannot tackle nonlinear problems, and it was consequently extended to a kernel-based approach by Baudat [14], called Generalised Discriminant Analysis (GDA).

In this paper, the Kernel Principal Component features are naively combined with Histogram of Oriented Gradients features from the visual scene. In this technique, a set of SURF features and a HOG descriptor are extracted from a given image; then the minimum Euclidean distance of all local features from the visual codebook, which was constructed with the K-means approach, is computed. Kernel principal component analysis is applied to the resulting distance vector, the HOG descriptor is extracted from the same image, and the two feature sets are then combined naively to improve the accuracy. SVM is used to analyse the data. A compact outline of this pipeline is sketched below.
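The following is a high-level sketch, under stated assumptions, of the pipeline just described: per-image SURF and HOG features, a K-means codebook, per-image minimum-distance vectors, a KPCA projection, naive concatenation with HOG, and an SVM classifier. It uses scikit-learn rather than the authors' implementation; the default values (260 clusters, 72 components) are taken from the experiments reported later, and the function name is illustrative.

```python
# A compact outline of the proposed pipeline (not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def train_pipeline(surf_per_image, hog_per_image, labels, k=260, n_components=72):
    """surf_per_image: list of (m_i, 64) SURF descriptor arrays, one per image.
    hog_per_image: list of equal-length HOG vectors, one per image."""
    pooled = np.vstack(surf_per_image)                            # all SURF descriptors
    codebook = KMeans(n_clusters=k, n_init=10).fit(pooled).cluster_centers_
    # minimum distance of each image's descriptors to every visual word
    mD = np.array([np.linalg.norm(d[:, None] - codebook[None], axis=2).min(0)
                   for d in surf_per_image])
    kpca = KernelPCA(n_components=n_components, kernel="poly").fit(mD)
    NV = np.hstack([kpca.transform(mD), np.array(hog_per_image)])  # naive combination
    clf = SVC(kernel="rbf").fit(NV, labels)
    return codebook, kpca, clf
```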

2. Related Works

The key problem in any visual recognition model is that an unknown image query can trigger an incorrect search, so the user may retrieve the wrong images for the query. The problem becomes more complicated when the images are affected by transformations such as translation, rotation, scaling, and other disturbances; as a result, several approaches have been recommended for extracting local features of visual objects or places. Both principal component analysis and its kernel variant have been used to recognise the query image scene in robot localisation and navigation. It has been shown that KPCA outperforms PCA in place recognition for localisation, and PCA has also been used for the same purpose. The impact of illumination on PCA, and the use of principal component analysis to filter invariant features without referencing the original image, are shown in [14, 3, 15]. In [16], KPCA was applied to Gabor features and SVM was used to classify objects; the main purpose of using KPCA there is to provide a nonlinear scheme for object classification. In [13, 17, 18], incremental PCA was introduced and the issue of batch learning was investigated; the main idea of that work is to handle on-line learning of robot landmarks and to avoid recomputing PCA over all samples. In [19, 20], KPCA showed better performance than PCA; the study established a way to evaluate different vision-based robot localisation approaches, showed that KPCA is more robust and precise than alternatives such as edge-density-based methods, and demonstrated that PCA needs extra computational power. There are two approaches to robot localisation using PCA, local and global, depending on the applied feature extraction. In the local approach, a set of landmarks is first selected from the image and transformed into vectors that are further processed by PCA. PCA-based methods have demonstrated their success in robot localisation as well as in face recognition, data compression, and many other applications. However, PCA is suited to data generated by a Gaussian distribution, or data best described by second-order correlations, whereas the distribution of natural images is highly non-Gaussian [4, 17]. This paper aims at improving the recognition accuracy of PCA features for visual places in order to improve localisation for mobile robots.

3. SURF and PCA

The content of an image largely determines the characteristics of its local features. These local features are discriminative, can be computed straightforwardly, and are not strongly affected by rotation or limited lighting changes in the image content. The basic technique clusters features such as SURF with the K-means approach into a bag of visual keywords, which has given good results in image scene recognition applications [8, 9, 21]; K-means clustering has been widely used to cluster features into visual keywords [10]. The HBOF algorithm relates each feature to the cluster centroid with the minimum Euclidean distance, producing a feature vector that labels that feature. This is expressed in equation (1) [3]:

$$\mathrm{HBOF}(w) = \sum_{i=1}^{n} \begin{cases} 1 & \text{if } w = \arg\min_{c}\big(\mathrm{dist}(c, r_i)\big) \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

Here n is the number of regions in the image, w is a visual word, and dist is the Euclidean distance between the region feature vector r_i and the cluster centroid c. Hard assignment for bags of visual keywords differs from the soft-assignment scheme: the latter uses multiple visual keywords to describe each image feature, which allows a fuller description of the input image, and it assigns weights to several nearby clusters rather than only to the winning cluster. In [4, 5], weights were used for each cluster centroid, and a nonlinear distribution is required to increase the precision of the soft-assignment scheme; KPCA is used for this reason. In this way, a nonlinear distribution over the visual keypoints is obtained, which often gives more accurate results for object categorisation. On the other hand, computing KPCA is expensive because of the cost of calculating eigenvalues for the whole dataset, and this cost grows as the number of samples increases. Accordingly, clustering is applied to the SURF features extracted from the image scene in order to reduce the number of training samples, since clustering is more effective than sparse KPCA. In this paper, the visual words are built by computing the minimum distance of the local features from the cluster centres, and these distances are used as training samples for the KPCA of each category. A minimal sketch of the hard-assignment histogram in equation (1) follows.
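The sketch below implements the hard-assignment (HBOF) histogram of equation (1), assuming the region descriptors and the cluster centroids are given as NumPy arrays; it is an illustration, not the authors' code.

```python
import numpy as np

def hbof_histogram(descriptors, centroids):
    """Hard-assignment bag-of-words histogram (equation (1)):
    each region descriptor votes only for its nearest cluster centroid."""
    # pairwise Euclidean distances, shape (n_regions, n_words)
    d = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                                   # winning word per region
    hist = np.bincount(nearest, minlength=len(centroids)).astype(float)
    return hist                                                  # optionally hist / hist.sum()
```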

4. SURF Features

Speeded Up Robust Features (SURF) is a robust image detector and descriptor, proposed by Herbert Bay et al. in 2006, that can be used in computer vision tasks such as object recognition or 3D reconstruction. Its feature extraction procedure is partly inspired by the SIFT descriptor; the standard version of SURF is faster than SIFT because it uses integral images, and it is competitive with SIFT in robustness. The SURF algorithm uses the Hessian matrix and has three major steps: interest point extraction, repeatable angle computation, and descriptor computation.

1) Interest point extraction: the algorithm starts by computing the determinant of the Hessian matrix and extracting local maxima. The Hessian matrix computation is approximated with a combination of Haar basis filters at successively larger scales, so this step is O(mn log2(max(m, n))) for an m×n image. At each scale, the interest points are those points that are simultaneously local extrema of both the determinant and the trace of the Hessian matrix; at location (x, y) and scale σ the Hessian matrix is defined as

$$H(x, y, \sigma) = \begin{bmatrix} L_{xx}(x, y, \sigma) & L_{xy}(x, y, \sigma) \\ L_{yx}(x, y, \sigma) & L_{yy}(x, y, \sigma) \end{bmatrix} \qquad (2)$$

where L_xx(x, y, σ) is the convolution of the second-order Gaussian derivative with the image I at pixel (x, y), and similarly for the other entries. The result of this step is a set of interest points and their scales.

2) Repeatable angle computation: a repeatable angle (orientation) is extracted for each interest point before computing the feature descriptor. This step computes the angle of the gradients surrounding the interest point, and the maximum angular response is chosen as the direction of the feature.

In this work, the clustered SURF features are naively combined with HOG to construct more invariant features for recognising multi-class visual places. The idea is that combining several features of the same image gives a more robust feature that is more reliable for these applications.

5. Histogram of Oriented Gradients (HOG)

The image is divided into image windows or small spatial regions (cells), and for each region a local histogram of gradient directions is accumulated. The combination of these histograms over all regions forms the representation of the whole image. To make the features more invariant to illumination, shadowing, and similar effects, the histograms may be normalised, or local histogram measurements may be accumulated to contrast-normalise the local responses. This leads to larger spatial regions that can be used for normalised descriptors such as the Histogram of Oriented Gradients. The technique has been used in [13, 4, 5], in addition to the SIFT approach. The promising results of these descriptors led the authors to combine HOG naively with the processed SURF features to obtain a more distinctive descriptor for visual place recognition. A sketch of grid SURF and HOG extraction is given below.
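The following is a minimal sketch of dense ("grid") SURF extraction plus a HOG descriptor for the same image. It assumes opencv-contrib-python (SURF lives in the non-free xfeatures2d module) and scikit-image; the grid step, keypoint size, HOG cell sizes, and the file name are illustrative assumptions, not values reported by the authors.

```python
import cv2
import numpy as np
from skimage.feature import hog

def grid_surf(gray, step=8, size=16):
    """64-D SURF descriptors computed on a regular grid of keypoints."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=False)  # 64-D
    kps = [cv2.KeyPoint(float(x), float(y), size)
           for y in range(0, gray.shape[0], step)
           for x in range(0, gray.shape[1], step)]
    _, descriptors = surf.compute(gray, kps)
    return descriptors                                            # shape (m, 64)

def hog_descriptor(gray):
    """Histogram of oriented gradients for the whole image."""
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)              # placeholder path
fs, hg = grid_surf(gray), hog_descriptor(gray)
```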

D( x, Bi1:sb ) 



sd

l 1

( B(i)l  x l ) 2

(3)

Here sd is the length of the features used; since SURF is used, sd = 64 (for SIFT it would be 128). The distances of all features of a selected image from the codebook are collected in a distance table containing m distance vectors of size sb, which can be expressed as follows:

D( x j 1:m , Bi1:sb ) 



sd

l 1

( B(i)l  xl ) 2

(4)

The minimum-distance vector mD for the table D_{m,sb} is a row holding the minimum value of each column of the table; thus, for image I, the mD of the table D_{m,sb} can be expressed as follows:

mDi  min( Dm , sb)  {min( D1 : sb,1),{min( D1 : sb,2),........ .{min( D1 : sb, sb)}

(5)
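The sketch below implements equations (3)-(5): it builds the full distance table between the m SURF descriptors of one image and the sb codebook words, then takes the column-wise minima to obtain the sb-dimensional vector mD. Array shapes are assumptions consistent with the text.

```python
import numpy as np

def min_distance_vector(descriptors, codebook):
    """Equations (3)-(5): Euclidean distances of every descriptor (m x sd)
    to every codebook word (sb x sd), then the column-wise minimum,
    giving one sb-dimensional vector mD per image."""
    # distance table D of shape (m, sb)
    D = np.sqrt(((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2))
    return D.min(axis=0)                       # mD: minimum over the m features per word
```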

KPCA can be derived from the known fact that PCA can be carried out on the dot-product matrix instead of the covariance matrix [16, 19, 23, 24]. Assume {d_1, ..., d_N} is a set of distance feature vectors, where each d_i belongs to one image in the dataset and the dimension of each vector is the number of clusters used. A nonlinear mapping Φ(d) maps d into a nonlinear feature space F, and by applying standard linear PCA to the mapped data, nonlinear principal components are obtained. Thus, the covariance matrix C of the mapped data, whose eigenvalues are required, is computed as

$$C = \frac{1}{N} \sum_{i=1}^{N} \Phi(d_i)\,\Phi(d_i)^{T} \qquad (6)$$

The eigenvalues and eigenvectors of C can be found by solving the corresponding eigenvalue problem. Note that the N × N kernel matrix K is represented as follows:

K Assuming that 

1 1  ( d j ) ( d i )  d T d N N

 {1, 2, 3, ..... p, } ,

u  {u1, u2, u3, .....u p, }

is a set of nonzero eigenvalues of K,

(7 ) λ, is sorted in descend order where

relates to eigenvectors. Note that C has the same values of a one-to-one correspondence eigenvalues

and eigenvectors. Polynomial kernels and Gaussian are verified as a non-linear kernel in this work, furthermore to linear approach, several evaluation and choosing processes are conducted to select the optimum. Despite the fact that the effectiveness of image version of BOW is intended to signify images via a set of features using the number of their repetition, nonetheless, these globalizations of features are not adequate to characterize the spatial environment as they have no order. Consequently, it is required to decrease the error of unordered features matching especially in image scene for place recognition for enhancing BOW performance. Thus, undesired features are removed via using the PCA scheme. The features are combined together to give more variant features as expressed in equation (8):-

NV ( x, Bi 1:sb )  {KPCMD (1 : sb), Hg (1 : sb)} (8) Where KPCMD kernel principle component of minimum distance for the image and Hg is the histogram of oriented gradient for the same image. The image’s minimum distance features in both datasets Caltech101 and IDOL have great disparity, this is simply caused by the minimum distance in each image; in return, this reduces the PCA performance. The features have been separated by its spatial norm of the resulted features for each 10 bins from the length of the features in advance, before using the PCA or KPCA for processing them. Evidently, the implementation tests demonstrate that the normalized features would improve the classification results more than the non- normalized. 7. Classification and Optimization The recent research studies about SVM show that SVM produces good performance for object classification [12]. This paper uses SVM and KNN for evaluating the classification performance and similarly for analysing the results for BOW and KPCA of minimum distance features. The process of PCA space optimization is carried out via deciding on the best number of eigenvectors which provides the best filtering of the invariant features to help make the image scene more discriminate from each other. The trial and error methods is used to conduct the process of PCA space prediction. Then, the calculation of the average precision is achieved. The precision (p) of the first N retrieved images for the query Q is described as follows:-

The precision p(Q, N) of the first N retrieved images for a query Q is defined as:

$$p(Q, N) = \frac{\left|\{\, I_r \mid \mathrm{Rank}(Q, I_r) \le N \ \text{and} \ I_r \in g(Q) \,\}\right|}{N} \qquad (9)$$

Here g(Q) denotes the ground-truth group (category) of the query image and I_r is a retrieved image. The obtained feature distances are sorted, and the images with the minimum values are taken as the best-matching visual places; this is the K-nearest-neighbour (KNN) rule.

7. Simulation and Implementation

Several experiments are carried out on two different datasets, IDOL and Caltech101.

7.1 IDOL Dataset

The first set of experiments is conducted on the IDOL dataset introduced by Pronobis et al. [25, 26]. The SURF features were extracted using the grid SURF algorithm. The size of each frame image is 230×340. All experiments were carried out on a laptop with a 2.2 GHz Core 2 Duo processor and 3 GB of memory. The feature vectors are clustered with the K-means algorithm using a set of different cluster numbers: 260, 275, 350, 400, 450, 500, 520, 600, and 650. The best codebook (KB) is then used to classify the features of the test images into the different groups of environmental navigation places: a one-person office, a corridor, a two-person office, a kitchen, and a printer area. A sketch of this codebook-size search is given below.
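The sketch below illustrates the codebook-size search just described: build a K-means codebook for each candidate K and keep the one with the best validation accuracy. The helper images_to_min_dist stands in for the descriptor and minimum-distance pipeline sketched earlier; it and the 1-NN validation criterion are assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def select_codebook(pooled_descriptors, candidate_ks, images_to_min_dist,
                    train_imgs, y_train, val_imgs, y_val):
    """Try each candidate K and keep the codebook with the best validation accuracy."""
    best_k, best_acc, best_B = None, -1.0, None
    for k in candidate_ks:  # e.g. (260, 275, 350, 400, 450, 500, 520, 600, 650)
        B = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pooled_descriptors).cluster_centers_
        Xtr = np.array([images_to_min_dist(im, B) for im in train_imgs])
        Xva = np.array([images_to_min_dist(im, B) for im in val_imgs])
        acc = KNeighborsClassifier(n_neighbors=1).fit(Xtr, y_train).score(Xva, y_val)
        if acc > best_acc:
            best_k, best_acc, best_B = k, acc, B
    return best_k, best_B
```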

The best projection space for the PCA is selected to filter the invariant features for the classification process (see Figure 1); the figure shows the effect of the number of eigenvectors, for the optimal selected codebook, on the classification results obtained with the Weka software. Details of the experiments are presented in the following subsections.

7.1.1 Experimental Setup

The practical experiments use the grid SURF algorithm. The feature vectors are quantised with a codebook of KB = 260 clusters, and the best number of eigenvectors used is 72. The images of the IDOL dataset are divided into two groups to evaluate the proposed approaches, and five different runs are performed. The performance on the two data subsets is then reported as the average of the obtained classification results. A sketch of this repeated-runs protocol is given below.
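A minimal sketch of the repeated-runs protocol, assuming the combined feature matrix X and labels y have already been built: several random train/test splits are evaluated and the mean (and standard deviation) of the accuracy is reported. The split ratio and classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def repeated_accuracy(X, y, runs=5, test_size=0.5):
    """Average classification accuracy over several random splits."""
    scores = []
    for seed in range(runs):
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=test_size,
                                              stratify=y, random_state=seed)
        scores.append(SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte))
    return float(np.mean(scores)), float(np.std(scores))
```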

Figure 1: Feature vector for an input image.

7.1.2 Results on IDOL

Visual place recognition on IDOL with the proposed approaches consistently shows good on-line performance for recognising environments when the KPCA approach is used. To demonstrate the performance, the algorithm is run to recognise a further 15 places, covering the one-person office, corridor, two-person office, kitchen, and printer area. Table 1 shows the results of the proposed approach on the IDOL dataset, obtained with the Weka software, compared with the other approaches; the groups in the table correspond to places, using the total of successive frames. The results are reported for the nearest-neighbour (KNN) and SVM classifiers for the BOW, minimum-distance, PCA, KPCA, and KPCHOG approaches. Three kernel functions (linear, polynomial, and RBF) are used for the KPCA; the polynomial and RBF nonlinear kernels outperform the linear one.

Table 1: Classification results for the different approaches on the IDOL dataset.

Classification | BOW  | min dist | PCA     | KPCA    | KPCHOG (NV)
KNN            | 50.8 | 91.35    | 93.4498 | 93.8048 | 94.882
SVM            | 58.6 | 90.7     | 90      | 90.6    | 91.62

7.2 Caltech101 Dataset

The Caltech images are more difficult to analyse than those of the IDOL dataset because of the differing background textures that appear with the same object, particularly for objects with different intensities and sizes. The experiments are run on 10 and 20 classes from Caltech101, selected as a total of 300 JPEG images for the 10 classes and 600 images for the 20 classes. The groups include airplanes, cameras, cars, cell phones, cups, helicopters, motorbikes, scissors, umbrellas, and so on.

7.2.1 Experimental Setup

The experiments use the grid SURF algorithm with all feature scales to evaluate the proposed approaches. The feature vectors are quantised using K-means clustering, and the Weka software is used to verify the KNN and SVM classification results for 10 and 20 classes. 15 training and 15 testing images per class are used over five different runs. The results are then reported using the mean and standard deviation to verify the significance of the obtained classification results.

7.2.2 Results on Caltech101

Several values of K were tried for the 101 classes; in this work the best one, 1024, was used. Table 2 shows the results of the five different runs over the groups.

Table 2: Classification results for the Caltech101 classes.

Classification | BOW   | min dist | PCA mind | KPCA mind | KPCHOG (NV)
KNN            | 36.32 | 50.6     | 54.221   | 56.92     | 57.341
SVM            | 45.27 | 57.92    | 64.486   | 65.686    | 66.932

The table shows that the KPCHOG, KPCA, and PCA approaches perform better than the other approaches for visual object recognition, and that the proposed approach performs better with KNN than with SVM. The implementations give good performance for object and place recognition, with more accurate results.

8. Discussion

For the IDOL dataset, KPCA with HOG and the polynomial KPCA applied to the minimum-distance features give the best performance, better than BOW and the other approaches. The aim of using grid SURF rather than the standard detector is to capture more informative features of the image scene and to build a cloud of features. The image scenes used were 320×240 JPEG frames of low quality, yet the rate of correct recognition was close to 96%, depending on the difficulty of the specific environment. The features become more informative, with lower dimensionality, when represented by the PCA and KPCA feature vectors together with the spatial edge histogram. One important issue for achieving the best place recognition is how the codebook for the training features is built, that is, how K is optimised. In this work, a trial-and-error process is applied to the BOW algorithm: the K giving the best BOW performance is taken as the best cluster number for building the codebook and is then used in the proposed algorithm. The place recognition error rises and falls with the selected K and with the chosen PCA projection space; the best K together with the best number of eigenvectors gives the best place recognition performance. The decision criterion for image scene recognition is the majority of retrieved images for each group. In visual object recognition, grid SURF features are used to describe objects whose texture is more complicated to recognise than a visual place, since each object varies in size, intensity, and texture; Caltech101 is used to verify the proposed approach for this purpose, and several values of K were tried for the 101 classes. It is clear that KPCA with HOG is suitable for both visual place recognition and object recognition.

9. Conclusion

Extracting KPCA or PCA features from the minimum-distance vectors and then combining them with HOG for the same images gives a solid method for visual place and object recognition compared with the other approaches. The experimental results show that the spatial histogram of oriented gradients combined with the minimum distance under KPCA significantly outperforms the linear KPCA, PCA, and BOW approaches for recognising visual objects and places. The approach can be combined with other features to increase recognition speed. The results also show that the soft-assignment approach is better than HBOF; both approaches mainly depend on optimised feature clustering, where the best K gives the best recognition result. The work establishes an algorithm for conceptualising the environment using spatial clouds of features with SURF techniques.

References

[1] Wenyu C., Wenzhi X. and Ru Z., "Method of item recognition based on SIFT and SURF", Mathematical Structures in Computer Science, Vol. 24, No. 5, 2014.
[2] Suaib N. M., Marhaban M. H., Saripan M. I. and Ahmad S. A., "Performance evaluation of feature detection and feature matching for stereo visual odometry using SIFT and SURF", Region 10 Symposium, 2014 IEEE, pp. 200-203, 2014.
[3] Sivic J. and Zisserman A., "Video Google: A text retrieval approach to object matching in videos", in ICCV '03: Proceedings of the Ninth IEEE International Conference on Computer Vision, p. 1470, 2003.
[4] Jiang Y.-G., Ngo C.-W. and Yang J., "Towards optimal bag-of-features for object categorization and semantic video retrieval", in CIVR '07: Proceedings of the 6th ACM International Conference on Image and Video Retrieval, pp. 494-501, 2007.

[5] Philbin J., Chum O., Isard M., Sivic J. and Zisserman A., "Lost in quantization: Improving particular object retrieval in large scale image databases", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[6] Huang J., Kumar S. R., Mitra M., Zhu W. J. and Zabih R., "Image indexing using color correlograms", in Computer Vision and Pattern Recognition, IEEE Computer Society Conference, p. 762, 1997.
[7] Gandhali P. S. and Debasis M., "Correlogram Method for Comparing Bio-Sequences", Technical Report FIT-CS-2006-01, Master's Thesis, Florida Institute of Technology, 2006.
[8] Csurka G., Dance C., Fan L. and Bray C., "Visual categorization with bags of keypoints", in The 8th European Conference on Computer Vision, pp. 513-516, 2004.
[9] Perronnin F., Dance C., Csurka G. and Bressan M., "Adapted vocabularies for generic visual categorization", European Conference on Computer Vision (ECCV 2006), pp. 464-475, 2006.
[10] Jain A. K., Murty M. N. and Flynn P. J., "Data clustering: a review", ACM Computing Surveys, Vol. 31, No. 3, pp. 264-323, 1999.
[11] Forstner W. and Moonen B., "A metric for covariance matrices", Technical report, Dept. of Geodesy and Geoinformatics, Stuttgart University, 1999.
[12] Tian J., Qiuxia H., Xiaoyi M. and Mingyu H., "An Improved KPCA/GA-SVM Classification Model for Plant Leaf Disease Recognition", Journal of Computational Information Systems, Vol. 8, No. 18, pp. 7737-7745, 2012.
[13] Schölkopf B., Smola A. J. and Müller K.-R., "Nonlinear component analysis as a kernel eigenvalue problem", Neural Computation, Vol. 10, No. 5, pp. 1299-1319, 1998.
[14] Baudat G. and Anouar F., "Generalized Discriminant Analysis Using a Kernel Approach", Neural Computation, Vol. 12, No. 10, pp. 2385-2404, 2000.
[15] Artač M., Jogan M. and Leonardis A., "Mobile robot localization using an incremental eigenspace model", in IEEE International Conference on Robotics and Automation, Washington, D.C., pp. 1025-1030, 2002.
[16] Dzati A. R., Salwani I. and Haryati J., "Robust Palm Print Verification System Based On Evolution Kernel Principal Component Analysis", IEEE International Conference on Control System, Computing and Engineering (ICCSCE 2014), 2014.
[17] Jogan M., Leonardis A., Wildenauer H. and Bischof H., "Mobile robot localization under varying illumination", in Proceedings of the 16th International Conference on Pattern Recognition, 2002.
[18] Kröse B. and Bunschoten R., "Probabilistic localization by appearance models and active vision", in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2255-2260, 1999.
[19] Hong M. L., Dong M. Z., Ren C. N., Xiang L. and Hai Y. D., "Face Recognition Using KPCA and KFDA", Applied Mechanics and Materials, Vols. 380-384, pp. 3850-3853, 2013.
[20] Sim R. and Dudek G., "Learning landmarks for robot localization", in Proceedings of the National Conference on Artificial Intelligence SIGART/AAAI Doctoral Consortium, Austin, TX, AAAI Press, pp. 1110-1111, 2000.
[21] Phiwmal N. and Sanguansat P., "An Improved Feature Extraction and Combination of Multiple Classifiers for Query-by-Humming", The International Arab Journal of Information Technology, Vol. 11, No. 1, pp. 103-110, 2014.
[22] Lazebnik S., Schmid C. and Ponce J., "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories", in Proceedings of CVPR '06, 2006.
[23] Suvi T., Kai N., Mikko T., Antti K. and Tapio S., "ECG-derived respiration methods: Adapted ICA and PCA", Medical Engineering & Physics, 2015.
[24] Vipsita S., Shee B. K. and Rath S. K., "Protein superfamily classification using Kernel Principal Component Analysis and Probabilistic Neural Networks", India Conference (INDICON), 2011 Annual IEEE, 2011.
[25] Pronobis A., Caputo B., Jensfelt P. and Christensen I., "A realistic benchmark for visual indoor place recognition", Robotics and Autonomous Systems, Vol. 58, No. 1, pp. 81-96, 2009.
[26] Le Lu, Jianhua Y., Evrim T. and Ronald M. S., "Multilevel Image Recognition using Discriminative Patches and Kernel Covariance", SPIE Medical Imaging, 2014.
