

2017 9th International Conference on Information Technology and Electrical Engineering (ICITEE), Phuket, Thailand

A Review of Optimization Methods in Face Recognition: Comparing Deep Learning and Non-Deep Learning Methods

Sulis Setiowati, Zulfanahri, Eka Legya Franita, Igi Ardiyanto
Department of Electrical Engineering and Information Technology, UGM, Yogyakarta, Indonesia
[email protected], [email protected], eka.legya.f@mail.ugm.ac.id, [email protected]

Abstract— Face recognition systems are being deployed on an increasingly large scale. A few years ago, face recognition was used for personal identification within a limited scope; today the technology is also applied in security, for instance to detect fraudsters, criminals, and terrorists. In addition, face recognition is used to detect driver fatigue and reduce road accidents, as well as in marketing, advertising, health care, and other fields. Many methods have been developed to deliver the best possible accuracy in face recognition. Deep learning approaches have become the trend in this field because of their impressive results and fast computation. However, accuracy, complexity, and scalability remain challenges in face recognition. This paper focuses on the importance of this technology and on how to achieve high accuracy with low complexity. Deep learning and non-deep learning methods are discussed and compared to analyze their advantages and disadvantages. In a critical analysis based on experiments with the YALE dataset, non-deep learning algorithms reach up to 90.6% at low-to-high complexity, while a deep learning method reaches 94.67% at low-to-high complexity. A genetic algorithm combined with CNN and SVM is presented as an optimization method for overcoming the accuracy and complexity problems.
Keywords—Face Recognition; Biometrics; Traditional Methods; Deep Learning; Optimization; Genetic Algorithm

I. INTRODUCTION
Face recognition is a computer technology that determines the location and size of a human face in a digital image and is a key technology in facial information processing. It has been widely applied in pattern recognition, identity authentication, human-computer interfaces, automated video surveillance, electronic commerce, health, finance, and other areas [1]. There are two common face recognition applications: identification and verification. In face identification, a facial image is used to determine a person's identity. In face verification, given a face image and a claimed identity, the system must decide whether the claim is true or false [2]. Although many applications and systems already use face recognition technology, the topic remains particularly challenging, since the best methods must provide

high accuracy at low computational cost. Many researchers have worked on face recognition for years, yet several challenges remain. To date, many methods have been employed, from traditional approaches such as Eigenface and Fisherface, which apply algorithms like Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA) [2]. Support Vector Machine (SVM) classifiers [3][4] and the Local Binary Pattern (LBP) approach [5][6][7] are also widely used for face recognition. In addition, neural network approaches [8], now commonly referred to as deep learning, such as the convolutional neural network (CNN), have been used in face recognition to address these challenges. In recent years, deep learning has become a new area in computer vision and machine learning and has been used successfully for image dimensionality reduction and recognition. Deep learning is a form of machine learning that adopts neural network architectures consisting of multi-layer perceptrons (MLP) with multiple hidden layers. By using a model architecture composed of several nonlinear transformations, deep learning is able to find high-level features in the data; these features are derived from lower-level ones to form a hierarchical representation [9]. The development of deep learning approaches and newer methods gives better results than traditional methods in terms of accuracy [10]. As the technology develops, face recognition is expected to be applied in an ever wider scope, which encourages researchers to study the optimization of face recognition methods more deeply. Building on this idea, researchers may combine existing methods, for example combining traditional methods with one another, or incorporating traditional methods into deep learning, to achieve optimized solutions [8][10]. In this paper, Section II describes non-deep learning methods and analyzes their advantages and disadvantages, Section III explains the deep learning approaches used in face recognition and analyzes their advantages and disadvantages, and Section IV compares traditional methods and deep learning using



the rate of recognition (RoR) parameter and algorithm complexity; Section V describes the optimization analysis; and finally Section VI presents the conclusions of this paper.

II. NON-DEEP LEARNING METHODS
A face recognition system consists of three parts [11], as shown in Fig. 1 below:

Fig 1. Stages of Face Recognition

The face detection step has two main functions: (1) determining whether an image contains a face, and (2) locating the face within the picture. The expected output of this step is the set of face images sorted out from the input images. The feature extraction step includes dimensionality reduction, salience extraction, and noise cleaning; the image is then transformed into a vector of a certain dimension according to the number of extracted features. Some literature treats feature extraction as part of the face recognition step [11]. Various classifiers are used in the classification step for identification and verification. To achieve automatic recognition, a face dataset needs to be created; for each person it contains several images with different poses, whose features are extracted and stored in the dataset [11]. In face recognition, the performance of an algorithm depends on the conditions of the training data and on changing circumstances, so high accuracy is not the only factor that determines the performance of an algorithm. Eigenface and Fisherface are the classic methods used in face recognition. They have advantages and disadvantages and have been further developed in combination with algorithms such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). The various existing studies help in determining which traditional method is optimal for face recognition.

A. Eigenface Method
The aim of face recognition is to assign an input signal, in the form of image data, to one of several classes (persons). The input signal has high noise caused by differences in lighting, pose, and so forth. Each face has different characteristics but shares a similar pattern that can be detected, for example the eyes, mouth, nose, and the distances between these objects. Face recognition based on such characteristic features is known as Eigenface. The features can be extracted mathematically using Principal Component Analysis (PCA), which transforms each original image into its corresponding eigenface [12]. Eigenface is an appearance-based approach. Its basic principle is to capture the unique information of a face, encode it, and compare it with previously encoded results. In the Eigenface method, the encoding is done by calculating the eigenvectors, which are then represented in a large matrix [13]. The stages of this method are as follows [14]:
1. The M vectors of size N (image rows x image columns) represent the set of sample images.
2. Each image vector is centered by subtracting the average image, and the eigenvectors and eigenvalues of the covariance matrix are computed.
3. Sort the eigenvectors and eigenvalues and calculate the cumulative energy content of each eigenvector.
4. Select a subset of the eigenvectors as basis vectors.
5. Project the z-scored data onto the new basis.

Fig 2. Eigenfaces Method
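As a minimal, illustrative sketch (not the authors' implementation), the steps above can be expressed with PCA computed through a singular value decomposition. The `faces` array, the number of components `k`, and the nearest-neighbour matching rule are assumptions made here for clarity.

```python
# Illustrative Eigenface sketch: PCA on flattened grayscale face images.
# `faces` is assumed to be an (M, N) array of M flattened training images.
import numpy as np

def eigenfaces(faces, k=20):
    """Return the mean face, the top-k eigenfaces, projected weights, and energy."""
    mean_face = faces.mean(axis=0)                      # average image (step 2)
    centered = faces - mean_face                        # mean-subtracted image vectors
    # Eigenvectors of the covariance matrix via SVD; rows of vt are eigenfaces.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)         # cumulative energy (step 3)
    eigfaces = vt[:k]                                   # subset of eigenvectors (step 4)
    weights = centered @ eigfaces.T                     # projection onto new basis (step 5)
    return mean_face, eigfaces, weights, energy

def recognize(probe, mean_face, eigfaces, weights, labels):
    """Nearest-neighbour match of a probe image in eigenface space."""
    w = (probe - mean_face) @ eigfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]
```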

B. Fisherface Method
Fisherface is particularly useful when facial images show large variations in illumination and expression. The Fisherface method improves on the Eigenface method by using Fisher's Linear Discriminant Analysis, abbreviated FLDA or LDA, for dimensionality reduction [15]. FLD is a class-specific method: it models the scatter between classes and within classes so as to produce a better classification [16]. Fisherface is a holistic approach used in the face recognition stage, meaning that identification is based on the whole face region rather than only on isolated local features such as the eyes, nose, and mouth. The Fisherface approach is widely used for feature extraction from face images. It tries to find projection directions along which images belonging to different classes are separated maximally. Mathematically, it searches for a projection (weight) matrix such that the ratio


between the between-class and within-class scatter matrices of the projected images is maximized. The calculation stages of Fisherfaces are as follows:

1. Determine the between-class scatter matrix

$S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T$ .........................(1)

2. Determine the within-class scatter matrix

$S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$ .........................(2)

3. Compute the projection matrix that maximizes the ratio of the scatters

$W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|}$ .........................(3)

where c is the number of classes, N_i the number of samples in class X_i, \mu_i the mean of class i, and \mu the overall mean. A face is accepted as a match if the minimum distance is less than the applied threshold value; the smaller the minimum distance obtained, the greater the similarity between the input image and its matching image in the training set. The threshold is tuned from experimental results until a satisfactory value is found.

C. Support Vector Machine (SVM)
The Support Vector Machine is a learning algorithm that analyzes data for classification and regression. SVM separates, or discriminates between, two classes by finding the hyperplane that maximizes the margin between them. The stages of the SVM algorithm are shown in Figure 3 [17]. As the amount of training data grows, the rate of recognition of the SVM method tends to drop, because the computation time and the error rate increase, so this method does not produce satisfactory output for large datasets [18]. SVM is therefore best suited to small and irregular datasets, where it offers fast computation and good accuracy [19].
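The following is a small illustrative sketch (not from the paper) of SVM-based classification of face feature vectors, for example the eigenface weights computed earlier. The scikit-learn pipeline, the RBF kernel, and the data arrays are assumptions.

```python
# Illustrative SVM classification of face feature vectors.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_svm(X_train, y_train, X_test, y_test):
    """Train an SVM on face feature vectors and return the rate of recognition."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)          # X_train: (n_samples, n_features) feature vectors
    return clf.score(X_test, y_test)   # accuracy on the held-out set
```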

Eigenfaces, Fisherfaces, and SVM are relatively simple algorithms with low complexity in training and testing. The advantages and disadvantages of traditional algorithms include the following [10][14][20][21]:

Advantages:
1. Eigenface and Fisherface are simple and efficient methods.
2. Traditional methods can achieve a high rate of recognition, as SVM does.
3. Their computational complexity is low, so traditional methods are known for their speed in recognizing faces.

Disadvantages:
1. In complex cases, traditional methods incur high computational complexity and long running times.
2. Parameter setting is not simple.
3. The rate of recognition decreases for varied poses and illuminations (Eigenface); Fisherface mitigates this problem [22].
4. The recognition capability of the Eigenface and Fisherface methods is quite limited.
5. SVM only works well with small datasets.

D. Local Binary Pattern (LBP)
The Local Binary Pattern (LBP) is a well-known approach that has produced high accuracy in face detection. In LBP, every pixel is assigned a texture value, which can be naturally combined with a tracking target in thermographic and monochromatic video. The dominant uniform LBP patterns are used to recognize key points in the target region and then form a mask for joint color-texture feature selection [23]. The LBP code of a pixel (x_c, y_c) is given by Equation (4).

$LBP_{P,R}(x_c, y_c) = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p$ .........................(4)

$s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$ .........................(5)

where g_c is the gray value of the center pixel, g_p corresponds to the gray values of P equally spaced pixels on a circle of radius R, and s is the thresholding function defined in Equation (5). Fig. 4 shows an example of an LBP calculation.

Fig 4. Example of LBP calculation [24]
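A minimal sketch of Equations (4)-(5) for the basic 3x3 neighbourhood (P = 8, R = 1) is shown below; the neighbour ordering and the plain-Python implementation are illustrative assumptions, not the formulation used in [23].

```python
# Illustrative LBP code of one pixel using its 8 immediate neighbours (P = 8, R = 1).
import numpy as np

def lbp_code(img, xc, yc):
    """Return the basic 3x3 LBP code of pixel (xc, yc) in a grayscale image."""
    gc = img[yc, xc]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]    # (dy, dx), clockwise from top-left
    code = 0
    for p, (dy, dx) in enumerate(neighbours):
        gp = img[yc + dy, xc + dx]
        s = 1 if gp >= gc else 0                       # thresholding function, Eq. (5)
        code += s * (2 ** p)                           # weighted binary sum, Eq. (4)
    return code
```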

Fig 3. Support Vector Machine Method


III. DEEP LEARNING APPROACH
Deep learning is a recent area of computer vision and machine learning research that has been applied successfully to image dimensionality reduction and recognition. Deep learning produces many of the best accuracies by applying artificial neural networks built from multi-layer perceptrons (MLP). By using an architecture composed of several nonlinear transformations, deep learning is very helpful in solving problems that involve a large amount of data [9]. Several studies indicate that deep learning architectures are better suited than traditional methods to modern cases with complex problems such as computer vision and human language understanding. The same research notes that deep learning can solve complex problems by exploiting multilayer architectures, so the problem-solving process becomes shorter and the results more accurate. The multilayer structure also implements the subsampling process found in deep learning architectures, which makes deep learning very efficient for complex problems [25]. Deep learning is also efficient for prediction on known or unknown data and works well on a wide range of large datasets. It has been widely applied to visual scene understanding, speech recognition, face recognition, fingerprint recognition, iris recognition, and so on [8].

Face recognition can be developed with a deep learning approach, one example being the Convolutional Neural Network, commonly abbreviated CNN. The CNN approach is a neural network method able to identify the characteristic properties of an input image; it extracts the raw image into features that can then be classified. The CNN is robust to distortions and geometric transformations, and the layer that extracts the image features is called the convolutional layer [26]. A CNN has four layer types for handling shifts, scaling, and distortion [27]:

A. Convolutional Layer
This is the main layer underlying the CNN. Convolution is the process of repeatedly applying a function to the output of another function; in image processing, its purpose is to extract features from the input image. The result of this extraction is a linear transformation of the data.

B. Subsampling Layer
Subsampling reduces the size of the image data. In most CNNs, the widely used subsampling method is max pooling, which divides the output of the convolution layer into several small grids and arranges the highest value of each grid into a matrix.

C. Full Connection Layer
The full connection layer transforms the data dimensions so that the data can be classified linearly. Each neuron in the convolution layer needs to be transformed first so that no information is lost.

D. Output Layer
The output layer is the last layer and produces the result of the Convolutional Neural Network (CNN). Below is a workflow chart of the Convolutional Neural Network (CNN) [26]:

Fig 5. Architecture of CNN
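As an illustration only (not the architecture evaluated in the cited studies), the four layer types above can be assembled in Keras roughly as follows. The input size, filter counts, and number of classes are assumptions.

```python
# Illustrative small CNN with convolution, subsampling, full connection, and output layers.
import tensorflow as tf

num_classes = 40  # e.g. number of identities in the ORL dataset (assumption)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),                  # raw grayscale image input
    tf.keras.layers.Conv2D(32, (5, 5), activation="relu"),     # convolutional layer
    tf.keras.layers.MaxPooling2D((2, 2)),                      # subsampling (max pooling)
    tf.keras.layers.Conv2D(64, (5, 5), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),             # full connection layer
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```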

As with any method, the Convolutional Neural Network (CNN) has several advantages and disadvantages [8]:

Advantages:
1. It can be applied to images of various resolutions.
2. The computation is detailed, so the error rate tends to be small.
3. CNN is able to solve problems of high complexity involving many parameters to be computed.
4. It can classify face shapes in both known and unknown data.

Disadvantages:
1. It is not well suited to simple problems.
2. The training process takes a long time.
3. The computation is very complex, in direct proportion to the complexity of the problem at hand.
4. It struggles to describe faces in certain positions.

The accuracy level of the CNN is quite high. Several researchers have measured the rate of recognition in their work on face recognition using the CNN approach [8]:

TABLE I. REPORTED CNN ACCURACY
No.  Name        Rate of Recognition
1.   Manisha     85.1%
2.   Hurieh      83.62%
3.   Xiaozheng   70%

From the table we can see that deep learning is not necessarily the most optimal method for every problem, because the rate of recognition is not always high; it is strongly influenced by the dataset used. Several publicly available datasets are commonly used by researchers in face recognition, and research on different datasets can yield a different rate of recognition. Further development is therefore necessary to obtain an accurate method.

IV. COMPARISON OF METHODS
Non-deep learning and deep learning approaches each have the advantages and disadvantages discussed in the previous sections. The comparison of these


methods uses publicly available datasets: ORL, which consists of 400 images of 40 different people with different lighting, facial expressions (eyes closed/open, smiling), and facial details (with/without glasses), and the YALE database, which consists of 15 subjects with 11 images per subject showing different facial expressions. The comparison considers the rate of recognition (RoR) and the complexity of the algorithm used. A higher RoR value indicates that a method also has higher accuracy. On the other hand, a complex algorithm leads to high computational cost, so a simple algorithm is preferred in order to keep the complexity of the method low. Table II compares the deep learning and non-deep learning methods:

TABLE II. RATE OF RECOGNITION AND COMPLEXITY OF DEEP LEARNING AND NON-DEEP LEARNING METHODS
                 Rate of Recognition (%)
Method           ORL      YALE     Complexity
Eigenface [10]   89.7     77.9     Low
Fisherface [10]  87.7     85.2     Low
SVM [28]         95.3     90.6     Low-High
CNN [10]         92.6     93.3     Low-High
LBP [29]         88.11    85.75    Low-High

V. OPTIMIZATION METHOD
Although it performs well, deep learning still has some weaknesses, one of which is its high complexity. Other issues that always appear in face recognition are the varied facial poses, illumination, and facial expressions to be recognized. These problems call for a new method. One method developed for optimization is the Hybrid Face Recognition System, which combines the capabilities of the Convolutional Neural Network (CNN), the Support Vector Machine (SVM), and the Genetic Algorithm (GA). The purpose of this optimization method is to obtain high accuracy at low complexity. In this method, a genetic algorithm is used to find the optimal structure of the CNN, and the last layer of the CNN is then replaced by an SVM [30]. The optimization phases of the Hybrid Face Recognition System using GA, CNN, and SVM are as follows:

A. Determining the CNN Parameters Using a Genetic Algorithm (GA)
The Genetic Algorithm (GA) is a metaheuristic search technique based on the mechanisms of natural selection, genetics, and evolution. Genetic algorithms are commonly used to produce high-quality solutions for problems that require optimization [31]. In this study the genetic algorithm is used to determine the structure of the CNN. CNN design generally relies on exhaustive search: when a network has few parameters (a small number of neurons in the hidden layers), exhaustive search is feasible, but when a network has many parameters and the CNN structure becomes complex, exhaustive search is no longer possible, as in the study in [25], which uses 120 million parameters and requires large databases and computational power. This limitation is addressed by applying a Genetic Algorithm in place of exhaustive search.

Fig 6. Architecture of Genetic Algorithm

Based on the diagram above, three main stages are used to determine the structure of the CNN:
1) Selection is the operator used to determine which individuals are suitable for the system. It works like biological selection: each individual is evaluated, and unfit individuals are eliminated.
2) Crossover is the operator used to exchange information between individuals. It works like a reproductive system, in which individuals are combined to produce a new generation.
3) Mutation is the operation used to change an individual's characteristics to form a new individual [32].
The individuals in these stages represent the parameters of the CNN. Through the three stages above, the large number of candidate parameters is narrowed down before proceeding to the next stage. The parameters being optimized include the size of the receptive fields in each layer, the number of receptive fields in each layer, and the connections between successive layers, as sketched below.
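The sketch below illustrates, under assumptions made here (the parameter encoding, the candidate values, and the selection/crossover/mutation rules), how such a genetic search over CNN structure parameters could look; it is not the procedure of [30] or [32].

```python
# Illustrative genetic search over a few CNN structure parameters.
import random

FIELD_SIZES = [3, 5, 7]        # candidate receptive-field sizes (assumption)
FILTER_COUNTS = [16, 32, 64]   # candidate numbers of receptive fields per layer (assumption)

def random_individual():
    return [random.choice(FIELD_SIZES),
            random.choice(FILTER_COUNTS),
            random.choice(FILTER_COUNTS)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [random.choice(FIELD_SIZES if i == 0 else FILTER_COUNTS)
            if random.random() < rate else gene
            for i, gene in enumerate(ind)]

def evolve(fitness, pop_size=10, generations=5):
    """fitness(ind) would build, train, and validate a CNN, returning its accuracy."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                        # selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))    # crossover + mutation
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```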


B. Optimization of CNN and SVM
The previous stage produces the parameters used to construct the CNN structure, including how many layers the CNN should have. Once the CNN structure is created, processing starts at the first layer. Several processes take place in the CNN architecture. First, the first layer receives the input in the form of raw image pixels. Second, the second through fourth layers of the network alternate convolution layers with subsampling layers, which extract combined feature maps from their input; this lets the system perform feature extraction with many varied features. These layers act as the hidden layers of the CNN architecture [32]. The outputs of the hidden layers are then passed to the last layer, which is replaced by an SVM classifier. The principal aim of the SVM is to find the most appropriate decision boundary and to maximize the distance between the data classes. The hidden-layer outputs are taken by the SVM as feature vectors for the training process. This training stage is repeated until a good condition is reached; testing on the dataset can then be done by the SVM classifier using the automatically extracted features [32]. The structure of the GA-CNN-SVM hybrid model is shown below.

Fig 7. Structure of the GA-CNN-SVM Hybrid Method
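A minimal sketch of the final step, replacing the CNN's last layer with an SVM trained on hidden-layer features, is given below. The Keras/scikit-learn combination, the choice of layer index, and the data arrays are assumptions rather than the authors' implementation.

```python
# Illustrative hybrid model: CNN hidden layers as a feature extractor, SVM as the classifier.
import tensorflow as tf
from sklearn.svm import SVC

def hybrid_cnn_svm(cnn, X_train, y_train, X_test, y_test):
    """`cnn` is a trained Keras CNN like the one sketched earlier."""
    # Use all layers except the softmax output as a fixed feature extractor.
    feature_extractor = tf.keras.Model(inputs=cnn.input, outputs=cnn.layers[-2].output)
    train_features = feature_extractor.predict(X_train)
    test_features = feature_extractor.predict(X_test)
    svm = SVC(kernel="linear")
    svm.fit(train_features, y_train)         # SVM replaces the CNN's last layer
    return svm.score(test_features, y_test)  # rate of recognition of the hybrid model
```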

VI. CONCLUSION
From this study of deep learning and non-deep learning methods in face recognition, it can be concluded that non-deep learning methods have a lower rate of recognition than deep learning methods. This shows that the ability of deep learning in face recognition is higher for large datasets with high dimensionality. In terms of complexity, when non-deep learning methods are unable to solve problems of high complexity, deep learning becomes the right solution. The combination of non-deep learning and deep learning methods is an optimization solution that addresses the remaining shortcoming of deep learning by reducing its high complexity.

ACKNOWLEDGMENT
This work was supported by the Indonesia Endowment Fund for Education (LPDP) and Universitas Gadjah Mada.

REFERENCES
[1] Y. Cheng, Z. Jin, T. Gao, H. Chen, and N. Kasabov, "An improved collaborative representation based classification with regularized least square (CRC-RLS) method for robust face recognition," Neurocomputing, vol. 215, 2016.
[2] C. Beumier, "Face recognition," Natl. Sci. Technol. Counc., pp. 92–99, 2001.
[3] B. Heisele, P. Ho, and T. Poggio, "Face recognition with support vector machines: global versus component-based approach," Proc. 18th IEEE Int. Conf. Comput. Vis., vol. 2, pp. 688–694, Jul. 2001.
[4] B. Mellal et al., "A new approach for face recognition based on PCA & double LDA treatment combined with SVM," IOSR J. Eng., vol. 2, no. 4, pp. 685–691, 2012.
[5] A. Alapati and D. Kang, "An efficient approach to face recognition using a modified center-symmetric local binary pattern (MCS-LBP)," Int. J. Multimed. Ubiquitous Eng., vol. 10, no. 8, pp. 13–22, 2015.
[6] "Study of musical influence on face using the local binary pattern (LBP) approach," Int. J. Comput. Trends Technol., vol. 3, pp. 150–153, 2012.
[7] P. B. Patinge, "Local binary pattern based face recognition system," Int. J. Sci. Eng. Technol. Res., vol. 4, no. 5, pp. 1356–1361, 2015.
[8] M. M. Kasar, D. Bhattacharyya, and T. Kim, "Face recognition using neural network: a review," vol. 10, no. 3, pp. 81–100, 2016.
[9] M. Li, C. Yu, F. Nian, and X. Li, "A face detection algorithm based on deep learning," vol. 8, no. 11, pp. 285–296, 2015.
[10] A. Rikhtegar, M. Pooyan, and M. T. Manzuri-Shalmani, "Genetic algorithm-optimised structure of convolutional neural network for face recognition applications," pp. 559–566, 2016.
[11] W. Chao, "Face recognition," no. 1, pp. 1–57, 2010.
[12] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cogn. Neurosci., vol. 3, no. 1, pp. 71–86, Jan. 1991.
[13] M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," Proc. IEEE CVPR '91, pp. 586–591, 1991.
[14] H. Mokhtari, I. Belaidi, and S. Alem, "Performance comparison of face recognition algorithms based on face image retrieval," vol. 2, no. 12, pp. 65–73, 2013.
[15] N. Seo, "Eigenfaces and Fisherfaces," University of Maryland, ENEE633 Pattern Recognition Project 2-1, pp. 2–6.
[16] S. Jaiswal, MITS Gwalior, "Comparison between face recognition algorithm-Eigenfaces," available online at www.jgrcs.info, vol. 2, no. 7, pp. 3–9, 2011.
[17] F. Damayanti, A. Z. Arifin, and R. Soelaiman, "Pengenalan citra wajah menggunakan metode two-dimensional linear discriminant [Face image recognition using the two-dimensional linear discriminant method]," vol. 5, no. 3, pp. 147–156, 2010.
[18] "Face recognition using eigenface and support vector machine," pp. 4974–4980, 2014.
[19] Y. Li, S. Gong, J. Sherrah, and H. Liddell, "Support vector machine based multi-view face detection and recognition," vol. 22, pp. 413–427, 2004.
[20] F. L. Heng, L. M. Ang, and P. S. Kah, "A multiview face recognition system based on eigenface method," Proc. Int. Symp. Inf. Technol. 2008 (ITSim), vol. 2, 2008.
[21] Himanshu, S. Dhawan, and N. Khurana, "A review of face recognition," Int. J. Res. Eng. Appl. Sci., vol. 2, no. 2, pp. 921–939, 2012.
[22] D. P. Mankame and S. Nayeem, "Face recognition using PCA and LDA: analysis and …," vol. 9, no. 10, pp. 335–340, 2015.
[23] M. V. Gupta and D. Sharma, "A study of various face detection methods," vol. 3, no. 5, pp. 3–6, 2014.
[24] A. Hadid, "The local binary pattern approach and its applications to face analysis," vol. 4500, no. 2.
[25] M. Bianchini and F. Scarselli, "On the complexity of neural network classifiers: a comparison between shallow and deep architectures," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 8, pp. 1553–1565, 2014.
[26] H. Khalajzadeh, M. Mansouri, and M. Teshnehlab, "Face recognition using convolutional neural network and simple logistic classifier," Soft Comput. Ind. Appl., pp. 197–207, 2014.
[27] I. W. S. E. P, A. Y. Wijaya, and R. Soelaiman, "Klasifikasi citra menggunakan convolutional neural network (CNN) pada Caltech 101 [Image classification using convolutional neural networks (CNN) on Caltech 101]," vol. 5, no. 1, 2016.
[28] Y. Cheng, Z. Jin, T. Gao, H. Chen, and N. Kasabov, "An improved collaborative representation based classification with regularized least square (CRC-RLS) method for robust face recognition," Neurocomputing, vol. 215, pp. 250–259, 2016.
[29] X. W. Jung Meng and Yumao Gao, "Face recognition based on local binary patterns with threshold," IEEE Int. Conf. Granular Computing, no. 1, pp. 3–7, 2010.
[30] M. Pooyan and A. Rikhtegar, "Genetic algorithm-optimised structure of convolutional neural network for face recognition applications," vol. 7, no. 3, pp. 15–17, 2016.
[31] J. McCall, "Genetic algorithms for modelling and optimisation," J. Comput. Appl. Math., vol. 184, no. 1, pp. 205–222, Dec. 2005.
[32] M. Elleuch, R. Maalej, and M. Kherallah, "A new design based-SVM of the CNN classifier architecture with dropout for offline Arabic handwritten recognition," Procedia Comput. Sci., vol. 80, pp. 1712–1723, 2016.
