2013 IEEE International Conference on Systems, Man, and Cybernetics

Improved Neural Based Writer Adaptation for On-line Recognition Systems

Lobna Haddad (1), Tarek M. Hamdani (2), Adel M. Alimi (1)
(1) REGIM: REsearch Group on Intelligent Machines, University of Sfax, National Engineering School of Sfax (ENIS), BP 1173, Sfax 3038, Tunisia
(2) Taibah University, College of Science and Arts at Al-Ula, al-Madinah al-Munawwarah, KSA
Email: (lobna.haddad, tarek.hamdani, adel.alimi)@ieee.org


Abstract—The adaptation module is a Radial Basis Function Neural Network (RBF-NN) that can be connected to the output of any recognition system; its aim is to examine the output of the writer-independent system and produce a more correct output vector, close to the desired response. The proposed adaptation module is built using an incremental training algorithm named GA-AM (Growing-Adjustment Adaptation Module). Two adaptation strategies are applied: growing and adjustment. The growing criteria are based on estimating the significance of the new input and the significance of the nearest unit with respect to that input. The adjustment updates the parameters of two specific units (the nearest unit and the desired contributor) using standard LMS gradient descent to decrease the error whenever no new unit is allocated. This new training algorithm is evaluated by adapting two handwriting recognition systems. The results, reported in terms of cumulative error, show that the GA-AM algorithm decreases the classification error and instantly adapts the recognition system to a specific user's handwriting. A performance comparison of the GA-AM training algorithm with two other adaptation strategies, based on four writer-dependent datasets, is presented.

Index Terms—Incremental learning of RBF-NN; Writer Adaptation; Pattern Recognition

I. INTRODUCTION

Handwriting recognition is a challenging problem, given the large variability of writing styles and the need for large training and test databases collected from many writers to deal with this variability [17]. However, some writer-independent systems reach unsatisfactory performance when recognizing irregular or new writing styles. With the appearance of new handheld devices such as PDAs, smartphones and tablet PCs, researchers have turned to the design of recognition systems that adapt to a specific writer's style. Writer adaptation is a series of actions carried out to convert a writer-independent system into a writer-dependent one, in order to achieve an optimal recognition rate. Several methods are applied, depending on the type of classifier used. There are various kinds of writer adaptation (on-line, off-line, supervised, semi-supervised and unsupervised), which can be combined.

Most recognition systems adapt by modifying their parameter values. Firstly, we mention prototype-based systems, which can be adapted to a new writing style by reorganizing the standard prototype set or by adding a writer-dependent prototype set [14] [15] [16]. Secondly, we point out the systems that update their parameter values. Among others, we find the writer adaptation method based on incremental linear discriminant analysis (ILDA) [8], where adaptation is performed by updating the LDA transformation matrix and the classifier prototypes in the discriminative feature space. We also mention approaches based on HMMs [18] and on support vector machines, such as the system described in [19], where adaptation was realized by re-training the different SVMs with virtual examples. In addition, the system in [20] applied SVM-based multiple kernel learning, where support vectors were adapted to better model the decision boundary of a specific writer. Furthermore, there are systems that use adaptation methods without modifying the writer-independent system. In this case, we find the use of a module based on a Radial Basis Function Neural Network (RBF-NN) [2], [9], as well as Style Transfer Mapping, where the data of different writers are projected onto a style-free space, so that the writer-independent classifier needs no change to classify the transformed data and can achieve significantly higher accuracy [7].

In our case, the aim is to give any recognition system an adaptation capacity without losing the information acquired by training on the writer-independent database. This idea belongs to the field of incremental learning, which has been widely used for training RBF-NNs. To achieve this objective, we attach to the recognition system's output an adaptation module (AM) based on an RBF-NN [9]. The (AM) benefits from the property of this network that each unit responds to a local region of the input space.

The paper is organized as follows. Section II gives a brief description of different methods used for the incremental learning of RBF-NNs. Section III describes the proposed incremental method, while Section IV presents comparative experimental results using four writer-dependent datasets. Section V contains the discussion and conclusion.


II. INCREMENTAL LEARNING OF RADIAL BASIS FUNCTION NEURAL NETWORKS (RBF-NN)

The first incremental learning algorithm proposed for RBF-NNs was that of Platt [1], named the Resource Allocating Network (RAN). The RAN algorithm allows sequential learning of an RBF-NN that initially contains no hidden units and can add hidden units to extend its approximation ability when classification errors are reported. The algorithm is composed of two actions, depending on how the network performs on a new input: if the network performs poorly, a new unit is allocated, provided some growth criteria are satisfied; if the network performs well, the existing network parameters are updated using standard LMS gradient descent. Subsequently, improvements were made to this algorithm and other algorithms were proposed. We mention [11], which proposed a dynamic RBF pruning method in which the size of the network varies during training and which has been applied successfully in many real-world applications. In [3], the generalized growing and pruning (GGAP) training algorithm for RBF-NNs is applied; GGAP is a RAN-type algorithm that introduces a formula for computing the significance of the network units, so that the growing and pruning strategy links the required learning accuracy with the significance of the nearest or the new unit. [4] and [5] present an improved GAP-RBF that enhances both accuracy and speed, referred to as Fast GAP-RBF, in which the significance of the network units is estimated from the M most recently received training samples. The idea of exploiting a memory is also used in [10], where the memory holds representative input-output pairs; these pairs are selected from the training data and are learned together with newly given training data to suppress forgetting. In addition, the system in [6] uses self-adaptive, error-based control parameters to alter the training data sequence, evolve the network architecture and learn the network parameters, and it removes training samples that are similar to the knowledge already stored in the network. Finally, [12] presents different strategies for determining the architecture of a beta basis function neural network based on the data structure.

III. IMPROVEMENT OF RBF-NN BASED ADAPTATION MODULE

In order to achieve a writer adaptation that can be applied to any system, independently of the type of classifier implemented, we opt for a module that adapts the recognition system (RS). The Adaptation Module (AM), based on a Radial Basis Function Neural Network (RBF-NN), learns to recognize the incorrect output vectors produced by the (RS) and produces a more correct output vector, close to the desired response. To this end, the (AM) adds to the output of the recognition system (O^{RS}) an adaptation vector (A) to produce a writer-dependent output (AO):

AO_i = O_i^{RS} + A_i    (1)

The architecture of the writer-dependent recognition system is presented in Fig. 1.

[Fig. 1: The architecture of the writer-dependent recognition system including the writer adaptation module.]

The following notations are used in the equations below: I: input pattern, which is the output of the recognition system; N: number of units in the hidden layer; σ: width of an RBF unit; z: output of the hidden layer; i: output layer index; j: hidden layer index; C: RBF center; W: weights between output and hidden neurons; D: desired output. In our experiments, the target vector (D) is 1 for the neuron corresponding to the correct class and 0 otherwise. For a given pattern input (I, D), we calculate the RBF-NN output using the following equations:

z_j = \exp\left( -\frac{\sum_k (C_{jk} - I_k)^2}{\sigma_j^2} \right)    (2)

A_i = \sum_j z_j \, W_{ji}    (3)
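As an illustration of Eq. (1)-(3), the forward pass of the adaptation module can be sketched as follows. This is a minimal Python/NumPy sketch under assumed array shapes and variable names (centers C, widths sigma, weights W); it is not the authors' implementation.

import numpy as np

def am_forward(I, C, sigma, W, O_rs):
    """Forward pass of the adaptation module (Eq. 2, 3, 1).

    I     : (d,)   output of the writer-independent recognizer (the AM input)
    C     : (N, d) RBF centers, one row per hidden unit
    sigma : (N,)   RBF widths
    W     : (N, d) weights between hidden units and the output layer
    O_rs  : (d,)   writer-independent response O^RS (here identical to I)
    """
    if len(C) == 0:                      # no hidden units yet: the AM adds nothing
        return np.zeros(0), np.zeros_like(O_rs), O_rs
    # Eq. (2): Gaussian activation of each hidden unit
    z = np.exp(-np.sum((C - I) ** 2, axis=1) / sigma ** 2)
    # Eq. (3): adaptation vector as a weighted sum of the hidden activations
    A = z @ W
    # Eq. (1): writer-dependent response
    AO = O_rs + A
    return z, A, AO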

At the beginning, the (AM) contains no hidden neuron. After each misclassification, we apply an incremental learning algorithm, named GA-AM, so that the (AM) learns to correct the mistakes made by the (RS). GA-AM is a supervised, incremental algorithm divided into two phases: growing and adjustment. Before describing them, two quantities needed for the adaptation must be defined: the significance of the new input and the significance of the nearest neuron.

A. Definition and Estimation of the Neurons' Significance

The significance of the RBF hidden neurons was introduced by [3] and was used and improved in several later works [4], [5]. According to [3], the significance measures the information content of a neuron about the function to be learned and is defined as the contribution made by that neuron to the network output, averaged over all the input data received so far. This definition has been applied successfully to function approximation and classification problems. Our work is a particular, simplified case of RBF-NN adaptation; we therefore refer to the simplified formula presented in [4] to estimate the significance of specific neurons over a certain number of recent inputs. Given a new input, we first find the nearest unit in the existing RBF-NN, and we then estimate the significance of an intentionally added new neuron and the significance of that nearest neuron.

• The significance of an intentionally added new neuron

The significance of the new hidden neuron is its average contribution to the output based on all the input data already seen. In real use of devices incorporating on-line recognition systems, the response time and the storage capacity are two primary measures of performance. Accordingly, we estimate the novelty of the current input by referring only to the M most recently received inputs. The significance is calculated as follows [4]:

E_{sig}(I) = \frac{\|e_r\|}{M} \sum_{s=B}^{n} \exp\left( -\frac{\|I_s - I\|^2}{\kappa^2 \, \|I_s - C_{nearest}\|^2} \right)    (4)

where n is the total number of inputs already seen, I is the new input, M is the number of recently received inputs that must be remembered, and B = n - M + 1. The error produced by I is e_r = D - AO, and κ is an overlap factor that determines the overlap of the responses of the hidden neurons in the input space.

• The significance of the nearest neuron

Knowing that superfluous neurons can affect the system performance in both response time and response quality, we refine the growing cases by taking into account the significance of the nearest unit with respect to the input. This quantity is calculated as follows:

E_{sig}(nearest) = \|W_{nearest}\| \times z_{nearest}    (5)

where W_{nearest} denotes the weights between the output layer and the nearest neuron, and z_{nearest} is the output of the nearest neuron. This formula is a simplification of the one applied to classification problems in [4], which estimates the significance of the nearest unit over the M most recent inputs and uses it to prune hidden neurons of little significance. In our case, we consider only the current input to compute the significance of the nearest neuron.
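Both significance measures could be computed along the following lines, assuming the M most recent inputs are kept in a simple list (memory) and reusing the conventions of the previous sketch. This is only an illustration, not the paper's code; the small epsilon guarding against division by zero is an added assumption.

import numpy as np

def input_significance(I, memory, C_nearest, err, M, kappa):
    """Eq. (4): novelty of the new input I, estimated over the M most recent inputs."""
    acc = 0.0
    for I_s in memory[-M:]:                         # inputs I_B, ..., I_n with B = n - M + 1
        num = np.sum((I_s - I) ** 2)
        den = kappa ** 2 * np.sum((I_s - C_nearest) ** 2) + 1e-12
        acc += np.exp(-num / den)
    return np.linalg.norm(err) / M * acc            # err = D - AO

def nearest_significance(W_nearest, z_nearest):
    """Eq. (5): contribution of the nearest neuron for the current input only."""
    return np.linalg.norm(W_nearest) * z_nearest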

B. Writer Adaptation Strategies

The GA-AM algorithm is divided into two adaptation strategies, growing and adjustment. The adaptation steps are summarized in Algorithm 1.

Algorithm 1 GA-AM Algorithm: Adaptation Strategies
  for each observation (I, D) do
    Compute the overall writer-dependent recognition system output using Eq. (1, 2, 3)
    Compute the significance of the new input and of the nearest neuron using Eq. (4, 5)
    Apply the criteria for adding or adjusting neurons
    if the growing criteria are satisfied then
      Add a new neuron using Eq. (6, 7, 8)
    else
      Adjust the parameters of the existing neurons using Eq. (10, 11)
    end if
  end for

1) Growing Criteria: Basically, the RBF-NN begins with no hidden neurons, and the training inputs are presented sequentially to the system. When the user reports a misclassification and specifies the correct class, we examine the quality of the current input, i.e. whether it is different, new and unusual. This leads us to estimate the significance (i.e. the novelty) of the new input using Eq. (4) and the significance of the nearest unit with respect to the input using Eq. (5). If it is the first time that an error is reported, a new RBF unit is allocated automatically. Otherwise, only some of the inputs should initiate new hidden neurons. To reach the desired objectives with a small number of RBF centers, we use the following growing criteria (cr1, cr2 and cr3):

\|I - C_{nearest}\| > d_{min}    (cr1)
E_{sig}(I) > e1_{min}    (cr2)
E_{sig}(nearest) < e2_{min}    (cr3)

where d_{min} is a threshold corresponding to the minimal distance, C_{nearest} is the center of the neuron nearest to the input I, and e1_{min} and e2_{min} are the desired approximation accuracies. The growing criteria are thus satisfied when the input is considered far from the existing units (the conventional RAN criterion [1]) and either novel, i.e. E_{sig}(I) is greater than the required approximation accuracy e1_{min}, or the nearest unit is insignificant for this input, i.e. E_{sig}(nearest) is less than the approximation accuracy e2_{min}.
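To make the decision logic of Algorithm 1 and of the criteria (cr1)-(cr3) concrete, a per-observation update could be sketched as follows in Python. The functions am_forward, input_significance and nearest_significance are the hypothetical helpers sketched above; allocate_unit and adjust_units stand for the growing and adjustment steps and are sketched after Algorithm 3 below. The dictionary layout of net and params is an assumption made for the example.

import numpy as np

def ga_am_step(I, D, net, mem, params):
    """One GA-AM update (Algorithm 1), applied when a misclassification is reported."""
    z, A, AO = am_forward(I, net['C'], net['sigma'], net['W'], I)
    err = D - AO
    if len(net['C']) == 0:
        # first reported error: always allocate (the width used here is an assumption)
        allocate_unit(net, I, D, nearest_dist=params['d_min'])
    else:
        dists = np.linalg.norm(net['C'] - I, axis=1)
        j_near = int(np.argmin(dists))                       # nearest existing unit
        e_new = input_significance(I, mem, net['C'][j_near], err,
                                   params['M'], params['kappa'])
        e_near = nearest_significance(net['W'][j_near], z[j_near])
        cr1 = dists[j_near] > params['d_min']                # input far from existing units
        cr2 = e_new > params['e1_min']                       # input is novel
        cr3 = e_near < params['e2_min']                      # nearest unit is insignificant
        if cr1 and (cr2 or cr3):                             # growing case (Algorithm 2)
            allocate_unit(net, I, D, nearest_dist=dists[j_near])
        else:                                                # adjustment case (Algorithm 3)
            adjust_units(net, I, D, AO, z, j_near, params)
    mem.append(I)                                            # remember the input for Eq. (4)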

To summarize, a new hidden unit is allocated following the steps described in Algorithm 2.

Algorithm 2 GA-AM Growing Case
  if cr1 AND (cr2 OR cr3) then
    Allocate a new hidden neuron (N+1) with:
    1) The input becomes the center of the new unit:
       C_{N+1} = I    (6)
    2) The weights of the connections between the new unit and the output layer correspond to the desired output:
       W_{N+1} = D    (7)
    3) To avoid overlap between the regions of different RBF units, the width of the new unit is set to the distance between the input and its nearest unit:
       \sigma_{N+1} = d(I, nearest)    (8)
  else
    Adjust the parameters of the existing neurons
  end if

2) Parameter Adjustment of RBF-NN: To carry out this adaptation strategy, we need to determine two particular units, the nearest unit and the desired contributor (Dc). First, we seek the nearest unit, i.e. the unit whose center is at minimal distance from the current input (I). Second, we determine the desired contributor unit (Dc), the unit that contributes the most to the final output of the system; it is designated according to the sensitivity of a unit to the output error for the current input. The (Dc) unit is found using Eq. (9), where o is the position of the maximum desired output:

Dc = \arg\max_j (z_j \times W_{jo})    (9)

Thus, we update the parameters (center and weights) of either the nearest neuron alone or of the two neurons, nearest and (Dc). The two cases are distinguished according to the distance d(nearest, Dc) between the nearest and the desired contributor units: basically, only the nearest unit is adjusted, but if d(nearest, Dc) is lower than the minimal-distance threshold d_{min}, then the (Dc) unit is also updated. The adjustment case is described in Algorithm 3.

Algorithm 3 GA-AM Adjustment Case
  if the growing criteria are not satisfied then
    Adjust the parameters of the nearest unit using Eq. (10, 11)
    if d(nearest, Dc) < d_{min} then
      Adjust the parameters of the desired contributor unit using Eq. (10, 11)
    end if
  end if

Previous research on the sequential learning of RBF-NNs has used either standard LMS gradient descent or the Extended Kalman Filter (EKF) algorithm. Given the adaptation-time and memory-size constraints, we opt for standard LMS gradient descent to decrease the error whenever no new unit is allocated. This is done using the following equations:

\Delta C_{jk} = \frac{\alpha}{\sigma_j^2} (I_k - C_{jk}) \, z_j \, [(\vec{D} - \vec{AO}) \cdot \vec{W}_j]    (10)

\Delta \vec{W}_j = \alpha \, (\vec{D} - \vec{AO}) \, z_j    (11)
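A minimal sketch of the growing step (Eq. 6-8) and of the LMS adjustment of Algorithm 3 (Eq. 9-11), under the same assumed data layout as the previous snippets (net as a dict of NumPy arrays 'C', 'W', 'sigma', possibly empty), could be:

import numpy as np

def allocate_unit(net, I, D, nearest_dist):
    """Algorithm 2: add unit N+1 with center I (Eq. 6), output weights D (Eq. 7)
    and width equal to the distance to the nearest unit (Eq. 8)."""
    I = np.asarray(I, dtype=float)
    D = np.asarray(D, dtype=float)
    net['C'] = np.vstack([net['C'], I]) if len(net['C']) else I[None, :]
    net['W'] = np.vstack([net['W'], D]) if len(net['W']) else D[None, :]
    net['sigma'] = np.append(net['sigma'], float(nearest_dist))

def adjust_units(net, I, D, AO, z, j_near, params):
    """Algorithm 3: LMS update (Eq. 10, 11) of the nearest unit and, when it lies
    within d_min of the desired contributor unit (Eq. 9), of that unit as well."""
    o = int(np.argmax(D))                            # position of the desired maximum output
    j_dc = int(np.argmax(z * net['W'][:, o]))        # Eq. (9): desired contributor unit
    units = {j_near}
    if np.linalg.norm(net['C'][j_near] - net['C'][j_dc]) < params['d_min']:
        units.add(j_dc)
    err = D - AO
    for j in units:
        # Eq. (10): move the center toward the input, scaled by the output error
        net['C'][j] += (params['alpha'] / net['sigma'][j] ** 2) \
                       * (I - net['C'][j]) * z[j] * np.dot(err, net['W'][j])
        # Eq. (11): update the unit's output weights
        net['W'][j] += params['alpha'] * err * z[j]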

IV. EXPERIMENTS AND RESULTS

In this section, we present experimental results using two writer-independent recognition systems. These systems were developed with LipiTk, a generic toolkit whose aim is to facilitate the development of on-line handwriting recognition engines [13], available at http://lipitk.sourceforge.net. The IRONOFF handwriting database was used to train the recognizers. To evaluate the performance of the GA-AM, we connect the adaptation module to the output of two writer-independent recognition systems, one for numerals and one for alphanumeric characters; the output size of the (AM) therefore depends on the size of the input vectors produced by the independent recognition system. Furthermore, we use writer-dependent datasets of four writers, 'Nes', 'Hen', 'Ami' and 'Man', each of whom was asked to write at least twenty examples of every character class: digits [0-9] and lowercase letters [a-z]. The recognition rates of the writer-independent systems are presented in Table I.

TABLE I: Recognition rate of the numeral and alphanumeric systems (without adaptation)

Writer | Numeral Recognition System | Alphanumeric Recognition System
'Nes'  | 94%                        | 88%
'Hen'  | 88%                        | 83%
'Man'  | 91%                        | 90%
'Ami'  | 92%                        | 92%

After several experiments, and with reference to the experiments in [4], we chose the following parameter values to train the GA-AM: threshold d_{min} = 0.2, learning rate α = 0.02, approximation accuracies e1_{min} = 0.01 and e2_{min} = 0.5, and κ = 0.8. The Euclidean distance is used to compute the distance between unit centers and inputs. There remains the memory size M to be determined. From a statistical point of view, to obtain a true value of the neuron significance, the memory size should be as large as the input sample size, which in our case equals the number of classes of each recognition system. Since the GA-AM needs to remember the M observations already seen, it is important to examine, during the simulations, the impact of the memory size M on the performance of the adapted systems. For the alphanumeric case, the results of this study are shown in Fig. 2(a): for each of the four datasets, as M varies from 30 to 100, the minimum error is obtained with M = 40. The effect of M on neuron growth for the writer 'Man' is shown in Fig. 2(b): increasing M reduces the number of hidden neurons and the growth rate. However, a large M also increases the computational load, so seeking the optimal value of M amounts to a compromise between efficiency and computational load. The same tests were made for the numeral system, for which the optimal memory size is M = 13.
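Collecting the parameter values quoted above, the configuration passed to the hypothetical ga_am_step sketch could look like this; the dictionary layout itself is an assumption matching the earlier sketches, not part of the original implementation.

ga_am_params = {
    'd_min': 0.2,    # minimal-distance threshold (cr1)
    'alpha': 0.02,   # LMS learning rate (Eq. 10, 11)
    'e1_min': 0.01,  # required novelty of the input (cr2)
    'e2_min': 0.5,   # maximal significance of the nearest unit (cr3)
    'kappa': 0.8,    # overlap factor in Eq. (4)
    'M': 40,         # memory size (40 for the alphanumeric system, 13 for the numeral one)
}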

In addition, to show the effectiveness of the GA-AM, we report the cumulative errors made during interactive use of the device for the different writer-dependent datasets, and we compare the results with two other adaptation strategies: the Platt strategies [2] and the AM strategies [9]. The growing criterion for these two strategies is (cr1); in the adjustment case, Platt updates only the weights of the nearest neuron, while AM updates the weights and the center of the nearest unit. Fig. 3 shows the results for four cases: without adaptation, and using the Platt, AM and GA-AM strategies. For clarity of presentation, for the writer 'Ami' we only display the cases without adaptation and with the GA-AM strategies, because the other curves are superimposed.

[Fig. 2: Impact of memory size. (a) Influence of the memory size M on adaptation performance for the four writers. (b) Neuron growth for different M (M = 40 and M = 100) using the adaptation module for writer 'Man'.]

[Fig. 3: The cumulative number of errors with and without adaptation (Platt, AM and GA-AM strategies) for the writers 'Nes', 'Hen', 'Man' and 'Ami', in the case of the alphanumeric recognition system.]

Fig. 3 shows the baseline cumulative error without adaptation; the total number of character errors from the moment adaptation started is also plotted, giving an estimate of the instantaneous error rate. We note that, for all writers of the alphanumeric recognition system, the slopes decrease when GA-AM is applied, compared with the other strategies. Furthermore, quantitative results are given in Table II, which reports the cumulative errors obtained with and without adaptation for both recognition systems under the different adaptation strategies. As can be seen from Table II, the GA-AM strategies considerably decrease the number of errors for both recognition systems on the four writer-dependent datasets. With regard to the number of neurons allocated by each adaptation strategy, GA-AM in most cases increases this number slightly, but this is negligible compared with the error-rate reduction achieved. Moreover, the GA-AM is very fast: for the alphanumeric recognition system it takes at most 0.009 s to add a new unit and between 0.13 s and 0.25 s to adjust units, while for the numeral recognition system adding a new unit takes at most 0.0001 s and the adjustment takes between 0.06 s and 0.16 s. The variation in adjustment time is due to the fact that in some cases only the nearest unit is updated, whereas in others both the nearest and the (Dc) units are updated.

To study the adaptation behaviour according to the response of the writer-independent recognition system, Table III reports, for each writer, the response of the writer-independent recognition system and what it becomes after adaptation. The notations used in Table III are defined as follows:
• cor→cor: number of responses that are correct by the (RS) and remain correct after adaptation
• cor→err: number of responses that are correct by the (RS) and become incorrect after adaptation
• err→cor: number of responses that are incorrect by the (RS) and become correct after adaptation
• err→err: number of responses that are incorrect by the (RS) and remain incorrect after adaptation

TABLE III: Efficiency of the recognition systems using the GA-AM algorithm

Numeral Recognition System
Writer | cor→cor | cor→err | err→cor | err→err
'Nes'  | 186     | 0       | 11      | 3
'Hen'  | 163     | 0       | 21      | 16
'Man'  | 184     | 0       | 15      | 1
'Ami'  | 182     | 0       | 13      | 5

Alphanumeric Recognition System
Writer | cor→cor | cor→err | err→cor | err→err
'Nes'  | 623     | 1       | 75      | 22
'Hen'  | 575     | 0       | 82      | 63
'Man'  | 626     | 0       | 57      | 37
'Ami'  | 629     | 0       | 45      | 46

According to these results, we emphasize the value of cor→err, which can be considered a reliability measure of the GA-AM algorithm: the GA-AM algorithm tries to correct the mistaken responses of the independent recognition system without decreasing its efficiency, i.e. without generating new classification errors.
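The cumulative-error curves of Fig. 3 and the transition counts of Table III come from simple bookkeeping during the interactive session. One possible way to accumulate them, assuming the hypothetical ga_am_step and am_forward interfaces from the earlier sketches, is shown below; it is an illustration of the evaluation protocol, not the authors' code.

import numpy as np

def evaluate_writer(samples, net, mem, params):
    """Accumulate cumulative errors and cor/err transition counts for one writer.

    samples: iterable of (O_rs, D) pairs, where O_rs is the writer-independent
             response vector and D the one-hot desired output.
    """
    counts = {'cor->cor': 0, 'cor->err': 0, 'err->cor': 0, 'err->err': 0}
    cumulative_errors = []
    n_err = 0
    for O_rs, D in samples:
        _, _, AO = am_forward(O_rs, net['C'], net['sigma'], net['W'], O_rs)
        rs_ok = np.argmax(O_rs) == np.argmax(D)        # writer-independent decision
        am_ok = np.argmax(AO) == np.argmax(D)          # writer-dependent decision
        counts[('cor->' if rs_ok else 'err->') + ('cor' if am_ok else 'err')] += 1
        n_err += int(not am_ok)
        cumulative_errors.append(n_err)
        if not am_ok:                                  # adapt only on reported errors
            ga_am_step(O_rs, D, net, mem, params)
    return counts, cumulative_errors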

TABLE II: Performance comparison of different adaptation strategies for both recognition systems

Numeral Recognition System (200 words written per writer)
Writer | Errors without adaptation | Errors with adaptation (Platt / AM / GA-AM) | Error reduction (Platt / AM / GA-AM) | Allocated neurons (Platt / AM / GA-AM)
'Nes'  | 12  | 8 / 7 / 3    | 33% / 42% / 75% | 10 / 13 / 15
'Hen'  | 24  | 23 / 20 / 16 | 4% / 16% / 33%  | 22 / 23 / 23
'Man'  | 17  | 10 / 8 / 1   | 41% / 52% / 94% | 11 / 11 / 16
'Ami'  | 15  | 12 / 11 / 5  | 20% / 27% / 66% | 5 / 7 / 12

Alphanumeric Recognition System (720 words written per writer)
'Nes'  | 84  | 35 / 30 / 22 | 58% / 64% / 73% | 36 / 36 / 34
'Hen'  | 121 | 84 / 74 / 63 | 30% / 39% / 47% | 72 / 62 / 64
'Man'  | 71  | 55 / 40 / 37 | 22% / 43% / 47% | 24 / 23 / 24
'Ami'  | 52  | 49 / 48 / 46 | 6% / 8% / 11%   | 40 / 37 / 37

V. CONCLUSION

In this paper, improvements to the adaptation module (AM) algorithm have been developed to make it more efficient. The incremental learning is carried out using the GA-AM algorithm, which consists of two strategies, growing and adjustment. The growing criteria require estimating the significance of the new input and the significance of the nearest unit, using a memory that stores the M most recently seen samples. The adjustment updates the parameters of two specific units (the nearest unit and the desired contributor) using standard LMS gradient descent to decrease the error whenever no new unit is allocated. A performance comparison of GA-AM with two other adaptation strategies on two recognition systems was carried out. As a result, in terms of cumulative errors, the GA-AM algorithm reduces the error on average by 67% for the numeral recognition system and by 44% for the alphanumeric recognition system.

ACKNOWLEDGMENT

The authors would like to acknowledge the financial support of this work by grants from the General Direction of Scientific Research (DGRST), Tunisia, under the ARUB program.

REFERENCES

[1] J. Platt, A Resource-Allocating Network for Function Interpolation, Neural Computation, vol. 3, no. 2, pp. 213-225, 1991.
[2] J. Platt and N. P. Matic, A Constructive RBF Network for Writer Adaptation, Advances in Neural Information Processing Systems, vol. 9, no. 1, pp. 765-771, 1997.
[3] G. B. Huang, P. Saratchandran and N. Sundararajan, A Generalized Growing and Pruning RBF (GGAP-RBF) Neural Network for Function Approximation, Neural Networks, vol. 16, no. 1, pp. 57-67, 2005.
[4] R. Zhang, G. B. Huang, N. Sundararajan and P. Saratchandran, Improved GAP-RBF Network for Classification Problems, Neurocomputing, vol. 70, no. 16-18, pp. 3011-3018, 2007.
[5] M. Bortman and M. Aladjem, A Growing and Pruning Method for Radial Basis Function Networks, Neural Networks, vol. 20, no. 6, pp. 1039-1045, 2009.
[6] S. Suresh, K. Dong and H. J. Kim, A Sequential Learning Algorithm for Self-Adaptive Resource Allocation Network Classifier, Neurocomputing, vol. 73, no. 16-18, pp. 3012-3019, 2010.
[7] X. Y. Zhang and C. L. Liu, Writer Adaptation with Style Transfer Mapping, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 99, no. 1, pp. 1-15, 2012.
[8] L. Jin, K. Ding and Z. Huang, Incremental Learning of LDA Model for Chinese Writer Adaptation, Neurocomputing, vol. 73, no. 10-12, pp. 1614-1623, 2010.
[9] L. Haddad, T. M. Hamdani, M. Kherallah and A. M. Alimi, Improvement of On-line Recognition Systems Using a RBF-Neural Network Based Writer Adaptation Module, in Proceedings of the International Conference on Document Analysis and Recognition, pp. 284-288, 2011.
[10] S. Ozawa, An Autonomous Incremental Learning Algorithm of Resource Allocating Network for Online Pattern Recognition, in Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, 2010.
[11] L. Yingwei, N. Sundararajan and P. Saratchandran, A Sequential Learning Scheme for Function Approximation Using Minimal Radial Basis Function (RBF) Neural Networks, Neural Computation, vol. 9, pp. 461-478, 1997.
[12] T. M. Hamdani, A. M. Alimi and F. Karray, Enhancing the Structure and Parameters of the Centers for BBF Fuzzy Neural Network Classifier Construction Based on Data Structure, in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pp. 3174-3180, 2008.
[13] S. Madhvanath and D. V. T. M. Kadiresan, LipiTk: A Generic Toolkit for Online Handwriting Recognition, in Proceedings of the ACM SIGGRAPH, 2007.
[14] L. Prevost and L. Oudot, Self-Supervised Adaptation for On-line Script Text Recognition, Electronic Letters on Computer Vision and Image Analysis, vol. 5, no. 1, pp. 87-97, 2005.
[15] A. Nakamura, A Method to Accelerate Writer Adaptation for On-Line Handwriting Recognition of a Large Character Set, in Proceedings of the Ninth IWFHR, pp. 426-431, 2004.
[16] H. Mouchere, E. Anquetil and N. Ragot, Writer Style Adaptation in On-line Handwriting Recognizers by a Fuzzy Mechanism Approach: The ADAPT Method, International Journal of Pattern Recognition and Artificial Intelligence, vol. 21, no. 1, pp. 99-116, 2007.
[17] M. Kherallah, L. Haddad, A. M. Alimi and A. Mitiche, On-line Handwritten Digit Recognition Based on Trajectory and Velocity Modeling, Pattern Recognition Letters, vol. 29, no. 5, pp. 580-594, 2008.
[18] M. Liwicki, A. Schlapbach and H. Bunke, Writer-Dependent Recognition of Handwritten Whiteboard Notes in Smart Meeting Room Environments, in Proceedings of the Eighth DAS, pp. 151-157, 2008.
[19] H. Miyao and M. Maruyama, Writer Adaptation for Online Handwriting Recognition System Using Virtual Examples, in Proceedings of the Tenth ICDAR, pp. 1156-1160, 2009.
[20] N. C. Tewari and A. M. Namboodiri, Learning and Adaptation for Improving Handwritten Character Recognizers, in Proceedings of the Tenth ICDAR, pp. 86-90, 2009.