2014 14th International Conference on Frontiers in Handwriting Recognition
OHRS-MEWA: On-line Handwriting Recognition System with Multi-Environment Writer Adaptation
Lobna Haddad¹, Tarek M. Hamdani², Adel M. Alimi¹
¹ REGIM-Lab: REsearch Groups in Intelligent Machines, University of Sfax, National Engineering School of Sfax (ENIS), BP 1173, Sfax, 3038, Tunisia
² Taibah University, College of Science and Arts at Al-Ula, al-Madinah al-Munawwarah, KSA
Email: (lobna.haddad, tarek.hamdani, adel.alimi)@ieee.org
reorganizing the prototypes of the database (addition, modification and deletion) [10], [11], [12], systems updating the parameters of the recognition system [15], [14], [13], [16], [6] and systems adapting without modifying the classifier’s parameters [7], [8], [1], [5].
Abstract—Writer adaptation arose with the appearance and the heavy use of handheld devices. These devices are conceived to be used in diverse user settings, which can be stationary or mobile. Most works tackle writer adaptation in the "sitting at a desk" environment; nevertheless, we notice a lack of contributions in the multi-environment context. In this paper we present a multi-environment writer adaptation technique to improve the accuracy of a writer-independent recognition system. Our system is based on an adaptation module (AM) which can greatly decrease the error rate without changing the writer-independent system. The (AM) is built using IGA-AM, an incremental learning algorithm. First, we test the performance of IGA-AM on the LaViola dataset against the GA-AM algorithm for writer adaptation. Second, we test the recognition accuracy by taking into account the writing style changes caused by environment changes. Thus the system contains as many adaptation modules as handled environments. In this paper we consider two stationary environments: sitting at a desk and standing. Finally, results on a multi-environment dataset (REGIM-MEnv) are presented.
I. INTRODUCTION
Despite this already thorough research, we focused on two opportunities for further study. First, the developed systems used data written while the user is sitting. However, handheld devices can be used not only while the user is stationary (sitting or standing), but also in mobile settings (walking, going up/down stairs) or in a car, train or subway. All these situations, or environments, affect the user's writing style to different degrees. To perform writer adaptation, we need to consider these environments to increase the performance of writer-dependent systems. Consequently, we developed a multi-environment writer adaptation that handles stationary settings (sitting at a desk and standing). Second, applications for mobile tactile interfaces are essentially used by the handheld device owner, but occasionally they can be used by other people. Hence, writer adaptation must be executed only when the user is the owner, and our system is conceived for this scenario. Both cases are distinguished by user identification (a password). If the user is the owner, writer adaptation is activated. Otherwise, the writer-independent recognition system responds without adaptation, even if there are classification errors.
Today, handheld devices (tablets, notebooks, smartphones, PDAs, ...) are rapidly permeating our lives and are intended to enhance the user experience by making interfaces more realistic, intuitive, and easy to use. Consequently, users spend more and more time interacting with these devices than with conventional desktop or laptop computers. The virtual keyboard is the commonly supported method for communicating or interacting with tactile devices, while the natural human methods are writing (handwriting recognition) and talking (speech recognition).
In the next section, we describe the on-line handwriting recognition system (OHRS) with writer adaptation, applying the IGA-AM algorithm for sequential learning of the adaptation module. Section III presents the architecture of the OHRS with multi-environment writer adaptation. In the experimental section (Section IV), the results of incorporating an adaptation module into the writer-independent recognition system are shown and discussed using a benchmarking dataset. In addition, we provide results incorporating an adaptation module per user environment; in this case, a writer-dependent dataset and results are presented.
Automatic on-line handwriting recognition has been an ongoing challenging problem for several decades. The ability of these devices to recognize natural handwriting, produced using the index finger of the user's dominant hand or a stylus, is a powerful feature which opens new avenues of research. Initially, recognition systems must be able to recognize a large number of writing styles (writer-independent recognition systems), which yields unsatisfactory performance for the recognition of irregular or new writing styles. On the other hand, writer-dependent systems trained on a specific user's handwriting can reach higher accuracy and outperform a writer-independent system. The transition from independent to dependent recognition systems is done by applying writer adaptation techniques.
II. NEURAL BASED WRITER ADAPTATION
In order to achieve a writer adaptation that can be applied to any system independently of the implemented classifier type, we opt to use a module to adapt the recognition system (RS). The Adaptation Module (AM) is based on the Radial Basis Function Neural Network (RBF-NN). The RBF-NN is considered the most convenient network for sequential learning because of its local response nature, simple topology and fast convergence. The (AM) is added after the recognition system,
There has been extensive research in the field of writer adaptation, which can be classified into three groups: systems
Given a new input, we must find the nearest unit to it among the existing RBF-NN units. We then estimate the significance of an intentionally added new unit and the significance of the nearest unit.
and its role is to examine the writer-independent output and produce a corrected output vector, closer to the desired response of the user. In this way, the (AM) adds an adaptation vector (A), which is the RBF-NN output, to the writer-independent output of the recognition system to produce a writer-dependent output:

WD_output = WI_output + A    (1)
i) The significance of the intentionally added new unit
The significance of the new hidden unit is its average contribution to the output based on all the input data already seen [2]. In real use of devices incorporating on-line recognition systems, response time and storage capacity are two primary measures of performance. Accordingly, we estimate the novelty of the current input by referring only to a limited number of samples (the M most recently received inputs). The significance is calculated as follows [3]:

E_sig(I) = (∥er∥ / M) Σ_{s=B}^{n} exp( − ∥I_s − I∥² / (κ² ∥I_s − C_nearest∥²) )    (4)
The architecture of the on-line handwriting recognition system with writer adaptation (OHRS-WA) is presented in Fig. 1. The following notations are used in the equations below: I: input pattern, which is the output of the writer-independent recognition system; N: number of units in the hidden layer; σ: RBF width; z: hidden-layer output; i: output-layer index; j: hidden-layer index; C: RBF center; W: weight between output and hidden neurons; D: desired output. In our experiments, the target vector (D) is 1 for the neuron corresponding to the correct class and 0 otherwise.
Where n is the total number of inputs already seen, I is the new input, M is the number of recently received inputs that must be remembered, and B = n − M + 1. The error produced by I is er = D − AO. κ is an overlap factor that determines the overlap of the responses of the hidden neurons in the input space.
For a given input pattern (I, D), we calculate the RBF-NN output using the following equations:

z_j = exp( − Σ_k (C_jk − I_k)² / σ_j² )    (2)

A_i = Σ_j z_j × W_ji    (3)
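As an illustration, the forward pass of the adaptation module (Eqs. 2-3) and the output combination (Eq. 1) can be sketched in Python with NumPy; the function and variable names below are our own, not from the paper:

```python
import numpy as np

def rbf_forward(I, C, sigma, W):
    """Adaptation-module forward pass (sketch).

    I     : (d,) input pattern (the writer-independent output vector)
    C     : (N, d) RBF centers
    sigma : (N,) RBF widths
    W     : (N, c) weights between hidden and output layers
    """
    # Eq. 2: z_j = exp(-sum_k (C_jk - I_k)^2 / sigma_j^2)
    z = np.exp(-np.sum((C - I) ** 2, axis=1) / sigma ** 2)
    # Eq. 3: A_i = sum_j z_j * W_ji
    A = z @ W
    return z, A

def writer_dependent_output(WI_output, A):
    # Eq. 1: the adaptation vector is added to the writer-independent output
    return WI_output + A
```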
ii)
The significance of the nearest unit
According to [3], the significance of a hidden unit is calculated over the M most recent inputs and is used for pruning: the hidden unit with little significance is removed from the network. Although removing a hidden neuron is an important step in classification problems, it does not make sense in writer adaptation. We therefore calculate the significance of the nearest unit with respect to the current input only. This information is calculated as follows:
At the beginning, the (AM) contains no hidden neurons. After each misclassification, we apply an incremental learning algorithm, named IGA-AM later on, so that the (AM) learns to correct the mistakes made by the OHRS. IGA-AM is a supervised, incremental algorithm divided into two phases: growing and adjustment. The adaptation steps are summarized in Algorithm 1.
E_sig(nearest) = ∥W_nearest∥ × z_nearest    (5)
Algorithm 1 IGA-AM Algorithm: Adaptation Strategies
Where W_nearest is the weight vector between the output layer and the nearest unit, and z_nearest is the output of the nearest unit.
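The two significance measures (Eqs. 4-5) can be sketched as follows; this is a minimal illustration, and the function and parameter names are our own assumptions:

```python
import numpy as np

def significance_new_input(I, recent_inputs, err, C_nearest, kappa, M):
    """Eq. 4: estimated contribution of a tentatively added unit, averaged
    over the M most recently seen inputs I_B ... I_n (sketch)."""
    total = 0.0
    for I_s in recent_inputs:
        num = np.sum((I_s - I) ** 2)                  # ||I_s - I||^2
        den = kappa ** 2 * np.sum((I_s - C_nearest) ** 2)
        total += np.exp(-num / den)
    return np.linalg.norm(err) / M * total

def significance_nearest(W_nearest, z_nearest):
    """Eq. 5: contribution of the existing nearest unit to the current input."""
    return np.linalg.norm(W_nearest) * z_nearest
```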
for each observation (I, D) do
  Compute the overall writer-dependent recognition system output using Eqs. (1, 2, 3)
  Compute the significance of the new input and of the nearest neuron using Eqs. (4, 5)
  Apply the criteria for adding or adjusting neurons:
  if the growing criteria are satisfied then
    Add a new neuron using Eqs. (6, 7, 8)
  else
    Adjust the parameters of existing neurons using Eqs. (11, 12)
  end if
end for
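The adaptation step of Algorithm 1 can be summarized in code. This is a structural sketch only: `module` is a hypothetical object, and all of its method names are our own assumptions rather than the paper's API:

```python
def iga_am_step(I, D, module, d_min, e1_min, e2_min):
    """One IGA-AM step after a misclassification (sketch of Algorithm 1)."""
    z, A = module.forward(I)                        # Eqs. (2)-(3)
    wd_output = module.wi_output(I) + A             # Eq. (1)
    e_new = module.significance_new_input(I, D)     # Eq. (4)
    e_near = module.significance_nearest(I)         # Eq. (5)
    cr1 = module.distance_to_nearest(I) > d_min
    if cr1 and (e_new > e1_min or e_near < e2_min):  # growing criteria
        module.add_unit(I, D)                       # Eqs. (6)-(9)
    else:
        module.adjust(I, D)                         # Eqs. (11)-(12)
    return wd_output
```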
B. Writer Adaptation Strategies

On-line supervised adaptive training of an RBF-NN is generally based on two steps: growing, by adding a new unit, and adjustment, by updating or pruning an existing unit. Likewise, our proposed IGA-AM algorithm implements two adaptation strategies, growing and adjustment, without pruning.

i) Growing Criteria
Basically, the RBF-NN begins with no hidden neurons. The training inputs are sequentially exposed to the system, and the user must report misclassifications and specify the correct class. The first time an error is reported, a new RBF unit is automatically allocated. Otherwise, we study the novelty of the current input by estimating its significance using Eq. (4); then we estimate the significance of the nearest unit with respect to the input using Eq. (5). To perform writer adaptation with a small number of RBF units, we use the following growing criteria (cr1, cr2 and cr3):
A. Definition and Estimation of Neurons’s Significance There exist many versions of incremental algorithms for RBF networks learning. In this context, neurons’s significance was proposed by [2] and used and improved by several later works [3], [4]. In our system, we applied this information to improve adaptation task. As mentioned earlier in the algoritm 1, we have to define the significance of the new input and the significance of the nearest unit.
Fig. 1: Architecture of the OHRS Writer Adaptation
∥I − C_nearest∥ > d_min    (cr1)
E_sig(I) > e1_min          (cr2)
E_sig(nearest) < e2_min    (cr3)
ii) Parameters Adjustment of the RBF-NN
For this adaptation strategy, we need to determine two essential units: the nearest unit and the desired contributor (Dc). First, we seek the nearest unit, i.e., the unit whose center lies at minimal distance from the current input (I). Second, we determine the desired contributor unit (Dc), which is the one that contributes most to the erroneous dependent output. To find the (Dc) unit we use Eq. 10, where o is the position of the desired maximum output:

Dc = Max_j (z_j × W_jo)    (10)
where d_min is a threshold corresponding to the minimal distance, C_nearest is the center of the unit nearest to the input I, and e1_min and e2_min are the desired approximation accuracies. The growing criteria are satisfied when the current input is far from the existing units and either it is novel, i.e., E_sig(I) is greater than the required approximation accuracy e1_min, or the nearest unit is insignificant for the input, i.e., E_sig(nearest) is less than the approximation accuracy e2_min. In that case, a new hidden unit is allocated following the steps described in Algorithm 2.
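The growing decision cr1 AND (cr2 OR cr3) can be written as a small predicate; this is a sketch, with names of our own choosing:

```python
import numpy as np

def should_grow(I, C_nearest, E_sig_I, E_sig_nearest, d_min, e1_min, e2_min):
    """Growing decision used before allocating a new hidden unit (sketch)."""
    cr1 = np.linalg.norm(I - C_nearest) > d_min  # input far from existing units
    cr2 = E_sig_I > e1_min                       # input is novel / significant
    cr3 = E_sig_nearest < e2_min                 # nearest unit barely responds
    return bool(cr1 and (cr2 or cr3))
```

With the paper's thresholds (d_min = 0.2, e1_min = 0.01, e2_min = 0.5), an input lying on an existing center can never trigger growth, since cr1 fails regardless of the significance values.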
Thus, we update the parameters (center and weights) of either the nearest neuron alone or of two neurons: the nearest and (Dc). The two cases are distinguished by the distance d(nearest,Dc) between the nearest and the desired contributor (Dc) units. By default, only the nearest unit is adjusted, but if d(nearest,Dc) is lower than the minimal-distance threshold d_min, then the (Dc) unit is also updated. The adjustment case is described in Algorithm 3.
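The selection of the units to adjust (nearest unit, plus Dc from Eq. 10 when it lies close to the nearest unit) can be sketched as follows; the function name and array layout are our assumptions:

```python
import numpy as np

def units_to_adjust(z, W, C, I, o, d_min):
    """Pick the hidden units to update in the adjustment case (sketch).

    z : (N,) hidden outputs, W : (N, c) weights, C : (N, d) centers,
    o : index of the desired maximum output (the correct class).
    """
    nearest = int(np.argmin(np.linalg.norm(C - I, axis=1)))
    Dc = int(np.argmax(z * W[:, o]))     # Eq. 10: desired contributor
    units = [nearest]
    # Dc is also updated when it lies within d_min of the nearest unit
    if Dc != nearest and np.linalg.norm(C[nearest] - C[Dc]) < d_min:
        units.append(Dc)
    return units
```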
Algorithm 2 IGA-AM Growing Case
if cr1 AND (cr2 OR cr3) then
Allocate a new hidden unit (N+1) with:
1) The input becomes the center of the new unit:
C_{N+1} = I    (6)
2) The weights of the connections between the new unit and the output layer correspond to the desired output:
W_{N+1} = D_{N+1}    (7)
3) To avoid overlap between the regions of different RBF units, the width of the new unit is set to the distance between the input and its nearest unit:
σ_{N+1} = d(I,nearest)    (8)
Algorithm 3 IGA-AM Adjustment Case
if the growing criteria are not satisfied then
Adjust the parameters of the nearest unit using Eqs. (11, 12)
if d(nearest,Dc) < d_min then
Adjust the parameters of the desired contributor unit using Eqs. (11, 12)
end if
end if

Research on sequential learning of RBF-NNs generally uses either standard LMS gradient descent or the Extended Kalman Filter (EKF) algorithm. Given our adaptation-time and memory-size constraints, we opt for standard LMS gradient descent to decrease the error whenever no new unit is allocated. This is done using the following equations:

ΔC_jk = (2α / σ_j²) (I_k − C_jk) z_j [(D − AO) · W_j]    (11)
4) Resize the width of the nearest unit:
σ_nearest = min(σ_nearest, d(I,nearest))    (9)
Parameters Adjustment of RBF-NN
else
Adjust the parameters of existing neurons
end if
ΔW_j = α (D − AO) z_j    (12)
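The LMS update of Eqs. (11)-(12) for one hidden unit can be sketched as follows, assuming NumPy arrays with our own layout conventions:

```python
import numpy as np

def adjust_unit(j, I, D, AO, C, sigma, W, alpha):
    """One LMS gradient-descent step on hidden unit j (Eqs. 11-12, sketch).

    C : (N, d) centers, sigma : (N,) widths, W : (N, c) weights,
    AO : current network output, alpha : learning rate. Mutates C and W.
    """
    z_j = np.exp(-np.sum((C[j] - I) ** 2) / sigma[j] ** 2)  # Eq. 2 for unit j
    err = D - AO
    # Eq. 11: delta C_jk = (2*alpha/sigma_j^2) (I_k - C_jk) z_j [(D - AO) . W_j]
    C[j] += (2 * alpha / sigma[j] ** 2) * (I - C[j]) * z_j * (err @ W[j])
    # Eq. 12: delta W_j = alpha (D - AO) z_j
    W[j] += alpha * err * z_j
    return C, W
```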
Step 4 of Algorithm 2 resizes the width of the nearest unit to minimize the overlap between the newly added unit and the nearest unit. This step is the improvement over the GA-AM algorithm [8]: it optimizes the adaptation module's structure by acting on the number of allocated hidden units, and consequently also reduces the error rate.
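The growing case (Eqs. 6-8 plus the width resize of Eq. 9) can be sketched on NumPy arrays; this is an illustrative implementation under our own array conventions, not the authors' code:

```python
import numpy as np

def allocate_unit(I, D, C, sigma, W):
    """Growing case of IGA-AM (Algorithm 2), as a sketch.

    Returns enlarged (C, sigma, W); the nearest unit's width is also
    shrunk so the new and nearest units overlap less (Eq. 9).
    """
    dists = np.linalg.norm(C - I, axis=1)
    nearest = int(np.argmin(dists))
    d_near = dists[nearest]
    C_new = np.vstack([C, I])                 # Eq. 6: center = input
    W_new = np.vstack([W, D])                 # Eq. 7: weights = desired output
    sigma_new = np.append(sigma, d_near)      # Eq. 8: width = dist. to nearest
    sigma_new[nearest] = min(sigma_new[nearest], d_near)  # Eq. 9: resize
    return C_new, sigma_new, W_new
```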
III. MULTI-ENVIRONMENT WRITER ADAPTATION
Over the last few years, handheld devices have become increasingly popular because they can be used easily in many positions and locations and can handle most of the tasks that people need to do.
The natural use of these devices consists of handwriting with a stylus or the index finger, especially for writing handwritten notes, sending and receiving emails, recording signatures, etc.
We note two kinds of factors that affect the writing style of the user: local factors, such as where the tablet is used (at home, at work, in a car, train, subway or plane), and physical factors, such as whether the user is standing, sitting on a couch, sitting at a desk, walking, or going up/down stairs.
To improve the recognition accuracy, we must take into account these factors, which will later be called "environments". In this paper we consider two stationary environments: sitting at a desk and standing. Fig. 2 shows the OHRS multi-environment writer adaptation (OHRS-MEWA) architecture.
Fig. 3: The cumulative number of errors with and without adaptation using LaViola dataset
Fig. 2 highlights two conditions. First, writer identification is a simple password check that activates the adapted system or not. Second, if the user is identified, he must state his environment to activate the appropriate adaptation module.
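The two conditions above amount to a small dispatch rule: adapt only for the owner, and route the writer-independent output through the adaptation module of the declared environment. A sketch, with all names (`recognize`, the dictionary keys, the callables) being our own assumptions:

```python
def recognize(sample, wi_system, adaptation_modules, user, owner, environment):
    """Front-end dispatch of the OHRS-MEWA (sketch).

    adaptation_modules maps an environment name (e.g. 'sitting', 'standing')
    to that environment's adaptation module; each module takes the
    writer-independent output and returns an adaptation vector.
    """
    wi_output = wi_system(sample)
    if user != owner:
        return wi_output               # guest: no adaptation is applied
    am = adaptation_modules[environment]
    return wi_output + am(wi_output)   # Eq. 1 with the per-environment (AM)
```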
learning rate α = 0.02, approximation accuracies e1_min = 0.01 and e2_min = 0.5, and κ = 0.8. We also use the Euclidean distance to calculate the distance between unit centers and inputs. For the memory size (M), we referred to [8], where a study was made to determine the memory size that minimizes the number of errors and optimizes the number of hidden units. As a result, the memory size is fixed to M = 40.
IV. EXPERIMENTS AND RESULTS
To test the performance of writer adaptation with the IGA-AM algorithm, we connected it to the output of a writer-independent recognition system for alphanumeric characters. The latter was developed using a generic toolkit (LipiTk), whose aim is to facilitate the development of on-line handwriting recognition engines [9], available at http://lipitk.sourceforge.net. The IRONOFF handwriting database was used to train the recognizer.
3) Comparative results: In this section we compare our writer adaptation using the IGA-AM algorithm with the GA-AM algorithm presented in [8]. The comparison involves not only the overall performance of the system, but also the system complexity (number of hidden units allocated in the adaptation module). IGA-AM brings improvements to the GA-AM algorithm to reach better writer adaptation. IGA-AM operates in the same way as GA-AM in the adjustment case; in the growing case, GA-AM has been upgraded by adding a step which resizes the width of the nearest unit. This step is meaningful because it optimizes the centers' positions and has a good effect on both error rate reduction and the size of the adaptation module.
A. Writer adaptation using a benchmarking dataset
The OHRS-WA architecture has already been applied with several writer adaptation algorithms (Platt, AM and GA-AM) on different datasets; their descriptions and results are presented in [1], [7], [8] respectively. In this section we present comparative results for the IGA-AM algorithm, which is an improvement of the GA-AM algorithm [8].
To show the effectiveness of the writer adaptation, we consider the cumulative errors made during interactive use of the handheld device for the different algorithms. Fig. 3 shows the results for three cases: without adaptation, and with an adaptation module (GA-AM and IGA-AM algorithms).
1) Benchmarking dataset description: To test the efficiency of our writer adaptation system, we used a benchmarking dataset named LaViola. The LaViola dataset [17] contains a total of 11,602 samples of handwritten digits (0-9), characters (a-z) and mathematical symbols written by 11 persons, collected with an HP Compaq tc1100 Tablet PC. This dataset includes two sets for training and one for testing; results on this dataset were reported in [17], [18]. In our experiments we used both training sets for writer adaptation performance evaluation. Each training dataset contains few training samples (10 per class and per writer); in this case we have 720 examples per writer. The average recognition rate without adaptation using the alphanumeric recognition system is 80%.
Fig. 3 shows the baseline cumulative error without adaptation. Also, to have an estimated instantaneous error rate, we plot the cumulative errors from the time when the adaptation is started. We note that the slopes for writers 'w2' and 'w3' decrease when applying the IGA-AM algorithm compared to GA-AM. Furthermore, quantitative results are shown in Table I, where we report the cumulative errors obtained with and without adaptation for all writers. These results also show that IGA-AM's cumulative errors during adaptation are lower than those obtained with the GA-AM algorithm. The recognition rate
2) Parameter values used by the IGA-AM: After several experiments, and taking into account the experiments done in [3], we chose the following parameter values to test writer adaptation performance: the threshold d_min = 0.2, the
Fig. 2: The architecture of the OHRS multi-environment writer adaptation
TABLE I: Performance comparison of writer adaptation using the LaViola dataset

Writer | Error without adaptation | Error with GA-AM | Error with IGA-AM | Hidden units GA-AM | Hidden units IGA-AM
'w1'   | 193 | 98  | 83 | 64 | 60
'w2'   | 181 | 114 | 98 | 72 | 63
'w3'   | 138 | 72  | 65 | 48 | 40
'w4'   | 157 | 98  | 93 | 60 | 55
'w5'   | 123 | 69  | 64 | 50 | 45
'w6'   | 140 | 90  | 80 | 60 | 54
'w7'   | 133 | 69  | 64 | 43 | 39
'w8'   | 120 | 75  | 64 | 58 | 49
'w9'   | 143 | 81  | 74 | 57 | 51
'w10'  | 160 | 97  | 94 | 70 | 70
'w11'  | 95  | 61  | 61 | 44 | 41
[Fig. 4 panels: the digit '2' and the character 'l', each written in the sitting and standing environments]
Fig. 4: Example characters taken from the writer ’lo’ in the case of sitting and standing environments
reached by the OHRS-WA system using the IGA-AM algorithm is 89.4%. On the other hand, we note that IGA-AM considerably decreases (by 9.42%) the number of allocated hidden units compared to GA-AM. This is a relevant measure of efficiency because, in the context of handheld devices, memory size and execution time are very important.
real dataset, each writer was asked to write during different periods of a day, at most four characters ten times per period. Fig. 4 displays the effect of the writer's environment on his handwriting style.
2) Experimental results: In this section, to demonstrate the effectiveness of our writer adaptation system (Fig. 2), we report results in Fig. 5 using the REGIM-MEnv dataset. This experiment examines the performance of the IGA-AM algorithm against the GA-AM algorithm in the multi-environment context.
B. Multi-environment writer adaptation
1) Dataset description: To examine the effectiveness of our OHRS writer adaptation in the multi-environment context (OHRS-MEWA), we collected handwriting samples from four writers using a Samsung N5100 GALAXY Note 8.0. Each writer wrote, without any guidance or constraint, the characters a-z and the digits 0-9 ten times each. The multi-environment writer-dependent dataset (REGIM-MEnv) contains a total of 1440 samples.
From Fig. 5, it can be seen that the cumulative errors with adaptation are reduced dramatically, indicating a very high error rate reduction. Using GA-AM, the average error rate reduction is about 48.41% and 49.72% for the sitting and standing environments respectively; using IGA-AM, it is about 50.84% and 57.67% respectively. Overall, the results clearly show that the IGA-AM learning algorithm is very useful and effective for writer adaptation.
For character collection we used the Android application ISIgraphy [19], which was developed for the generation of on-line handwriting sample databases on touchscreen-based devices. We then converted the data to UNIPEN format to be used by the writer-independent recognition system and conducted our experimental evaluation on 36 different classes. Two stationary environments are considered: sitting at a desk and standing. When standing, the user holds the device with the non-dominant hand and writes characters with the other hand. This situation is exhausting; for this reason, and to have a
On the other hand, we consider the elapsed time for adaptation. Taking writer 'w4' in the standing environment as an example, the elapsed time to correct 44 classification errors is about 2.10 seconds. Thus, the system takes on average 0.04 seconds to
Fig. 5: Comparative results of the OHRS-MEWA on sitting and standing environments using REGIM-MEnv dataset
correct one error. Consequently, our writer adaptation system can work seamlessly and rapidly in the multi-environment context. The average global recognition rate of the OHRS-MEWA (over all writers) increases from 87.47% to 94.18%.
V. CONCLUSION

We tackled the use of handheld devices while the user is stationary (sitting or standing) or in mobile settings (walking, going up/down stairs); users increasingly rely on these devices as either their primary or exclusive means of communication. This new trend incited us to develop a multi-environment writer adaptation. We presented a writer adaptation technique which incorporates an adaptation module (AM) at the output of a writer-independent recognition system. The (AM) is based on an RBF-NN applying the IGA-AM incremental learning algorithm, and converts the writer-independent output into a writer-dependent output. The efficiency of the writer adaptation is attested using the LaViola dataset. Our empirical results on the REGIM-MEnv dataset indicate that our system is useful and improves the overall accuracy of the writer-independent recognition system. We judge this work a good starting point toward writer adaptation handling users' mobile environments.

ACKNOWLEDGMENT

The authors would like to acknowledge the financial support of this work by grants from the General Direction of Scientific Research (DGRST), Tunisia, under the ARUB program.

REFERENCES

[1] J. Platt and N. P. Matic, A Constructive RBF Network for Writer Adaptation, Advances in Neural Information Processing Systems, vol. 9, no. 1, pp. 765-771, 1997.
[2] G. B. Huang, P. Saratchandran and N. Sundararajan, A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation, Neural Networks, vol. 16, no. 1, pp. 57-67, 2005.
[3] R. Zhang, G. B. Huang, N. Sundararajan and P. Saratchandran, Improved GAP-RBF network for classification problems, Neurocomputing, vol. 70, no. 16-18, pp. 3011-3018, 2007.
[4] M. Bortman and M. Aladjem, A growing and pruning method for radial basis function networks, Neural Networks, vol. 20, no. 6, pp. 1039-1045, 2009.
[5] X. Y. Zhang and C. L. Liu, Writer Adaptation with Style Transfer Mapping, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 99, no. 1, pp. 1-15, 2012.
[6] L. Jin, K. Ding and Z. Huang, Incremental learning of LDA model for Chinese writer adaptation, Neurocomputing, vol. 73, no. 10-12, pp. 1614-1623, 2010.
[7] L. Haddad, T. M. Hamdani, M. Kherallah and A. M. Alimi, Improvement of On-line Recognition Systems Using a RBF-Neural Network Based Writer Adaptation Module, Proc. International Conference on Document Analysis and Recognition, pp. 284-288, 2011.
[8] L. Haddad, T. M. Hamdani and A. M. Alimi, Improved Neural Based Writer Adaptation For On-line Recognition Systems, IEEE International Conference on Systems, Man, and Cybernetics, pp. 1175-1180, 2013.
[9] S. Madhvanath and D. V. T. M. Kadiresan, LipiTk: A Generic Toolkit for Online Handwriting Recognition, Proc. ACM SIGGRAPH, 2007.
[10] L. Prevost and L. Oudot, Self-Supervised Adaptation for On-line Script Text Recognition, Electronic Letters on Computer Vision and Image Analysis, vol. 5, no. 1, pp. 87-97, 2005.
[11] A. Nakamura, A Method to Accelerate Writer Adaptation for On-Line Handwriting Recognition of a Large Character Set, Proc. 9th IWFHR, pp. 426-431, 2004.
[12] H. Mouchere, E. Anquetil and N. Ragot, Writer Style Adaptation in On-line Handwriting Recognizers by a Fuzzy Mechanism Approach: The ADAPT Method, International Journal of Pattern Recognition and Artificial Intelligence, vol. 21, no. 1, pp. 99-116, 2007.
[13] M. Liwicki, A. Schlapbach and H. Bunke, Writer-Dependent Recognition of Handwritten Whiteboard Notes in Smart Meeting Room Environments, Proc. 8th DAS, pp. 151-157, 2008.
[14] H. Miyao and M. Maruyama, Writer Adaptation for Online Handwriting Recognition System Using Virtual Examples, Proc. 10th ICDAR, pp. 1156-1160, 2009.
[15] N. C. Tewari and A. M. Namboodiri, Learning and Adaptation for Improving Handwritten Character Recognizers, Proc. 10th ICDAR, pp. 86-90, 2009.
[16] H. Cao, S. Prasad, S. Saleem and P. Natarajan, Unsupervised HMM Adaptation Using Page Style Clustering, Proc. International Conference on Document Analysis and Recognition, pp. 1091-1095, 2009.
[17] J. LaViola and R. Zeleznik, A practical approach for writer-dependent symbol recognition using a writer-independent symbol recognizer, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, pp. 1917-1926, 2007.
[18] A. Delaye and E. Anquetil, HBF49 feature set: A first unified baseline for online symbol recognition, Pattern Recognition, vol. 46, pp. 117-130, 2013.
[19] A. Das and U. Bhattacharya, ISIgraphy: A Tool for Online Handwriting Sample Database Generation, Proc. National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, Jodhpur, India, December 2013.