Reject Rules and Combination Methods to Improve Arabic Handwritten Word Recognizers

Haikal El Abed, Volker Märgner
Braunschweig Technical University
Institute for Communications Technology (IfN)
Department of Signal Processing for Mobile Information Systems
e-mail: {elabed,v.maergner}@tu-bs.de
Abstract

In this paper we present methods to combine the outputs of a set of Arabic handwritten word recognition systems in order to reach a decision with higher performance, i.e. lower rejection rates and higher recognition rates. The methods used range from voting schemes based on the results of the different recognizers to a neural network decision based on normalized confidences. In addition, different reject rules based on the evaluation and analysis of the individual and combined system outputs are discussed. Several threshold functions for different reject levels are tested and evaluated. Tests with a set of recognizers which participated in the ICDAR 2007 competition, based on a set coming from the IFN/ENIT-database, show that a recognition rate of 94.71% without reject can be achieved. With a reject rate of less than 25%, a recognition rate of more than 99% can be reached.
Keywords: classifier combination, rejection, Arabic handwriting recognition competition, IFN/ENIT-database.
1. Introduction
Recent years have shown an increasing interest in Arabic text recognition solutions. While early systems were evaluated on small private data sets, with the availability of the IFN/ENIT-database [24] it is today possible not only to develop but also to compare Arabic handwritten word recognizers. The fact that today more than 54 groups in more than 27 countries are using the IFN/ENIT-database for their research shows the wide distribution and acceptance of these data. Many different methods and algorithms to recognize handwritten words have been developed and tested for many different languages in the past. The special style of Arabic printed text, where the characters are connected (with some exceptions) within a word, is the normal case for cursively handwritten words in many other languages, too. That is why methods developed for words handwritten in languages other than Arabic, for instance methods based on so-called Hidden Markov Models (HMMs) or on Neural Networks (NNs), have been applied successfully to Arabic handwritten word recognition.

The state of the art in Arabic handwritten word recognition has been presented recently [19, 20], and a direct comparison of systems was performed twice on the basis of competitions during the ICDAR'05 and ICDAR'07 conferences. Especially the competition in 2007 showed very good results, and clear progress compared to the competition in 2005 can be observed. On the basis of the 2007 competition, this paper presents results on two additional aspects of recognition systems. First, the implementation of a reject class, a very important feature of practically deployed systems, is studied. Second, we discuss different possibilities to achieve higher recognition rates or better overall system performance by combining different recognizers.

Combining different systems or classifiers remains a challenging task despite the promising improvements in recent combination methods and systems. Research on multiple classifier architectures tries to answer the following questions: how to select a set of systems to achieve a high recognition rate, and how to combine different systems on the output level to achieve higher performance? The application field investigated in recently published papers [23, 6] is character/digit recognition. Compared to Latin text, where a lot of research work has been done, the amount of work on the combination of Arabic text recognition systems is quite limited. One of the first works in this field was presented by Snoussi Maddouri et al. [25]: a combination scheme on the feature level, tested on a subset of pieces of Arabic words and words selected from a local bank check database.
In [10] Essoukri Ben Amara and Bouslama give an overview of multiple sources of information for recognition techniques and analyze different problems related to Arabic script recognition with hybrid architectures. In [4], an approach to combine Hidden Markov Models for Arabic handwritten word recognition is presented, and recently Farah et al. [11] introduced a system based on the combination of different Neural Networks for the recognition of Arabic literal amounts, with a recognition rate of about 94% on a small test database containing 4800 words.

This paper is organized as follows. In section 2 the most important features of the IFN/ENIT-database essential for this work and some results of the two Arabic handwriting recognition competitions are briefly presented, followed by a comparison of the systems on the basis of a simple reject method in section 3. Section 4 describes the combination methods used in our tests. The tests and the results achieved with the different combination methods are presented and discussed in section 5. Finally, the paper ends with some concluding remarks.
2. Database and Competitions

2.1. IFN/ENIT-Database

The IFN/ENIT-database is a database of Arabic handwritten names (Tunisian town names) which is freely available (www.ifnenit.com) for non-commercial research. Version 2.0 with patch level 1e (v2.0p1e) consists of 32492 Arabic words handwritten by more than 1000 different writers (a new set e was added to the data of version 1.0). The words written are 937 Tunisian town/village names [22, 9]. Each writer filled some forms with pre-selected town/village names and the corresponding postal code. Ground truth was added to the image data automatically and verified manually.

2.2. Competitions

The first competition on Arabic handwriting recognition was based on the IFN/ENIT-database, and the results were presented at the International Conference on Document Analysis and Recognition (ICDAR) 2005 [22]. To reach an optimal result, the competition was organized in a closed mode, i.e. the participants developed their systems using the IFN/ENIT-database for training and sent them to the IfN, where the tests were carried out using new test data unknown to the participants. Five groups submitted systems to this competition. The second competition on Arabic handwriting recognition was organized in the same manner as the first one, with the only difference that the test set of the first competition (set e) was now available for training, too, and the tests were made on another new set f. The results were again presented at the International Conference on Document Analysis and Recognition (ICDAR) 2007 [21]. This competition compared 14 systems submitted by 8 groups (some groups delivered more than one system). A comparison with the 2005 tests shows an improvement of about 10% for the best system. But of course there is still a remaining error of about 20%. This is very good for a cursive word recognition system but not good enough for an industrial application. If we want to integrate a system into an industrial application we have to use a reject class to reduce the amount of remaining errors.

3. Simple Reject

The simplest rejection strategy is to use a threshold on the confidence value of the recognizer result. This means that a result is rejected if the confidence value of this result is below a predefined threshold. This rejection strategy allows a simple comparison of different recognition systems. The output of a system S_i,j for a sample word image x_k consists of an ordered sequence of m pairs of values composed of the system output word y_i,j(x_k) together with its confidence value w_i,j(x_k). The function S_i,j(x_k) is defined as follows:

  S_i,j(x_k) = {(y_i,j(x_k), w_i,j(x_k))}

where j ∈ {1, 2, ..., m} is the index of the j-best output, and k ∈ {1, 2, ..., N} is the index of the images in the test set. If the test set consists of N word images and the number of correctly recognized words out of a subset of k words is R(k), then the recognition rate on this subset is given as recog(k) = R(k)/k. We now sort the output of a recognizer for the whole test set in descending order of the confidences and calculate the recognition rate on each subset in the aforementioned way. This recognition rate is calculated using the threshold thres = w_i,1(x_k) for rejecting all resulting words with a confidence less than w_i,1(x_k). Figure 1 shows a plot of recog(k) for test set f of selected systems participating in the competition at ICDAR'07. The three vertical lines are set at the positions of rejection rates of 10%, 20%, and 50%. It can easily be seen that the order of the systems differs depending on the number of rejected words. Table 1 shows the recognition rates of the different systems depending on the reject. In general we can say that five systems reach a recognition rate of more than 97% with a reject of 50%. These results show that even such a simple rejection rule can be used to reduce the remaining errors of a system to less than 2% if a reject of 50% can be accepted.
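The sorting-based evaluation described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; `results` is a hypothetical list of (confidence, correct) pairs for one recognizer's top-1 outputs on a test set:

```python
def recog_vs_reject(results):
    """Compute the recognition/reject trade-off of a simple threshold reject.

    `results` is a list of (confidence, correct) pairs, one per test word
    image; `correct` is True if the top-1 output matches the ground truth.
    Returns a list of (reject_rate, recognition_rate) pairs, one per
    threshold position, from maximal reject down to no reject.
    """
    n = len(results)
    # Sort by descending confidence: accepting the k most confident results
    # corresponds to placing the threshold at the k-th confidence value.
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    curve = []
    correct = 0
    for k, (_, ok) in enumerate(ranked, start=1):
        correct += ok                      # booleans count as 0/1
        curve.append((1.0 - k / n, correct / k))
    return curve

# Toy example: rejecting the low-confidence error raises the rate.
curve = recog_vs_reject([(0.9, True), (0.8, True), (0.6, False), (0.4, True)])
```

Sweeping the threshold down the sorted list yields exactly the recog(k) curves plotted in Figure 1.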
4. Combination Methods
Combination of classifiers can be done on different system levels. Xu et al. [28] describe different methods of combining a number of classifiers and carry out tests with the classification of handwritten numerals. Different combination architectures are described in [16, 13]. The combination tests we made are based on the system output. As we are going to combine very different systems, we have to apply some kind of normalization to the confidence measurements of the systems.

Table 1. Simple reject: Recognition rates in % using set f with different reject thresholds (0%, 10%, 20% and 50%).

System    0%     10%    20%    50%
01        61.70  63.52  65.43  75.18
05        59.01  62.70  65.55  73.69
06        83.34  89.94  94.26  98.85
07        82.77  88.30  92.73  97.67
08        87.22  91.75  94.26  98.41
09        79.10  81.98  83.65  87.38
10        81.65  86.29  89.20  95.53
11        81.93  86.47  89.43  94.56
12        81.81  84.07  85.40  88.77
13        81.47  87.62  92.36  98.36
14        80.18  86.47  91.28  98.09
[Figure 1. Performance of individual systems S_1,1 to S_11,1 on the 8671 word images of test set f (recognition rates in %). Three reject thresholds are shown at 10%, 20% and 50%.]

4.1. Data Preprocessing

The importance of confidence analysis has been recognized as an important step in classifier combination. Different methods for the evaluation and description of classifier confidence have been published [18, 5, 8, 15]. Due to the fact that we made our tests with classifiers which were developed independently by different research groups, the confidences of the recognition results vary in their values (max, min, steps) and in their weights. For this reason the confidence values of the different systems have to be normalized to make them comparable. The new confidence value w^norm_i,j(x_k) for a recognized word image x_k is calculated based on the normalized difference of the highest and lowest confidence in a test set with N word images. With a system S_i,j, a sample word image x_k, and its original confidence w^orig_i,j(x_k), the new confidence w^norm_i,j(x_k) is defined by the following equation:

  w^norm_i,j(x_k) = ( w^orig_i,j(x_k) − min_{k∈{1,...,N}} w^orig_i,j(x_k) ) / ( max_{k∈{1,...,N}} w^orig_i,j(x_k) − min_{k∈{1,...,N}} w^orig_i,j(x_k) )

4.2. Majority Voting

As the first combination strategy a majority voting [28] was implemented. Majority voting is based on the assumption that the more recognizers decide for the same result, the more reliable it is. Two different schemes of this approach were tested within this research. The advantage of the majority voting method is that different classifiers have the same weight, without considering their differences in performance.
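A minimal sketch of this min–max normalization, assuming each system's original confidences for the whole test set are collected in a plain list:

```python
def normalize_confidences(conf):
    """Min-max normalize one system's confidence values over a test set,
    mapping them to [0, 1] so that different recognizers become comparable.

    `conf` is a list of original confidences w_ij(x_k), one per test image.
    """
    lo, hi = min(conf), max(conf)
    if hi == lo:
        # Degenerate case (constant confidences): map everything to 0.
        return [0.0] * len(conf)
    return [(w - lo) / (hi - lo) for w in conf]
```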
4.2.1. Simple Majority Voting (Mv)
The simple majority voting strategy [7, 16] does not use any confidence level but only counts how many systems deliver the same output for an input image. The following equation shows how to calculate the threshold T, which is the number of systems with the same output required when a total number of n recognizers is used for combination:

  T = n/2 + 1    if n is even,
      (n+1)/2    if n is odd.

If more than T systems decide for the same output class, this output is given; otherwise a reject is made.
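A sketch of the simple majority vote with the threshold T above. Two details are interpretations of the text, not spelled out there: "more than T" is implemented as "at least T", and a reject is signalled by returning `None`:

```python
from collections import Counter

def majority_vote(outputs):
    """Simple majority voting (Mv) over the top-1 outputs of n recognizers.

    `outputs` is a list of the n top-1 class labels. Returns the winning
    class, or None (reject) if no class reaches the threshold
    T = n/2 + 1 (n even) or (n+1)/2 (n odd).
    """
    n = len(outputs)
    t = n // 2 + 1 if n % 2 == 0 else (n + 1) // 2
    label, count = Counter(outputs).most_common(1)[0]
    return label if count >= t else None
```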
4.2.2. Weighted Majority Voting (WMv)
The simple majority voting strategy for combining recognizers uses the recognition results but not the confidence values. It may be a disadvantage of this strategy that even if one recognizer makes its decision with the highest confidence, it is outvoted by two recognizers with the same output, independent of their confidences. The following equation defines the weighted majority voting [7, 16] decision combining n systems:

  WMv(x_k) = max_{S_i,1 ∈ {S_1,1, ..., S_n,1}} ( Σ_i w_i,1(x_k) )

i.e. the confidences of all systems voting for the same output class are summed, and the class with the highest sum is selected.
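The summation above can be sketched as follows; each system contributes its (normalized) top-1 confidence as the weight of its vote:

```python
from collections import defaultdict

def weighted_majority_vote(outputs):
    """Weighted majority voting (WMv): each system votes for its top-1
    class with its (normalized) confidence; the class with the largest
    summed confidence wins.

    `outputs` is a list of (label, confidence) pairs, one per system.
    """
    score = defaultdict(float)
    for label, w in outputs:
        score[label] += w          # sum confidences per candidate class
    return max(score.items(), key=lambda kv: kv[1])[0]
```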
4.3. Rank-based Methods

The second combination strategy tested is based on the rank of the result class in the j-best result list of each classifier.
4.3.1. Borda (Bc)

The Borda combination method, introduced by Jean-Charles de Borda in 1770, has been adapted to pattern classification problems [14, 29, 27]. Each system S_i,j is considered a voter and the result classes are the candidates. The basic idea of the Borda combination method is to use the ranking information (the r-best results from the entire result list), not just the first-best result of each system, to come to a decision. It also returns a complete ranked list of the possible results. The Borda method is defined by the following equation:

  Bc(x_k) = max_{S_i,j, w_i,j} ( Σ_{i=1}^n ( r − rank(S_i,j(x_k), w_i,j(x_k)) + 1 ) )

For each result in the r-best lists the value "r − rank of the result + 1" is assigned as its score.
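The Borda count above can be sketched as follows; `ranked_lists` is a hypothetical list of the r-best class labels of each recognizer, best first:

```python
def borda_count(ranked_lists, r):
    """Borda count (Bc) over the r-best lists of n recognizers.

    A class ranked at position p (1-based) in a list receives r - p + 1
    points; points are summed over all systems and the class with the
    most points wins.
    """
    points = {}
    for lst in ranked_lists:
        for p, label in enumerate(lst[:r], start=1):
            points[label] = points.get(label, 0) + (r - p + 1)
    return max(points.items(), key=lambda kv: kv[1])[0]
```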
4.3.2. Rank Count (Rc)
Günter introduced in [12] a general form of the rank-based method. The basic idea is to attribute a cost function c_i to each classification system. In addition to the cost function, a system confidence value a_i is assigned to each system. This system confidence can be used as a general rank function for the different systems. The rank count method is given by the following equation:

  Rc(x_k) = max_{S_i,j, w_i,j} ( Σ_{i=1}^n ( a_i + c_i(rank(S_i,j(x_k), w_i,j(x_k))) ) )

In this paper we use two forms of this method for the combination of different systems. The first one (Rc) is based on the definition of the rank count method: we assign to each class the corresponding weight as the output value of the cost function c_i. In the second method (MRc), the cost function c_i is defined as the product of rank and weight of each word image x_k. In both methods we use as a first choice a system confidence a_i = 0.

4.4. Neural Network

As the third group of combination methods, we use a simple neural network architecture (based on the simpleNet library [1]). The training step is based on set d (all systems use the sets a, b, c and d for training). Different tests with the number of training iterations and the number of hidden neurons were carried out to define optimal settings for the combination tests. Al-Hajj et al. [2] have used Neural Networks to combine 3 different Hidden Markov Model based systems which use different features as input. We generalized the principle described in [3] to n systems. The generation of the input values for the Neural Network is described in Algorithm 1. For n classifiers we consider the first class from the result list of each classifier and compare it with the first 10 result classes (defined by parameter m in Algorithm 1) of the other n−1 classifiers. If the class is found in these lists, we insert the corresponding weight into the NN input vector; otherwise we insert the weight corresponding to the last class in the 10-best list. The result of these steps is an input vector with n^2 elements.

Algorithm 1: Generating the input list for the Neural Network
  Input: recognition result lists
  Output: Neural Network input list
  n: total number of recognizers; m: number of propositions;
  NN_InputList[n × n];
  for i ≤ n do
      NN_InputList[i, 1] ← w_i,1;
      for j ≤ n and S_i ≠ S_j do
          if ∃ k with S_i,1 = S_j,k then
              NN_InputList[i, j] ← w_j,k;
          else
              NN_InputList[i, j] ← w_j,m;
          end
      end
  end
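Algorithm 1 can be sketched in Python as follows. This is an illustrative reading of the pseudocode, not the authors' implementation; in particular, storing each system's own top-1 confidence at its own column (rather than always in column 1) is an assumption made here for a flat n*n vector:

```python
def nn_input_list(results, m=10):
    """Generate the NN input vector for n combined recognizers.

    `results` is a list of n j-best lists, each a list of
    (label, confidence) pairs sorted by descending confidence;
    `m` is the number of propositions considered per system.
    For every system i, its top-1 class is looked up in the m-best list
    of every other system j; the confidence found there is used, or the
    confidence of the m-th (last considered) entry as a fallback.
    Returns a flat vector with n*n elements.
    """
    n = len(results)
    vec = []
    for i in range(n):
        top_label = results[i][0][0]
        for j in range(n):
            if j == i:
                vec.append(results[i][0][1])   # system's own top-1 confidence
                continue
            best = results[j][:m]
            conf = dict(best).get(top_label)   # look up the class in j's m-best
            if conf is None:
                conf = best[-1][1]             # fallback: last considered entry
            vec.append(conf)
    return vec
```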
4.5. The Accuracy-Reject Dependency

The performance of a system is often measured by the accuracy or recognition rate reached on a test set. This recognition rate depends directly on the error rate of the system. In [26] Suen and Tan divide the errors into three categories and try, with a systematic identification of the reasons for misclassification, to reduce the error rate and improve the recognition rate. Due to the fact that different systems usually make different errors on different patterns, we analyze the behavior of each system and each combination, and we define different reject thresholds based on different output levels. We define a linear dependency function Q(Recog, Rej), which is used as a parameter to qualify different combination results. The quality of a result is decided by a compromise between a low reject rate and a high recognition rate. To allow this function to be adapted to different applications, we introduce two cost parameters C_Recog and C_Rej. These parameters regulate the relative importance of the two inputs of the dependency function. The optimal combination scheme for a given application can be defined as the maximum of the following function Q:

  Q(Recog, Rej) = C_Recog · Recog − C_Rej · Rej
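The dependency function is a one-liner; the default costs C_Recog = C_Rej = 1 reproduce the Q() values reported in Table 2 (e.g. 98.46 − 21.46 = 77.00):

```python
def quality(recog, rej, c_recog=1.0, c_rej=1.0):
    """Linear accuracy-reject dependency Q(Recog, Rej).

    Q = C_Recog * Recog - C_Rej * Rej, with both rates in percent;
    the cost parameters weight recognition against reject.
    """
    return c_recog * recog - c_rej * rej
```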
Table 2. Recognition rates with reject on test set f.

      Mv                     WMv                    NN                     MRc
ID    Recog Reject Q()       Recog Reject Q()       Recog Reject  Q()      Recog Reject Q()
2     98.46 21.46  77.00     97.97 19.31  78.66     98.94 92.38    6.56    99.43 59.45  39.98
3     92.86  8.06  84.80     96.04 17.53  78.51     97.14 68.92   28.22    99.30 49.13  50.17
4     98.73 22.65  76.08     97.44 18.88  78.56     98.93 79.47   19.46    99.52 53.22  46.30
5     97.82 18.44  79.38     98.09 19.78  78.31     99.33 44.94   54.39    99.45 54.03  45.42
6     98.03 19.05  78.98     98.46 20.61  77.85     99.62 60.54   39.08    99.37 45.12  54.25
7     98.68 21.38  77.30     98.87 20.24  78.63     99.33 56.94   42.39    99.44 44.86  54.58
8     98.10 21.12  76.98     98.95 20.71  78.24     99.07 76.52   22.55    99.34 45.82  53.52
9     98.71 23.31  75.40     99.19 22.90  76.29     99.50 72.08   27.42    99.30 32.32  66.98
10    99.22 41.17  58.05     99.22 24.54  74.68      0.00 100.00 -100.00   99.28 24.36  74.92
11    98.87 42.82  56.05     99.31 29.63  69.68      0.00 100.00 -100.00   99.21 22.86  76.35

5. Tests and Results
The results of the systems are combined using the following scheme: the best two systems (in terms of recognition rate on set f) are combined first, then weaker performing systems are added successively. Ten different combinations (combining 2, 3, ..., 11 systems) are tested and evaluated using variations of the combination methods and reject rules. For the tests with the neural network we used a network architecture n^2-24-n, based on empirical tests, which showed that the optimal network with minimal back-propagation error has 24 neurons in the hidden layer. The network is initialized with random weight values in [−0.5, 0.5] and a standard error margin of 0.1, and trained with set d for 1000 iterations (learning rate 1.2). The trained network (the best one during the training phase) is tested with set f.
5.1. Without Reject
Table 3. Recognition rates without reject on test set f.

ID    OR     WMv    NN     Bc     Rc     MRc
2     93.23  89.11  90.60  86.72  90.82  91.02
3     93.71  88.69  91.10  85.25  90.65  90.36
4     95.21  89.19  92.46  84.78  91.55  91.10
5     95.43  89.44  90.69  87.32  91.90  91.81
6     95.50  89.74  92.58  86.58  92.04  91.90
7     96.19  91.12  92.93  88.98  92.97  92.99
8     96.25  91.22  93.45  87.65  92.85  92.99
9     96.57  92.84  93.88  89.00  93.77  93.99
10    96.69  93.00  94.13  89.98  93.82  94.49
11    96.78  93.45  93.36  90.86  94.11  94.71

Table 3 shows the recognition rates reached with the 5 proposed combination methods without reject. The first column (OR) shows, as a kind of upper bound, the percentage of word images out of test set f which were recognized correctly by at least one of the combined recognizers (or-case). The second column shows the results of the weighted majority voting. In each row of Table 3 the highest recognition rate is printed in bold. The recognition results of the weighted majority voting (WMv) and the Borda count (Bc) methods are in no case better than those of the other methods. The rank count method in its first version (Rc) gives the best result in the case of the combination of 5 systems. The Neural Network (NN) method gives the best result in 4 cases: the combination of 3, 4, 6, and 8 systems. The second rank count method (MRc) gives the best results when combining 2, 7, 9, 10, and 11 systems. The MRc method also gives the overall best result with a recognition rate of 94.71% combining all 11 systems. These results show that the NN method is very good, but it needs a lot of statistically relevant data for training. It seems that this criterion is not satisfied, especially when more systems of lower quality are used for the combination. Additionally, the zigzag effect observed in [17], caused by a combination based on majority voting of odd and even numbers of classifiers, can be observed in the results of the combination of different numbers of classifiers with the Neural Network (see Table 3). The rank count method in its second version seems to be more robust, and even all 11 systems can be combined with an overall better recognition rate. Comparing these results with the best performing single system of the ICDAR'07 competition, an increase in recognition rate from 83.34% to 94.71% is achieved.
5.2. With Reject
The results achieved with classifier combination are promising, but in many cases this is still not sufficient. Working systems often need a recognition rate near 100% and can accept a number of rejected words. In a test we implemented a system which first combines the recognizers and subsequently carries out a reject. Table 2 shows the results. It is interesting to see that with such a method a recognition rate of more than 99% is achieved at the price that less than 30% of the words of the test set are rejected.
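The combine-then-reject pipeline can be sketched as follows, using weighted majority voting as the combination step. The exact reject rule applied to the combined output is not spelled out above, so thresholding the best summed score divided by the number of systems is an assumption of this sketch:

```python
from collections import defaultdict

def combine_then_reject(outputs, threshold):
    """Combine first, then reject: fuse the (normalized) top-1 outputs of
    all systems by summing confidences per class, then reject if the best
    average score stays below `threshold`.

    `outputs` is a list of (label, confidence) pairs, one per system.
    Returns the winning label, or None for a reject.
    """
    score = defaultdict(float)
    for label, w in outputs:
        score[label] += w
    label, s = max(score.items(), key=lambda kv: kv[1])
    return label if s / len(outputs) >= threshold else None
```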
6. Conclusion
This paper presents different ways to enhance the quality of Arabic handwritten word recognizers using reject and classifier combination methods. The tests were made with a well-known database, and the combination is done on the output of different recognizers. The results on set f of the IFN/ENIT-database show that a recognition rate of 94.71% without reject can be achieved. This is an improvement of about 6.5% compared with the result of the best individual system. If a higher recognition rate is needed, or rather fewer errors in the remaining data are required, a rejection should be implemented. The experiments with the same data set and recognition systems show that a recognition rate of more than 99% can be achieved when a reject of about 30% is acceptable.
References

[1] "SimpleNet", http://www.linux-related.de/, 2007.
[2] R. Al-Hajj, Reconnaissance hors ligne de mots manuscrits cursifs par l'utilisation de systèmes hybrides et de techniques d'apprentissage automatique, PhD thesis, École Nationale Supérieure des Télécommunications, 2007.
[3] R. Al-Hajj, C. Mokbel and L. Likforman-Sulem, "Combination of HMM-Based Classifiers for the Recognition of Arabic Handwritten Words", 9th Inter. Conf. on Document Analysis and Recognition (ICDAR), 2007, volume 2, pp 959–963.
[4] S. Alma'adeed, C. Higgins and D. Elliman, "Off-line recognition of handwritten Arabic words using multiple hidden Markov models", Twenty-third SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, 2004, volume 17, pp 75–79.
[5] A. Atukorale and P. Suganthan, "Combining classifiers based on confidence values", 5th Inter. Conf. on Document Analysis and Recognition (ICDAR), 1999, pp 37–40.
[6] R. Bertolami and H. Bunke, "Multiple Classifier Methods for Offline Handwritten Text Line Recognition", 7th International Workshop on MCS, 2007, pp 72–81.
[7] M. Cheriet, N. Kharma, C.-L. Liu and C. Suen, Character Recognition Systems: A Guide for Students and Practitioners, Wiley-Interscience, 2007.
[8] L. P. Cordella, P. Foggia, C. Sansone, F. Tortorella and M. Vento, "Reliability Parameters to Improve Combination Strategies in Multi-Expert Systems", Pattern Analysis & Applications, 2(3):205–214, 1999.
[9] H. El Abed and V. Märgner, "The IFN/ENIT-Database – a Tool to Develop Arabic Handwriting Recognition Systems", IEEE International Symposium on Signal Processing and its Applications (ISSPA), 2007.
[10] N. Essoukri Ben Amara and F. Bouslama, "Classification of Arabic script using multiple sources of information: State of the art and perspectives", Inter. Journal on Document Analysis and Recognition, 5(4):195–212, 2003.
[11] N. Farah, L. Souici and M. Sellami, "Classifiers combination and syntax analysis for Arabic literal amount recognition", Engineering Applications of Artificial Intelligence, 19(1):29–39, 2006.
[12] S. Günter, Multiple classifier systems in offline cursive handwriting recognition, PhD thesis, Institute of Computer Science and Applied Mathematics (IAM), 2004.
[13] T. K. Ho, "Multiple Classifier Combination: Lessons and Next Steps", Hybrid Methods in Pattern Recognition, 2002, pp 171–198.
[14] T. K. Ho, J. Hull and S. Srihari, "Decision combination in multiple classifier systems", IEEE Trans. Pattern Anal. Mach. Intell., 16(1):66–75, 1994.
[15] J. Kittler, "A Framework for Classifier Fusion: Is it still needed?", Advances in Pattern Recognition, 2000, pp 45–56.
[16] L. I. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms, Wiley, 2004.
[17] L. Lam and C. Y. Suen, "Rejection Versus Error in a Multiple Expert Environment", Proceedings of the Joint IAPR International Workshops on Advances in Pattern Recognition, 1998, volume 1451 of LNCS, pp 746–755.
[18] X. Lin, X. Ding, M. Chen, R. Zhang and Y. Wu, "Adaptive confidence transform based classifier combination for Chinese character recognition", Pattern Recognition Letters, 19(10):975–988, 1998.
[19] L. Lorigo and V. Govindaraju, "Offline Arabic handwriting recognition: a survey", IEEE Trans. Pattern Anal. Mach. Intell., 28(5):712–724, 2006.
[20] V. Märgner and H. El Abed, "Databases and Competitions: Strategies to Improve Arabic Recognition Systems", in Arabic and Chinese Handwriting Recognition, LNCS volume 4768, Springer, 2008.
[21] V. Märgner and H. El Abed, "ICDAR 2007 Arabic Handwriting Recognition Competition", 9th Inter. Conf. on Document Analysis and Recognition (ICDAR), 2007, volume 2, pp 1274–1278.
[22] V. Märgner, M. Pechwitz and H. El Abed, "ICDAR 2005 Arabic Handwriting Recognition Competition", 8th Inter. Conf. on Document Analysis and Recognition (ICDAR), 2005, volume 1, pp 70–74.
[23] L. S. Oliveira, M. Morita and R. Sabourin, "Feature selection for ensembles applied to handwriting recognition", Inter. Journal on Document Analysis and Recognition, 8(4):262–279, 2006.
[24] M. Pechwitz, S. S. Maddouri, V. Märgner, N. Ellouze and H. Amiri, "IFN/ENIT-Database of Handwritten Arabic Words", Colloque Inter. Francophone sur l'Ecrit et le Document (CIFED), 2002, pp 127–136.
[25] S. Snoussi Maddouri, H. Amiri, A. Belaïd and C. Choisy, "Combination of local and global vision modelling for Arabic handwritten words recognition", 8th Inter. Workshop on Frontiers in Handwriting Recognition (IWFHR), 2002, pp 128–135.
[26] C. Y. Suen and J. Tan, "Analysis of errors of handwritten digits made by a multitude of classifiers", Pattern Recognition Letters, 26(3):369–379, 2005.
[27] M. Van Erp and L. Schomaker, "Variants Of The Borda Count Method For Combining Ranked Classifier Hypotheses", 7th Inter. Workshop on Frontiers in Handwriting Recognition (IWFHR), 2000, pp 443–452.
[28] L. Xu, A. Krzyzak and C. Suen, "Methods of combining multiple classifiers and their applications to handwriting recognition", IEEE Trans. Syst., Man, Cybern., 22(3):418–435, 1992.
[29] H. K. Zouari, Contribution à l'évaluation des méthodes de combinaison parallèle de classifieurs par simulation, PhD thesis, Université de Rouen, 2004.