International Journal of Medical Informatics 58 – 59 (2000) 179 – 190
www.elsevier.com/locate/ijmedinf
Power of heterogeneous computing as a vehicle for implementing E3 medical decision support systems

Vili Podgorelec *, Janez Brest, Peter Kokol

Laboratory of System Design, Faculty of Electrical Engineering and Computer Science, University of Maribor, Smetanova 17, SI-2000 Maribor, Slovenia

Received 16 December 1999; accepted 31 March 2000
Abstract

A computer system of PCs, workstations, minicomputers etc. connected together via a local area network or wide area network represents a large pool of computational power. Our aim is to use this power for the implementation of an E3 (efficiency, effectiveness, efficacy) medical decision support system, which can be based on different models, the best of which provide an explanation together with an accurate and reliable response. Among the most viable models are decision trees, already used for many medical decision-making purposes. In this paper, we present a parallel implementation of a genetic algorithm on a heterogeneous computing system for the induction of decision trees, applied to the diagnosis of the mitral valve prolapse syndrome. Given the advantages of our model, especially its great computational power, the approach can be considered a good choice for various kinds of real-world decision making. © 2000 Elsevier Science Ireland Ltd. All rights reserved.

Keywords: Decision support systems; Decision trees; Evolutionary algorithms; Heterogeneous computing; Mitral valve prolapse
1. Introduction

As in many other areas, decisions play an important role in medicine, especially in medical diagnostic processes. Decision support systems (DSS) helping physicians are becoming a very important part of medical decision making, particularly where decisions must be made effectively and reliably [1,2]. Since conceptually simple decision-making models with the possibility of automatic learning should be considered for performing such tasks [3], according to recent reviews [1,4,5] decision trees are a very suitable candidate. They have already been used successfully in DSS for many decision-making purposes, but some problems persist with their traditional way of induction, which has not changed much since their introduction. Considering the advantages of evolutionary methods in performing complex tasks, we decided to overcome these problems by inducing decision trees with genetic algorithms. A drawback of the evolutionary approach is its computation time, which is much higher than that of the traditional induction approach. To reduce the required computation time, we implemented the parallel algorithm on a heterogeneous computing system.

* Corresponding author. Tel.: +386-62-2207456; fax: +386-62-211178. E-mail address: [email protected] (V. Podgorelec).

1386-5056/00/$ - see front matter © 2000 Elsevier Science Ireland Ltd. All rights reserved. PII: S1386-5056(00)00086-1
2. Decision trees

Inductive inference is the process of moving from concrete examples to general models, where the goal is to learn how to classify objects by analyzing a set of instances (already solved cases) whose classes are known. Instances are typically represented as attribute-value vectors. The learning input consists of a set of such vectors, each belonging to a known class, and the output consists of a mapping from attribute values to classes. This mapping should accurately classify both the given instances and other, unseen instances.

A decision tree [5-7] is a formalism for expressing such mappings. It consists of tests, or attribute nodes, linked to two or more subtrees, and leaves, or decision nodes, each labeled with a class that represents the decision. A test node computes some outcome based on the attribute values of an instance, and each possible outcome is associated with one of the subtrees. An instance is classified by starting at the root node of the tree. If this node is a test, the outcome for the instance is determined and the process continues using the appropriate subtree. When a leaf is eventually encountered, its label gives the predicted class of the instance.

Finding a solution with the help of decision trees starts by preparing a set of solved cases. The whole set is then divided into: (1) a learning set, which is used for the induction of a decision tree; and (2) a testing
set, which is used to check the accuracy of the obtained solution. First, all attributes defining each case are described (input data) and among them one attribute is selected that represents the decision for the given problem (output data). For all input attributes, specific value classes are defined. If an attribute can take only one of a few discrete values, then each value gets its own class; if an attribute can take various numeric values, then some characteristic intervals are defined, which represent the different classes.

Each attribute can represent one internal node in a generated decision tree, also called an attribute node (Fig. 1). Such an attribute node has exactly as many branches as it has value classes. The leaves of a decision tree are decisions and represent the value classes of the decision attribute (Fig. 1). When a decision has to be made for an unsolved case, we start at the root node of the decision tree and move along attribute nodes, selecting the branches whose values match the attribute values of the unsolved case, until a leaf node, representing the decision, is reached.

Usually the members of a set of objects are classified as either positive or negative instances (e.g. patients with/without MVP syndrome). We extended this approach to multi-class decision making, enabling one to differentiate between patients with prolapse, silent prolapse and no prolapse at all. An example of a simple decision tree can be seen in Fig. 1.
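The classification walk described above can be sketched in a few lines. The tree encoding, the attribute names and the value classes below are illustrative assumptions, not the actual MVP tree of Fig. 1.

```python
# A minimal sketch of decision-tree classification: follow the branch
# matching the instance's value at each attribute node until a leaf.

def classify(tree, instance):
    """Walk from the root to a leaf and return its decision label."""
    node = tree
    while node["kind"] == "attribute":
        value = instance[node["name"]]
        node = node["branches"][value]
    return node["decision"]

# Hypothetical two-level tree (attribute names are invented):
tree = {
    "kind": "attribute", "name": "systolic_murmur",
    "branches": {
        "present": {
            "kind": "attribute", "name": "chest_pain",
            "branches": {
                "yes": {"kind": "decision", "decision": "prolapse"},
                "no":  {"kind": "decision", "decision": "silent prolapse"},
            },
        },
        "absent": {"kind": "decision", "decision": "no prolapse"},
    },
}

print(classify(tree, {"systolic_murmur": "present", "chest_pain": "no"}))
```

Note that the multi-class extension used in the paper requires nothing extra here: a leaf may carry any of the three labels (prolapse, silent prolapse, no prolapse).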
3. Genetic algorithms

Genetic algorithms are adaptive heuristic search methods which may be used to solve all kinds of complex search and optimisation problems [8-12]. They are based on the evolutionary ideas of natural selection and genetic processes of biological organisms. As natural populations evolve according to the principles of natural selection and 'survival of the fittest', first laid down by Charles Darwin, so, by simulating this process, genetic algorithms are able to evolve solutions to real-world problems, if these have been suitably encoded [12]. They are often capable of finding optimal solutions even in the most complex of search spaces, or at least they offer significant benefits over other search and optimisation techniques.

A typical genetic algorithm operates on a set of solutions (a population) within the search space. The search space represents all the possible solutions that can be obtained for the given problem and is usually very complex or even infinite. Every point of the search space is one of the possible solutions; the aim of the genetic algorithm is therefore to find an optimal point or at least come as close to it as possible. The genetic algorithm uses three genetic operators: selection, crossover (recombination) and mutation (Fig. 2). Selection is the survival of the fittest individuals within the genetic algorithm, with the aim of giving preference to the best ones. For this purpose, all solutions have to be evaluated, which is done with an evaluation function.
Fig. 1. A part of a decision tree for the MVP classification (an example of a decision tree).
Fig. 2. Genetic operators: selection, crossover and mutation.
182
V. Podgorelec et al. / International Journal of Medical Informatics 58–59 (2000) 179–190
Selection determines the individuals to be used in the second genetic operator, crossover (recombination), where from two good individuals a new, and possibly even better, individual is constructed. The crossover process is repeated until the whole new population is filled with the offspring produced by crossover. All constructed individuals have to remain feasible with regard to the given problem; it is therefore important to coordinate the internal representation of individuals with the genetic operators. The last genetic operator is mutation, an occasional (low-probability) alteration of an individual that helps to find an optimal solution to the given problem faster and more reliably.
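The evaluate-select-cross-mutate cycle described above can be sketched generically. The individuals below are bit strings maximising the number of 1-bits (the paper evolves decision trees instead), and all parameter values are illustrative assumptions; the rank-biased selection is only an approximation of the exponential ranking scheme used later in the paper.

```python
import random

# A generic GA cycle: evaluation, rank-biased selection with elitism,
# one-point crossover, and low-probability mutation.

def evolve(fitness, length=16, pop_size=20, elitism=2,
           mut_prob=0.05, generations=50, seed=42):
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        new_pop = [ind[:] for ind in ranked[:elitism]]  # elitism
        while len(new_pop) < pop_size:
            # rank-biased selection: better-ranked individuals more likely
            a = ranked[min(int(rnd.expovariate(0.5)), pop_size - 1)]
            b = ranked[min(int(rnd.expovariate(0.5)), pop_size - 1)]
            cut = rnd.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):                 # occasional mutation
                if rnd.random() < mut_prob:
                    child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # maximise the number of 1-bits
print(sum(best))
```

Only the individual encoding and the operators change when moving from bit strings to decision trees; the control flow stays the same.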
4. Heterogeneous computing

There is a continual demand for greater computational speed than a computer system can currently deliver [13]. Areas requiring great computational speed include numerical modelling and simulation of scientific and engineering problems. Such problems often need huge repetitive calculations on large amounts of data to give valid results. One way of increasing the computational speed, a way that has been considered for many years, is to use multiple processors operating together on a single problem.

Research in the field of heterogeneous computing began in the mid 1980s and has grown tremendously since [14,27]. From the scientific community to the federal government, heterogeneous computing has become an important area of research and interest. Heterogeneous computing includes both parallel [15] and distributed processing. The virtual heterogeneous associative machine concept makes the distributed machines appear as one single machine. By exploiting the different features and capabilities of a heterogeneous environment, higher levels of performance can be attained. A computer system of PCs, workstations, minicomputers etc. connected together via a local area network or wide area network represents a large pool of computational power.

The use of parallel processing as a means of providing high-performance computational facilities for large-scale and grand-challenge applications has been investigated widely. Until recently, however, the benefits of this research were confined to the individuals who had access to such systems. The trend is to move away from specialised platforms, such as the Cray/SGI T3E and IBM SP2, to cheaper, general-purpose systems (Fig. 3) consisting of loosely coupled components built from single- or multi-processor platforms. This approach has a number of advantages, including that of being able to build a system for a given budget that is suitable for certain types of applications and workloads.

A cluster can be a network of PCs or UNIX workstations connected via some network. For parallel computing purposes, a cluster will generally comprise high-performance workstations interconnected by a high-speed network. A cluster works as an integrated collection of resources and can have a single system image spanning all its nodes [16]. A cluster is thus a type of parallel or distributed processing system which consists of a collection of networked computers and is used as a single, integrated computing resource. A computer node can be a single- or multiprocessor system with memory, I/O facilities and an operating system. A cluster generally refers to two or more computers (nodes) connected together. The nodes can exist in a single cabinet or be physically separate and
Fig. 3. An example of a heterogeneous computing system.
connected via a LAN. An interconnected (LAN-based) cluster of computers can appear as a single system to users and applications. Such a system can provide a cost-effective way to gain features and benefits (fast and reliable services) that have historically been found only on more expensive proprietary shared-memory systems. The following list highlights some of the reasons for preferring a network of workstations over specialised parallel computers [17,18]:
1. Individual workstations are becoming increasingly powerful.
2. The communications bandwidth between workstations is increasing and latency is decreasing as new networking technologies and protocols are implemented in a LAN.
3. Workstation clusters are easier to integrate into existing networks than special parallel computers.
4. The typically low user utilisation of personal workstations means that their unused CPU cycles can be exploited.
5. Workstation clusters are a cheap and readily available alternative to specialised high-performance computing platforms.
6. Clusters can easily be grown: node capability can be extended (e.g. by adding memory blocks) and additional nodes can be added to extend cluster capability.
Programming environments can offer portable, efficient and easy-to-use tools for the development of applications. They include message-passing libraries, debuggers and profilers. It should not be forgotten that clusters can be used for the execution of sequential as well as parallel applications. The network layer provides interconnectivity between computing sites. Communication tools suitable for network heterogeneous computing are the message passing interface (MPI) [19], the parallel virtual
machine (PVM), portable programs for parallel processors (P4) etc.
5. Mitral valve prolapse

Prolapse is defined as the displacement of a bodily part from its normal position. The term mitral valve prolapse (MVP) [20-23], therefore, implies that the mitral leaflets are displaced relative to some structure, generally taken to be the mitral annulus. Silent prolapse is prolapse that cannot be heard on auscultation and is especially hard to diagnose. The implications of MVP are the following: disturbed normal laminar blood flow, turbulence of the blood flow, injury of the chordae tendineae, the possibility of thrombus formation, bacterial endocarditis and, finally, hemodynamic changes defined as mitral insufficiency and mitral regurgitation.

Mitral valve prolapse is one of the most prevalent cardiac conditions, possibly affecting 5-10% of the normal population, and also one of the most controversial. The commonest cause is probably myxomatous change in the connective tissue of the valvular leaflets, which makes them excessively pliable and allows them to prolapse into the left atrium during ventricular systole. The clinical manifestations of the syndrome are multiple. The great majority of patients are asymptomatic. Other patients, however, may present with atypical chest pain or supraventricular tachyarrhythmias. Rarely, patients develop significant mitral regurgitation and, as with any valvular lesion, bacterial endocarditis is a risk. Uncertainty persists about how the syndrome should be diagnosed and about its clinical importance.

Historically, MVP was first recognized by auscultation of a mid-systolic 'click' and a late systolic murmur, and its presence is still usually suggested by auscultatory findings. However, the recognized variability of the auscultatory findings, and the high level of skill needed to perform such an examination, have prompted a search for reliable laboratory methods of diagnosis. M-mode and 2-D echocardiography have played an important part in the diagnosis of mitral valve prolapse because of the comprehensive information they provide about the structure and function of the mitral valve. Medical experts report [20-22] that echocardiography enables properly trained experts, armed with proper criteria, to evaluate mitral valve prolapse (MVP) with almost 100% accuracy. Unfortunately, however, there are some problems with the use of echocardiography. The first is that the current MVP evaluation criteria are not strict enough (private communication and [1]). The second is the incidence of MVP in the general population and the unavailability of the expensive ECHO machines to general practitioners. Because of these problems, we decided to develop a decision support system enabling the general practitioner to evaluate MVP using conventional methods and to identify potential patients in the general population.
6. Implementation of the algorithm

Before the evolution process can begin, we have to define the internal representation of individuals within the population, together with the appropriate genetic operators that will work upon the population. It is important to ensure the feasibility of all solutions during the whole evolution process; we therefore decided to represent individuals directly as decision trees. This approach has some important features: all intermediate solutions are feasible, no information is lost in conversion between the internal representation and the decision tree, the evaluation function can be straightforward, etc. On the other hand, direct coding of a solution complicates the definition of the genetic operators. As decision trees may be seen as a kind of simple computer program (with attribute nodes being conditional clauses and decision nodes being assignments), we decided to define genetic operators similar to those used in genetic programming, where individuals are computer program trees.

The evolution process starts with the seeding of the initial population. Since the evolution works on a constant-size population, we first have to construct a certain number (the population size) of random decision trees. This is performed by randomly choosing attributes and connecting them to a randomly selected branch of a growing decision tree. To prevent trees from deteriorating, we defined certain probabilities of selecting individual branches when constructing a tree.

The first phase of the repeating evolution process is selection. We chose a slightly modified exponential ranking selection scheme with a predefined elitism value (the number of best individuals preserved unchanged in each generation). Before the actual selection can take place, all the individuals have to be evaluated for their fitness score. As the evaluation function, a weighted sum of wrong decisions on the testing objects was selected; the complexity of the evolved decision tree itself (the size and depth of the tree) was also considered, but its contribution to the function was very small. Wrong decisions occur when a testing object is not correctly classified (the correct classifications of all learning objects are known, but hidden from the developing method). There are also some
cases where the decision leaf is labeled 'decision cannot be made': this happens when approximately the same number of patients with and without prolapse remain in the leaf and no further attributes can be selected. Since not all wrong decisions are equally important, we ranked them as follows: the most serious errors occur when patients with prolapse or silent prolapse are classified as 'no prolapse'; next on the scale of importance is when a decision for such a patient cannot be made; and it is least serious when a healthy patient is classified with prolapse or silent prolapse, because this only means that the patient will be sent for further examination with more specialized equipment.

The second phase of the repeating evolution process is the application of genetic crossover. When two individuals have been selected, a node is randomly chosen in the first tree and the second tree is searched for the same attribute node. Only if the same attribute node exists in both selected trees can the crossover be applied. If the same attribute node is found, a branch at the chosen node in the first tree is cut and replaced by a branch at the same attribute node in the second tree. Next, the resulting tree is checked for the possible existence of two attribute nodes of the same kind on the same branch; in such a case the second one is removed.

The third and final phase of the repeating evolution process is mutation. The genetic mutation operator is divided into two parts with separate mutation probabilities. The first part exchanges a randomly chosen attribute node in a decision tree with another, randomly selected attribute from the list of all attributes. The only limitation on this exchange is that both attributes must have the same number of successors, in order to preserve all branches from the selected node in the mutated tree. The
second part of the mutation works on the leaf nodes of a tree, namely the decisions. When a decision node is selected, it is exchanged with one of the other possible decisions, also randomly selected.

As the evolution repeats, better solutions are obtained with regard to the chosen evaluation function. Appropriately constructed initial individuals and mutation prevent premature convergence, elitism preserves the best individuals through the generations, and crossover, together with selection and mutation, drives the improvement of the best individuals towards the optimum. The evolution stops when an optimal, or at least an acceptable, solution is found.
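The weighted evaluation function and the leaf-relabelling half of the mutation operator can be sketched as follows. The error weights, the tree encoding and the attribute names are illustrative assumptions, not the actual values or data used in the paper.

```python
import random

def classify(tree, instance):
    """Follow matching branches from root to a leaf decision."""
    node = tree
    while node["kind"] == "attribute":
        node = node["branches"][instance[node["name"]]]
    return node["decision"]

def count_nodes(tree):
    """Tree size, used as the small complexity term in the fitness."""
    if tree["kind"] == "decision":
        return 1
    return 1 + sum(count_nodes(b) for b in tree["branches"].values())

# (true class, predicted class) -> weight; unlisted pairs cost nothing.
# The ordering mirrors the text; the numeric values are invented.
ERROR_WEIGHTS = {
    ("prolapse", "no prolapse"): 4.0,        # missed positive: worst
    ("silent prolapse", "no prolapse"): 4.0,
    ("prolapse", "undecided"): 2.0,
    ("silent prolapse", "undecided"): 2.0,
    ("no prolapse", "prolapse"): 1.0,        # false alarm: least serious
    ("no prolapse", "silent prolapse"): 1.0,
}

def fitness(tree, cases, size_penalty=0.01):
    """Weighted sum of wrong decisions plus a small complexity term
    (lower is better)."""
    error = sum(ERROR_WEIGHTS.get((true, classify(tree, inst)), 0.0)
                for inst, true in cases)
    return error + size_penalty * count_nodes(tree)

def mutate_leaf(tree, decisions, rnd):
    """Second part of the mutation: relabel one random leaf with
    another possible decision."""
    leaves, stack = [], [tree]
    while stack:
        node = stack.pop()
        if node["kind"] == "decision":
            leaves.append(node)
        else:
            stack.extend(node["branches"].values())
    leaf = rnd.choice(leaves)
    leaf["decision"] = rnd.choice(
        [d for d in decisions if d != leaf["decision"]])

tree = {"kind": "attribute", "name": "murmur",
        "branches": {"present": {"kind": "decision", "decision": "prolapse"},
                     "absent": {"kind": "decision", "decision": "no prolapse"}}}
cases = [({"murmur": "present"}, "prolapse"),
         ({"murmur": "absent"}, "prolapse")]   # second case is misclassified
print(fitness(tree, cases))
```

The attribute-swap half of the mutation (and the crossover at a shared attribute node) would operate on the same encoding, with the extra check that the swapped attributes have the same number of branches.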
6.1. Heterogeneous implementation of the genetic algorithm

One significant disadvantage of our evolutionary approach [24,25], compared with the traditional induction method, is the time needed for a solution to evolve. Although evolutionary algorithms are in general used to reduce computation time significantly, in this case the traditional induction method is much faster. When only a few decision trees have to be induced and are then used for a long time, this disadvantage is not very serious, but there are several cases in which a larger number of decision trees is required. For example, in medical decision making based on the medical history and status of a patient, a new decision tree is required for each patient. To reduce the computation time of our method, we implemented the described genetic algorithm on a heterogeneous computing system (Fig. 3).

The local area multicomputer (LAM) [19,26] is a parallel environment development system for independent computers. It features the MPI programming standard, supported by extensive monitoring and debugging tools. We used MPI/LAM and 41 PCs and workstations connected in an ATM network to implement the described parallel genetic algorithm for the induction of decision trees.

We decided to partition the whole population into several sub-populations. The number of sub-populations was determined by the number of computers in the heterogeneous system; the individuals were uniformly distributed among the sub-populations. Each sub-population was evolved according to the described evolutionary process, but some important parameters, such as the crossover and mutation probabilities and the weights of the evaluation function, differed. The parameters of each sub-population were obtained by a small random alteration of the initial parameter values, which were defined experimentally. After a predefined number of generations, all evolved individuals from all sub-populations are compared. The best individuals are stored as the current overall solutions and some individuals are then randomly exchanged between sub-populations, after which the evolutionary process continues. The whole process stops when the best overall solution has not changed significantly in the last few cycles.
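The sub-population scheme above can be sketched as an island model. In the paper each sub-population ran on its own machine under MPI/LAM; here the islands are simulated sequentially in one process, a toy one-max fitness stands in for decision-tree evaluation, and the parameter jitter and migration sizes are illustrative assumptions.

```python
import random

def make_island(base_mut, rnd):
    """Each island gets slightly perturbed parameters, as in the paper."""
    return {"mut_prob": base_mut * rnd.uniform(0.8, 1.2),
            "pop": [[rnd.randint(0, 1) for _ in range(16)]
                    for _ in range(10)]}

def evolve_island(island, generations, rnd):
    """Evolve one sub-population with elitism, crossover and mutation."""
    for _ in range(generations):
        pop = sorted(island["pop"], key=sum, reverse=True)
        new_pop = [ind[:] for ind in pop[:2]]          # elitism
        while len(new_pop) < len(pop):
            a, b = rnd.choice(pop[:5]), rnd.choice(pop[:5])
            cut = rnd.randrange(1, 16)                 # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(16):                        # occasional mutation
                if rnd.random() < island["mut_prob"]:
                    child[i] = 1 - child[i]
            new_pop.append(child)
        island["pop"] = new_pop

def migrate(islands, rnd, n=2):
    """Exchange a few random individuals between neighbouring islands."""
    for i, isl in enumerate(islands):
        target = islands[(i + 1) % len(islands)]
        for _ in range(n):
            slot = rnd.randrange(len(target["pop"]))
            target["pop"][slot] = rnd.choice(isl["pop"])[:]

rnd = random.Random(7)
islands = [make_island(0.05, rnd) for _ in range(4)]
for cycle in range(5):            # evolve each island, then exchange
    for isl in islands:
        evolve_island(isl, generations=10, rnd=rnd)
    migrate(islands, rnd)

best = max((ind for isl in islands for ind in isl["pop"]), key=sum)
print(sum(best))
```

Under MPI, `evolve_island` would run on each node concurrently and `migrate` would become message exchanges between nodes, which is where the speed-up reported in Section 7 comes from.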
7. Application and results

Using the Monte Carlo sampling method we selected 900 children and adolescents representing the whole population under 18 years of age. All were born in the Maribor region and all were white. They were routinely called for an echocardiography, with no prior findings. We examined 631 volunteers. All of them had their health state examined following a carefully prepared protocol made specifically for the MVP syndrome. The protocol covered general data; the mother's and father's health; pregnancy; delivery; the post-natal period; injuries to the chest or of any other kind; chronic diseases; sports; physical examination; subjective difficulties such as headaches, chest pain, palpitation, perspiring, dizziness, etc.; auscultation; phonocardiography; EKG; and finally ECHO. In this manner, we gathered 103 parameters that could possibly indicate the presence of MVP. The 631 patients were divided into a learning and a testing set: 500 patients were selected for the learning set and 131 for the testing set.

Initially, we constructed decision trees traditionally, with 0, 5 and 10% tolerance, without and with post-pruning. Then a few decision trees were evolved with our new evolutionary approach for the same learning and testing sets, and a comparison was made (Tables 1 and 2). A notable improvement was achieved with our new model, both regarding the classification of prolapse (four instead of three correct classifications means an improvement of 33%) and of no prolapse, even before considering all the advantages of our evolutionary approach over the traditional one besides the accuracy of classification.

A comparison of the decision trees showed that similar attributes were chosen as
the most important ones (nearer to the root of a decision tree), but the less important ones (deeper in the tree, near the decision attributes) were quite different. This hints that the presence of MVP can be determined in several ways, with the help of different examinations and data; the same opinion is shared by some medical experts who have tried to predict MVP in different ways. This result favors our evolutionary approach, which is able to find those different solutions, as opposed to the traditional approach.

The next important improvement, maybe even more important than the accuracy of the suggested decisions, is the topology and size of the constructed decision trees (Table 3). The number of attribute nodes is about 20% lower in a simple evolutionary constructed decision tree than in the traditionally constructed one, and even lower in a self-adapting evolutionary constructed decision tree, in spite of the post-pruning performed on the traditionally constructed one. This result supports our hypothesis about the optimization of tree topology and complexity and is very important because it enables a physician to predict the presence of MVP with two examinations less
Table 1
The results of MVP classification by a traditionally constructed decision tree

                   Classified as  Classified as     Classified as
                   prolapse       silent prolapse   no prolapse
Prolapse           3              0                 4
Silent prolapse    0              4                 0
No prolapse        8              4                 108

Table 2
The results of MVP classification by an evolutionary constructed decision tree

                   Classified as  Classified as     Classified as
                   prolapse       silent prolapse   no prolapse
Prolapse           4              0                 3
Silent prolapse    0              4                 0
No prolapse        5              3                 112
Table 3
A comparison of the complexity of decision trees

                           Traditionally      DT constructed with
                           constructed DT     evolutionary method
Number of attribute nodes  34                 25
Number of decision nodes   52                 36
Maximal depth              10                 6
Average depth              5.73               4.11
Table 4
A comparison of effectiveness for different decision tree induction methods

             Traditionally      Evolutionary
             constructed DT     constructed DT
Accuracy     87.79              91.60
Sensitivity  63.64              72.72
Specificity  90.00              93.33
The effectiveness of different diagnostic methods is usually described by accuracy in the field of machine learning and by sensitivity and specificity in the field of medicine (Table 4). The accuracy of a diagnostic method is calculated as the ratio of correctly classified testing objects to all testing objects. Sensitivity is the ratio of correctly classified positive patients (patients having prolapse or silent prolapse in our case) to all positive patients, and specificity is the ratio of correctly classified negative patients (without prolapse or silent prolapse) to all negative patients. All values are given as percentages. From Table 4 it is obvious that the evolutionary approach improves all measures of effectiveness considerably, especially when the self-adapting model is considered.

Fig. 4 shows an example of parallel program execution under the XMPI/LAM environment, which runs under several UNIX operating systems. We ran the algorithm on the cluster of 41 PCs and workstations for about 6 h, and in that time over 21 million generations were evolved. A single computer would require about 10 days to perform the same amount of work.
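The figures in Table 4 can be reproduced from the confusion matrix of Table 2. The row/column ordering below is our reading of the table; note that rounding gives 72.73 for sensitivity where the table prints 72.72, presumably truncated.

```python
# Recomputing accuracy, sensitivity and specificity for the
# evolutionary constructed tree from its confusion matrix (Table 2):
# rows are true classes, columns predicted classes, in the order
# prolapse, silent prolapse, no prolapse.

matrix = [
    [4, 0, 3],    # true prolapse
    [0, 4, 0],    # true silent prolapse
    [5, 3, 112],  # true no prolapse
]

total = sum(sum(row) for row in matrix)                # 131 testing objects
correct = sum(matrix[i][i] for i in range(3))
accuracy = 100.0 * correct / total

# Positive = prolapse or silent prolapse (rows 0 and 1); a positive
# patient classified into either positive class counts as correct.
positives = sum(sum(matrix[i]) for i in (0, 1))
true_pos = sum(matrix[i][j] for i in (0, 1) for j in (0, 1))
sensitivity = 100.0 * true_pos / positives

negatives = sum(matrix[2])
specificity = 100.0 * matrix[2][2] / negatives

print(round(accuracy, 2), round(sensitivity, 2), round(specificity, 2))
```

The same computation on the Table 1 matrix yields the traditional-tree column of Table 4 (87.79, 63.64, 90.00).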
8. Conclusion
Fig. 4. An example of a parallel program execution in XMPI interface.
on average than before, a result that made the medical experts enthusiastic.
In this paper we have presented a parallel and distributed implementation of decision tree induction for a medical DSS on a heterogeneous computing system. We applied it to the prediction of MVP in children, but it can easily be used for other kinds of medical decision making. The whole evolution process was described: the seeding of the initial population, selection, crossover, mutation and the evaluation function, as well as the parallelization of the evolutionary process. The results of MVP classification by evolutionary constructed decision trees were compared with those obtained by traditionally constructed ones. Because of the advantages of our model and the quality of the obtained results, in our opinion this approach can be used successfully for different kinds of real-world decision making.
Acknowledgements

A part of the work presented in this paper was financed by the Slovenian Ministry of Science and Technology, grant No. L2-16400796-99.

References

[1] P. Kokol, et al., Decision trees and automatic learning and their use in cardiology, J. Med. Sys. 19 (1994) 4.
[2] P. Kokol, V. Podgorelec, I. Malcic, Diagnostic process optimization with evolutionary programming, Proceedings of the Eleventh IEEE Symposium on Computer-Based Medical Systems, CBMS '98, Lubbock, TX, 1998, pp. 62-67.
[3] P. Kokol, B. Stiglic, V. Zumer, Metaparadigm: a soft and situation oriented MIS design approach, Int. J. Bio-Med. Comput. 39 (1995) 243-256.
[4] P. Kokol, et al., Spreadsheet software and decision making in nursing, in: E.J.S. Honvenga, et al. (Eds.), Nursing Informatics, Springer-Verlag, New York, 1991.
[5] J.R. Quinlan, Decision trees and decision making, IEEE Trans. Sys. Man Cybernet. 20 (2) (1990) 339-346.
[6] J.R. Quinlan, Induction of decision trees, Mach. Learn. 1 (1986) 81-106.
[7] J.R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA, 1993.
[8] T. Baeck, Evolutionary Algorithms in Theory and Practice, Oxford University Press, Oxford, UK, 1996.
[9] S. Forrest, Genetic algorithms, ACM Comp. Surv. 28 (1) (1996) 77-80.
[10] D.E. Goldberg, Genetic and evolutionary algorithms come of age, Comm. ACM 37 (3) (1994) 113-119.
[11] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[12] J.H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, 1975.
[13] B. Wilkinson, M. Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice-Hall, Englewood Cliffs, NJ, 1998.
[14] M.M. Eshagian (Ed.), Heterogeneous Computing, Artech House, Norwood, MA, 1996.
[15] I. Foster, Designing and Building Parallel Programs, Addison-Wesley, Reading, MA, 1995.
[16] K. Hwang, Z. Xu, Scalable Parallel Computing: Technology, Architecture, Programming, WCB/McGraw-Hill, New York, 1998.
[17] R. Buyya (Ed.), High Performance Cluster Computing: The System Issues, Prentice Hall, NJ, 1998.
[18] Jazznet, A Dedicated Cluster of Linux PCs. http://math.nist.gov/jazznet/
[19] G.D. Burns, R.B. Daoud, J.R. Vaigl, LAM: an open cluster environment for MPI, Proceedings of Supercomputing Symposium '94, Toronto, Canada, 1994.
[20] H.R. Anderson, et al., Clinicians Illustrated Dictionary of Cardiology, Science Press, London, UK, 1991.
[21] W. Markiewicz, et al., Mitral valve prolapse in one hundred presumably young females, Circulation 53 (3) (1976) 464-473.
[22] J.B. Barlow, et al., The significance of late systolic murmurs, Am. Hlth. J. 66 (1963) 443-452.
[23] R. Devereoux, Diagnosis and prognosis of mitral valve prolapse, N. Engl. J. Med. 320 (16) (1989) 1077-1079.
[24] V. Podgorelec, P. Kokol, A genetic algorithm for highly constrained scheduling problems, Proceedings of the Sixth Electrical and Computer Science Conference ERK '97, Portoroz, Slovenia, 1997, pp. 459-460.
[25] V. Podgorelec, P. Kokol, Genetic algorithm based system for patient scheduling in highly constrained situations, J. Med. Sys. 21 (6) (1997) 417-427.
[26] G.D. Burns, The local area multicomputer, Proceedings of the Fourth Conference on Hypercube Concurrent Computers and Applications, ACM Press, 1989.
[27] J. Brest, V. Zumer, M. Ojstersek, Dynamic scheduling on a network heterogeneous computer system, Proceedings of the Fourth International ACPC Conference Including Special Tracks on Parallel Numerics (ParNum '99) and Parallel Computing in Image Processing, Video Processing and Multimedia, LNCS 1557, Salzburg, Austria, 1999, pp. 584-585.