Time-Constrained Sorting: A Comparison of Different Algorithms

P. Puschner, Technische Universität Wien, Austria ([email protected])
A. Burns, University of York, UK ([email protected])
Abstract: The designers of real-time systems try to avoid poor hardware utilization by assigning only the absolutely necessary time budget to each task. In certain cases it is even acceptable to cut the time quantum assigned to a task below its worst-case needs, provided (a) a high percentage of executions complete within this quantum and (b) the quality of the aborted computations is sufficient for further processing. In this paper we study the effect of reserving less than the worst-case execution time for different sorting algorithms. We investigate the quality of the partial results of the sorting algorithms at the point of their termination. To do so, we define a set of metrics and compare the quality of the incomplete sorts of aborted computations against these metrics. Further, we evaluate the sensitivity of the results to changes in the completion rate of the chosen time quantum. As a result we present a rating of the evaluated algorithms and show how to achieve the best trade-off between CPU-time allocation and completion rate for the sorting algorithms.
1 Introduction

The sorting of data is a key activity in a number of real-time applications. Although there are many different sorting algorithms to choose from, most have the property that the average time it takes to complete a sort operation is significantly less than the theoretical worst-case time. The system/application designer is therefore faced with a choice (dilemma):

- reserve enough resources for the worst case, and experience considerable under-utilization at run-time, or
- reserve less than the worst case, obtain better resource utilization, but risk an occasional incomplete sort.

Clearly, if it is imperative that the sort operation always completes, then worst-case analysis and resource reservation must be used. But for many applications this is not necessary; a percentage success rate is sufficient as long as the abandoned sort leaves the data in an effective partially ordered form. In this paper we define a set of possible metrics for partial orderings and examine a number of sort routines against these metrics. The results of this study are recommendations on how best to trade computation time against quality of result. Although aimed specifically at sorting, the framework we use is applicable to other algorithms, for example search routines, that have this distinction between average and worst-case computation time.

There is a large body of literature describing the operation and the peculiarities of sorting algorithms [Aho, Hopcroft, Ullman 1983, Cormen, Leiserson, Rivest 1990, Knuth 1973, Mehlhorn 1984, Sedgewick 1989, Sedgewick, Flajolet 1996]. With respect to performance, these works typically focus on the average and, to some extent, the worst-case number of operations that the algorithms need to obtain a fully sorted solution. Recent work [Mittermair, Puschner 1997, Puschner to appear] investigates the suitability of different sorting programs for hard and soft real-time applications. This paper extends these studies on sorting algorithms by considering the termination of the sorting programs when their deadlines expire. It compares the quality of the resulting partial sorts of the different algorithms.
1.1 Motivation

The purpose of this section is to illustrate that it is indeed worthwhile to refrain from reserving resources for the worst case, if the application allows this. It is well known that for many sorting algorithms the worst-case execution time (WCET) is far beyond the execution time of an average execution. Also, the standard deviation of the execution times from the average execution time is small compared to the difference between the worst case and the average case. As a consequence, a resource allocation strategy that reserves less than the WCET for a sorting task needs to reserve significantly less time than the WCET, even if the reserved duration must guarantee a completion ratio for the task that is close to 100 percent. Thus, configuring the duration of execution intervals based on the demanded completion rate of an algorithm instead of its WCET is a very effective means to keep resource costs low (see Figure 1).
Figure 1: WCETs, 99.9% Time Quanta, and Average Execution Times of Two Sorting Algorithms

Figure 1 shows the time quanta one has to reserve for two sample sorting algorithms (Insertion Sort and Heap Sort) so that 99.9% of all executions complete. The time quanta (curves without dot marks) are compared to average execution times (cross marks) and worst-case execution-time bounds (triangular marks). One can observe that the quanta are closer to the average execution times than to the worst-case bounds.

The paper is structured as follows: In Section 2 we derive the amount of time that has to be reserved for a program in order to ensure a given percentage of completions. Section 3 gives a detailed description of the experiments we conducted to assess the sorting programs. The main part, Section 4, investigates how the algorithms behave with respect to the different metrics. Also, we explain the dependence between the quality of the results and the completion rate. Section 5 summarizes our findings and concludes the paper.
2 Derivation of Resource Needs

We define the p-quantum for a task as the duration that needs to be reserved for the task to guarantee that a fraction p of all executions complete within that duration. In other words, on average only 1 out of 1/(1-p) executions needs more time than the p-quantum. Ideally one would derive a p-quantum by constructing the cumulative distribution function (CDF) of the execution time of the task. The execution time at which the CDF reaches p is the p-quantum (see Figure 2).
Figure 2: Derivation of a p-quantum from the CDF of the execution times. The example shows the CDF of a sample task and illustrates the derivation of the 0.95-quantum.

Since the construction of exact CDFs is infeasible in practice, p-quanta must be approximated. A viable solution for the estimation of a p-quantum is to measure the execution times of a large number of executions of the task and to apply a statistical test for quantiles of continuous distributions, as described for example in [DeGroot 1986], to approximate the bound. From n execution-time samples, an estimate for the p-quantum with a given confidence coefficient is computed as the value of the order statistic of the execution-time samples at the corresponding quantile of the binomial distribution for n trials and probability p.
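The following sketch illustrates this estimation procedure; it is our own reconstruction, not code from the paper. For large n the binomial quantile is approximated by its normal limit, so the index of the order statistic is obtained from n*p plus a margin that depends on the desired confidence. The function name estimate_p_quantum and the parameter z_gamma (the standard normal quantile for the chosen confidence coefficient) are assumptions made for this illustration.

    #include <math.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* samples: n measured execution times (sorted in place);
     * p:       required completion ratio;
     * z_gamma: standard normal quantile for the confidence coefficient
     *          (e.g., 1.645 for 95% confidence) -- normal approximation of
     *          the binomial quantile, valid for large n. */
    double estimate_p_quantum(double *samples, size_t n, double p, double z_gamma)
    {
        qsort(samples, n, sizeof samples[0], cmp_double);

        /* index of the order statistic: roughly the (n*p)-th sample, shifted
         * upward by the confidence margin of the binomial distribution */
        double k = ceil(n * p + z_gamma * sqrt(n * p * (1.0 - p)));
        size_t idx = (k < 1.0) ? 0 : (size_t)k - 1;   /* 1-based -> 0-based */
        if (idx >= n)
            idx = n - 1;                              /* clamp to largest sample */

        return samples[idx];   /* this order statistic is the p-quantum estimate */
    }

With the 500,000 measured execution times used in the experiments below, a call could look like estimate_p_quantum(times, 500000, 0.9, 1.645).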
3 Experiment Description

The reservation of p-quanta for tasks is tolerable as long as abandoned tasks leave their data in a state of acceptable quality. The goal of the experiments is to compare the quality of the partial solutions of sorting algorithms when terminated by the expiration of their p-quantum. We will define a set of possible metrics and examine a number of sorting algorithms against these metrics. Additionally, we will test in which way the choice of different values for p influences the quality of the results.
3.1 Sorting Algorithms

In the experiments we used six common algorithms for sorting arrays: Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, and Heap Sort. We implemented the algorithms in C to sort elements of a two-byte integer type. The coding followed the description of the algorithms in the standard literature [Mehlhorn 1984, Sedgewick, Flajolet 1996]. We implemented very simple versions of all algorithms; e.g., Merge Sort splits lists down to the length of a single element and Quick Sort takes the first item of each sublist as the pivot element. The algorithms, of course, have different timing characteristics, a comparison of which (from the real-time perspective) we have already given [Mittermair, Puschner 1997, Puschner to appear]. In this paper we are primarily concerned with the quality of the partially ordered output of the algorithms when they are terminated early. We therefore "normalize" the results by using the same p-quantum for each algorithm rather than absolute time.
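As an illustration of the "very simple" implementation style described above, the following sketch shows a Quick Sort on two-byte integers that always takes the first element of the current sublist as the pivot. It is our own reconstruction along the lines of the standard literature, not the paper's actual source code.

    typedef short elem_t;                 /* two-byte integer elements */

    static void swap_elems(elem_t *a, elem_t *b)
    {
        elem_t t = *a; *a = *b; *b = t;
    }

    /* Sort a[lo..hi] in place; the pivot is always the first item of the sublist. */
    static void quick_sort(elem_t a[], int lo, int hi)
    {
        if (lo >= hi)
            return;
        elem_t pivot = a[lo];
        int i = lo, j = hi;
        while (i <= j) {                  /* partition around the pivot value */
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j)
                swap_elems(&a[i++], &a[j--]);
        }
        quick_sort(a, lo, j);             /* recurse on the two partitions */
        quick_sort(a, i, hi);
    }

With a first-element pivot, already sorted or reverse-sorted inputs drive the recursion depth towards the array size, which is one reason why the worst-case execution time of such a variant lies far above its average.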
3.2 Experiment Steps

The experiments followed the steps below for each sorting program and each value of p for the p-quanta. First the program was compiled and prepared for execution with execution-time simulation for the MC68000 processor. To derive an estimate for the p-quantum, we generated 500,000 arrays of random data and executed the sorting program on these inputs. The execution-time simulation computed the number of CPU cycles for each of the test runs and stored the execution time together with the input data set. From the execution times we derived the needed p-quantum as described in the previous section.

After the computation of the p-quantum the stored execution times and input arrays were re-used for the subsequent evaluations. For each of the 500,000 input sets the execution time was compared to the p-quantum. All inputs with an execution time greater than the p-quantum were sorted again. This time, however, the execution was terminated upon expiration of the p-quantum. The partial result that had been computed within the p-quantum was taken as part of the overall evaluation of the sorting algorithms.
3.3 Evaluation Metrics

The quality of a partially sorted solution very much depends on the application. While one application might demand that a large fraction of the array is sorted, another application might favor solutions where the array elements are not too far from their correct positions. To cover a certain range of possible application needs we investigated the incomplete results of the sorting algorithms with respect to five different metrics (a computation sketch follows the list):
1. length of the longest sorted sub-sequence
2. number of elements in wrong position
3. max |actual position - correct position|
4. Σ |actual position - correct position|
5. Σ (actual position - correct position)²
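A small sketch of how these metrics can be computed from a partial result is given below; it is our own illustration (the function and struct names are assumptions, not taken from the paper). The correct position of an element is determined by comparing the partial result against a fully sorted copy of the same input, and the first metric is computed as the longest contiguous non-decreasing run, matching the term "longest sorted runs" used in Figure 15.

    #include <stdlib.h>
    #include <string.h>

    typedef short elem_t;            /* two-byte integer elements, as in Section 3.1 */

    struct metrics {
        int  longest_sorted_run;     /* metric 1 */
        int  wrong_position;         /* metric 2 */
        int  max_displacement;       /* metric 3 */
        long sum_displacement;       /* metric 4 */
        long sum_sq_displacement;    /* metric 5 */
    };

    /* partial: array left behind by the aborted sort; sorted: the same input
     * fully sorted (the reference); n: number of elements. Assumes distinct
     * elements, so each value has a unique correct position. */
    void compute_metrics(const elem_t *partial, const elem_t *sorted, int n,
                         struct metrics *m)
    {
        memset(m, 0, sizeof *m);
        m->longest_sorted_run = (n > 0) ? 1 : 0;

        int run = 1;
        for (int i = 1; i < n; i++) {               /* metric 1: longest sorted run */
            run = (partial[i] >= partial[i - 1]) ? run + 1 : 1;
            if (run > m->longest_sorted_run)
                m->longest_sorted_run = run;
        }

        for (int i = 0; i < n; i++) {
            /* correct position = index of partial[i] in the sorted reference
             * (linear search; quadratic overall, but fine for a sketch) */
            int correct = 0;
            while (sorted[correct] != partial[i])
                correct++;
            int d = abs(i - correct);               /* displacement of this element */
            if (d > 0)                   m->wrong_position++;
            if (d > m->max_displacement) m->max_displacement = d;
            m->sum_displacement    += d;
            m->sum_sq_displacement += (long)d * d;
        }
    }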
3.4 Points of Evaluation

We organized our work into two series of experiments. In the first series we evaluated and compared the partial solutions for different array sizes. In the second series we assessed the influence of the size of the p-quantum on the quality of the partial solutions.

In the first series we chose array sizes of 10, 50, 100, 150, and 200 elements, respectively. For each size we determined the 0.9-quantum for each of the algorithms and evaluated the partial results; Table 1 shows that stopping at the p-quantum reduces the time consumption of many algorithms significantly, especially if the arrays to be sorted are large. We determined the percentage of results that were already sorted, though the algorithms still had not completed (1), and compared all partial solutions against the above-listed metrics.

(1) Most standard sorting algorithms continue operation even though the data might already be completely sorted. This is because their loops have fixed iteration counts and the algorithms do not check the sortedness of the data during their operation.
Table 1: Duration of 0.9-Quanta Relative to the WCET Bounds (in Percent)

                10 elements    100 elements
    bubble          90.1           85.3
    insertion       77.3           56.1
    selection       99.1           97.7
    merge           97.0           96.7
    quick           74.3           24.7
    heap            79.1           64.0

In the second series we fixed the array size to 100 elements. We varied the size of the p-quantum by choosing six different values for p: 0.75, 0.8, 0.85, 0.9, 0.95, and 0.99. Again, each algorithm was evaluated with respect to all these quantum sizes and metrics.
3.5 Ensuring Consistent Termination

Before we start the main investigation we have to ensure that the abandoned sorts leave the arrays in a consistent state. We have to consider that most sorting algorithms perform sequences of operations during which the contents of the array are not consistent; elements may be missing or duplicated. To make sure the contents of the array are consistent at evaluation time, sorting programs must not be interrupted during action sequences that have to be atomic. Thus, if an atomic operation is in progress at the time the p-quantum ends, the termination of the sorting algorithm is delayed until the contents of the array are valid again.

Figure 3 shows that these precautions for consistency are relevant. Especially Insertion Sort and Heap Sort return a large fraction of inconsistent solutions if the critical code sections are not protected. Only Merge Sort does not produce inconsistent results: since Merge Sort uses two arrays, one array is always in a consistent state.

As mentioned above, we can avoid inconsistencies in the results if we ensure that the termination of the sorting programs is delayed until the array is in a valid state. The maximum termination delays are bounded and can be computed by program-code analysis. In this way we can ensure that the sorting programs terminate before the expiration of their time quantum despite the presence of non-interruptible code sections. Figure 4 shows the average delays caused by atomic sequences of operations for the investigated sorting programs (the units along the y-axis are numbers of instruction cycles on the MC68000 processor). The reader can observe the following points from the graphs:
Figure 3: Percentage of Consistent Terminations among all Terminations at the p-Quantum (p = 0.9)
Figure 4: Average Delay of Interruptions if Interrupts are being Disabled while the Array being Sorted is in an Inconsistent State
- Insertion Sort shows the longest delays. The delays grow proportionally with the array size.
- Compared with Insertion Sort, the delays for Heap Sort are short. They grow more slowly than linearly with increasing array sizes.
- The delays for Merge Sort are zero; for the other algorithms they are close to zero.
Our experiments take into account that not all programs can be terminated immediately at the end of the assigned p-quantum. The execution simulator was programmed to delay the termination until the end of the atomic operation in case the array contents were inconsistent when the p-quantum expired.
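In a real implementation (as opposed to the execution simulator used here) the same effect can be obtained with a cooperative flag that defers a pending termination request until the current atomic section has completed. The following is a minimal sketch of that idea, using our own names (stop_requested, atomic_begin/atomic_end) and an Insertion Sort inner step as the example of a critical section; none of this is code from the paper.

    #include <signal.h>
    #include <stdbool.h>

    static volatile sig_atomic_t stop_requested = 0;  /* set when the p-quantum expires */
    static volatile sig_atomic_t in_atomic = 0;       /* nonzero inside a critical section */

    void quantum_expired(int sig)        /* e.g., installed as a timer-signal handler */
    {
        (void)sig;
        stop_requested = 1;              /* only recorded; termination is deferred */
    }

    static void atomic_begin(void) { in_atomic = 1; }
    static void atomic_end(void)   { in_atomic = 0; }

    static bool should_terminate(void)   /* polled by the sorting loop */
    {
        return stop_requested && !in_atomic;
    }

    /* Insertion Sort step: while the element is shifted, a value is temporarily
     * duplicated in the array, so the step must not be interrupted. */
    static void insert_element(short a[], int i)
    {
        atomic_begin();
        short v = a[i];
        int j = i;
        while (j > 0 && a[j - 1] > v) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = v;
        atomic_end();
    }

The outer loop of the sort would check should_terminate() between insertions; because each atomic section is short and bounded, the extra delay before termination is bounded as well, in the spirit of the delays reported in Figure 4.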
4 Results

This section presents the results of our experiments together with explanations and interpretations. Section 4.1 evaluates how many of the interrupted sorts yielded a sorted solution at the time of the interruption. Section 4.2 compares the partial results of the sorting algorithms against the different metrics and investigates the dependence of the results on the array size. Finally, Section 4.3 assesses the influence of a change of the completion ratio on the results.
4.1 Percentage of Sorted Arrays Despite Early Termination
Figure 5: Percentage of Sorted Arrays for Different Array Sizes.

Figure 5 shows the percentage of sorted arrays among those executions of the sorting algorithms that were terminated because of the expiration of the 0.9-quantum. The figure presents the following facts:
- For all algorithms the completion rate decreases with increasing array size.
- Heap Sort has the worst completion ratio for small arrays. For all algorithms but Heap Sort the completion rate is higher than 80% for arrays with 10 elements.
- Selection Sort has the highest rate of sorted arrays in the case of early termination. Bubble Sort is second best for medium to large arrays (60 to 70% sorted).
4.2 Evaluation Against Metrics

We evaluated the six sorting algorithms against each of the five metrics listed in Section 3.3. All computations of the algorithms were terminated at the time the respective 0.9-quantum was reached. The results of those executions that had not completed at the time the quantum expired were taken for our assessment. This section compares the findings from these evaluations.
Length of the longest sorted sub-sequence
Figure 6: Longest Sorted Sequences
- For all algorithms except Merge Sort the average length of the sorted subsequences in the interrupted executions lies between 90 and 100 percent of the total array size.
- For Merge Sort the average length of the maximum sorted sub-sequence is much shorter than for the other algorithms (see Figure 6). The reason is as follows: Merge Sort internally uses two arrays and merges elements from the sorted lists of one array into the other array. If the destination array is not completely filled at the time the 0.9-quantum expires, Merge Sort returns the source array. The maximum length of a sorted subsequence in the source array is in this case the largest power of two that is smaller than the array size, i.e., 8 for arrays of size 10, 32 for arrays of size 50, and so on (see the sketch after this list). Hence the pessimistic result for Merge Sort.
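To make the arithmetic above concrete, a small helper (ours, purely illustrative) computes this bound on the longest sorted run that Merge Sort can leave behind when it has to return the source array:

    static int merge_sort_run_bound(int n)
    {
        /* largest power of two that is smaller than the array size n */
        int run = 1;
        while (run * 2 < n)
            run *= 2;
        return run;   /* e.g., 8 for n = 10, 32 for n = 50, 128 for n = 200 */
    }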
Average Number of Elements in Wrong Position

Figure 7 shows the results for the second metric, which judges the algorithms by the average number of elements found in a wrong position at the expiration time of the 0.9-quantum.
Figure 7: Average Number of Elements in Wrong Position for Different Array Sizes
- The average number of elements in a wrong position is lowest for Selection Sort and low for Bubble Sort, Quick Sort, and Heap Sort.
- For Merge Sort the number of elements in a wrong position is extremely high (more than 50% for 50 elements, 75% for 200 elements). For Insertion Sort the number of elements in a wrong position is slightly smaller (approx. 35% for 50 elements and 50% for 200 elements).
max |actual position - correct position|

This metric compares the algorithms with respect to the maximum distance an array element has from its correct position at termination time. The respective results are displayed in Figure 8.
Figure 8: Average of max |actual position - correct position| for Different Array Sizes
- The reader can observe similarities to the previous metric. Merge Sort and Insertion Sort perform much worse than the other algorithms.
- The gradient of the Merge Sort curve steadily increases in each interval between 10, 50, 100, and 150 elements. In the interval between 150 and 200 elements it is, however, lower than between 100 and 150 elements. The reason is that in all but the last interval the maximum size of the sorted sublists increases, meaning that the maximum possible distance of an element from its correct position increases, too. Only for 200 elements is the maximum size of a sorted sublist equal to the maximum sublist size for 150 elements: 128. Since the maximum possible distance of an element from its correct position does not increase, the average of the maxima does not increase as quickly, either.
Σ |actual position - correct position|

While the result of the previous metric was determined by a single array element, this metric sums up the displacements of all array elements. The graphs resulting from the evaluation of the partial sorts of the algorithms with respect to this metric are shown in Figure 9.
- Selection Sort performs best. Even for 200 elements the average sum of the distances of elements from their correct positions is lower than 1.
Figure 9: Average of Σ |actual position - correct position| for Different Array Sizes
- Except for 10 elements, Merge Sort shows the worst result. The reason is that Merge Sort sorts sublists and array elements cannot move across predefined borders in each sorting run. Thus, although the unsorted arrays resulting from early termination always consist of two sublists that are entirely sorted, the distance of each element from its final location can still be long.
- Though better than Merge Sort, Insertion Sort performs much worse than the other algorithms for an array size of 50 or more.
- The results of Bubble Sort, Quick Sort, and Heap Sort are relatively similar. For arrays of medium to large size the metric yields much better results for them than for Merge Sort and Insertion Sort, and worse results than for Selection Sort. Among these three algorithms Quick Sort shows the best results for small arrays of 10 elements. For large arrays its results are, however, worse than those of Bubble Sort and Heap Sort.
Σ (actual position - correct position)²

This metric is similar to the previous one. In comparison to the previous metric, however, it places a stronger weight on larger displacements. The results for this metric are very similar to the results of the previous metric (see Figure 10). The variances of the distances of array elements from their correct positions seem to be similar for all algorithms. Since the results for this metric closely resemble the results from the previous point, the same interpretation holds as for Figure 9.
Figure 10: Average of Σ (actual position - correct position)² for Different Array Sizes
4.3 Changing the Completion Ratio p

Up to this point we had fixed the completion ratio p to 0.9 and compared the results of the different algorithms when terminated by the expiration of the p-quantum. In the following we investigate how a change of p affects the duration of the p-quantum and the results of the sorting algorithms. These data are interesting because they provide guidance for selecting appropriate values of p. They also tell us in which situations a reduction of p yields a substantial reduction of the resource needs (i.e., the size of the p-quantum) and in which situations a decrease of p yields only little gain.

Figures 11 and 12 show the cumulative execution-time distribution functions (CDFs) of the sorting algorithms for 10 and 100 elements, respectively. For each algorithm the CDF is scaled relative to its WCET. The figures display the correlation between p and the size of the respective p-quantum for the algorithms. We can immediately observe that a reduction of the completion ratio from 1.0 to a value "close to 1.0", e.g., to 0.99 for arrays with 100 elements or to 0.97 for arrays of 10 elements, yields the most significant reduction of the size of the p-quantum. A further reduction of p below this point brings little gain with respect to CPU utilization in relation to the fall-off in the completion rate.

While Figure 12 shows the CDFs of the sorting programs relative to their WCETs, Figure 13 presents the absolute execution-time CDFs of the implementations of the algorithms in our target environment. Besides the dot marks that help to identify the graphs, we have also dot-marked the 0.99-quanta and the WCETs (cumulative probability of completion = 1.0) of all CDF graphs. The results demonstrate that it makes a big difference whether
Figure 11: CDFs of Execution Times for Sorting Programs with 10 Elements Relative to WCETs
Figure 12: CDFs of Execution Times for Sorting Programs with 100 Elements Relative to WCETs

we judge the algorithms based on their WCETs or their p-quanta. Thus, if not all executions of a program have to run to completion, it pays off to investigate the resource needs of the p-quanta of possible alternative implementations. For example, Merge Sort has the shortest WCET among all algorithms shown in Figure 13. If we are, on the other hand, only interested in a completion ratio of 0.99, then Quick Sort has the smallest 0.99-quantum. Quick Sort would thus be preferable in this case. The reader can observe a similar result when comparing the three O(N²) algorithms Bubble Sort, Insertion Sort, and Selection Sort. While Selection Sort has the smallest WCET, the 0.99-quantum of Insertion Sort is significantly smaller than the other quanta.
Figure 13: CDFs of Absolute Execution Times for Sorting Programs with 100 Elements

The small change of the size of the p-quantum when p is varied below the effective range near a completion rate of 1.0 leads to only small changes in the quality of the partial sorts when p is changed. This can be seen in Figure 14, where we fixed the array size and varied p between 0.75 and 0.99. The figure shows that the fraction of interrupted, yet sorted solutions changes only slowly in comparison to the change of the completion ratio of the p-quantum.
Figure 14: Percentage of Sorted Arrays of Consistent Interruptions for Different Completion Ratios

We then explored to what extent the completion ratio p selected for the p-quantum influences the quality of the unsorted solutions with respect to the metrics used earlier. Again, the quality of the results hardly changed when we varied p in the range between 0.75 and 0.99 (see Figures 15 to 17; the graphs for the third and the fifth metric have been left out since they very much resemble the figures shown). Figures 15 to 17 compare those partial results of the different algorithms that had not been completely sorted at the termination time. Considering the steepness of the execution-time CDFs, the results shown in Figures 15 to 17 confirm our expectations: since for all values of p in the range between 0.75 and 0.99 the sizes of the p-quanta are very similar, nearly the same time is available for the computation in the different situations. Thus, the quality of the partial results also hardly differs for the different completion rates.
Figure 15: Longest Sorted Runs of Unsorted Consistent Arrays for Different Completion Ratios

The above observations suggest a rule for the selection of the parameter p for the p-quantum of the sorting algorithms: select the value of p close to 1.0 at which the CDF of the execution times of the sorting algorithm shows the maximum change of gradient. This value of p yields the best trade-off between CPU-time savings and the degradation of the completion rate.
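A simple way to apply this rule to measured data is sketched below; it is our own heuristic reading of the rule, not an algorithm given in the paper. Given the execution-time samples in ascending order, the empirical CDF has a slope (probability per time unit) on either side of each candidate point; the point near 1.0 where the slope drops most sharply is taken as the knee, and the corresponding rank yields p. The function name select_p and the window width w are assumptions.

    #include <stddef.h>

    /* t: n measured execution times, sorted ascending; w: smoothing window
     * (number of samples) used to estimate the local slope of the empirical CDF. */
    double select_p(const double *t, size_t n, size_t w)
    {
        size_t best_i = n - 1;
        double best_bend = 0.0;

        for (size_t i = w; i + w < n; i++) {
            /* slope of the empirical CDF just below and just above sample i */
            double left  = ((double)w / n) / (t[i] - t[i - w] + 1e-12);
            double right = ((double)w / n) / (t[i + w] - t[i] + 1e-12);
            double bend  = left - right;      /* how sharply the curve flattens here */
            if (bend > best_bend) {
                best_bend = bend;
                best_i = i;
            }
        }
        return (double)(best_i + 1) / n;      /* completion ratio p at the knee */
    }

For distributions like those measured here the knee lies in the upper tail (around 0.97 to 0.99), so in practice one would restrict the search to, say, the top few percent of the samples.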
5 Summary and Conclusion

In this paper we have analyzed a collection of related algorithms that have the property that their worst-case execution times tend to be much greater than their average execution times. In choosing between the algorithms, it is necessary to know the quality requirements of the application. The algorithms addressed in this paper are all sorting routines. If an application requires a
Figure 16: Average Number of Elements in Wrong Position of Unsorted Consistent Arrays for Different Completion Ratios
Figure 17: Average of Σ |actual position - correct position| for Unsorted Consistent Arrays for Different Completion Ratios

guarantee that all input data must be fully sorted on all occasions, then the designer must choose the algorithm with the shortest WCET for the maximum problem size that will be encountered. But if this absolute guarantee is not required, then a number of other attributes become important. Firstly, the probability of completion, as a function of the percentage of the WCET assigned to the algorithm, allows routines to be picked that have a high expectation of completion for relatively low execution times. For example, with Quick Sort on 100 elements, 0.99 of all executions can be expected to complete within only 0.25 of the WCET.
The other important issue is the quality of the data if the algorithm is terminated before sorting is completed. We considered a number of metrics in this paper: the length of the longest sorted subsequence at termination time, the number of elements in a wrong position, the maximum displacement of an element, the sum of the displacements of all displaced elements, and the sum of the squares of the displacements. A main observation is that the evaluations against the different metrics yield similar rankings of the algorithms. Merge Sort performs very poorly for each of the metrics. This is certainly due to the fact that Merge Sort moves elements over large distances in the final run. Next to Merge Sort comes Insertion Sort. Its maximum displacement of an element is comparable to that of Merge Sort. With respect to the number of elements in a wrong position and the two metrics that sum up displacements, Insertion Sort performs better than Merge Sort, but still significantly worse than all other algorithms. The results of Bubble Sort, Heap Sort, and Quick Sort are relatively close together, with Quick Sort showing the quickest degradation of quality among the three algorithms. Selection Sort returns the best results for all considered metrics. While its result is only slightly better than those of Bubble Sort, Heap Sort, and Quick Sort for the first three metrics, the two sum metrics give a clear vote for Selection Sort.

The investigation into the sensitivity of the results to the completion ratio (and thus the duration of the reserved time quantum) revealed a uniform characteristic for all algorithms. A reduction of the completion ratio from 1.0 to a value "close to 1.0" (0.99 for arrays with 100 elements, 0.97 for arrays with 10 elements in our experiments) yields significant resource savings. A reduction of the completion rate below this point shortens the duration of the necessary time quantum only a little relative to the large loss of completions; such a further reduction is thus not advisable.
6 References

[Aho, Hopcroft, Ullman 1983] A. Aho, J. Hopcroft, and J. Ullman. Data Structures and Algorithms. Addison-Wesley, Reading, MA, 1983.

[Cormen, Leiserson, Rivest 1990] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press and McGraw-Hill, 1990.

[DeGroot 1986] M. H. DeGroot. Probability and Statistics. Addison-Wesley, Reading, MA, 1986.

[Knuth 1973] D. E. Knuth. The Art of Computer Programming, volume 3. Addison-Wesley, Reading, MA, USA, 1973.

[Mehlhorn 1984] K. Mehlhorn. Data Structures and Efficient Algorithms, volume 1. Springer, EATCS Monographs, 1984.

[Mittermair, Puschner 1997] D. Mittermair and P. Puschner. Which Sorting Algorithms to Choose for Hard Real-Time Applications. In Proc. Euromicro Workshop on Real-Time Systems, pages 250-257, Toledo, Spain, June 1997.

[Puschner to appear] P. Puschner. Real-Time Performance of Sorting Algorithms. Real-Time Systems, to appear.

[Sedgewick 1989] R. Sedgewick. Algorithms. Addison-Wesley, Reading, MA, USA, 1989.

[Sedgewick, Flajolet 1996] R. Sedgewick and P. Flajolet. An Introduction to the Analysis of Algorithms. Addison-Wesley, Reading, MA, USA, 1996.