Eager Scheduling with Lazy Retry for Dynamic Task Scheduling*

Huey-Ling Chen and Chung-Ta King
Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
Abstract. Task scheduling policies can be generally classified as eager scheduling and lazy scheduling. The former attempts to schedule tasks whenever there are free processors available, while the latter delays the scheduling of some tasks so as to accommodate more appropriate tasks. In this paper we propose a hybrid policy, called eager scheduling with lazy retry (ESLR). This policy tries to schedule a task eagerly upon its arrival. If the scheduling fails, the task is rescheduled after a delay period. This latter mechanism is referred to as lazy retry. Simulation results show that the ESLR policy can reduce system fragmentation and enhance scheduling efficiency.
1 Introduction

Task scheduling is concerned with the sequence in which the tasks entering a multiprocessor system are served. Scheduling is dynamic if it has no a priori knowledge of the arriving tasks, so that scheduling decisions can only be made based on past and current task arrivals. Task scheduling strategies can be generally classified as eager scheduling and lazy scheduling. Eager scheduling attempts to schedule tasks whenever there are free processors available. For example, the SCAN policy proposed in [2] classifies tasks according to the number of processors requested. When tasks finish execution and release processors, the scheduler examines the classes in a round-robin fashion. All tasks in the same class are allocated before the scheduler moves on to the next class. One major problem with eager scheduling policies is that they tend to favor tasks requesting a smaller number of processors and thus may increase fragmentation in the system [3]. Lazy scheduling, on the other hand, delays the scheduling of some tasks so that more appropriate tasks may be accommodated later. In this way, tasks with different requests for free processors have equal opportunities to be scheduled. In the lazy scheduling policy LAZY [3], this is done by assigning, as best as one can, the subcube released by a completed task to a task requesting exactly the same number of processors. Unfortunately, lazy scheduling is by nature conservative: tasks may be kept in the queue even if there are free processors available, and may therefore wait longer in the job queue.
* This work was supported in part by the National Science Council under grants NSC85-2221-E-007-031 and NSC-85-2213-E-007-049.
From the above discussion, we can see that a scheduling method combining the advantages of both types of scheduling policies is very desirable. In this paper we propose such a hybrid policy, called the eager scheduling with lazy retry (ESLR) strategy. The policy is described in detail in the next section.
2 Eager Scheduling with Lazy Retry

The basic idea of the ESLR strategy is as follows. When a task enters the system, the scheduler tries to schedule it immediately; in this way the scheduler exercises eager scheduling for new tasks. If a task cannot be allocated due to a lack of free processors, it is put into the retry queue. A task in the retry queue will be rescheduled after a delay. The interval between two consecutive retries by the same task is called the retry period. To avoid task starvation, we follow the idea of previous schemes and set a threshold τ_w. If the waiting time of a task in the retry queue exceeds τ_w, a global flag stop_alloc is set. The scheduler will then not consider other tasks until all starving tasks are scheduled.

The basic algorithm of the ESLR scheduling strategy is shown below. In the algorithm, the functions Proc_Allocate() and Proc_Release() perform processor allocation and deallocation, respectively. The tasks in the retry queue are sorted according to the expiration times of their retry periods.

ESLR scheduling:
1. ESLR_Scheduler:
   1.1. When a new task T arrives, call ESLR_Request(T).
   1.2. When a task T finishes, call ESLR_Release(T).
   1.3. When the retry period of a task T in the retry queue expires, call ESLR_Request(T).
2. ESLR_Request(T):
   2.1. If stop_alloc is not set, then
        2.1.1. Call Proc_Allocate(T).
        2.1.2. If the call succeeds, then return; else compute the waiting time of T.
        2.1.3. If the waiting time exceeds τ_w, then set stop_alloc; else get a new retry period and place T into the retry queue.
        2.1.4. Return.
   2.2. Call Proc_Allocate(T_s), where T_s is the starving task.
   2.3. If the call succeeds, then reset stop_alloc.
   2.4. Return.
3. ESLR_Release(T):
   3.1. Call Proc_Release(T).
3 Performance Evaluation

The simulator was developed using CSIM Version 16 and ran on a Sparc/20 workstation; the time unit used in the following discussion is therefore the simulation cycle of CSIM. Task arrivals followed a Poisson distribution with mean 1.0. The size of the requested subcubes followed a uniform or normal distribution. The service time followed a Poisson distribution. Four different workloads were used: low, medium, high, and mixed, with the system loading factor set to 0.5, 0.8, 1.0, and 0.95, respectively. The loading factor represents the percentage of time that the system is busy. Under the mixed workload, 25% of the tasks had a loading factor of 0.5, 25% had 0.8, 25% had 1.0, and the remaining 25% had 1.5. The dimension of the simulated hypercube system was 10, and 6000 tasks were simulated. Each run was iterated 100 times, each time with a different seed for the random number generator.

We first compare the performance of the ESLR policy with the LAZY policy. From Fig. 1(a), we can see that the ESLR scheduling algorithm outperforms the LAZY algorithm in terms of turnaround time. This can be attributed to the eager scheduling of new tasks and the fair retries of blocked tasks in ESLR. Fig. 1(b) shows the corresponding utilization of the system. Both strategies perform similarly, although ESLR is slightly better. One explanation is that processors can be better used with eager scheduling in ESLR, as long as system fragmentation is reduced by lazy retry.
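A workload with these characteristics can be generated along the following lines. This is a sketch under stated assumptions: it does not reproduce the CSIM simulator, the function and parameter names are ours, and the Poisson service-time sampler uses Knuth's classic method.

```python
import math
import random

def poisson_variate(rng, mean):
    """Knuth's Poisson sampler; adequate for moderate means."""
    threshold = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def generate_workload(n_tasks, arrival_rate=1.0, mean_service=200.0,
                      cube_dim=10, size_dist="uniform", seed=0):
    """Return (arrival_time, subcube_dim, service_time) triples.

    Arrivals form a Poisson process (exponential interarrival times) and
    service times follow a Poisson distribution, as in the paper; the
    concrete parameter values here are placeholders, not the paper's.
    """
    rng = random.Random(seed)
    t, tasks = 0.0, []
    for _ in range(n_tasks):
        t += rng.expovariate(arrival_rate)           # Poisson arrival process
        if size_dist == "uniform":
            dim = rng.randint(0, cube_dim)           # requested subcube dimension
        else:                                        # crude clipped normal
            dim = min(max(int(round(rng.gauss(cube_dim / 2, cube_dim / 4))), 0), cube_dim)
        tasks.append((t, dim, poisson_variate(rng, mean_service)))
    return tasks
```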
[Fig. 1(a) plots turnaround time and Fig. 1(b) plots utilization for ESLR and LAZY under the low, medium, high, and mixed workloads, with uniform and normal request-size distributions.]
Fig. 1. Comparisons of ESLR and LAZY

Note that under the mixed workload, the performance difference between ESLR and LAZY is larger than that under the other loading conditions. One reason is that the mixed workload contains tasks which require a very long time to execute. In LAZY, these tasks may prevent other tasks that request the same number of processors from being scheduled. ESLR, on the other hand, treats tasks more equally, so a long task has a smaller impact on performance.

Next, we study the effects of the retry period. In this experiment, we used the high workload and tried different values of the retry period. The results are shown in Table 1. In the table, μ is the mean of the service time distribution and X is a random variable following a Poisson distribution with mean μ. Table 1 shows that if the retry period is somewhere between 0.5X and 2X, then we obtain very good performance. This is because there is a high probability that a task will complete between two consecutive retries. Thus when
Table 1. Effects of the retry period on the ESLR scheduling algorithm

                            Uniform              Normal
  Type    Retry period   Time      Util.     Time      Util.
          (cycles)       (cycles)            (cycles)
  Fixed   10.0           217       0.805     171       0.831
          200.0          383       0.664     281       0.704
          500.0          522       0.555     349       0.634
  Random  (?)            230       0.811     187       0.816
          0.5X           199       0.827     191       0.829
          X              240       0.820     182       0.817
          2X             257       0.814     167       0.800
a task retries, it has a higher probability of success. When the retry period is smaller, the system utilization is higher, because blocked tasks retry very frequently and grab free processors as soon as they become available. On the other hand, many of these retries may turn out to be a waste of CPU time, because no new processors may have been released since the last retry.
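The random retry periods in Table 1 can be drawn as multiples of a Poisson variate X with the service-time mean. A minimal sketch (the function name and the use of Knuth's sampler are our assumptions, not the paper's code):

```python
import math
import random

def retry_period(rng, mean_service, factor=1.0):
    """Draw a retry period factor * X, with X ~ Poisson(mean_service).

    Table 1 suggests factors between 0.5 and 2: the retry period then spans
    roughly one task lifetime, so a retry has a good chance of finding
    processors freed since the previous attempt.
    """
    threshold = math.exp(-mean_service)   # Knuth's Poisson sampler
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return factor * k
        k += 1
```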
4 Concluding Remarks

In this paper we propose a new task scheduling strategy that combines eager and lazy scheduling. With lazy and randomized retry, the ESLR scheduling strategy is fair to all classes of tasks and thus can reduce system fragmentation. We have considered different design choices of the ESLR policy and examined several variations. Simulation results show that the proposed ESLR strategy performs better than previous schemes. More research is needed to determine the exact interactions between processor allocation and task scheduling; by combining the two, we should be able to achieve the best performance.
References

1. Kim, J., Das, C.R., Lin, W.: A top-down processor allocation scheme for hypercube computers. IEEE Trans. on Parallel and Distributed Systems 2 (1991)
2. Krueger, P., Lai, T.H., Dixit-Radiya, V.A.: Job scheduling is more important than processor allocation for hypercube computers. IEEE Trans. on Parallel and Distributed Systems 5 (1994)
3. Mohapatra, P., Yu, C., Das, C.R., Kim, J.: A lazy scheduling scheme for improving hypercube performance. Proc. of Int. Conf. on Parallel Processing (1993)