IEEE TRANSACTIONS ON COMPUTERS, VOL. 49, NO. 4, APRIL 2000
Scheduling Distributed Real-Time Tasks with Minimum Jitter

Marco Di Natale and John A. Stankovic, Fellow, IEEE

Abstract-The problem of scheduling real-time tasks with minimum jitter is particularly important in many control applications; nevertheless, it has rarely been studied in the scientific literature. This paper presents an unconventional scheduling approach for distributed static systems where tasks are periodic and have arbitrary deadlines, precedence, and exclusion constraints. The solution presented in this work not only creates feasible schedules, but also minimizes jitter for periodic tasks. We present a general framework consisting of an abstract architecture model and a general programming model. We show how to design a surprisingly simple and flexible scheduling method based on simulated annealing. Experimental results demonstrate the significant improvement of our algorithm over earliest deadline first and rate monotonic algorithms.

Index Terms-Real-time, scheduling, distributed systems, simulated annealing, jitter.

1 INTRODUCTION

REAL-TIME distributed systems are becoming more commonplace. Applications like process control, avionics, and robotics need real-time support to schedule and synchronize real-time tasks running on remote nodes. In those cases where the environment and the application characteristics are well-known in advance, the worst-case conditions and the critical rates can be evaluated. The computations required by the application can be distributed in a set of periodically activated tasks scheduled according to a fixed pattern. The scheduler is executed offline and uses the parameters of the task set to generate a table of activation times to be used by the local dispatchers.

In some applications, the feasible scheduling of all the task instances within the deadlines is not sufficient to guarantee the correct behavior of the system. For example, the common semantics used for periodic tasks permits two successive instances of the same task to be separated by an amount of time that varies between zero and two periods minus the minimum computation time of the task. In some cases, this variation is a serious problem and there is a need for different scheduling solutions that minimize jitter.

Such cases can be found in avionics. In 1994, Carpenter et al. [6] described the scheduling problem for the Airplane Information Management System of the Boeing 777. Besides frequency, deadline, and latency requirements, the application tasks had minimum jitter requirements on the order of 100 µs to 1 ms. The kind of jitter (peak-to-peak) requirement described in that paper is exactly the problem we solve here. Further, other characteristics of this application problem also match the assumptions we make in our problem definition: The problem consists of a multiprocessor static scheduling of tasks without preemption, and tasks are statically allocated to the processors. Another case study, on the development of the Olympus Satellite's Attitude and Orbital Control System [5], cites minimizing output jitter as one of the requirements in task scheduling. A maximum release jitter of 5 ms is given as the requirement for a fixed-priority scheduler in yet another aircraft application [4]. Finally, the importance of output jitter in task scheduling is stressed in [20] and [32] (both discussing static scheduling techniques) and, more recently, in [28], where activation offsets have been added to the schedulability analysis. Even if output jitter is often (but not always) considered as part of the constraints, the performance of digital control systems actually depends on the amount of jitter [13]. In these cases, a minimum jitter solution is preferable.

Even when task scheduling is performed offline, it is a difficult job to find a feasible ordering for thousands of task instances in the least common multiple of their periods, especially if we require synchronization and the usage of shared resources (in [6], the computation times were 4 to 48 hours for a custom search algorithm). Most of the research solutions try to solve the feasibility problem only (i.e., schedule tasks within their deadlines). These solutions most often use heuristic-driven search algorithms to find the feasible task sequence. Good examples of state-of-the-art scheduling algorithms include the branch and bound algorithm by Xu and Parnas [31], where tasks with exclusive relations and precedence constraints are scheduled in a preemptive manner, and the Mars scheduler [8], which uses an iterative deepening search algorithm to solve the distributed scheduling problem. If we try to use these traditional search-based solutions, we need a fairly complex heuristic function to achieve the goal of minimizing jitter.

. M. Di Natale is with the Departimento Ingegneria dell'Informazione, Universita di Pisa, Italy. E-mail: [email protected].
. J.A. Stankovic is with the Department of Computer Science, University of Virginia, Charlottesville, VA 22903. E-mail: [email protected].

Manuscript received 26 Sept. 1996; revised 29 Nov. 1999; accepted 13 Mar. 2000. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number 102115.
The problem of finding a suitable heuristic function for the jitter minimization problem is likely to be difficult as there is, to our knowledge, no proposed solution. Simulated annealing techniques have been used by Tindell et al. [29] to find the optimal processor binding for

0018-9340/00/$10.00 © 2000 IEEE

real-time tasks to be scheduled according to fixed-priority policies. We assume the assignment of the tasks to the processor(s) is fixed and the scheduling of the tasks on the CPUs is calculated with simulated annealing techniques. However, it is possible to extend our method to handle the allocation problem.

In this paper, we propose the application of simulated annealing techniques to static scheduling problems as an alternative to conventional search algorithms. As we show, a simulated annealing scheduler can deal with both jitter minimization and feasibility. The main contributions of this paper include:

. developing innovative ways to specialize simulated annealing to find solutions which contain feasible schedules with minimized jitter;
. the demonstration of the performance and value of the algorithm on sample and experimental task sets for different loads and attributes; and
. the creation of a real-time scheduling tool based on simulated annealing.

One additional result is the design and implementation of a description language called DTDL that carefully identifies the information needed to be provided in order to perform distributed, real-time static scheduling; it serves as a controlled input to the scheduling tool itself, making it easier to use the tool. Note, this paper uses, rather than extends, the simulated annealing method. This paper does provide a new application of simulated annealing to an important problem: distributed real-time scheduling. It also provides a transformation of this scheduling problem into a form that is amenable to the application of simulated annealing.

In this paper, we focus our attention on a nonpreemptive scheduling scheme. In our opinion, the use of nonpreemptive schemes should not be ruled out a priori, as there are cases where application constraints or design considerations make nonpreemption an attractive choice. For example, nonpreemptive schemes allow better control over the assignment of the processor time and ease the observability and the debugging of the system. Furthermore, nonpreemptive schemes allow better control of jitter than their preemptive counterparts. Last, the performance gap between the more flexible preemptive scheduling schemes and nonpreemptive ones is smaller in multiprocessors than it is in single-processor systems, as suggested by some theoretic results [21]. In any case, the solution presented in this paper can be extended to allow for a limited degree of preemption (see Section 3.6).

2 SCHEDULING PROBLEM: DEFINITIONS
We consider the following scheduling problem: We assume a LAN connecting multiprocessor nodes. Each node consists of a multiprocessor bus and a number of boards with processors. A set of periodic processes P_1, P_2, ..., P_n are statically (later, this condition will be removed) assigned to the processors. Each process instance in the scheduling period, defined as the least common multiple of the processes' periods, is characterized by:

. a deadline d_i,
. a release time r_i,
. a cpu specification (static allocation case), and
. an activation period P_i (the scheduling period is lcm(P)).

Each process is further divided into a chain of scheduling units, the tasks. Each time a process instance executes, the entire set of tasks for this process must execute. The tasks can be defined as sequential nonpreemptive computational units that begin or end with a synchronization point or with a basic operation (request or release) on one or more resources. The tasks inherit the time attributes of the processes to which they belong and add other attributes of their own. Each task instance τ_i in the scheduling period is characterized by:

. a deadline d_i,
. a computation time c_i,
. a release time r_i,
. a set of resources requested in exclusive mode {R_i1, R_i2, ..., R_in}, and
. a set of precedence constraints {τ_i → PC_i1, τ_i → PC_i2, ..., τ_i → PC_im}, meaning that task τ_i must be completed prior to the execution of tasks {PC_i1, PC_i2, ..., PC_im}.

The tasks can have precedence and exclusion constraints (induced by the usage of synchronous primitives and shared resources). Now, if we divide the tasks into sets of periodic instances, we can define the starting time s_ik as the time instant when τ_ik, the kth instance of task τ_i, is successfully scheduled and takes control of one of the cpus. The jitter J_ik for the kth instance of task τ_i (successfully scheduled, where k = 1 ... m_i) can be defined as:

J_ik = | s_i,k+1 - s_ik - P_i |   for k = 1, 2, ..., m_i - 1
J_i,mi = | s_i1 + lcm(P) - P_i - s_i,mi |                                  (1)

The distributed scheduling problem can now be formulated as a combinatorial optimization problem on the set of all the task instances. Our problem input consists of the set of all the instances of tasks τ_1, τ_2, ..., τ_n in the least common multiple of the processes' periods (already defined as the scheduling period), each labeled with its future release time and deadline. Assume the cost of the schedules is the sum of the maximum jitter of all tasks, J_max = Σ_i max_j {J_ij}. The problem is to minimize the cost of the schedules subject to the precedence, exclusion, and deadline constraints and the constraint that no task can be scheduled before its release time.

As part of our solution to this scheduling problem, we designed a system description language called DTDL (Dependency and Temporal Description Language) to describe processes, tasks, the allocation of the processes to the cpus, the usage of resources by the tasks, the precedence constraints among tasks, and, finally, all the temporal data of the system (deadlines, periods, computation times, and so on). The DTDL description mirrors all the definitions given in this section and should be produced at the time the

Fig. 1. Core of the simulated annealing scheduling algorithm.

application is designed. Its purpose is to serve not only as a synthetic description of the application, but also as the input for our scheduling procedure. The complete DTDL syntax and a sample parser code are available from the web site http://retis.sssup.it/RTTools/DTDL/index.
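As a concrete sketch of the metric defined above, definition (1) and the cost J_max can be coded directly. The data layout below (per-task lists of instance start times) is our own illustration, not something prescribed by the paper.

```python
# Jitter metric of Section 2, coded directly. The data layout
# (per-task lists of instance start times) is our own illustration.

def task_jitter(starts, period, hyper):
    """Maximum jitter of one task per definition (1): the distance of
    consecutive start times from the nominal period; the last instance
    wraps around to the first instance of the next scheduling period."""
    m = len(starts)
    jit = [abs(starts[k + 1] - starts[k] - period) for k in range(m - 1)]
    jit.append(abs(starts[0] + hyper - starts[-1] - period))  # wrap-around
    return max(jit)

def total_cost(schedule, periods, hyper):
    """Cost to be minimized: the sum over all tasks of the maximum
    instance jitter (the J_max of this section)."""
    return sum(task_jitter(schedule[t], periods[t], hyper) for t in schedule)

# One task, period 10, hyperperiod 30, started at 0, 12, and 19:
# instance jitters are 2, 3, and |0 + 30 - 19 - 10| = 1, so the cost is 3.
print(total_cost({'t1': [0, 12, 19]}, {'t1': 10}, 30))  # 3
```

A perfectly periodic schedule (starts at 0, 10, 20 for period 10) yields zero cost, which is the optimum the scheduler pursues.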

3 SCHEDULING TASKS WITH SIMULATED ANNEALING
In the previous section, we explained how our scheduling problem can be formulated as a combinatorial optimization problem. A simulated annealing algorithm is a well-known solution approach to this class of problems. In our combinatorial optimization problem, we have an objective function to be minimized and the space over which the function is defined is factorially large, so it cannot be explored exhaustively. In other words, we have to search the solution space S (typically a subset of the possible permutations of the set of all task instances) for an optimal (minimum jitter) solution. A possible search algorithm works as follows: We define a transition operator TR between any pair of scheduling solutions (s_i, s_j) ∈ S and a neighborhood structure S_i for each solution s_i containing all the solutions that are "reachable" from s_i by means of the operator TR (to give an example, a possible definition for a neighborhood is the set of all schedules obtainable from the given one by a permutation of any two task instances). If we use a local search, the algorithm generally ends in a local optimum with no guarantee on the global optimality or even on the "quality" of the found solution.
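To make the limitation of plain local search concrete, a bare descent skeleton of the kind described above might look as follows. This is an illustrative sketch, not code from the paper; the neighborhood and cost functions are supplied by the caller.

```python
import random

# Illustrative local-search skeleton (not code from the paper): accept a
# random neighbor only when it lowers the cost, so the search stops at
# the first local optimum it falls into.

def local_search(initial, neighbors, cost, max_iter=10_000):
    current, c_cur = initial, cost(initial)
    for _ in range(max_iter):
        cand = random.choice(neighbors(current))
        c_cand = cost(cand)
        if c_cand < c_cur:      # uphill moves are never accepted
            current, c_cur = cand, c_cand
    return current, c_cur

# Toy cost with a single minimum at x = 3; descent from x = 10 reaches it,
# but with multiple minima the search would stop at whichever it hit first.
random.seed(0)
best, best_cost = local_search(10, lambda x: [x - 1, x + 1],
                               lambda x: (x - 3) ** 2)
```

On a cost surface with several minima, the same loop stops wherever the first basin of attraction leads it, which is precisely what the annealing modification described next is designed to avoid.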

To improve the chances of reaching the global optimum, simulated annealing allows transitions in a direction opposite to the pursued optimum. This modification is designed to permit escape from a local optimum, but not from a global one. The acceptance probability for higher cost transitions decreases exponentially with the cost difference and is slowly lowered with time. This is done by tuning a parameter that is often called the temperature, in analogy with annealing in thermodynamics.

The simulated annealing scheduler is stochastic. New states (schedules) are evaluated on the basis of a jitter metric. A transition to a lower cost state is always accepted; however, a transition to a higher cost state is accepted with an acceptance probability p which decreases exponentially with the cost difference Δ and is lowered with each iteration:

p = e^(-Δ/T)   (p = 1 if Δ < 0),                                  (2)

where Δ is the cost difference between the solutions. The pseudocode for the simulated annealing algorithm (modeled on the base scheme proposed in [2]) is given in Fig. 1. We will briefly examine the algorithm of Fig. 1 and then give a short summary of the theoretical results for the simulated annealing methods. This will help to explain how to choose the parameters and the evaluation function that characterize the algorithm itself. The simulated annealing algorithm requires:

. the description of all the instances of the tasks with their deadlines and their precedence and exclusion constraints, and
. a control parameter T (analog of temperature) and an annealing sequence {T_k} which indicates how it is to be lowered with each iteration.

The algorithm includes procedures to:

. generate random changes in the scheduling solutions; these changes are the options presented to the system (the transition operator TR), and
. evaluate the objective cost function C (analog of energy) whose minimization is the goal of the procedure.

The input parameters to the algorithm are the initial and final value for the temperature parameter (input parameters init_temperature, final_temperature), the cooling rate (input parameter coolrate), and the descriptors of all the task instances in the scheduling time. The latter are ordered in a list, each descriptor containing information about the timing data, the precedence relations, and the resource usage of the corresponding task instance (in the algorithm of Fig. 1, the list is pointed to by the variable instance_list). The routine anneal makes use of other procedures to perform the random changes in the solution space (these are the procedures choose_instance(), choose_change(), apply_change()), the evaluation of the newly found solution (using schedule_and_compute_value()), and the eventual acceptance (or rejection) of the new scheduling solution (calling undo_change() and good_change()). The final output of the algorithm is an execution time (expressed as a starting and a corresponding ending instant) for each task instance in the scheduling period. These execution time intervals can be stored in a table to be used by run-time dispatchers.

The theory of simulated annealing [1] says that the probability of a transition between any two solutions (or states) can be expressed as the product of two factors: the generation probability G_ij, or the probability that the state j is chosen for the transition exiting from i (uniform over the neighborhood), and the acceptance probability A_ij for the transition to j to be accepted (defined as one for cost-decreasing transitions and exponentially decreasing for higher cost transitions according to the law p = e^(-(C_j - C_i)/T)).

The set of conditional probabilities P_ij(k-1, k), defined for each pair of states (i, j) and for the kth iteration as the probability that the transition exiting from state i ends in state j, identifies a Markov chain. In case the chain is nonhomogeneous [22] (T decreasing with k, the number of iterations), the parameter T, or better T_k, must be decreased such that:

. lim_{k→∞} T_k = 0,
. T_k ≥ T_{k+1},
. ∀k ≥ 2 : T_k ≥ nΔ/log(k),
where n is the maximum number of transitions that separate the minimum cost state from any other state and Δ is the maximum cost difference between two solutions. In this case, the Markov chain is proven to converge (as a limit for k → ∞) to a stationary distribution q_i(T) for each ith state (or solution) that is a function of the control parameter T:

q_i(T) = e^(-(C(i) - C_opt)/T) / Σ_{j∈S} e^(-(C(j) - C_opt)/T),                                  (3)

where C is the cost function, C_opt the minimum cost, and S the space of all solutions. When T → 0, the stationary distribution has probability one for the state (or the overall set of states) corresponding to the global minimum and zero for all the other states.

The convergence result is asymptotic and there is no guarantee the optimal solution is found in finite time. The objective, then, is not to reach the stationary distribution, but to get as close as possible to it. This can be obtained by using finite-length Markov chains. The objective is to obtain a distribution vector a_i(L, T), a function of the length L of the chain, such that (quasi-equilibrium condition [1]):

||a(L, T) - q(T)|| < ε.

From this distribution vector, the algorithm starts with the next iteration, pursuing the next state of quasi-equilibrium for a lower value of T. The approximate implementations of the algorithm, like the one in Fig. 1, generate homogeneous Markov chains of finite length L(T) for decreasing values of the parameter T. The inner loop of the algorithm in Fig. 1 computes the Markov chains given the value of T fixed by the outer cycle. The algorithm we used accepts a maximum number of transitions (proportional to the dimension of the problem) for each value T_k of the control parameter. We have chosen to limit the length of the chains by allowing a maximum number MAXCHANGE of accepted transitions for each value of T_k and an upper limit MAXTRY to the length of each chain (for lower values of T_k, only a limited number of transitions can be accepted). The value T_{k+1} is lowered proportionally to T_k according to the formula

T_{k+1} = α T_k,                                  (4)

with α = 0.9 or 0.95, as proposed in [14]. It is particularly important that T_0, the starting value of T, be sufficiently large to allow a transition to any other state of the system. The terminal value for T_k, or T_1, must be chosen such that, for T_1, it is not possible to have a transition towards cost-increasing states. The initial and terminal values assigned to T_k depend on the definition of the cost function and the transition operator.

Like any other algorithm of this kind, the final error and the convergence speed depend on many factors, such as the size of the problem, the starting values for the control parameter, the limit for the length of the Markov chain for each iteration, and the complexity of the functions for the generation of the random changes and the evaluation of the new configurations. It is important to correctly define the parameters of

the algorithm and to program it as efficiently as possible to reduce the overall complexity and computation time.
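The loop structure just described, finite chains bounded by MAXTRY proposals and MAXCHANGE acceptances, Metropolis acceptance (2), and geometric cooling (4), can be sketched as follows. This is a reconstruction for illustration, not the authors' implementation, and the parameter defaults are our own.

```python
import math
import random

def anneal(initial, propose, cost, t0=100.0, t_final=0.02,
           coolrate=0.95, maxtry=200, maxchange=50):
    """Skeleton of the annealing loop of Fig. 1 (a reconstruction, not
    the authors' code). For each temperature a finite chain is generated:
    at most maxtry proposals, of which at most maxchange may be accepted;
    then T is lowered geometrically, T_{k+1} = coolrate * T_k (4)."""
    state, c = initial, cost(initial)
    t = t0
    while t > t_final:
        accepted = 0
        for _ in range(maxtry):
            cand = propose(state)
            c_cand = cost(cand)
            delta = c_cand - c
            # Metropolis rule (2): downhill always, uphill with
            # probability e^(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / t):
                state, c = cand, c_cand
                accepted += 1
                if accepted >= maxchange:
                    break
        t *= coolrate
    return state, c

# Toy problem: integer line with cost (x - 3)^2 and +/-1 moves. Early
# high-temperature chains wander; late chains behave like pure descent.
random.seed(1)
best, best_cost = anneal(10, lambda x: x + random.choice([-1, 1]),
                         lambda x: (x - 3) ** 2)
```

In the scheduler itself, `propose` corresponds to the release-time modification of Section 3.1 and `cost` to the FIFO evaluation plus the jitter metric of Section 3.2.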

3.1 The Transition Operator

For our scheduling problem, the set of all possible solutions consists of all the possible permutations of the tasks, subject to the additional precedence and exclusion constraints and to their deadlines. To generate new schedules, it is possible to change the scheduling policy or, more reasonably, to keep the scheduling policy fixed and change the task parameters to new values consistent with their constraints. In particular, when minimizing jitter, the tasks should not be allowed to run as soon as they are ready.

We define a task configuration as the set of the temporal characteristics (release times, deadlines, computing times), the resources requested by each task (communication network included), and the precedence relations. Our transition operator applies random changes to the task configuration in order to generate a new, different schedule. The new configuration is obtained by changing the release times r_i of the tasks, delaying them to change the activation times and the ordering of the tasks in the new schedule. The choice of the new release times is not arbitrary, but must define new values compatible with the original release times, the deadlines, and the precedence relations. By a compatible release time for a task τ_i, we mean any time instant r'_i that would not a priori prevent the feasible scheduling of the task τ_i, that is:

. r'_i ≥ r_i,
. r'_i ≥ r_j + c_j for all tasks τ_j : τ_j → τ_i,
. r'_i ≤ d_i - c_i,
. r'_i ≤ d_k - c_k - c_i for all tasks τ_k : τ_i → τ_k.

The operator responsible for the transition to the neighboring solutions is the following:

. A task instance is randomly chosen.
. The operator computes a time interval within which the release time of the instance can be moved.
. The operator assigns one randomly chosen new release time r'_i to the given task instance.

The beginning r_first of the compatibility interval is given by the maximum between the actual release time for the task instance τ_i and the maximum among the earliest possible completion times (r_j + c_j) of the predecessors τ_j. The ending time r_last of the interval is the minimum between the latest possible release time for the task, d_i - c_i, and the deadlines of the successors τ_k minus the sum of the times c_i + c_k necessary to complete the execution of τ_i and to execute τ_k before its deadline.

At this point, it is important to understand why the scheduling policy that reads the modified task configuration and produces the schedule can be chosen with a great degree of freedom. The resulting schedule is a function of the task configuration rather than of the scheduling policy. We adopted FIFO because of its speed. This does not mean the final schedule is FIFO, since we are not considering the actual release times. As an example, Fig. 2 shows a pure FIFO schedule (on top) and a FIFO schedule with modified release times (acting like earliest deadline scheduling).
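The compatibility interval [r_first, r_last] computed in the second step can be sketched as follows. The dictionary layout for task attributes (r, c, d, pred, succ) is a hypothetical illustration of the instance descriptors, not the paper's data structure.

```python
# Compatibility interval [r_first, r_last] of Section 3.1. The dictionary
# layout for task attributes (r, c, d, pred, succ) is a hypothetical
# illustration of the instance descriptors.

def compatibility_interval(tid, tasks):
    t = tasks[tid]
    # earliest: own release time and earliest completions of predecessors
    r_first = max([t['r']] +
                  [tasks[p]['r'] + tasks[p]['c'] for p in t['pred']])
    # latest: own latest start (d_i - c_i) and each successor's deadline
    # minus the time c_i + c_k needed to run both before that deadline
    r_last = min([t['d'] - t['c']] +
                 [tasks[s]['d'] - tasks[s]['c'] - t['c'] for s in t['succ']])
    return r_first, r_last

# Two tasks with a -> b: b cannot be released before a's earliest
# completion (0 + 2), and no later than its latest start (12 - 3).
tasks = {'a': {'r': 0, 'c': 2, 'd': 10, 'pred': [], 'succ': ['b']},
         'b': {'r': 0, 'c': 3, 'd': 12, 'pred': ['a'], 'succ': []}}
print(compatibility_interval('b', tasks))  # (2, 9)
```

The third step of the operator then simply draws the new release time uniformly from this interval.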

Fig. 2. Pure FIFO and FIFO on modified release times.

Going back to the algorithm of Fig. 1, it is now possible to explain what the other functions called in the body of the anneal procedure do. The function choose_instance randomly picks a task instance from the list of all instances; the function choose_change computes the compatibility interval and the new release time for the chosen instance. The function apply_change performs the change in the release time and inserts the task back in the instance list. The functions good_change and undo_change will later confirm or undo the proposed modification.

3.1.1 Precedence Constraints

Any configuration of precedence constraints can be managed by the algorithm in the following way: All the schedules calculated by the algorithm in the successive iterations are obtained by FIFO-ordering all the task instances whose release times have been modified randomly. If all tasks were to be executed on the same CPU and if there were no other shared resources, then any ordering in which the release times of the predecessors are earlier than the release times of the successors would be consistent with the precedence constraints. An initial ordering of this kind can be obtained by assigning the release times to the tasks in the following way: ∀i : r_i < r_j whenever τ_i → τ_j; that is,

∀i : r_i = max{r_i, max_j {r_j + c_j for all τ_j → τ_i}};                                  (5)

then the transition operator previously defined keeps the ordering of the release times of the tasks consistent with the precedence constraints. This procedure is not sufficient to satisfy the precedence constraints as it doesn't guarantee, by itself, that the tasks will be scheduled in the right order. We must remember that our scheduling problem is characterized by shared resources and remote precedence constraints, so a consistent ordering of the release times doesn't imply that precedence constraints are respected. An example is in Fig. 3, where task T1, arriving earlier than its successor T2, is delayed when requesting a shared resource and thus violates the precedence constraints. Nevertheless, we assign the release times at the beginning of the simulated annealing algorithm (right before calling the procedure of Fig. 1) and in the intermediate iterations according to (5) and the compatibility constraints for two reasons. First, the described assignments do not add any artificial constraints, but change the problem into

308

IEEE TRANSACTIONS ON COMPUTERS,

VOL. 49,

NO. 4,

APRIL 2000

Fig. 3. Violation of the precedence constraints because of shared resources (the release times are consistent with the precedence constraints).

Fig. 4. Reachability of schedulable solutions.

another one which is equivalent to the original. Second, they help avoid considering a priori infeasible solutions, thus saving time when randomly producing new configurations. At run time, when evaluating the solutions, our FIFO scheduler simply suspends the execution of the task instances that happen to be at the top of the ready list until the local or remote predecessors have completed their execution.
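The initial assignment of equation (5) can be sketched as follows, again with a hypothetical task layout; a topological order of the precedence graph is assumed to be given.

```python
# Initial release-time assignment of equation (5): push each release time
# past the earliest possible completion of all its predecessors. `order`
# must list task ids in a topological order of the precedence graph; the
# task layout is, again, a hypothetical illustration.

def init_release_times(tasks, order):
    r = {i: tasks[i]['r'] for i in tasks}
    for i in order:
        if tasks[i]['pred']:
            r[i] = max(r[i], max(r[j] + tasks[j]['c']
                                 for j in tasks[i]['pred']))
    return r

# Chain a -> b -> c: b cannot be released before a's earliest completion,
# and c is pushed past b's in turn.
tasks = {'a': {'r': 0, 'c': 2, 'pred': []},
         'b': {'r': 0, 'c': 3, 'pred': ['a']},
         'c': {'r': 1, 'c': 1, 'pred': ['b']}}
print(init_release_times(tasks, ['a', 'b', 'c']))
```

Processing in topological order guarantees that every r_j on the right-hand side of (5) is already final when task i is visited.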

3.1.2 Access to Shared Resources

The shared resources are managed at the moment the resulting schedule is evaluated for each single transition. The FIFO procedure that evaluates the partial schedules blocks the tasks that request access to busy resources and puts them in a waiting queue. At the moment the resource is released, the waiting instances again compete for access. The optimization in the assignment of the resource is not the FIFO scheduler's duty, but is the result of the selection, by the simulated annealing algorithm, of the right activation times for all tasks.

3.1.3 Access to the Communication Network

The communication network can be modeled as a global shared resource to be accessed in a mutually exclusive way. In static systems, this solution is probably sufficient; the concurrency control could be implemented a priori and on a global basis, thus allowing the use of very simple access protocols (without the MAC layer). A different possibility is to use only a fraction of the global bandwidth on each node (with media access protocols like TDMA, token ring, or timed token) and allow concurrent accesses with bandwidth sharing. What is required at the stage at which the partial schedules are computed is to handle the network resource by calculating the time necessary to forward the messages to the remote locations, on the basis of the fraction of bandwidth allocated to each node, and to assign these transmission times to the sending tasks. This implies a larger time for sending single messages, since only a fraction of the total bandwidth is available.

3.1.4 Reachability of the Optimal Solution

The purpose of our algorithm is to get as close as possible to the optimal scheduling solution s_opt starting from any initial schedule s. It is important to prove that our method for generating new solutions through the modification of the

release times and a FIFO scheduler makes solutions close to the optimal, or any other feasible solution, reachable from the initial configuration (left side of Fig. 4). In other words, the situation shown in the right-hand side of Fig. 4 never happens, since our TR can produce any schedulable solution.

Theorem 3.1. Any feasible scheduling solution s_sch can be produced by applying the TR transition operator.

In order to prove this, we show how to build any given feasible solution by applying the TR operator a finite number of times and then scheduling the task instances (with the new release times) according to a FIFO strategy.

Proof. Consider a feasible schedule s_sch and the corresponding execution order {τ_1^sch, τ_2^sch, ..., τ_n^sch} of the task instances in the scheduling period. Each task instance τ_j takes control of its cpu at a time instant s_j^sch to be executed until its termination, respecting all precedence and exclusion constraints. Consider the task instance τ_n^sch that is scheduled last in the feasible schedule. Since τ_n^sch is successfully scheduled, the time instant s_n^sch when τ_n takes control of the cpu in the feasible schedule is contained in the initial compatibility interval for its release time. Now, consider the initial configuration of the task instances, where the only modification of the release times is given by the application of (5). Take the task instance τ_n^sch and assign it the new release time r'_n = s_n^sch. The new configuration produces a new schedule that can possibly be accepted as a result of the first transition. (This step requires a nonfeasible solution to be acceptable as an intermediate state; see Section 3.2.) The second transition applies to the instance τ_{n-1}^sch, which is scheduled immediately before τ_n^sch in the feasible solution. The release time r'_{n-1} of the instance is changed to the time instant s_{n-1}^sch.

By iterating this procedure, it is possible to modify all the release times of the tasks so that all task instances have their release times equal to the starting times they have in the chosen feasible schedule s_sch. A FIFO scheduler working on this new set of release times produces the desired feasible schedule since, at any r'_i = s_i^sch ∈ {s_1^sch, ..., s_n^sch}, there is always only the task τ_i to be scheduled on its given cpu, all its predecessors being completed and all the resources it needs being free. ∎
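The FIFO evaluation scheduler referred to throughout this section can be sketched in a deliberately reduced form: one cpu and no shared resources (our simplification, not the paper's full evaluator, which also handles resource queues and the network).

```python
# Reduced evaluation scheduler in the spirit of Section 3.1: one cpu, no
# shared resources (our simplification). Instances are taken in FIFO order
# of their (possibly modified) release times; a task whose predecessors
# have not completed waits for them, as in Section 3.1.1. Release times
# are assumed consistent with the precedence order, as (5) guarantees.

def fifo_schedule(tasks, release):
    done = {}                        # task id -> completion time
    clock = 0                        # time at which the cpu becomes free
    for i in sorted(tasks, key=lambda t: release[t]):
        # start at the latest of: release time, cpu availability, and
        # completion of every already-scheduled predecessor
        start = max([release[i], clock] +
                    [done[p] for p in tasks[i]['pred'] if p in done])
        done[i] = start + tasks[i]['c']
        clock = done[i]
    return done

# a (c=2) precedes b (c=3); b is released at 1 but must wait for a.
tasks = {'a': {'c': 2, 'pred': []}, 'b': {'c': 3, 'pred': ['a']}}
print(fifo_schedule(tasks, {'a': 0, 'b': 1}))  # {'a': 2, 'b': 5}
```

Running this evaluator inside the annealing loop, with the release-time transition operator supplying new configurations, reproduces the overall scheme of the proof above: any feasible start-time vector, used as the release times, is reproduced exactly.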

3.2 The Cost Function

The value for each newly generated schedule is the sum of the maximum jitter of all tasks:

T0 must be evaluated such that, in the first chain, almost all the proposed transitions could be accepted. A value too small for T0 could make the algorithm stop on a local minimum, thus preventing the reachability of the global optimum. At the same time, T0 shouldn't be too large to lose excessive time in useless random wandering through the solution space. We evaluate T0 from the equation shown in [12] that links the initial temperature with the initial acceptance ratio, T0 ˆ

Fig. 5. Reachability of a near-optimal solution.



X i

maxfJik g; k

…6†

where Jik , the jitter for the kth instance of task i , is evaluated according to (1). The function can easily be computed during (or right after) the evaluation of the partial schedules. It is important to specify that our scheduler rejects all the task instances that would execute past their deadlines together with their successors. The schedules obtained with one or more tasks missing (because they are not schedulable) are accepted and evaluated by the simulated annealing routine. In case a task instance is missing, the jitter is simply calculated spanning the empty period and considering the next schedulable instance. Considering nonfeasible schedules in the intermediate states is a temporary, although necessary, violation of the deadline constraints. The reason why we consider schedules with one or more missing instances is to make the solution space and the energy function as continuous as possible. In complex systems, we expect that most of the computed schedules are not feasible. If such schedules (states) are not accepted as a destination of our transition function, then we cannot guarantee that, starting from any solution, the entire solution space can be searched and the simulated annealing algorithm can find an approximate solution. An approximate representation of this situation is in Fig. 5. Because of the way we calculate jitter on both feasible and not feasible schedules, it is clear that the solution with the minimum jitter is likely to also be a feasible one (when it exists). When no task instance is missing, there are no void periods and the jitter should be sensibly reduced (actually, a new scheduled instance causes a step in the jitter function, as shown in the experimental section). To make sure that the solution with the minimum jitter is also a feasible one, we check the final schedule and verify that no instance is missing. If all instances are scheduled within the deadline constraints, then the schedule is accepted.

3.3

Initial and Final Values for the Control Parameter T

Given the cost function (6) and the transition operator that is used to generate the new configurations, the initial value T_0 of the control parameter can be obtained as

$$ T_0 = \frac{\overline{\Delta}}{\ln(\chi_0^{-1})} \qquad (7) $$

by imposing an initial acceptance ratio χ₀ = 0.9 (Δ̄ is an estimate of the average cost difference for a cost-increasing transition). In our case, the value of Δ̄ should be greater than twice the maximum period of the application processes, to allow for a temporary transition to a solution where at least one more task instance is rejected. For the final value T₁, it is sufficient to note that the minimum increase in cost for our metric function is one unit. This means that, for T = 0.06, the probability of accepting any higher-cost solution is lower than p = 4 × 10⁻⁷ and, for T = 0.02, lower than 10⁻²⁰. Going below this last value makes little sense.
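These temperature bounds can be checked numerically. The sketch below assumes the standard Metropolis acceptance rule; `initial_temperature` implements (7), and the maximum-period value used in the example is chosen only for illustration.

```python
import math

# Sketch of the control-parameter bounds of Section 3.3. The initial
# temperature follows equation (7): T0 = mean_delta / ln(1/chi0), where
# chi0 is the desired initial acceptance ratio and mean_delta estimates
# the average cost increase of an uphill transition.

def initial_temperature(mean_delta, chi0=0.9):
    return mean_delta / math.log(1.0 / chi0)

def uphill_acceptance(delta, temperature):
    """Metropolis probability of accepting a cost increase of delta."""
    return math.exp(-delta / temperature)

# The paper argues mean_delta should exceed twice the maximum process
# period; e.g., with an (assumed) maximum period of 256 time units:
T0 = initial_temperature(2 * 256)

# The minimum cost increase is one unit, so the stopping temperature
# only needs exp(-1/T) to be negligible; T = 0.06 already keeps the
# acceptance probability below the paper's 4e-7 bound.
p_stop = uphill_acceptance(1.0, 0.06)
```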

3.4

Implementation and Efficiency Considerations

All the procedures called from the main loop of the simulated annealing algorithm must be as efficient as possible. In our case, the task to be modified is chosen with a lookup table and the admissible transitions are obtained by scanning a doubly linked list of task instances. The computation of the new schedule and the corresponding evaluation are implemented with a FIFO scheduler running in time quadratic in the size of the problem (the number of task instances). The overall complexity is, therefore, O(n⁴).

3.5 Considerations on Task Allocation

In our model, we assumed the tasks to be statically assigned to the processors on the basis of locality or other optimization functions (distribution of the system load or minimization of the communication flow over the network). In fact, finding the optimal allocation of the system tasks according to most metrics of practical interest is itself an NP-complete problem. A simulated annealing solution has already been proposed for it [29]. In our system, nothing prevents us from doing the same, at the cost of slower execution and increased complexity of the scheduling algorithm. It is sufficient to define another transition operator that modifies the allocation of a randomly chosen task and to use it in conjunction with the previously described operator that modifies the release times. At this point, two additional activities must be considered. First, the usage of the network must be reevaluated at each iteration and cannot simply be computed statically. Second, the evaluation of the new solution must not only consider the schedulability of the task


instances with respect to their deadlines, but also the feasibility of the new allocations (e.g., with respect to memory bounds). Simulated annealing can thus perform the allocation of the tasks together with their scheduling. The great flexibility of the method, which permits extensions and variants of the original problem with minimum effort, is probably the greatest advantage of simulated annealing techniques.
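A combined transition operator of the kind described here might look as follows. The state layout, the field names, and the even split between the two move types are assumptions for illustration, not the tool's tuned values.

```python
import random

# Sketch of the extended transition operator of Section 3.5: with some
# probability, perturb a randomly chosen task's allocation instead of
# its release time.

def transition(state, rng=random):
    """state: {"tasks": {name: (period, release, cpu)}, "cpus": [...]}.
    Returns a new state with one task's release time or allocation
    modified."""
    task = rng.choice(list(state["tasks"]))
    new_state = {"tasks": dict(state["tasks"]), "cpus": state["cpus"]}
    period, release, cpu = new_state["tasks"][task]
    if rng.random() < 0.5:
        # allocation operator: move the task to another processor.
        # Feasibility of the new allocation (memory bounds, network
        # load) must be re-checked when the schedule is evaluated.
        cpu = rng.choice([c for c in state["cpus"] if c != cpu])
    else:
        # original operator: shift the release time within the period
        release = rng.uniform(0, period)
    new_state["tasks"][task] = (period, release, cpu)
    return new_state
```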

3.6 Toward a Preemptive Model

We have chosen to restrict the study to nonpreemptive scheduling policies for the reasons given earlier. In some situations, this decision may be restrictive. It is well-known that preemption allows greater flexibility in the assignment of cpu time to the task instances (at the cost of greater complexity in the scheduling algorithm). This improved flexibility can make it possible to find a feasible solution when none exists for a nonpreemptive model. This is particularly true for uniprocessor systems with no shared resources, as proven by Baker et al. [3] and Lenstra and Rinnooy Kan [17]. On the other hand, when the tasks use shared resources in exclusive mode, the differences are less evident, as intuitively confirmed by the fact that the usage of a shared resource prevents preemption by a task making use of the same resource. In multiprocessor or distributed systems, the advantages of preemption are even less evident, as suggested by McNaughton's theorem [21], which states the equivalence of preemptive and nonpreemptive solutions for a particular scheduling metric (minimizing the weighted sum of the completion times). In our case, the metric to be optimized is different (often referred to as the number of late tasks; the corresponding problem is NP-hard [15]), but McNaughton's theorem gives an indication of how the advantages obtainable from preemption fade in multiprocessor systems. Nevertheless, in situations where no nonpreemptive solution can be found, it is possible to use our solution by changing the definition of the problem to allow for greater flexibility in the assignment of the cpus, that is, a certain amount of preemption. It is sufficient to split each task τ_i in the process chain into a sequence of fixed (possibly short) length-c_Q tasks τ_i1, τ_i2, ..., τ_in. The cpu can then be switched to another task with a maximum delay of c_Q time units, thus approximating preemption.
The predecessor tasks for τ_i become predecessors for τ_i1 and the successors must be executed after the completion of τ_in. Needless to say, the increased number of tasks makes the algorithm more expensive (recall that its complexity grows with the fourth power of the problem size). When splitting the tasks into unit-length segments, additional care must be taken when accessing shared resources: the FIFO scheduler must guarantee the atomicity of the operations on resources, thus preventing the execution of subtasks requesting a resource already in use.
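The splitting transformation can be sketched as below. The naming scheme and data shapes are assumptions for illustration; external predecessors of the original task would be attached to the first segment and external successors to the last.

```python
# Sketch of the task-splitting transformation of Section 3.6: each task
# is cut into fixed-length segments of at most c_q time units, chained
# by precedence, so the cpu can be handed over between segments.

def split_task(name, wcet, c_q):
    """Return (subtasks, precedence): subtasks is a list of
    (segment name, length) pairs and precedence chains them in order."""
    lengths = []
    remaining = wcet
    while remaining > 0:
        lengths.append(min(c_q, remaining))
        remaining -= c_q
    subtasks = [(f"{name}_{k + 1}", ln) for k, ln in enumerate(lengths)]
    # chain: segment k must precede segment k+1
    precedence = [(subtasks[k][0], subtasks[k + 1][0])
                  for k in range(len(subtasks) - 1)]
    return subtasks, precedence
```

For example, a task with a worst case execution time of 7 units and c_q = 2 becomes four segments of lengths 2, 2, 2, and 1, linked by three precedence constraints.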

IEEE TRANSACTIONS ON COMPUTERS, VOL. 49, NO. 4, APRIL 2000

Fig. 6. Structure of the simulated annealing scheduling tool.

4

SCHEDULING TOOL DESCRIPTION AND EXPERIMENTAL RESULTS

4.1 The Scheduling Tool

We built a scheduling tool based on simulated annealing and ran experiments with it. Its structure is represented in Fig. 6. The description of the application is translated into a graph of processes and tasks linked by precedence constraints. Then, a description of the system architecture is automatically computed from the allocation declarations and the system resources are identified. The description of the system is stored in a data structure like the one in Fig. 7. Processes and tasks are sorted into two separate lists. Each process descriptor contains a list of all the tasks that originate from it. All task descriptors contain references to the predecessors and the successors (two linked lists) and another linked list to the resources they need. Once the scheduler has a representation of all tasks, processes, and system resources, it computes all the task instances contained in the scheduling period. The instances are ordered in a doubly linked list according to their release times. The instance descriptors contain the timing information inherited from the processes and tasks they represent, together with linked lists for predecessors and successors and the specification of the resources they use. The doubly linked list of task instances works as an event list during the iterative scheduling process. Each instance is characterized by a release time; when the partial schedules are evaluated, the list is scanned from the first to the last release time. Each instance descriptor has a bidirectional link to a similar descriptor representing the termination event for the task instance. The event (or instance) list is scanned, simulating the flow of time. When a new event is examined, if it is the release of a task instance, the instance is queued either in the ready list of its cpu or in the waiting lists of the resources it needs that are currently in use.
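The descriptors of Fig. 7 might be sketched as follows; the field names are assumptions chosen to mirror the lists described in the text.

```python
from dataclasses import dataclass, field

# Rough sketch of the data structures of Fig. 7; field names are
# assumptions mirroring the lists described in the text.

@dataclass
class TaskDescriptor:
    name: str
    wcet: float
    predecessors: list = field(default_factory=list)  # predecessor tasks
    successors: list = field(default_factory=list)    # successor tasks
    resources: list = field(default_factory=list)     # resources needed

@dataclass
class ProcessDescriptor:
    name: str
    period: float
    deadline: float
    tasks: list = field(default_factory=list)         # originating tasks

@dataclass
class InstanceDescriptor:
    task: str
    release: float        # the event list is kept sorted by this field
    deadline: float
    is_termination: bool = False
    twin: object = None   # bidirectional link: release <-> termination
```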
The procedure for checking schedulability and assigning the cpu is called once the instance is in the ready list and the cpu is idle. Whenever an instance reaches the top of its ready list and all its predecessors are completed, its schedulability is checked. If the instance is found schedulable, it gets the cpu; otherwise, it is rejected together with all its successors. When an instance gets the cpu, the corresponding descriptor for the ending event is inserted in the event list at a position corresponding to the worst case ending time. Later, while scanning the event list, the scheduler will find the ending event; the corresponding cpu will be set to idle, the resources used by the ending instance will be made available, and the

DI NATALE AND STANKOVIC: SCHEDULING DISTRIBUTED REAL-TIME TASKS WITH MINIMUM JITTER


Fig. 7. Data structures of the simulated annealing scheduling tool.

dispatching procedure will be called for all the idle cpus in the node. At a more abstract level, there is a first phase, in which the task instances are ordered in an event list and the release times are calculated according to (5), and a second phase, in which the simulated annealing algorithm described in Fig. 1 takes control. The algorithm works through successive modifications of the configuration of the task instances, the computation of the resulting schedules, and the evaluation of the results. At the end of the cooling process, the final result is a set of tables containing the activation times (when the tasks get control of the cpu) and the ending times of all task instances on each cpu. These tables are stored in files and can be used at run time by the local dispatchers.
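The second phase can be sketched as a standard annealing loop. Here `perturb`, `simulate`, and `cost` stand in for the tool's transition operator, event-list FIFO simulator, and metric (6); the geometric cooling schedule and the listed defaults are assumptions, not the tool's actual parameters.

```python
import math
import random

# Sketch of the annealing loop of the second phase: repeatedly perturb
# the configuration, rebuild the schedule with the deterministic
# simulator, and accept or reject the result by the Metropolis rule.

def anneal(config, perturb, simulate, cost,
           t0=1000.0, t_end=0.02, alpha=0.95, moves_per_step=100,
           rng=random):
    best = current = config
    best_cost = current_cost = cost(simulate(config))
    t = t0
    while t > t_end:
        for _ in range(moves_per_step):
            candidate = perturb(current, rng)
            c = cost(simulate(candidate))
            # accept downhill moves always, uphill moves with
            # probability exp(-(c - current_cost) / t)
            if c <= current_cost or rng.random() < math.exp((current_cost - c) / t):
                current, current_cost = candidate, c
                if c < best_cost:
                    best, best_cost = candidate, c
        t *= alpha  # geometric cooling schedule (an assumption)
    return best, best_cost
```

A toy use: minimizing `abs(x)` by perturbing a single number in place of a release-time configuration shows the loop converging toward lower cost as the temperature drops.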

4.2 Examples and Experimental Results

We first show some examples to explain the way the scheduling algorithm works and to show its potential performance advantages. Then, we present experimental results on task sets with variable utilizations and deadline to period ratios. The tasks and processes of the three examples differ only in their activation periods. Our simulated system consists of 10 processes. Table 1 shows the allocation of the processes, the task structure, the computation times of all tasks, the precedence constraints among tasks, and the resources used by each task. Table 2 shows the deadlines and periods of all the processes in the three task sets. Fig. 8 plots the total maximum jitter as a function of the number of iterations of the algorithm. The Y-axis represents

the sum of the maximum jitter of all tasks as the simulated annealing scheduler iterates for lower values of the temperature T (the plotted values are those obtained at the end of each iteration). The continuous line on the graph shows the results for the Task set 1. For this task set, the system is not heavily loaded and simple EDF scheduling can give the correct schedule immediately. The solution obtained by EDF scheduling has a total maximum jitter of five time units (dotted horizontal line), while our simulated annealing scheduler succeeds in ordering the task instances with no jitter. The dashed line shows the results for Task set 2. This system (average processor utilization U = 0.716) is obtained from the previous specification by dividing the process periods by 2. In this case, the EDF scheduler cannot find a feasible solution (two task instances are rejected). With our simulated annealing algorithm, the whole set is scheduled and the total maximum jitter in the activation of the task instances is as low as 3.5 time units. The third curve corresponds to a medium loaded system (average U = 0.435, task set 3), where the processes have four different periods (more instances in the scheduling period). Again, EDF scheduling doesn't work, while our scheduler gives a feasible solution with a total maximum jitter of 15 time units.

4.2.1 Experiments

No benchmarks exist for the real-time scheduling problem addressed in this paper. Hence, we resort to synthetic test cases that span a very large parameter space. Several


TABLE 1 The Processes in the Experimental Set

Fig. 8. Sum of the maximum jitter on each iteration.

principles guide our synthetic workload. First, the task set should be difficult to schedule (and therefore have sufficiently high utilization) and be sufficiently complex (otherwise, most search algorithms would easily work). Unfortunately, the complexity of the problem and the long execution times of our algorithm (see Fig. 11) limit the number of experimental runs that can be performed. Therefore, we performed 114 runs on task sets with different configurations. The overall computation time was close to four months of machine time (on Pentium processors). We built a tool to generate synthetic task configurations based on the parameters of Fig. 9. Only some of the available parameters have been used in the simulation runs. We mainly investigated the behavior of the algorithm for various utilizations, varied the harmonic relationships of the periodic tasks, and varied the period to deadline ratio. We performed our runs on four classes of processes (and tasks) according to their periods:

- class 0: harmonic instances, periods randomly drawn from the set {16, 32, 64, 128, 256}
- class 1: periods drawn from the set {12, 30, 60, 120, 240}
- class 2: periods drawn from the set {10, 20, 50, 100, 180}
- class 3: (less harmonic) periods drawn from the set {15, 30, 50, 180, 200}

TABLE 2 Process Periods and Deadlines in the Examples

For each class, we performed sets of 10 runs with variable cpu utilization and variable deadline to period ratio (deadline to period ratios going from 1.0 to 0.25). We compared the output of the simulated annealing algorithm (SA) with the output of standard Rate Monotonic (RM) and Earliest Deadline First (EDF) scheduling.

Results. Our algorithm always performed better than EDF and, in 110 runs out of 114, it outperformed RM. The only cases in which RM performed best happened when the overall cpu utilization was low; in those cases, RM found a solution with exactly 0 jitter, while the SA solution came very close to it (0.25 at the highest). An important anomaly occurred once: EDF found a feasible schedule, while SA converged on a nonfeasible solution with much lower jitter (according to our metric). Although this occurred only once, it may be possible to fix this undesired behavior by changing the metric and raising the penalty for missed task instances.

Class 0. We performed 54 runs on sets belonging to class 0. The results (see Table 3) are:

- SA failed to find a feasible solution six times. In all those cases, RM and EDF failed as well (nonschedulable sets have not been removed, since deciding whether a set is infeasible is an NP-complete problem itself!).
- EDF produced a nonschedulable solution 13 times.
- RM did not produce a feasible solution 30 times.

The experiments in class 0 have been further divided into four groups: the first with utilization 0.2 ≤ U ≤ 0.5 (deadline to period ratio = 1 to 0.2), the second with 0.3 ≤ U ≤ 0.75, the third with 0.55 ≤ U ≤ 0.85, and the fourth with 0.65 ≤ U ≤ 0.9. The results are expressed as follows: the first row shows the number of feasible solutions found; the second shows the absolute and relative (in parentheses, relative to the worst performing algorithm) values of the average jitter metric among all sets; the third row presents the results considering only the schedulable sets.


TABLE 4 Results for Class 1


Fig. 9. Parameters for the task generation tool.

Class 1. We performed 20 runs on sets belonging to class 1. The results are:

- SA always found a feasible solution.
- EDF produced a nonschedulable solution five times.
- RM did not produce a feasible solution 11 times.

The experiments in class 1 have been further divided into two groups: the first with 0.3 ≤ U ≤ 0.75 (deadline to period ratios from 0.25 to 1), the second with 0.55 ≤ U ≤ 0.9. The results are shown in Table 4.

Class 2. Out of the 20 runs performed on class 2 sets, we got the following results:

- SA always found a feasible solution.
- EDF produced a nonfeasible schedule three times.
- RM did not produce a feasible solution 13 times.

The first group defined for the experiments in class 2 had utilizations ranging over 0.3 ≤ U ≤ 0.75 (deadline to period ratios from 0.25 to 1); in the second, 0.55 ≤ U ≤ 0.9. The results are shown in Table 5.

Class 3. Finally, the 20 runs performed on class 3 resulted in the following:

- SA found a feasible solution 12 times.
- EDF produced a schedulable solution seven times.

TABLE 3 Results for Class 0

- RM produced a feasible solution in only five of the 10 runs.

The first group had 0.25 ≤ U ≤ 0.5 (deadline to period ratios from 0.25 to 1); the second had 0.3 ≤ U ≤ 0.65. The results are found in Table 6.

Overall evaluation of results. If we consider all results, averaging the output for the different classes, we obtain Fig. 10, which shows the dependency on the deadline to period ratio. From our experiments, our solution works quite well under different load conditions and with different task sets (having harmonic as well as nonharmonic periods). The average results are always at least five times better than the best solution found by either EDF or RM. The SA algorithm also proved to be a powerful tool for finding more feasible solutions in each of the different classes of tests. Experimental runs on single processor configurations were also performed (not shown here); in these tests, similar performance gains occurred.

5

EXECUTION TIME COST OF SA ALGORITHM

The runtime costs of executing the simulated annealing algorithm depend strongly on the complexity and size of the task set. In Fig. 11, the computation times (in seconds) are plotted as a function of the number of task instances in the scheduling period (program written in C, running on a DEC Alpha 4/266 250 workstation). Even though these are long execution times, note that, for static systems, the schedule is computed only once and offline. One or even a few days of computing time can be perfectly reasonable given that the design and programming stages take much more time and the schedule is just another component of the system, hard-coded and unchanged unless the task set is modified (often involving a major reengineering of the whole system). Lastly, not all the


TABLE 6 Results for Class 3

tasks in the system have hard real-time and jitter requirements. A perfectly reasonable approach could be to schedule only the time-critical and jitter-sensitive tasks, then use the time gaps left by those tasks to execute the non-real-time tasks. Given the long computation times required by SA, it is reasonable to first try simple search algorithms and apply SA only when needed. When the scheduling problem is difficult, for example, when the cpus are heavily utilized and there are many precedence constraints and conflicts on resources, simulated annealing appears to be a very effective approach. The following subsections discuss the comparison between standard search techniques and simulated annealing when the problem requires scheduling real-time tasks with jitter requirements.

Fig. 11. Computation times (in seconds) for the simulated annealing algorithm.

5.1

Finding a Feasible Schedule by Using Simulated Annealing

If output jitter is not a concern, it is still possible to use a simulated annealing algorithm to solve the standard real-time feasibility problem. An SA solution can be of value since most scheduling algorithms for the general schedulability problem (distributed system, resource and precedence constraints, deadlines shorter than periods) are search algorithms with no guarantee on the quality of the result or on the computing time needed to produce it. While the purpose of this paper is not to run a fair comparison between these methods and an SA solution, it is worthwhile to explain how to modify the procedure shown in this paper to solve the standard feasibility problem; this also illustrates the flexibility of the method. In the feasibility case, the cost of a schedule is the number of tasks scheduled after their deadlines. The problem is to minimize the cost of the schedule subject to the precedence and exclusion constraints and the (implicit) constraint that no task can be scheduled before its release time. In this case, the value of each newly generated schedule is a function of the number of tasks scheduled within their deadlines and can simply be obtained as

$$ C = \sum_i \delta_i \qquad (8) $$

Fig. 10. Results as a function of the average deadline to period ratio.

where δ_i = 1 if τ_i is scheduled beyond its deadline and 0 otherwise. If the cost function is (8), the algorithm converges to a feasible solution. The running times in this case are much shorter, given the smaller number of distinct values the cost function can take.
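A sketch of metric (8), under an assumed schedule representation of one (finish time, absolute deadline) pair per instance:

```python
# Sketch of the feasibility metric (8): the cost of a schedule is the
# number of task instances completing after their deadline. The
# schedule representation is an assumption for illustration.

def feasibility_cost(instances):
    """instances: iterable of (finish_time, deadline) pairs.
    Returns the number of late instances; a rejected instance can be
    modeled with finish_time = float('inf')."""
    return sum(1 for finish, deadline in instances if finish > deadline)

# A schedule is feasible exactly when the cost reaches zero.
```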

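The point made in Section 5.2 below — that EDF packs work as early as possible and can thereby create start-time jitter — can be reproduced with a tiny nonpreemptive EDF simulator. The two-task parameters here are assumptions for illustration, not the exact values behind Fig. 12.

```python
# Nonpreemptive EDF simulator illustrating how packing work early can
# create start-time jitter. Implicit deadlines (deadline = release +
# period) are assumed and deadline misses are not modeled.

def edf_schedule(tasks, horizon):
    """tasks: dict name -> (period, wcet). Returns name -> [start times]."""
    jobs = []  # (release, deadline, name, wcet)
    for name, (period, wcet) in tasks.items():
        r = 0
        while r < horizon:
            jobs.append((r, r + period, name, wcet))
            r += period
    starts = {name: [] for name in tasks}
    t = 0
    pending = sorted(jobs)
    while pending:
        ready = [j for j in pending if j[0] <= t]
        if not ready:
            t = min(j[0] for j in pending)  # idle until the next release
            continue
        job = min(ready, key=lambda j: j[1])  # earliest deadline first
        pending.remove(job)
        starts[job[2]].append(t)
        t += job[3]  # run to completion (nonpreemptive)
    return starts

def max_jitter(start_times, period):
    seps = [b - a for a, b in zip(start_times, start_times[1:])]
    return max((abs(s - period) for s in seps), default=0)

# Assumed task pair: T1 (period 6, wcet 2), T2 (period 8, wcet 2).
starts = edf_schedule({"T1": (6, 2), "T2": (8, 2)}, horizon=24)
```

With these parameters, T2 starts at times 2, 8, and 16: consecutive separations of 6 and 8 against a period of 8 give a start-time jitter of 2, while T1 runs jitter-free. A jitter-minimizing scheduler can trade such placements off explicitly instead of always packing early.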

Fig. 12. The schedule obtained with RM/EDF techniques is not the one with minimum jitter.

5.2

Alternative Methods for Scheduling Tasks with Minimum Jitter

Neither EDF (Earliest Deadline First) nor RM (Rate Monotonic) is an efficient algorithm for minimizing jitter. This can be demonstrated by an example (see Fig. 12). In this example, there are two tasks, T1 and T2, and six instances are scheduled by EDF and RM. Task T2 experiences a jitter of two time units. This is not minimal, since a very simple policy that schedules T1 at the beginning of its period and T2 at the end results in no jitter. The reason for the poor performance of EDF and RM is that they try to pack tasks together as early as possible in the schedule. Intuitively, this is inappropriate for minimizing jitter: scheduling that minimizes jitter sometimes needs to purposely add idle time. Adding idle time has been used by Mok [23] and Ramamritham [25] to avoid priority inversions. In particular, the Ramamritham paper uses a search algorithm that employs idle slots. This search algorithm could be modified to deal with jitter. For example, the ordering heuristic could be modified to insert idle tasks when packing an actual task into the schedule would produce a large jitter. The original search heuristic also checks deadline feasibility as tasks are inserted into the tentative schedule; this check could be extended to verify both deadline feasibility and jitter requirements. While this sounds simple, a number of questions remain: should jitter be considered in the ordering heuristic, how would the performance of the search algorithm be affected (there could easily be a large increase in the number of backtracks), and when and what size of idle tasks need to be inserted. Consequently, while modifying search heuristics might be another route to minimizing jitter, there is no evidence to date that such an approach is viable.

6

CONCLUSIONS

Simulated annealing algorithms are often used to solve NP-complete problems. They have the potential to avoid local minima while giving approximate solutions in polynomial time [1]. Even though it is polynomial, the algorithm is computationally expensive, which suggests its use only for problems that are not easily solvable with other techniques. A valuable example of such a problem is distributed real-time scheduling with jitter minimization. Simulated annealing's flexibility and its potential for solving problems not easily solvable in a conventional way should be sufficient motivation to develop simulated annealing based scheduling tools such as the one developed here. Experimental results demonstrate the effectiveness of the solution to the jitter minimization problem. We are not aware of any other published solutions to this problem.

ACKNOWLEDGMENTS

The authors wish to thank Professor Krithi Ramamritham for his comments and useful discussions.

REFERENCES

[1] E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines. John Wiley & Sons, 1989.
[2] E. Aarts and P. Van Laarhoven, "A New Polynomial Time Cooling Schedule," Proc. IEEE Conf. Computer-Aided Design, pp. 206-208, 1985.
[3] K.R. Baker et al., "Preemptive Scheduling of a Single Machine to Minimize Maximum Cost Subject to Release Dates and Precedence Constraints," Operations Research, 1982.
[4] I. Bate, A. Burns, J. McDermid, and A. Vickers, "Towards a Fixed Priority Scheduler for an Aircraft Application," Proc. Euromicro Workshop Real-Time Systems, 1996.
[5] A. Burns et al., "The Olympus Attitude and Orbital Control System: A Case Study in Hard Real-Time System Design and Implementation," Technical Report YCS90, Dept. of Computer Science, Univ. of York, 1993.
[6] T. Carpenter, K. Driscoll, K. Hoyme, and J. Carciofini, "ARINC 659 Scheduling: Problem Definition," Proc. 1994 Real-Time Systems Symp., Dec. 1994.
[7] E.G. Coffman and R. Graham, "Optimal Scheduling for Two-Processor Systems," Acta Informatica, vol. 1, 1972.
[8] G. Fohler, "Flexibility in Statically Scheduled Hard Real-Time Systems," Technisch Naturwissenschaftliche Fakultaet, Technische Universitaet Wien, Apr. 1994.
[9] R. Garey and D. Johnson, "Complexity Results for Multiprocessor Scheduling under Resource Constraints," SIAM J. Computing, 1975.
[10] M.R. Garey, D.S. Johnson, B.B. Simons, and R.E. Tarjan, "Scheduling Unit-Time Tasks with Arbitrary Release Times and Deadlines," SIAM J. Computing, vol. 10, no. 2, May 1981.
[11] J.R. Jackson, "Scheduling a Production Line to Minimize Maximum Tardiness," Research Report 43, Management Science Research Project, Univ. of California, Los Angeles, 1955.
[12] D.S. Johnson et al., "Optimization by Simulated Annealing: An Experimental Evaluation," Proc. Workshop Statistical Physics in Eng. and Biology, Apr. 1984 (revised 1986).
[13] K.N. Kim et al., "Visual Assessment of a Real-Time System Design: A Case Study on a CNC Controller," Proc. 1996 Real-Time Systems Symp., Dec. 1996.
[14] S. Kirkpatrick et al., "Optimization by Simulated Annealing: Quantitative Studies," J. Statistical Physics, vol. 34, 1984.
[15] E.L. Lawler, "Optimal Sequencing of a Single Machine Subject to Precedence Constraints," Management Science, vol. 19, 1973.
[16] E.L. Lawler, "Recent Results in the Theory of Machine Scheduling," Math. Programming: The State of the Art, A. Bachem et al., eds. New York: Springer-Verlag, 1983.
[17] J.K. Lenstra and A.H.G. Rinnooy Kan, "Optimization and Approximation in Deterministic Sequencing and Scheduling: A Survey," Annals of Discrete Math., vol. 5, pp. 287-326, 1977.
[18] J. Leung and J. Whitehead, "On the Complexity of Fixed Priority Scheduling of Periodic, Real-Time Tasks," Performance Evaluation, vol. 2, no. 4, pp. 237-250, 1982.
[19] C.L. Liu and J.W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment," J. ACM, vol. 20, no. 1, 1973.
[20] C.D. Locke, "Software Architecture for Hard Real-Time Applications: Cyclic Executives vs. Fixed Priority Executives," Real-Time Systems, vol. 4, no. 1, Mar. 1992.
[21] R. McNaughton, "Scheduling with Deadlines and Loss Functions," Management Science, vol. 6, 1959.


[22] D. Mitra et al., "Convergence and Finite-Time Behavior of Simulated Annealing," Proc. 24th Conf. Decision and Control, Dec. 1985.
[23] A.K. Mok, "Fundamental Design Problems of Distributed Systems for the Hard-Real-Time Environment," PhD thesis, Dept. of Electrical Eng. and Computer Science, Massachusetts Inst. of Technology, Cambridge, Mass., May 1983.
[24] J. Moore, "An n Job, One Machine Sequencing Algorithm for Minimizing the Number of Late Jobs," Management Science, vol. 15, no. 1, pp. 102-109, Sept. 1968.
[25] K. Ramamritham, "Allocation and Scheduling of Precedence-Related Periodic Tasks," IEEE Trans. Parallel and Distributed Systems, vol. 6, no. 4, pp. 412-420, Apr. 1995.
[26] K. Ramamritham, J. Stankovic, and P. Shiah, "Efficient Scheduling Algorithms for Real-Time Multiprocessor Systems," IEEE Trans. Parallel and Distributed Systems, vol. 1, no. 2, pp. 184-194, Apr. 1990.
[27] J. Stankovic, M. Spuri, M. Di Natale, and G. Buttazzo, "Implications of Classical Scheduling Results on Real-Time Systems Design," Computer, pp. 16-25, June 1995.
[28] K. Tindell, "Adding Time Offsets to Schedulability Analysis," Technical Report YCS 221, Dept. of Computer Science, Univ. of York, year?
[29] K. Tindell, A. Burns, and A. Wellings, "Allocating Real-Time Tasks (an NP-Hard Problem Made Easy)," Real-Time Systems J., June 1992.
[30] J.D. Ullman, "Polynomial Complete Scheduling Problems," Proc. Fourth Symp. Operating System Principles, 1973.
[31] J. Xu and D. Parnas, "Scheduling Processes with Release Times, Deadlines, Precedence, and Exclusion Relations," IEEE Trans. Software Eng., vol. 16, no. 3, pp. 360-369, Mar. 1990.
[32] J. Xu and D. Parnas, "On Satisfying Timing Constraints in Hard Real-Time Systems," Proc. ACM SIGSOFT '91 Conf. Software for Critical Systems, Dec. 1994.

IEEE TRANSACTIONS ON COMPUTERS,

VOL. 49,

NO. 4,

APRIL 2000

Marco Di Natale is an assistant professor in the Information Engineering Department at the University of Pisa, Italy. Before joining the University of Pisa, he worked at the Scuola Superiore S. Anna in Pisa and at the University of Massachusetts, in cooperation with the Spring Group, researching scheduling algorithms for single processor and distributed systems. His current research interests are in algorithms and tools for the development of embedded real-time systems.

John A. Stankovic is the BP America Professor and Chair of the Computer Science Department at the University of Virginia. He is a fellow of both the IEEE and the ACM. Professor Stankovic also serves on the Board of Directors of the Computing Research Association. Before joining the University of Virginia, he taught at the University of Massachusetts, where he won an outstanding scholar award. He has also held visiting positions in the Computer Science Department at Carnegie Mellon University, at INRIA in France, and at the Scuola Superiore S. Anna in Pisa, Italy. He has served as chair of the IEEE Technical Committee on Real-Time Systems, served as an IEEE Computer Society distinguished visitor, given distinguished lectures at various universities, and been a keynote speaker at various conferences. His PhD thesis was chosen as one of the best CS PhD theses and published as a book. He is the editor-in-chief of both the IEEE Transactions on Parallel and Distributed Systems and the Real-Time Systems Journal. His research interests are in distributed computing, real-time systems, operating systems, and distributed multimedia database systems.
