An Annotated Bibliography on Scheduling Algorithms for Fixed and Dynamic Priority Real-Time Systems, with Hard, Firm or Soft Deadline Tasks, and Emphasis on Feasibility Testing [Draft]

Michael E. Thomadakis
Computer Science Department, Texas A&M University, College Station, TX 77843-3112
miket AT tamu DOT edu

September 4, 2008

Abstract

In this report we present an annotated bibliography of research work relating to real-time scheduling of tasks. Special emphasis is placed on the different off-line and on-line schedulability testing algorithms for hard, firm and soft real-time tasks. Each proposed algorithm is presented and analyzed in terms of asymptotic time and space complexity. Algorithms are critically examined, when there is sufficient information, with respect to pragmatic issues and hidden overheads in implementation. Both fixed-priority and dynamic-priority methods are presented and analyzed.

Index Terms: Real-Time Scheduling; Firm-Aperiodic Tasks; Dynamic Priority Preemptive Scheduling; Fixed Priority Preemptive Scheduling; FPF/LDL Schedules; Hard Periodic Real-Time Tasks; Idle-Capacity Management; Off-Line and On-Line Feasibility Algorithms; Workload Processes

Note: This annotated bibliography is under continuous development and should be considered a draft document.


1 Previous Work on Feasibility Algorithms

A brief overview is given here of feasibility testing algorithms, which fall into two broad categories. Algorithms in the first category determine whether a given periodic task set is feasible under a priority assignment scheme. Algorithms in the second category determine whether a new task can be added to an existing task set with a known feasible schedule without violating the existing set's schedulability.

1.1 Fixed-Priority Scheduling Disciplines

1.1.1 Aperiodic Service by Periodic Server

A number of on-line feasibility testing algorithms have been proposed in the past. In this section we provide an overview of these methods and discuss their time and space complexities. We include feasibility testing methods for dynamic-priority systems for comparison purposes. Feasibility algorithms fall broadly into two categories, namely the slack-stealing and the schedule-construction methods. A third approach, the server-based methods, preallocates portions of the schedule off-line via the use of aperiodic servers, such as the Polling Server (PS) [133, 182], the Priority Exchange (PE) and Extended Priority Exchange (EPE) servers [132, 200], the Deferrable Server (DS) [132, 218], and the Sporadic Server (SpS) [203]. These methods represent important milestones as algorithms for the scheduling of aperiodic tasks. However, they are not feasibility algorithms per se, and thus we do not discuss them any further.

1.2 Aperiodic Service Methods by Slack Stealing of the Periodic Schedule

Slack Stealing (SS) algorithms are based on the observation that a periodic task instance τij can be delayed at time t by an amount of time equal to its slack Si(t) without missing its deadline. When slack is available at all priority levels, Smin(t) = min_{i=1..n}{Si(t)} > 0, a firm aperiodic task Ja can take precedence over all tasks during [t, t + Smin(t)). To provide on-line guarantees to Ja, the SS algorithms search all consecutive slack intervals until they find total spare capacity exceeding the execution requirement Ca. This approach leads to pseudo-polynomial time algorithms.


Even after Ja is admitted, SS continuously searches for the immediately next slack interval to grant to Ja until it finishes. Furthermore, slack correctness requires continuous maintenance of Θ(n + m) slack variables at the completion of every task or at each task switch. The high admission overhead and the continuous state maintenance make SS algorithms very hard to scale as the number of aperiodic requests increases. Therefore, as the aperiodic tasks become more fine-grained, the system spends proportionally more time deciding whether a firm task is schedulable than serving aperiodic requests.

Static Slack Stealing (SSS) [169, 134] determines, upon arrival of a task Ja, the amount of slack within the time interval [ra, ra + Da] based on precomputed slack information. Task admission and the subsequent scheduling of Ja each take O((H + Da)·n^2) time [169], where n is the number of periodic tasks and H is their hyper-period; this is pseudo-polynomial time. SSS algorithms impose high run-time execution and storage requirements and require continuous maintenance of Θ(n + m) state variables.

Davis et al. [66, 62] proposed an exact dynamic slack stealing (EDSS) method. EDSS locates slack dynamically at run time, using a forward-search, fixed-point iteration algorithm based on a method proposed by Joseph and Pandya in [113] to calculate busy-period lengths. EDSS combines the preprocessing step of SSS with dynamic slack determination and, thus, has very high run-time computation overhead. It determines the amount of slack at all n priority levels in time O(G·n^2), where G is a linear function of the total number of busy periods in [t, t + max_i{di(t)}), which is pseudo-polynomial. EDSS theoretically must run at each scheduler clock tick to keep the slack variables current. It runs at least at the completion of each periodic task instance, and it requires O(n) slack-variable maintenance at each context-switching time.

Davis et al. [61, 7, 12] also proposed approximate DSS (ADSS) algorithms, which compute pessimistic estimates of the slack Si(t). Slack computation at all n levels, either for admission control or for determining the times at which the firm aperiodic task can run, takes O(n^2 + n) time. On-line admission control for one firm aperiodic task with a heuristically preselected priority takes Θ(n + m) time. Again, the slack for a particular task must be recomputed at the completion of each one of its instances in O(n) time, and O(n + m) slack variables must be maintained at every context switch. For larger periodic task sets this continuous slack maintenance becomes a significant source of overhead. Another problem is that in the proposed method the priority of a candidate aperiodic task Ja is heuristically preselected to be compatible with the priorities of the tasks that precede and follow Ja in deadline order.


If the deadline Da of Ja is larger than the deadlines of all the periodic tasks, Da > max_{i=1..n}{Di}, the EDSS and ADSS algorithms degrade into background processing.

More recently, Binns [29] proposed algorithms based on SSS for the scheduling of sporadic tasks resulting from incremental and design-to-time periodic tasks. An admission control decision for a sporadic task Js at priority level k requires time O(n) + (tk − 1)·max{O(n(n − 1)), O(n·⌈Tn/T1⌉)}, where tk is an integer, the "transform" factor of Tk [29].

McElhone [150, 151] investigated two on-line algorithms which test the schedulability of one firm aperiodic task. The first is pessimistic and takes Θ(n^2) time to compute a lower bound on the spare capacity. The second is an exact algorithm taking pseudo-polynomial time. When the deadline Da of the firm task exceeds all the deadlines of the periodic tasks, Da > max_{i=1..n}{Di}, the algorithms essentially schedule the firm task at background (lowest) priority.
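To make the slack-based admission rule described above concrete, the following minimal Python sketch (illustrative only, not any particular author's algorithm) admits a firm aperiodic task only if consecutive slack intervals before its deadline accumulate at least Ca units of spare capacity. The helpers slack_at() and next_slack_interval() are hypothetical placeholders for the pseudo-polynomial slack computation performed by the SSS/EDSS machinery discussed above.

    def admit_firm_aperiodic(C_a, D_a, t, slack_at, next_slack_interval):
        """Generic slack-stealing admission test for one firm aperiodic task
        with execution requirement C_a and relative deadline D_a, arriving at t.

        slack_at(t)            -> dict {priority_level: S_i(t)} maintained elsewhere
        next_slack_interval(t) -> (start, length) of the next interval with slack
                                  available, or None if there is none
        Both helpers are assumed to be provided by the underlying SSS/EDSS
        slack computation; they are not defined here.
        """
        deadline = t + D_a
        granted = 0.0
        now = t
        # Search consecutive slack intervals until C_a is covered or the
        # deadline is reached; this forward search is what makes the test
        # pseudo-polynomial in the worst case.
        while granted < C_a:
            interval = next_slack_interval(now)
            if interval is None:
                return False                   # no further slack available
            start, length = interval
            if start >= deadline:
                return False                   # remaining slack lies past the deadline
            usable = min(length, deadline - start)
            # The aperiodic task may only use time that is slack at *all*
            # priority levels at that point.
            usable = min(usable, min(slack_at(start).values()))
            granted += max(0.0, usable)
            now = start + length
        return True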

1.3 Dynamic-Priority Scheduling Disciplines

For comparison purposes we also summarize the complexity of firm-task scheduling based on dynamic-priority algorithms. Chetto and Chetto [55] and Schwan and Zhou [177] investigated on-line admission tests for systems scheduled by the Earliest Deadline First (EDF) algorithm. Their methods require time proportional to the number N of all periodic jobs released within the least common multiple of the periodic task periods, called the hyper-period H, and thus they are pseudo-polynomial in time complexity. Further, the method of Schwan and Zhou requires all periodic and aperiodic tasks to go through continuous schedulability testing, leading to unnecessarily high overheads; it is of the schedule-construction type. Silly and Chetto [197] proposed feasibility algorithms, based on the EDL method, to guarantee firm tasks, taking O(N + m(ra)) time and O(N) storage, where N is the total number of invocations of all the periodic tasks within one hyper-period and m(ra) is the number of pending aperiodic tasks at the time of admission. Chetto et al. [54] presented an algorithm to admit groups of m firm aperiodic tasks related by precedence constraints. This algorithm takes O(N^2) time, where N is the total number of periodic and aperiodic tasks active in [d′, H + D′), d′ is the earliest deadline in the current group, D′ is the latest deadline of all pending aperiodic tasks at the time of the group admission, and H is the hyper-period of the system. Tia in [230] proposed a technique to compute slack in dynamic-priority schedules of periodic tasks requiring Θ(N^2 + N) storage overhead, which is impractical.


Silly in [196] presented an algorithm to schedule a single soft aperiodic task in time O(n·max_i{Di}/min_i{Ti}), where max_i{Di} and min_i{Ti} are the maximum deadline and minimum period, respectively, among all periodic tasks. The storage complexity is O(N), where N is the total number of periodic jobs released in one hyper-period. Other methods for feasibility testing include those by Ripoll et al. [173, 171] and Buttazzo et al. [44, 42] for dynamic-priority systems. On-line feasibility algorithms based on their methods have pseudo-polynomial complexity in order to guarantee a single task. Hohler [81] and Isović et al. [105, 106] have proposed the slot-switching method. Several schedule-construction methods have been developed in the past. Buttazzo et al. in [46, 47] proposed the RED method, where tasks are serviced by the EDF discipline and feasibility testing is performed dynamically on every task in the system, periodic and aperiodic. The time complexity is O(M), where M is the total number of tasks in the ready queue, and it can be pseudo-polynomial.
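For illustration, the sketch below shows an O(M) deadline-ordered admission check of the kind performed by schedule-construction methods such as RED [46, 47]. The structure and names are assumptions loosely based on the summary of [46] given later in the references, not the authors' exact algorithm.

    def edf_admission_test(pending, new_task):
        """Illustrative O(M) admission check for EDF-scheduled firm tasks.

        pending  : list of (c_remaining, d_absolute) for already-guaranteed tasks
        new_task : (c, d) for the candidate task
        Returns True if every task still meets its deadline when the candidate
        is inserted in deadline (EDF) order, measuring time from t = 0.
        """
        tasks = sorted(pending + [new_task], key=lambda task: task[1])
        finish = 0.0
        for c, d in tasks:
            finish += c          # EDF executes pending work in deadline order
            if finish > d:       # this task would complete after its deadline
                return False
        return True

    # Example: two guaranteed tasks plus a candidate arriving at t = 0.
    print(edf_admission_test([(2, 5), (3, 12)], (4, 10)))   # True: 2 <= 5, 6 <= 10, 9 <= 12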

References

[1] Tarek Abdelzaher, Bjorn Andersson, Jan Jonsson, Vivek Sharma, and Minh Nguyen. The aperiodic multiprocessor utilization bound for liquid tasks. In Proc. 8th IEEE Real-Time and Embedded Technology and Applications Symposium, pages 173–184, 25–27 September 2002.
Real-time scheduling theory has developed powerful tools for translating conditions on aggregate system utilization into per-task schedulability guarantees. The main breakthrough has been Liu and Layland's utilization bound for schedulability of periodic tasks. In 2001 this bound was generalized by Abdelzaher and Lu to the aperiodic task case. In this paper, we further generalize the aperiodic bound to the case of multiprocessors, and present key new insights into schedulability analysis of aperiodic tasks. We consider a special task model, called the liquid task model, representative of high-performance servers with aperiodic workloads, such as network routers, web servers, proxies, and real-time databases. For this model, we derive the optimal multiprocessor utilization bound, defined on a utilization-like metric we call synthetic utilization. This bound allows developing constant-time admission control tests that provide utilization-based absolute delay guarantees.


We show that the real utilization of admitted tasks can be close to unity even when synthetic utilization is kept below the bound. Thus, our results lead to multiprocessor systems which combine constant-time admission control with high utilization while making no periodicity assumptions regarding the task arrival pattern.

[2] B. H. Adelberg, H. Garcia-Molina, and B. Kao. Emulating soft real-time scheduling using traditional operating system schedulers. In Proc. of IEEE 1994 Real-Time Systems Symposium, pages 292–298. IEEE, 1994.

[3] Thomas E. Anderson, David E. Culler, David A. Patterson, and the NOW Team. A case for NOW (networks of workstations). IEEE Micro, February 1995.
We propose to build hardware and software to enable a network of workstations (NOW) to act as a single large-scale computer. Because of volume production, commercial workstations today offer much better price/performance than the individual nodes of MPPs; in addition, switch-based networks such as ATM will provide cheap, high-bandwidth communication. This price/performance advantage is increased if the NOW can be used for both the tasks traditionally run on workstations and large programs. We hope to demonstrate a practical 100 processor system in the next few years that delivers at the same time (1) better cost-performance for parallel applications than a massively parallel processing architecture (MPP) and (2) better performance for sequential applications than an individual workstation (by using more of the resources of the network). If projects like NOW are successful, they have the potential to redefine the high end of the computing industry. To realize the potential of NOWs, we need to move two MPP technologies into the workstation community: low-latency networking and global system software that treats a collection of processors, memory, and disks as if they were a single machine. Our approach is to leverage off-the-shelf technology as much as possible: workstation hardware, standard workstation operating systems on each node, and local area network ATM switches. To this, we will add communications protocol software and a global system layer that together provide low-overhead communication, a single view of operating system services across the cluster, parallel file I/O, and robustness to individual node failures. We will demonstrate our results by using our system for everyday computing needs, both sequential and parallel.

[4] Karl-Erik Årzén and Anton Cervin. Control and embedded computing: Survey of research directions. In Proc. 16th IFAC World Congress, Prague, Czech Republic, July 2005.

[5] AT&T. Unix System V Release 4, Internals Student Guide. Addison–Wesley, Reading, MA, 1990.
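As a rough illustration of the constant-time, utilization-based admission control described for [1] above, the sketch below keeps a running "synthetic utilization" and admits a request only while it stays under a precomputed bound. The class name, the bound value, and the accounting details are assumptions made for illustration; [1] derives the actual optimal bound for liquid tasks on multiprocessors.

    class SyntheticUtilizationAC:
        """Constant-time admission controller in the spirit of [1]: admit an
        aperiodic task only if the synthetic utilization (sum of C_i/D_i over
        currently admitted, not yet expired tasks) stays below a bound."""

        def __init__(self, bound):
            self.bound = bound        # placeholder value, not the bound derived in [1]
            self.u_synth = 0.0

        def admit(self, C, D):
            contribution = C / D
            if self.u_synth + contribution <= self.bound:
                self.u_synth += contribution
                return True
            return False

        def retire(self, C, D):
            # Called when the task's deadline elapses, releasing its contribution.
            self.u_synth -= C / D

    # Example with a hypothetical bound of 0.5.
    ac = SyntheticUtilizationAC(0.5)
    print(ac.admit(1.0, 4.0), ac.admit(2.0, 5.0))   # True False (0.25, then 0.25 + 0.4 > 0.5)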


[6] AT&T. System V Interface Definition, 3rd Edition. Addison–Wesley, Reading, MA, 1991.

[7] N. Audsley, A. Burns, R. I. Davis, and A. J. Wellings. Integrating best effort and fixed-priority scheduling. In Proc. of 1994 Workshop on Real-Time Programming, pages 45–55, June 1994.
Same paper as [12], with the only difference that the deadlines di of optional components do not have to be the same as those of the corresponding mandatory task, but they should still satisfy di ∈ [d(1), d(n+m)].

[8] N. Audsley, A. Burns, M. Richardson, K. W. Tindell, and A. J. Wellings. Applying new scheduling theory to static priority pre-emptive scheduling. Software Engineering Journal, 8(5):284–292, 1993.
The paper presents exact schedulability analyses for real-time systems scheduled at runtime with a static priority pre-emptive dispatcher. The tasks to be scheduled are allowed to experience internal blocking (from other tasks with which they share resources) and (with certain restrictions) release jitter, such as waiting for a message to arrive. The analysis presented is more general than that previously published and subsumes, for example, techniques based on the rate monotonic approach. In addition to presenting the relevant theory, an existing avionics case study is described and analyzed. The predictions that follow from this analysis are seen to be in close agreement with the behavior exhibited during simulation studies.

[9] N. Audsley, A. Burns, M. Richardson, and A. J. Wellings. Incorporating unbounded algorithms into predictable real-time systems. Technical Report YCS–92–171, Department of Computer Science, University of York, UK, 1992.
The TR where optional components are allowed in RT tasks. Explains (a) imprecise computation, (b) sieve functions, and (c) multiple-version computation. Uses Ada 9X. Fixed-priority method, based on approximate slack stealing by Davis.

[10] N. Audsley, A. Burns, M. Richardson, and A. J. Wellings. Incorporating unbounded algorithms into predictable real-time systems. Computer Systems Science and Engineering, 8(3):80–89, April 1993.

[11] N. Audsley, A. Burns, M. Richardson, and A. J. Wellings. Integrating optional software components into hard real-time systems. Software Engineering Journal, 11(3), May 1996.
The article about optional components that are allowed in RT tasks. Explains TAN. Discusses (a) imprecise computation, (b) sieve functions, and (c) multiple-version computation.


Refers to [9]. It mentions utility-based optional task admission control. See [65].

[12] N. Audsley, R. I. Davis, A. Burns, and A. J. Wellings. Appropriate mechanisms for the support of optional processing in hard real-time systems. In Proc. 11th IEEE Real-Time Operating Systems and Software, pages 23–27, Seattle, WA, May 18–19 1994.
Optional firm tasks, with the same deadlines as the mandatory periodic task. Tasks notify the system of their gain points. Application of the Approximate Slack Stealing by Davis [61]. This paper states the method for slack maintenance more clearly than other papers. They admit that continuous slack variable updates would be impossible due to the excessive overhead. They opt to carry out the updates at admission-decision and context-switch times. They check for sufficient processor time from (1) the gain gi(t), O(1); (2) the gain gk(t), k ∈ hp(i), O(n); (3) the gain gk(t), k ∈ lp(i), if dk(t) < di(t), O(n); (4) the slack Si^max(t) from tasks in lp(i), O(n); and finally, (5) recompute the approximate slack at all levels lp(i) and go back to (4), O(n^2 + n). All the above assume one hard task pending.

[13] N. C. Audsley, A. Burns, M. F. Richardson, and A. J. Wellings. Hard real-time scheduling: The deadline monotonic approach. In Proc. 8th IEEE Workshop on Real-Time Operating Systems and Software, pages 15–17, Atlanta, GA, May 1991.
The fixed-priority schedulability methods by Audsley, based on Joseph and Pandya's method. It explains the recursive fixed-point iteration method for the determination of response times for synchronous tasks and busy periods.

[14] Morris J. Bach. The Design of the Unix Operating System. Prentice-Hall, Englewood Cliffs, NJ, 1986.

[15] Achim Bachem, Martin Grötschel, and Bernhard Korte, editors. Mathematical Programming: The State of the Art, Bonn 1982, University of Bonn, 1983. Institut für Ökonometrie und Operations Research, Springer-Verlag, Berlin Heidelberg.
A nice collection of state-of-the-art discrete optimization papers, as of 1982.

[16] Joseph E. Baker. Value-based message multiplexing: An analytical model. In MILCOM '90, Military Communications Conference, pages 525–531. IEEE, 30 September–3 October 1990.

[17] T. P. Baker. Stack-based scheduling of real-time processes. The Journal of Real-Time Systems, 3(1):67–99, March 1991.
Extends the Priority Ceiling Protocol (PCP) to handle multi-unit resources, such as semaphores/reader-writer locks; dynamic priority schemes with static preemption levels; and sharing of the run-time stack. Tasks share the run-time stack. There are m serially reusable resources R1, ..., Rm, with NR being the total number of resources of a type. (J, R, m) is the allocation request of m units of R by J. Resources are acquired and released in LIFO order by each job J.
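Since several annotations in this list refer to the recursive fixed-point iteration of Joseph and Pandya (see [13] above and the discussions for [62] and [63] later), a minimal sketch of that response-time computation may be useful. The formulation follows the standard recurrence for fixed-priority preemptive scheduling; the example values are made up.

    import math

    def response_time(i, C, T):
        """Worst-case response time of task i under fixed-priority preemptive
        scheduling, via the fixed-point iteration described in [13] (Joseph and
        Pandya's method). Tasks are indexed in decreasing priority; C[j] is the
        WCET and T[j] the period of task j. Deadlines equal to periods and a
        synchronous (critical-instant) release are assumed here.
        """
        w = C[i]
        while True:
            w_next = C[i] + sum(math.ceil(w / T[j]) * C[j] for j in range(i))
            if w_next == w:      # fixed point reached: w is the response time
                return w
            if w_next > T[i]:    # exceeds the period/deadline: task i is unschedulable
                return None
            w = w_next

    # Example: three tasks in rate-monotonic priority order.
    C, T = [1, 2, 3], [4, 6, 12]
    print([response_time(i, C, T) for i in range(3)])   # -> [1, 3, 10]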


static-preemption levels; sharing of run-time stack. Tasks share the run-time stack. There are m serially reusable resources R1 , . . . , Rm , with NR being the total number of resources of a type. (J, R, m) is the allocation request of m units of R by J. Resources are acquired and released LIFO order by each job J. [18] T. P. Baker, F. Mueller, and V. Rustagi. Experience with a prototype of the posix ‘minimal realtime system profile. In Proc. of 11th IEEE Workshop on Real-Time Operating Systems and Software, pages 12–17, Seattle, WA, May 1994. IEEE. [19] P. Balbastre, I. Ripoll, and A. Crespo. Control tasks delay reduction under static and dynamic scheduling policies. Seventh International Conference on Real-Time Computing Systems and Applications, pages 522–526, 2000. [20] Sanjoy K. Baruah, Johannes E. Gehrke, and C. Greg Plaxton. Fast scheduling of periodic tasks on multiple resources. In Proc. 9th Inte’l Parallel Processing Symposium, pages 280–288, 25–28 April 1995. Given n periodic tasks, each characterized by an execution requirement and a period, and m identical copies of a resource, the periodic scheduling problem is concerned with generating a schedule for the n tasks on the m resources. They present an algorithm that schedules every feasible instance of the periodic scheduling problem, and runs in O(min{m log2 n, n}) time per slot scheduled. [21] Sanjoy K. Baruah, Rodney R. Howell, and Louis E. Rosier. Feasibility problems of recurring tasks on one processor. Theoretical Computer Science, 118:3–20, 1993. The bound B on the length of the Synchronous BP that needs to be checked for h(dk ) ≤ dk for the sporadic tasks and the bound L for the asynchronous case. They show a bijection between the classes of sporadic anf synchronous task sets. [22] Sanjoy K. Baruah, Aloysius K. Mok, and Louis E. Rosier. Preemptively scheduling hard real-time sporadic tasks on one processor. In Proc. 11th IEEE Real Time Systems Symp., pages 182–190, 5–7 December 1990. The authors give necessary and sufficient conditions for a sporadic, hard-realtime sporadic task system to be feasible (i.e., schedulable), under preemptive scheduling on one processor. The conditions cannot, in general, be tested efficiently (unless P=NP). They define demand function h(t) := Pn Pn guaranteed t−Di Ci (⌊ ⌋ + 1)C . Letting ρ := , and D = max Di ni=1 , they prove i i=1 i=1 Ti Ti that if τ is infeasible then, either (i) ρ ≥ 1, or (ii) there exists t, such that ρ max{Ti − Di }ni=1 }. h(t) > t, for some t ∈ [0, B), where B = min{H + D, 1−ρ Michael E. Thomadakis miket AT tamu DOT edu 9


It is, thus, sufficient to check that h(d(k)) ≤ d(k) for each d(k) ∈ {dij}, dij ≤ B. Note that the second term in the min becomes infinitely large as ρ → 1.

[23] Sanjoy K. Baruah, Louis E. Rosier, and Rodney R. Howell. Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor. Real-Time Systems, 2:301–324, 1990.
They provide necessary and sufficient conditions for periodic task sets to be feasible in preemptive schedules. They show that the periodic task feasibility problem is co-NP-complete in the strong sense. They use an EDD algorithm for on-line scheduling for its optimality on single-processor systems. Leung and Merrill [139] proposed an exponential algorithm for the feasibility problem, and proved that this problem is co-NP-hard. Leung and Whitehead [140] gave a pseudo-polynomial algorithm for feasibility in fixed-priority systems. For synchronous sets the sporadic feasibility test of [22] can be used, since the problem is identical. For asynchronous (complete) sets they prove that the feasibility problem is NP-complete in the strong sense. Their main result states that a complete (asynchronous) task system is feasible on one processor iff (i) ρ ≤ 1, and (ii) Σ_{i=1..n} ni(t1, t2)·Ci ≤ (t2 − t1) for all 0 ≤ t1 < t2 ≤ r + 2H, where r := max_i{ri}. Note that, if ρ ≤ 1, c(t) := c(t + H) for all t ≥ r. If a task in τ fails at t2, then there is a deadline-t2 BP that starts at t1; then there is more released workload within [t1, t2) than available time t2 − t1. Computing ni(t1, t2) takes T(n) = Θ(n). For synchronous task sets, we need to check that h(dij) ≤ dij ≤ B, where B := (ρ/(1 − ρ))·max_{i=1..n}{Ti − Di}, ρ < 1. The complexity of the asynchronous-systems problem is co-NP-complete in the strong sense; they reduce 3-SAT to SCP. The implication is that there cannot be a pseudo-polynomial time algorithm for the feasibility of the complete (asynchronous) task set. Indeed, we have to check N = Σ_{i=1..n} ni(d + 2H) deadlines for the feasibility of the system, and when the Ti are relatively prime, N = O(s^n).

[24] I. Bate and Alan Burns. Schedulability analysis of fixed priority real-time systems with offsets. In Proc. 9th Euromicro Workshop on Real-Time Systems, pages 153–160, 11–13 June 1997.
For a number of years, work has been performed in collaboration with industry to establish improved techniques for achieving and proving the system timing constraints. The specific requirements encountered during the course of this work for both uniprocessor and distributed systems indicate a need for an efficient mechanism for handling the timing analysis of task sets which feature offsets. Little research has been performed on this subject. The paper describes a new technique tailored to a set of real world problems so that the results are effective and the complexity is manageable.
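A small sketch of the feasibility test summarized for [22] above: compute the demand function h(t) and verify h(d) ≤ d at every absolute deadline up to the bound B. A synchronous release at time 0 and Di ≤ Ti are assumed, and the bound follows the expression quoted in the annotation; this is illustrative, not a complete reproduction of the paper's analysis.

    import math

    def h(t, C, T, D):
        """Processor demand h(t) in [0, t) as defined in [22]:
        sum over tasks with D_i <= t of (floor((t - D_i)/T_i) + 1) * C_i."""
        return sum((math.floor((t - D[i]) / T[i]) + 1) * C[i]
                   for i in range(len(C)) if t >= D[i])

    def feasible_edf(C, T, D, H):
        """Feasibility check sketched from [22]: rho < 1 and h(d) <= d at every
        absolute deadline d up to B = min(H + max(D), rho/(1-rho) * max(T_i - D_i)).
        H is the hyper-period of the task set."""
        n = len(C)
        rho = sum(C[i] / T[i] for i in range(n))
        if rho >= 1:
            return False
        B = min(H + max(D), rho / (1 - rho) * max(T[i] - D[i] for i in range(n)))
        # Enumerate absolute deadlines d = j*T_i + D_i up to B and test h(d) <= d.
        deadlines = sorted({j * T[i] + D[i]
                            for i in range(n)
                            for j in range(int(B // T[i]) + 1)
                            if j * T[i] + D[i] <= B})
        return all(h(d, C, T, D) <= d for d in deadlines)

    # Example: C = (1, 2), T = (4, 6), D = (3, 5), hyper-period H = 12.
    print(feasible_edf([1, 2], [4, 6], [3, 5], 12))   # True for this lightly loaded set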


[25] Guillem Bernat and Alan Burns. Combining (n, m)-hard deadlines and dual priority scheduling. In Proc. 18th IEEE Real Time Systems Symp., pages 46–57, 2–5 December 1997.
The problem of effectively scheduling soft tasks whilst guaranteeing the behaviour of hard tasks has been addressed in many papers and a large number of techniques have been proposed. The dual priority mechanism is an intuitively simple method with low overheads. A hard task is assigned two priorities. Upon invocation, the task starts executing with a low priority and it is promoted to a high priority at a time that will guarantee that its deadline is met. Soft tasks are assigned medium priorities; they can thus preempt any hard task that is executing before its promotion time. To increase the capacity for soft tasks, and therefore the effectiveness of the real-time system, hard tasks may be assigned an (n, m)-hard (read: n in m) temporal constraint. This implies that the task must meet n deadlines in any m invocations. This paper addresses the combination of such constraints and dual priority scheduling. This approach reduces the gap between dynamic priority and fixed priority scheduling with the goal of reducing the average response time of soft tasks.

[26] Enrico Bini and Giorgio Buttazzo. The space of EDF feasible deadlines. In ECRTS '07: Proceedings of the 19th Euromicro Conference on Real-Time Systems, pages 19–28, Washington, DC, USA, 2007. IEEE Computer Society.

[27] Enrico Bini and Giorgio C. Buttazzo. A hyperbolic bound for the rate-monotonic algorithm. In Proc. 13th Euromicro Conf. on Real-Time Systems, pages 59–66, June 2001.
In this paper we propose a novel schedulability analysis for verifying the feasibility of large periodic task sets under the rate monotonic algorithm, when the exact test cannot be applied on line due to prohibitively long execution times. The proposed test has the same complexity as the original Liu and Layland bound but it is less pessimistic, thus allowing acceptance of task sets that would be rejected using the original approach. The performance of the proposed approach is evaluated with respect to the classical Liu and Layland method, and theoretical bounds are derived as a function of n (the number of tasks) and for the limit case of n tending to infinity. The analysis is also extended to include aperiodic servers and blocking times due to concurrency control protocols. Extensive simulations on synthetic task sets are presented to compare the effectiveness of the proposed test with respect to the Liu and Layland method and the exact response time analysis.
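The hyperbolic bound of [27] is simple enough to sketch directly. The made-up example task set below is accepted by the hyperbolic test but rejected by the Liu and Layland bound, illustrating the reduced pessimism claimed in the abstract.

    def liu_layland_test(U):
        """Classic Liu-and-Layland sufficient test for RM: sum(U) <= n(2^(1/n) - 1)."""
        n = len(U)
        return sum(U) <= n * (2 ** (1.0 / n) - 1)

    def hyperbolic_test(U):
        """Hyperbolic bound of Bini and Buttazzo [27]: product of (U_i + 1) <= 2.
        Same O(n) cost as the Liu-and-Layland test but less pessimistic."""
        prod = 1.0
        for u in U:
            prod *= (u + 1.0)
        return prod <= 2.0

    # Example: rejected by Liu-Layland (0.8 > 0.7798) but accepted by the hyperbolic bound.
    U = [0.5, 0.2, 0.1]
    print(liu_layland_test(U), hyperbolic_test(U))   # False True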


[28] Enrico Bini and Giorgio C. Buttazzo. The space of rate-monotonic schedulability. In Proc. 23rd IEEE Real Time Systems Symp., pages 169–178, December 2002.
Feasibility analysis of fixed priority systems has been widely studied in the real-time literature and several acceptance tests have been proposed to guarantee a set of periodic tasks. They can be divided into two main classes: polynomial time tests and exact tests. Polynomial time tests are used for an online guarantee of dynamic systems, where tasks can be activated at runtime. These tests introduce negligible overhead when executed on a new task arrival, but provide only a sufficient schedulability condition, which may cause poor processor utilization. On the other hand, exact tests, which are based on response time analysis, provide a necessary and sufficient schedulability condition, but are too complex to be executed on line for large task sets. As a consequence, for large task sets, they are often executed offline. This paper proposes a novel approach for analyzing the schedulability of periodic task sets under rate monotonic priority assignment. Using this approach, we derive a new schedulability test which can be tuned through a parameter to balance complexity vs. acceptance ratio, so that it can be used online to better exploit the processor based on available computational power. Extensive simulations show that our test, when used in its exact form, is significantly faster than current response time analysis methods. Moreover, the proposed approach, for its elegance and compactness, offers an explanation of known phenomena of fixed priority scheduling and could be helpful for further work on rate monotonic analysis.

[29] Pamm Binns. Incremental rate-monotonic scheduling for improved control system performance. In Proc. Third IEEE Real-Time Technology and Applications Symposium, pages 80–90, Montreal, Canada, June 9–11 1997.
Application of Thuel's Exact Slack Stealing to provide dynamic guarantees to optional/incremental components. This paper presents an algorithm and its run-time performance for scheduling periodic incremental and design-to-time processes. The algorithm is based on the "slack stealer", which dynamically answers the question of how much execution time is available prior to a deadline, when all periodic processes are scheduled using Rate Monotonic Scheduling. An incremental process asks how much execution time is available after the baseline component has completed and prior to the execution of a process increment. A design-to-time process asks how much execution time is available before the process begins execution and selects a version which gives the greatest precision in the available time. For both incremental and design-to-time processes, a minimum amount of time is statically reserved so that an acceptable but suboptimal solution will always be calculated. We identify and propose a solution for the practical problem of supporting criticalities when scheduling slack, and analyze the run-time overheads of this algorithm.


The analysis is applied to two real-world data sets. In certain cases, the execution time of this algorithm is found to be efficient.

[30] Andrew Birrell. An introduction to programming with threads. Technical Report #35, Digital Equipment Corporation Systems Research Center, 130 Lytton Ave., Palo Alto, CA, 1996.

[31] Jacek Blazewicz. Selected Topics in Scheduling Theory, chapter 1, pages 1–60. Volume 132 of Martello et al. [148], July 8–19 1987.
A survey.

[32] Jacek Blazewicz, Klaus H. Ecker, Erwin Pesch, Günter Schmidt, and Jan Węglarz. Scheduling Computer and Manufacturing Processes. Springer-Verlag, Heidelberg, 1996.
A good reference book for most important aspects of scheduling.

[33] R. D. Blumofe and D. S. Park. Scheduling large-scale parallel computations on networks of workstations. In Proc. of 3rd IEEE Int'l Symp. on High Performance Distributed Computing, pages 96–105, San Francisco, CA, 1994.

[34] G. Bollela and K. Jeffay. Support for real-time computing within general purpose operating systems: Supporting co-resident operating systems. In Proc. of IEEE Real-Time Technology and Applications Symp., pages 4–14, Chicago, IL, May 1995.

[35] Alan Burns. Scheduling hard real-time systems: A review. Software Engineering Journal, 6(3):116–128, May 1991.
Results of the application of scheduling theory to hard real-time systems are reviewed in this paper. The review takes the form of an analysis of the problems presented by different application requirements and characteristics. Issues covered include uniprocessor and multiprocessor systems, periodic and aperiodic processes, static and dynamic algorithms, transient overloads and resource usage. Protocols that limit and reduce blocking are discussed. Consideration is also given to scheduling Ada tasks. Overview of fixed-priority RT scheduling as of 1991.

[36] Alan Burns. Preemptive priority based scheduling: An appropriate engineering approach. Technical Report YCS-93-214, Dept. of Computer Science, University of York, UK, 1993.
Precursor of burn95:SPE and the article in Son's Advances in Real-Time Systems.


[37] Alan Burns. Preemptive priority-based scheduling: An appropriate engineering approach. In Sang H. Son, editor, Advances in Real-Time Systems, chapter 10, pages 225–248. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1995.
The FPS engineering analysis.

[38] Alan Burns, Ken Tindell, and Andy J. Wellings. Effective analysis for engineering real-time fixed priority schedulers. IEEE Trans. on Software Engineering, 21(5):475–480, May 1995.
A lean version of the burn95:SPE article.

[39] Alan Burns and Andy J. Wellings. Criticality and utility in the next generation. Journal of Real-Time Systems, 3(4):351–354, 1991.
Some ruminations on future generation real-time systems.

[40] Alan Burns and Andy J. Wellings. Engineering a hard real-time system: From theory to practice. Software–Practice and Experience, 25(7):705–726, July 1995.
A very good presentation of the techniques for the schedulability analysis of practical FPS-based executives. All the techniques are based on the critical instance of the tasks.

[41] Giorgio Buttazzo and Anton Cervin. Comparative assessment and evaluation of jitter control methods. In Proc. 15th International Conference on Real-Time and Network Systems, Nancy, France, March 2007.

[42] Giorgio C. Buttazzo and Marco Caccamo. Minimizing aperiodic response times in a firm real-time environment. IEEE Trans. on Software Engineering, 25(1):22–32, January/February 1999.
EDD-based scheduling. The Total Bandwidth Server is used for aperiodic requests, with FIFO aperiodic scheduling, to maximize the responsiveness of the system. Periodic tasks can skip instances. TBS is pessimistic, and as periodic and aperiodic tasks finish earlier, they need to dynamically compute the true amount of interference from periodic tasks on the aperiodic server's schedule. They use TBS to generate the initial tentative deadline d_k^0 and subsequently try to find the shortest possible deadline to assign to Jak so that its response time is minimized. They set d_ak^(j+1) = t + Cak + Ip(t, d_ak^(j)), where Ip(t, d) = Ia(t, d) + If(t, d) is the total interference from currently active periodics at t and If(t, d) is the interference from periodics with t ≤ rij. Time T(n) = O(N·n), where N is the total number of corrective steps for the deadline, for one outstanding aperiodic only; N = O(n + (Σ_{i=1..n} Ci + Uτ·Cak)/(1 − Uτ)·Σ_{i=1..n} 1/Ti).


There may be a problem in their on-line guarantee algorithm, as they do not include processor time demanded by periodics τij released within [rk, dk) with dij > dk, which may need to use some spare capacity within [rk, dk), now taken away by Ja. It combines methods from [44, 48].

[43] Giorgio C. Buttazzo and Giuseppe Lipari. Scheduling analysis of hybrid real-time task sets. In Proc. of the Ninth Euromicro Workshop on Real-Time Systems, pages 200–206, 11–13 June 1997.
EDD-based scheduling; hard periodic, sporadic and soft aperiodic tasks sharing resources using the Stack Resource Policy (SRP). They propose the use of the Total Bandwidth Server in order to bound the capacity allocated to aperiodic tasks. They present an analysis for static schedulability of EDF+SRP+TBS: Σ_{i=1..n} Ci/Di + Bi/Di + Us ≤ 1.

[44] Giorgio C. Buttazzo and Fabrizio Sensini. Optimal deadline assignment for scheduling soft aperiodic tasks in hard real-time environments. In Proc. of the Third IEEE Int'l Conf. on Engineering of Complex Computing Systems, pages 39–48, Como, Italy, December 1997.
This paper shows how a new, earlier deadline, with respect to the one chosen by the Total Bandwidth Server, can be computed. They attempt to calculate the interference by the periodic tasks within [rk, dk). They proceed iteratively, setting deadline d_k^l and then computing the finish time f_k^l, given that deadline, until d_k^(l+1) = d_k^l. Time complexity T(n) = O(N·n), with N corrective steps, for one aperiodic task; N = O(n + (Σ_{i=1..n} Ci + Uτ·Cak)/(1 − Uτ)·Σ_{i=1..n} 1/Ti). Tasks are serviced in FIFO order.

[45] Giorgio C. Buttazzo, Marco Spuri, and Fabrizio Sensini. Value vs. deadline scheduling in overload conditions. In Proc. 16th IEEE Real-Time Systems Symp., pages 90–99, December 1995.
This is a comparative study among scheduling algorithms which use different priority assignments and different guarantee mechanisms to improve the performance of a real-time system during overload conditions. Tasks are characterized not only by a deadline, but also by an importance value. The performance of the scheduling algorithm is evaluated by computing the cumulative value gained on a task set, i.e., the sum of the values of those tasks that completed by their deadline. Purpose: first, to discover which priority assignment is able to achieve the best performance in overload conditions; second, to understand how the pessimistic assumptions made in the guarantee test affect the performance of the scheduling algorithms, and how much a reclaiming mechanism can compensate for this degradation.


Simulation results show that, without any admission control, value-density scheduling performs best. Simple admission control based on worst-case estimates of the load worsens the performance of all value-based algorithms. EDF scheduling performs best if admission control is used along with a reclaiming mechanism that takes advantage of early completions. Finally, scheduling by deadline before overload and by value during overload works best in most practical conditions. TBS + guarantees is potentially very pessimistic. They consider (1) plain EDF; (2) HVF: Highest-Value First, among aperiodics, max{vi}; (3) HDF: Highest-Density First, max{vi/Ci}; (4) a mixed rule, priority(Jk) := α·uk + (1 − α)·dk. Under underload EDF maximizes the cumulative value, whereas under overload HDF tends to maximize it. In terms of pessimism, guarantee conditions are pessimistic and always tend to reduce the total value. However, robust strategies with reclaiming of unused time and a second chance for rejected tasks perform very well in all situations, as expected.

[46] Giorgio C. Buttazzo and J. A. Stankovic. RED: Robust earliest deadline scheduling. Technical Report UM–CS–1993–025, Comp. Science Dep't, Univ. of Massachusetts, Amherst, July 1993.
EDD-based scheduling of hard sporadic tasks. They maintain the active sporadics J = {J1, ..., Jm} in deadline order, along with the cumulative residual times Ri at their deadlines di, Ri := Ri−1 + (di − di−1) − ci. They perform dynamic admission control by checking the spare capacity available for the new task Jk; therefore time T(N) = O(N), where N is the number of tasks in the pending queue. Tasks have an intrinsic value associated, so a rejection policy can be developed. Rejection policies: (1) the newcomer, (2) the task with least value, (3) all the least-value tasks necessary to guarantee the task with the highest value.

[47] Giorgio C. Buttazzo and J. A. Stankovic. Adding robustness in dynamic preemptive scheduling. In D. S. Fussell and M. Malek, editors, Responsive Computer Systems: Steps Toward Fault-Tolerant Real-Time Systems. Kluwer Academic Publishers, Boston, MA, 1995.
The same as [46].

[48] Marco Caccamo and Giorgio C. Buttazzo. Exploiting skips in periodic tasks for enhancing the aperiodic responsiveness. In Proc. 18th IEEE Real-Time Systems Symp., pages 330–339, San Francisco, CA, December 1997.
EDD-based scheduling of firm (skippable) periodic and soft aperiodic tasks. Schedulability analysis based on the synchronous BP. They expand TBS to include task-skip calculations for shorter aperiodic task response times.


The skip parameter si is the minimum distance between two consecutive skips. Di = Ti for all i. The aperiodic tasks are serviced in FIFO order. Their objective is to minimize the response time of aperiodics. They introduce a processor-time reclamation algorithm for the case where aperiodics finish earlier, either due to needing less time than their WCET and/or because they exploited skips. They use a fixed-point iteration algorithm to locate the point t* where (t* − t) ≥ the total processor demand, and set the new deadline dk = t*.

[49] Rosa Castañé, Pau Martí, Manel Velasco, Anton Cervin, and Dan Henriksson. Resource management for control tasks based on the transient dynamics of closed-loop systems. In Proceedings of the 18th Euromicro Conference on Real-Time Systems, Dresden, Germany, July 2006.

[50] Anton Cervin and Johan Eker. Control-scheduling codesign of real-time systems: The control server approach. Journal of Embedded Computing, 1(2):209–224, 2005.

[51] Jianer Chen and Chung-Yee Lee. General multiprocessor task scheduling. Naval Research Logistics, 46:57–74, 1999.
Multiprocessor scheduling when a job can or must be processed by multiple processors. The objective is to minimize the completion time of all jobs. They present a pseudo-polynomial time algorithm for the 2-machine problem and a heuristic for the 3-machine problem.

[52] Min-Ih Chen and Kwei-Jay Lin. A priority ceiling protocol for multiple-instance resources. In Proc. 12th IEEE Real-Time Systems Symp., pages 140–149, 4–6 December 1991.

[53] Housine Chetto and Maryline Chetto. An adaptive scheduling algorithm for fault-tolerant real-time systems. Software Engineering Journal, 6(3):93–100, May 1991.
Periodic tasks have primaries and alternates. The algorithm attempts to maximize the number of guaranteed primaries.

[54] Housine Chetto, Maryline Silly, and T. Bouchentouf. Dynamic scheduling of real-time tasks under precedence constraints. The Journal of Real-Time Systems, 2:181–194, 1990.
Hard periodic and aperiodic tasks with precedence constraints. Uses EDL to allocate spare capacity to hard "sporadic" (really aperiodic) and fault-recovery tasks. Admission control decision for one task group with m precedence-related firm tasks: T(n) = O(N^2), where N is the total number of periodic jobs and aperiodic tasks in [d'_ak, d* + H), d'_ak := min_{j=1..m}{d_ak^j}, and d* is the latest deadline of all pending firm tasks.
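The refinements in [42], [44] and [48] above all start from the plain Total Bandwidth Server deadline-assignment rule. A minimal sketch of that baseline rule is given below (standard TBS, not the authors' refined iterations); the bandwidth and request values in the example are made up.

    class TotalBandwidthServer:
        """Baseline Total Bandwidth Server (TBS) deadline assignment, the starting
        point refined by [42, 44]: each aperiodic request receives the deadline
        d_k = max(r_k, d_{k-1}) + C_k / U_s, where U_s is the server bandwidth."""

        def __init__(self, U_s):
            self.U_s = U_s            # processor bandwidth reserved for aperiodic requests
            self.last_deadline = 0.0  # deadline assigned to the previous request

        def assign_deadline(self, r_k, C_k):
            d_k = max(r_k, self.last_deadline) + C_k / self.U_s
            self.last_deadline = d_k
            return d_k

    # Example: server bandwidth 0.25; two requests of 1 time unit each at t = 0 and t = 2.
    tbs = TotalBandwidthServer(0.25)
    print(tbs.assign_deadline(0.0, 1.0))   # 4.0
    print(tbs.assign_deadline(2.0, 1.0))   # 8.0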


[55] Houssine Chetto and Maryline Chetto. Some results of the earliest deadline scheduling algorithm. IEEE Trans. on Software Engineering, 15(10):1261–1269, October 1989.
The definition of EDE and EDL. Storage S(n) = O(N), time T(n) = O(N), where N is the total number of periodic jobs in one hyper-period. Only one firm aperiodic may be present for this method to guarantee it.

[56] W. A. Christopher, S. J. Procter, and T. E. Anderson. The Nachos instructional operating system. In Proc. of 1993 Winter USENIX Conf., pages 479–488, San Antonio, TX, January 1993.

[57] Jen-Yao Chung and Jane W. S. Liu. Algorithms for scheduling periodic jobs to minimize average error. In Proc. 9th IEEE Real Time Systems Symp., pages 142–151, 6–8 December 1988.
Several preemptive, priority-driven algorithms for scheduling periodic jobs on systems that support imprecise computations are described and evaluated. The algorithms are designed to keep the average error in the results produced over consecutive periods small. The approach taken here is to consider each task as consisting of two parts: a mandatory part that must be completed in order for the task to produce an acceptable result, and an optional part that refines the result produced by the mandatory part to reduce the error in the result. The mandatory parts of all tasks have hard deadlines; the rate-monotone algorithm is used to schedule them to meet all deadlines. The optional parts have soft deadlines; different algorithms are used to schedule the optional parts to minimize the average error. The performance of these algorithms is evaluated in terms of the average error over all jobs as a function of their total utilization factor.

[58] Jen-Yao Chung and Jane W. S. Liu. Performance of algorithms for scheduling periodic jobs to avoid timing faults. In Proc. of the 22nd Annual Hawaii International Conference on System Sciences, Vol. II, pages 683–692, 3–6 January 1989.
The authors describe and evaluate a class of heuristic algorithms, called length-monotone algorithms, for scheduling periodic jobs on systems that support imprecise computations. The algorithms are designed to keep the cumulative error in the results produced over a number of consecutive periods below a threshold. The approach taken is to consider each task as consisting of a mandatory part followed by an optional part. The mandatory part must be completed before the deadline of the task for the task to produce an acceptable result. The optional part must be completed occasionally to keep the cumulative error from exceeding an upper limit. The rate-monotone algorithm is used to schedule the mandatory parts to meet all deadlines. Different algorithms are used to schedule the optional parts.


The performance of these algorithms is evaluated, and the schedulability criteria for jobs with the same repetition period and simply periodic jobs are discussed.

[59] Jen-Yao Chung and Jane W. S. Liu. Scheduling periodic jobs that allow imprecise results. IEEE Trans. on Computers, 39(9):1156–1174, September 1990.

[60] T. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Electrical Engineering and Computer Science Series. MIT Press/McGraw-Hill, New York, 1990.
The classic book on algorithms and data structures.

[61] R. I. Davis. Approximate slack stealing algorithms for fixed priority pre-emptive systems. Technical Report YCS–93–216, Comp. Science Dep't, Univ. of York, UK, 1993.
Optimal DSS or SSS algorithms offer significant performance improvements over previous bandwidth-preserving algorithms, such as the extended priority exchange or sporadic servers. There are several practical problems in applying either the static or the dynamic versions of these algorithms. Scheduling of soft aperiodics with the requirement to minimize their response times. Three approximate dynamic slack stealing (ADSS) algorithms. The slack is dynamically estimated at time t for a particular priority level i and until the deadline di(t) := d_{i,ni(t)} − t of a particular job τij, namely zi(t) = di(t) − (W̃_{i−1}(di(t)) − W̃_{i−1}(t) + (Ci − ξi(t))). The exact method uses fixed-point iteration to find the remaining length of a level i−1 BP and the current job τij. The algorithm accumulates the lengths Si(t) of the level i idle periods (IPs), up until time t + di(t). The final value Si^max(t) is the total slack that job τij can tolerate within the same interval. They propose a generic SS algorithm for soft aperiodic tasks, as follows. Define Si(t) as a lower bound on the slack in [t, t + di(t)). If at t τk is the active task with the highest priority, then the available slack at t is Smin(t) := min_{i∈lp(k)}{Si(t)}. If Smin(t) > 0, then a soft aperiodic task can execute at priority k instead of τk, for Smin(t). This slack formulation requires continuous maintenance, at each scheduling point, for all the tasks in the system. At the end t′ of the contiguous execution of each task, slack must be maintained with respect to the current time t < t′. The maintenance at level i assumes that all tasks in hp(i) are processed according to the FPF schedule. When this is not the case the slack becomes more pessimistic. For exact slack at level i we need to recalculate it when each τij completes. When the set contains sporadic tasks, slack recalculation at all levels has to take place at each scheduler clock cycle. They propose a static and a dynamic approximation to the exact method. (1) The static method basically pre-calculates the interference upon lower-priority tasks within the critical instance of the schedule, i.e., [0, L), where L is the length of the synchronous BP. This is overly pessimistic.


(2) The dynamic method calculates a lower bound on Si(t), based on a modification of the sufficient (but not necessary) schedulability test by Audsley. The lower bound on the idle time Si(t) within [t, t + di(t)) is obtained by the same method as detailed in [64], inheriting its limitations and drawbacks. They present three algorithms for slack approximation, as follows. (1) Static Approximate SS (SASS), which at the completion time t of τij looks up the static table for Si[di(t)]. (2) Dynamic Approximate SS (DASS), which calculates the level i slack each time τij completes, using the dynamic approximate method. (3) Periodic ASS (PASS), which uses SASS at τij completions and the dynamic slack calculation (DASS) at all levels periodically, in order to improve the estimates. With sporadic tasks, slack recalculation should take place at every clock cycle, something that is clearly impossible in practice. PASS is a compromise to this effect. For soft aperiodic processing, at each time the system allows J to run for the entire duration of Smin(t), and then the next periodic continues. At its completion, slack is recalculated or looked up, and J is given its chance again. PASS requires Θ(n^2) for all n levels and DASS Θ(n) at the completion of each job τij. In general this method maintains the slack Si(t) continuously for all i. At the completion of τij it has to recompute Si(t), and at every context switch and admission decision time it has to adjust all O(n + m) slack variables affected. Total time in every hyper-period is TH(n) = O(N + M·(n + O(m))), where M is the total number of aperiodic tasks in one H.

[62] R. I. Davis. Scheduling slack time in fixed priority pre-emptive systems. Technical Report YCS–93–217, Comp. Science Dep't, Univ. of York, UK, 1993.
It discusses the exact dynamic slack computation method for Si(t) := Z^F_{i−1}(d_{i,ni(t)}) − Z^F_{i−1}(t). See also [61]. They propose a fixed-point iteration algorithm, based on Joseph and Pandya's method [113], to compute Si(t), which is the sum of the level i−1 IPs within [t, t + di(t)), when the jobs τkj, k = 1, 2, ..., i − 1 are scheduled under the non-idling FPF schedule. The complexity of the dynamic slack calculation is pseudo-polynomial in n, and for each level 1 ≤ i ≤ n, T_EDSS(i) = O(Gi·i), where the linear constant Gi is the number of iterations and depends on the parameters of the tasks τk, k = 1, 2, ..., i. Essentially it is lower-bounded by Ω(total number of level i BPs) within [t, t + di(t)). Gi thus depends on the number of BPs, which can be as high as Θ(Σ_{k=1..i−1}(nk(t + di(t)) − nk(t))) for "sparse" schedules where each τkj makes up a single BP. For BPs with more than one job, the number of iterations is at most that much. For all n levels the complexity is O(Gn·n^2). Their basic objective is to use the EDSS to help schedule soft aperiodic tasks as early as possible.


Their generic algorithm proceeds as follows. Whenever there are soft tasks pending at t, let k be the highest-priority ready periodic; then allow a soft task to run for at most Smin(t) := min_{j∈lp(k)∪{k}}{Sj(t)}. After that time, dispatch by highest-priority next (HPN). At each job τlm completion time flm, recalculate the slack Sl(flm) for that task and proceed with HPN. The algorithm would require that slack recalculation takes place at every time increment. Slack can be calculated for arbitrary times t′ ∈ [t, di(t)) with the slack maintenance rule for consistency, as long as no task finishes within [t, t′). (1) Slack consumption: if the processor in [t, t′) is serving soft tasks or is idle, slack is consumed at all n + m levels, Sj^max −= (t′ − t), j ∈ lp(1). If it is serving a hard periodic τi, then slack is consumed at all priorities higher than i, Sj^max −= (t′ − t), j ∈ hp(i). (2) Task τi completion: recalculate the slack Si^max(t). (3) Sporadic task gain time: when a sporadic arrives at rsk > rs,k−1 + Tsk we have to recompute the slack at its own and all lower levels, at each time step within this interval. This is impractical in reality. We note that the slack computation and maintenance at a level i assumes that the schedule of all tasks j ∈ hp(i) is under the FPF. Slack maintenance must also take place at each task-switching time to account for the partial execution of the preempted tasks. There are Ω(N) slack maintenance operations within each H. They also include some discussion on accounting for release jitter, arbitrary deadlines, and shared resources among hard and soft tasks. This method is used in the notorious [66].

[63] R. I. Davis. Dual priority scheduling: A means of providing flexibility in hard real-time systems. Technical Report YCS–94–230, Comp. Science Dep't, Univ. of York, UK, 1994.
Proposes to give the hard periodic tasks two priorities, the low and the high. While there are pending aperiodic tasks and there is still slack, they execute at their low priority, so they give precedence to firm or soft aperiodics. When a threshold is reached they switch to their high-priority mode, taking precedence over the other tasks. This is an interesting alternative to the various SS methods that toil around trying to find some snippet of idle capacity. In a periodic task schedule there is always spare capacity, due to guaranteed tasks not taking their WCET, sporadic tasks not arriving at their maximum rates, and speed-ups from hardware features such as processor pipelining or cache memories. Several techniques have been developed that increase system utility by employing the spare capacity. These include milestone methods, sieve functions, multiple versions, and approximate algorithms. Tasks fall into: (1) Mandatory hard: must be a priori guaranteed and have bounded and known WCET. An off-line feasibility test is needed. (2) Alternate hard: they can be guaranteed on-line and execute instead of the mandatory, with parameters that become known dynamically.


An on-line guarantee test is needed. (3) Alternate/optional firm: deadlines are firm but tasks are not critical, so a stochastic/statistical guarantee is acceptable. An on-line test that can afford stochastic guarantees. (4) Optional soft: usually need to execute as soon as possible. Responsive dynamic scheduling. On-line guarantee tests: (1) DPS: [55, 177] have pseudo-polynomial complexity O(N). Schwan's method requires that all periodic and non-periodic tasks undergo dynamic schedulability testing continuously. EDD-based tests under overload cannot discriminate between critical and non-critical tasks. (2) FPS: Thuel's [168], based on SSS [130], requires pseudo-polynomial time [169] and space complexity Θ(N). The high time and space complexity makes these approaches unsuitable for many practical environments. This paper proposes a pessimistic admission control test with time O(n). Priorities are high, middle and low. Critical tasks get a high and a low priority. Firm and soft tasks get middle ones. Every task has a fixed promotion time ui at which its priority is promoted to the high band. Initially, task τi executes at the low priority band, and at rij + ui its priority is promoted to the high one. The analysis is a critical-instance worst-case scenario, with all tasks promoted at the same time t = 0. The analysis assumes that the schedule of all tasks j ∈ hp(i) is under the FPF, with the release times adjusted so that they have a common delayed release time. They employ fixed-point iteration of a recurrence relation based on Joseph and Pandya's work in [113], as follows. (1) Basic analysis: w_i^(m+1) = Ci + Σ_{j∈hp(i)} ⌈w_i^m/Tj⌉·Cj, which terminates when w_i^(m+1) = w_i^m or w_i^(m+1) > (Di − ui), where w_i^0 = 0. The response time for τi is set to Ri := ui + w_i^m. (2) Synchronization based on the Immediate Priority Ceiling Protocol (IPCP). Priorities are assigned with respect to the high band. The worst case is when a task j ∈ lp(i) locks a semaphore immediately before τi is promoted at ui. Then, w_i^(m+1) = Bi + Ci + Σ_{j∈hp(i)} ⌈w_i^m/Tj⌉·Cj, where Bi is the maximum blocking time of any task j ∈ lp(i). Then, Ri := ui + w_i^m. (3) Release jitter is the latency between the release of a task and its start time on the processor. Maximum interference by tasks j ∈ hp(i) takes place when j starts at ui, that is, w_i^(m+1) = Bi + Ci + Σ_{j∈hp(i)} ⌈(w_i^m + Jj)/Tj⌉·Cj, and the iteration terminates when w_i^(m+1) = w_i^m or w_i^(m+1) > (Di − ui − Ji), with Ri := ui + w_i^m + Ji. (4) Arbitrary deadlines, Di > Ti: in the worst case several jobs τik may be active at the same time.

[64] R. I. Davis. Guaranteeing X in Y: On-line acceptance for hard aperiodic tasks scheduled by the slack stealing algorithm. Technical Report YCS–94–231, Comp. Science Dep't, Univ. of York, UK, 1994.
This TR reports the errors in Thuel's ACC algorithms, and provides sufficient conditions for the ACC of hard-deadline aperiodic tasks based on Thuel's precomputed information.

[64] R. I. Davis. Guaranteeing X in Y: On-line acceptance for hard aperiodic tasks scheduled by the slack stealing algorithm. Technical Report YCS–94–231, Comp. Science Dep't, Univ. of York, UK, 1994. This TR reports the errors in Thuel's ACC algorithms, and provides sufficient conditions for the ACC of hard-deadline aperiodic tasks based on Thuel's precomputed information. The method reported here is the same as the approximate DASS in [61]. Here it is used to guarantee the schedulability of incoming hard (firm) aperiodic tasks Ja. Their method requires slack maintenance at each scheduling point (context switch), costing Θ(n+m), which is not accounted for as overhead and, thus, may render Ja unschedulable in reality. To reduce the decision overhead per admission, they resort to the heuristic that Ja will receive priority(τ_{(k−1)}) < priority(Ja) < priority(J_{(k)}), and they test only at this priority level. When Ja has d_a > d_{(n)} the method becomes only as effective as background execution. The sufficient test, upon arrival of Ja, proceeds as follows. (1) It determines a target priority level commensurate with its deadline d_a: all periodic tasks and incomplete aperiodic ones (n + m + 1 in total) are deadline ordered, and if the order of Ja is k it is given the k-th highest priority (k), with time overhead Θ(log(n + m)) per AC decision. (2) The AC then determines whether Ja has sufficient capacity to be scheduled at priority level k. It computes a pessimistic upper bound on the interference I_j(r_a, d_a) caused by each task τ_j ∈ hp(k) (i.e., priority levels less than k), by I_j(r_a, d_a) = c̄_j(t) + f_j(r_a, d_a)·C_j + min{C_j, [d_a − r_{j,n_j(r_a)+1} − f_j(r_a, d_a)·T_j]_0}, where f_j(r_a, d_a) = [⌊(d_a − r_{j,n_j(r_a)+1}) / T_j⌋]^+. A lower bound on the idle capacity is then Z_{k−1}(r_a, d_a) = [D_a − Σ_{j∈hp(k)} I_j(r_a, d_a)]^+ − Σ_{j∈Ahp(k)} c̄_{aj}, which is a pessimistic bound. (3) It then proceeds to verify the schedulability of the last n + m − k + 1 tasks in deadline order, i.e., k ∪ (l ∈ lp(k)): that is, ∀j = k, ..., n + m + 1, S_{(j)}(t) ≥ C_a, where S_{(j)}(t) is the slack of the j-th task ordered by deadline, including both periodic and pending previously guaranteed aperiodic tasks. If all n + m − k + 1 conditions are met, Ja is admitted. (4) Upon admission of Ja the slack variables S_{(i)}(t), i = k, k + 1, ..., n + m + 1, are updated to reflect the new availability of slack at time t = r_a, as follows: set S_{(k)}(r_a) = Z_{k−1}(r_a, d_a) − C_a, and ∀j = k + 1, k + 2, ..., n + m + 1, S_{(j)}(r_a) = S_{(j)}(r_a) − C_a. The total time for an AC decision for a new task, with m previously guaranteed aperiodic tasks pending, is T_ac(n, m) = Θ(log(n + m)) + Θ(n + m). Additional overhead O(n + m) is incurred at each context switching time t, in order to update the n + m slack variables S_i(t). Storage is S(n, m) = Θ(2N + m).
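The skeleton of this sufficient test (steps (1), (3) and (4)) can be sketched as follows; the interference bound I_j and the resulting value Z_{k−1} are taken as given inputs, and the list layout is an assumption made for illustration, not the authors' data structures.

    # A schematic sketch of the sufficient acceptance test described above, assuming the
    # caller supplies the pessimistic idle-capacity bound Z = Z_{k-1}(r_a, d_a) and the
    # current slack values of the deadline-ordered periodic and guaranteed aperiodic tasks.
    import bisect

    def admit(Ca, da, deadlines, slack, Z):
        """deadlines: absolute deadlines of the n+m active tasks, sorted ascending.
        slack: slack value S_(j) of each of those tasks, same order.
        Returns the updated slack list (with Ja inserted at position k) or None on rejection."""
        k = bisect.bisect_left(deadlines, da)       # target priority level of Ja
        if Z < Ca:                                  # not enough bounded idle capacity above level k
            return None
        if any(s < Ca for s in slack[k:]):          # every equal-or-lower level must retain >= Ca slack
            return None
        # Admission: insert Ja and charge its execution against the affected slack variables.
        new_slack = slack[:k] + [Z - Ca] + [s - Ca for s in slack[k:]]
        deadlines.insert(k, da)
        return new_slack

    # Example with three deadline-ordered tasks:
    print(admit(Ca=2, da=17, deadlines=[10, 20, 30], slack=[4.0, 5.0, 6.0], Z=3.0))
    # -> [4.0, 1.0, 3.0, 4.0]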

The method has several drawbacks. (1) It is inexact, as it derives only lower bounds on the idle capacity; aperiodic tasks that could be schedulable may have to be rejected. (2) It preselects a specific priority level based on d_a and not on the actual idle capacity at that level; checking at all priority levels would raise the complexity to O((n + m)^2) per AC decision. They do not discuss the very legitimate case of a task having deadline D_a > D_i, ∀i, in which case their method behaves like background service, and they do not provide a specific algorithm to guarantee the task. (3) There is a large amount of overhead, O(n + m), incurred at each context switching time to update the slack variables, which is not accounted for. These continuous updates remove valuable computation time from the idle capacity, rendering a schedule with "guaranteed" tasks infeasible; there is no mention of this problem. (4) It needs twice as much storage space, 2N + m, as the exact methods, yet it still has more overhead and is inexact. (5) It does not utilize all the available idle capacity; they would have had to extend their scheme to search forward for all slack pockets within the schedule, s := min_i {S_i(t)}, and then subsequently re-guarantee the remaining computation c̄_a(t) − s. (6) They mention that if S_i(t) > 0, ∀i, soft aperiodics can run at the highest priority, but they account neither for the overhead of computing how long the soft task may use the highest priority, nor for the overhead of determining subsequent slack pockets at all levels. (7) They do not explain how the initial slack values S_i(t_0) are computed, presumably by Thuel's static slack stealing.
[65] R. I. Davis. Integrating best-effort policies into hard real-time systems based on fixed priority, pre-emptive scheduling. Technical Report YCS–94–240, Comp. Science Dep't, Univ. of York, UK, 1994. Best-effort policies attempt to maximize system utility, unlike real-time scheduling which attempts to guarantee timely completion of time-constrained tasks. He proposes heuristic algorithms that admit firm, optional, utility-enhancing components along with the off-line guaranteed RT tasks. Locke's heuristics: (1) Value Density (VD), where VD_i := v_i / C_i (value over execution time), executes the task with the highest VD first; (2) Best Effort (BE): admit the tasks with the highest VD and execute them in EDD order, where a task with a high probability of missing its deadline and the smallest VD is removed from the ready queue and put back in the pending queue. The BE algorithm was shown to incur very high overheads, especially under overload conditions where it is called upon to decide which task to reject. BE strategies cannot guarantee timing constraints. He discusses three basic scheduling modes; for admission decisions they use the approximate SS by Davis. (1) FIFO: check slack to admit an optional task; serve admitted tasks in FIFO order; time O(n + m). (2) Best Effort: tasks are weighted by value, normalized by their execution times; rescind the guarantee from any optional task l with VD_l < VD_a and move J_l to the pending queue PQ; for all J ∈ PQ do admission control and move them to the ready queue RQ if they can be admitted; time O(m(n + m)). (3) Adaptive Value-Density Threshold: admit a task Ja with VD_a ≥ W, where W is the "rate" of utility obtained by the system over the time spent obtaining it; otherwise Ja is rejected. The utility rate is a moving average over a window of time determined heuristically. The spare time is taken either from the background capacity or via SS. Slack allocation combined with utility-cognizant admission is, as expected, the best combination. They did not include the overhead of the actual decision making and how much it affects the derived system's utility.
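An illustrative sketch of the adaptive value-density threshold policy follows; the moving-average window and the exact definition of the utility rate W are assumptions made for illustration, since the report only states that W is a heuristically determined moving average.

    # Admit an optional task only if its value density reaches the recent "utility rate" W,
    # maintained here as a simple moving average over completed work.
    from collections import deque

    class VDThreshold:
        def __init__(self, window=10):
            self.history = deque(maxlen=window)     # (utility gained, time spent) samples

        def utility_rate(self):
            v = sum(u for u, _ in self.history)
            t = sum(dt for _, dt in self.history)
            return v / t if t > 0 else 0.0

        def admit(self, value, exec_time):
            """Admit the optional task iff its value density >= the current threshold W."""
            return (value / exec_time) >= self.utility_rate()

        def record(self, value, exec_time):
            """Record a completed task so that W tracks the recent utility rate."""
            self.history.append((value, exec_time))

    policy = VDThreshold()
    policy.record(value=10.0, exec_time=2.0)        # recent rate W = 5.0
    print(policy.admit(value=12.0, exec_time=3.0))  # density 4.0 < 5.0 -> False (rejected)
    print(policy.admit(value=11.0, exec_time=2.0))  # density 5.5 >= 5.0 -> True (admitted)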

[66] R. I. Davis, K. W. Tindell, and A. Burns. Scheduling slack-time in fixed priority preemptive systems. In Proc. 14th IEEE Real Time Systems Symp., pages 222–231, Raleigh-Durham, NC, December 1993. Dynamic exact slack stealing for "faster" response times to soft aperiodic tasks. This paper is based on [62]. Extremely high computation overhead: at each periodic task completion they have to recompute the slack for that task, T(i) = O(G_i · i), where G_i is a function of the number of busy periods (BPs) within [t, t + d_i(t)). The slack of every task instance has to be recomputed at the completion of that instance. Between task completions they use slack maintenance for all n+m priority levels. At each context switch they have to maintain the slack of the active instances. They propose periodic slack recalculation at every level in order to mitigate the computation overhead.
[67] R. I. Davis, K. W. Tindell, and A. Burns. Mechanisms for enhancing the flexibility and utility of hard real-time systems. In Proc. 15th IEEE Real Time Systems Symp., pages 12–21, San Juan, Puerto Rico, December 1994. Introduces fixed-priority based mechanisms for the on-line guarantee of optional computation. They assume the optional component has the same deadline as a mainline periodic task. The priority choice is pre-determined and an approximate slack test is used, based on [61, 64]. They thereby avoid the problem of testing at multiple priorities and of a target deadline d > D_i, ∀i. The system can maintain the "gain time" g_i(t) for job τ_{ij} and allocate it dynamically to the optional component of this job. Continuous slack maintenance incurs high overheads. Time complexity of the method with n periodic and m aperiodic active tasks: (1) task admission decision, Θ(n + m) for checking and maintenance; (2) upon completion of τ_i, time Θ(i + i_m) = O(n + m) for slack recalculation at level i; (3) at every context switch off task τ_i, time Θ(|hp(i)|) + O(|lp(i)|) = O(n + m) for slack and gain-time maintenance; (4) at the end of an idle period (IP), Θ(n + m). If "exact" SS is used, slack re-computation at the completion of a task takes O(G_i · i), where G_i is O(number of BPs in the interval), which is pseudo-polynomial time.
[68] Robert Davis and Andy Wellings. Dual priority scheduling. In Proc. 16th IEEE Real Time Systems Symp., pages 100–109, 5–7 December 1995. In this paper, we present a new strategy for scheduling tasks with soft deadlines in real-time systems containing periodic, sporadic and adaptive tasks with hard deadlines. In such systems, much of the spare capacity present is due to sporadic and adaptive tasks not arriving at their maximum rate. Offline methods of identifying spare capacity such as the Deferrable Server or Priority Exchange Algorithm are unable to make this spare capacity available as anything other than a background service opportunity for soft tasks.

Further, more recent methods such as dynamic Slack Stealing require computationally expensive re-evaluation of the available slack in order to reclaim such spare capacity. By comparison, the Dual Priority approach presented in this paper provides an efficient and effective means of scheduling soft tasks in this case. See [63].
[69] Robert I. Davis, Alan Burns, and W. Walker. Guaranteeing timing constraints under shortest remaining processing time scheduling. In Proc. 9th Euromicro Workshop on Real-Time Systems, pages 88–93, 11–13 June 1997. The scheduling scheme "shortest remaining processing time" (SRPT) has the advantage that it minimises mean response times. We present feasibility tests for SRPT that will enable this scheduling approach to be used for real-time systems. Examples are given of task sets that are schedulable under SRPT but not by fixed-priority based scheduling.
[70] M. A. H. Dempster, J. K. Lenstra, and A. H. G. Rinnooy Kan, editors. Deterministic and Stochastic Scheduling: Proceedings of the NATO Advanced Study and Research Institute on Theoretical Approaches to Scheduling Problems, volume 84 of C – Mathematical and Physical Sciences, Durham, UK, July 6–17 1981. D. Reidel Publishing Company, Dordrecht: Holland. A collection of papers on deterministic and stochastic scheduling problems.
[71] Jayanta K. Dey, James Kurose, and Don Towsley. On-line scheduling policies for a class of IRIS (increasing reward with increasing service) real-time tasks. IEEE Trans. on Computers, 45(7):802–813, July 1996. A model of optional computation where tasks generate more utility for the system when they are allocated more processor time. The problem is approached as a multi-dimensional optimization problem; the rewards are increasing functions of allocated time. They present an on-line time allocation algorithm for random tasks arriving on a single processor at unpredetermined instants of time. The objective of the algorithm is to maximize the expected reward per task per unit time. They suggest that a two-level algorithm, with level (1) allocating processor time and level (2) scheduling by EDD, offers the best performance among the alternatives considered.
[72] M. Di Natale and J. A. Stankovic. Scheduling distributed real-time tasks with minimum jitter. IEEE Transactions on Computers, 49(4):303–316, April 2000.
[73] Jianzhong Du and Joseph Y. Leung. Minimizing mean flow time with release time and deadline constraints. In Proc. 9th IEEE Real Time Systems Symp., pages 24–32, 6–8 December 1988.

The problem of preemptively scheduling a task system consisting of a set of n independent tasks on one processor so as to minimize the mean flow time is considered. The goal is to find a preemptive schedule such that the mean flow time is minimized subject to the constraint that task τi is executed within the interval between its release time and its deadline. Such a schedule, if it exists, is called an optimal schedule. It is shown that the problem of finding an optimal schedule is NP-hard. A greedy algorithm is given to find an optimal schedule for a large class of task systems.
[74] Jianzhong Du and Joseph Y. Leung. On-line scheduling of real-time tasks. In Proc. 9th IEEE Real Time Systems Symp., pages 244–250, 6–8 December 1988. The problem of online scheduling of a set of n independent, real-time tasks on m ≥ 1 identical processors is addressed. An online scheduler is said to be optimal if it performs as well as the best offline scheduler. It is shown that no optimal online scheduler can exist if the tasks have more than one distinct deadline. An optimal online scheduler for tasks with one common deadline is given. An optimal online scheduler is also given for environments in which processors can go down unexpectedly.
[75] E. G. Coffman, Jr., editor. Computer and Job-Shop Scheduling Theory. A Wiley-Interscience Publication. John Wiley & Sons, New York, 1976. The classic results on scheduling.
[76] D. Eager, E. Lazowska, and J. Zahorjan. Adaptive load sharing in homogeneous distributed systems. IEEE Transactions on Software Engineering, SE-12(5):662–675, May 1986.
[77] J. Eykholt, S. R. Kleiman, S. Barton, R. Faulkner, D. Stein, M. Smith, A. Shivalingiah, J. Voll, M. Weeks, and D. Williams. Beyond multiprocessing: Multithreading the SunOS kernel. In Proc. of the Summer 1992 USENIX Conf., pages 11–18, San Antonio, TX, June 1992. USENIX Association.
[78] D. C. Feldmeier. A survey of high-performance protocol implementation techniques. In A. N. Tantawi, editor, High-Performance Networks, pages 29–50. Kluwer Academic Publishers, Boston, 1994.
[79] Wu-Chun Feng and Jane W.-S. Liu. Algorithms for scheduling real-time tasks with input error and end-to-end deadlines. IEEE Trans. on Software Engineering, 23(2):93–106, February 1997. This paper describes algorithms for scheduling preemptive, imprecise, composite tasks in real time. Each composite task consists of a chain of component

tasks, and each component task is made up of a mandatory part and an optional part. Whenever a component task uses imprecise input, the processing times of its mandatory and optional parts may become larger. The composite tasks are scheduled by a two-level scheduler. At the high level, the composite tasks are scheduled preemptively on one processor, according to an existing algorithm for scheduling simple imprecise tasks. The low-level scheduler then distributes the time budgeted for each composite task across its component tasks so as to minimize the output error of the composite task.
[80] C. J. Fidge. Real-time schedulability tests for preemptive multitasking. Real-Time Systems, 14:61–93, 1998. A good synopsis of schedulability tests for fixed- and dynamic-priority schedules. All are based on the critical instant.
[81] Gerhard Fohler. Joint scheduling of distributed complex periodic and hard aperiodic tasks in statically scheduled systems. In Proc. 16th IEEE Real-Time Systems Symp., pages 152–161, December 1995. Periodic tasks are scheduled and assigned to processors statically. Periodic tasks have precedence constraints and need to communicate. The static schedule is generated off-line and then idle capacities are located. Periodic tasks can be relocated so that aperiodic tasks can use the shifted idle periods. The time-line is discrete, consisting of time slots of equal duration, and PEs are synchronized at the clock. The algorithm records busy and idle periods and then carries out an O(N) test for admission control, where N is the number of time slots within [0, H), H being the hyper-period. The dispatcher is always invoked at the beginning of each slot to select the next task to dispatch. Tasks are preemptive; they receive input and write output at the beginning and end of each slot, respectively. MAXT := maximum execution time in slots. Task synchronization takes place exclusively via precedence constraints, which form DAGs. Each DAG PG has a repetition period, a ready time r_i = i × Period(PG) and deadline d_i = r_i + Deadline(PG). There are no deadlines among tasks of the same DAG. There are also sporadic and aperiodic DAGs with hard or no deadlines. The dispatcher checks for non-periodic task arrivals at the beginning of each slot. Static scheduling takes place off-line, where sub-graphs of PG tasks are assigned to PEs so that precedence dependencies are maintained. Communication takes place between subgraphs. Subgraphs allocated adjacent time slots are coalesced into contiguous scheduling blocks. The on-line guaranteed firm task admission algorithm goes through all idle periods within [r_f, d_f) to accumulate the total idle capacity. The number of idle periods is N and in general is pseudo-polynomial. In the worst case, each periodic task instance starts a busy period, and there can be Σ_{i=1}^{n} n_i(r_f, d_f) busy periods within [r_f, d_f),

where n_i(r_f, d_f) is the total number of periodic task releases within the same time period. The idle periods, start times and deadlines are recorded for each node in a table. This table is consulted at firm task arrival to determine feasibility. The table needs pseudo-polynomial space. Any number of firm tasks can be released at the beginning of each time slot. The scheduler needs to run an O(N)-time algorithm, even if there are no pending aperiodic tasks; this is overly expensive. Periodic tasks communicate via messages M, which have start and end times start(M) and end(M), respectively. For intermediate task-nodes in a DAG, dl(T) = start(M) if T sends a message, and dl(T) = end(M) if it receives a message. The communication schedule is built statically, off-line. For each node, the tasks are ordered by their deadlines into a resulting schedule which consists of a sequence of busy and idle periods. The schedule is partitioned by the deadlines of tasks into intervals I_i, with end(I_i) being the deadline of the tasks in I_i and est(I_i) the start time of the earliest task in I_i. Intervals may or may not overlap temporally. The spare capacity of a disjoint interval is the idle time within [est(I_i), end(I_i)), given by sc(I_i) = |I_i| − Σ_{J∈I_i} C_J + min{sc(I_{i+1}), 0}, where C_J is the maximum computation time of task J, and sc(I_{i+1}) < 0 in case the subsequent interval borrows from the current one. During system operation, the scheduler executes at the beginning of each clock tick to check whether aperiodics have arrived, perform admission control, and select the task to run during the current tick. The admission control algorithm checks whether a new aperiodic task Ja, arrived at t, is schedulable, by testing C_a ≤ sc(I_c)_t + Σ_{c<i≤l} sc(I_i) + min{sc(I_{l+1}), d_a − start(I_{l+1})}, where I_c is the current interval, sc(I_c)_t its remaining spare capacity at t, and I_l is the last interval with end(I_l) ≤ d_a (so that end(I_{l+1}) > d_a). It seems that this last formula does not consider periodic task relocation. The discussion in the paper does not provide sufficient information on how the overlapping intervals are handled with respect to spare capacity. For one aperiodic Ja the algorithm needs to count all N idle periods within [r_a, d_a); however, N is pseudo-polynomial, as the total number of periodic instances within arbitrary intervals [r_a, d_a) is pseudo-polynomial. After spare capacity is allocated to aperiodic tasks, the capacity variables need to be updated to reflect this. Since idle periods are pseudo-polynomial in number, the update takes pseudo-polynomial time as well. When there are several aperiodic tasks pending, the algorithm needs to accommodate the computation of all of them; it is not clear from the information provided in the paper how this is done.
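A minimal sketch of the backward spare-capacity computation and the admission check described for [81] follows; intervals are modeled as (length, total WCET, end time) tuples, the partial interval straddling d_a is simplified away, and only non-negative spare capacities are credited — all assumptions made to keep the sketch short.

    # Spare capacity per interval, computed right to left, and a simplified admission test.

    def spare_capacities(intervals):
        """sc(I_i) = |I_i| - sum of WCETs in I_i + min(sc(I_{i+1}), 0)."""
        sc = [0.0] * len(intervals)
        nxt = 0.0
        for i in range(len(intervals) - 1, -1, -1):
            length, wcet, _ = intervals[i]
            sc[i] = length - wcet + min(nxt, 0.0)
            nxt = sc[i]
        return sc

    def admit(Ca, da, intervals, sc, cur):
        """Sum the (non-negative) spare capacity of interval cur and every later interval
        ending no later than da; the credit from the interval straddling da is omitted."""
        total = 0.0
        for i in range(cur, len(intervals)):
            if intervals[i][2] > da:
                break
            total += max(sc[i], 0.0)
        return total >= Ca

    ivals = [(10, 6, 10), (10, 9, 20), (5, 2, 25)]   # (|I_i|, sum of C_J, end(I_i))
    sc = spare_capacities(ivals)                      # -> [4.0, 1.0, 3.0]
    print(sc, admit(Ca=4, da=20, intervals=ivals, sc=sc, cur=0))   # True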

[82] B. Ford and S. Susarla. CPU inheritance scheduling. In Proc. of Second Symp. on Operating Systems Design and Implementation, pages 107–121, Seattle, WA, October 1996.
[83] D. S. Fussell and M. Malek, editors. Responsive Computer Systems: Steps Toward Fault-Tolerant Real-Time Systems. Kluwer Academic Publishers, Boston, MA, 1995. The proceedings in book form.
[84] B. O. Gallmeister. POSIX.4 Programming for the Real World. O'Reilly and Associates, Inc., Newton, MA, 1995.
[85] Bill O. Gallmeister and C. Lanier. Early experience with POSIX 1003.4 and POSIX 1003.4A. In Proc. of IEEE 12th Real-Time Systems Symposium, pages 190–198, San Antonio, TX, December 1991.
[86] Mark K. Gardner and Jane W.-S. Liu. Analysing stochastic fixed-priority real-time systems. In W. Rance Cleaveland, editor, Lecture Notes in Computer Science 1579, pages 44–58. Springer-Verlag, March 1999.
[87] Laurent George, Paul Muhlethaler, and Nicolas Rivierre. Optimality and non-preemptive real-time scheduling revisited. Rapport de Recherche 2516, INRIA, The French National Institute for Research in Computer Science and Control, Cedex, France, April 1995. A relatively introductory overview of schedulability results for non-idling and idling dynamic-priority, non-preemptive scheduling. Schedulability analysis is based on the synchronous busy period. The solution for the idling, non-preemptive EDD involves a branch-and-bound procedure.
[88] Laurent George, Nicolas Rivierre, and Marco Spuri. Preemptive and non-preemptive real-time uniprocessor scheduling. Rapport de Recherche 2966, INRIA, The French National Institute for Research in Computer Science and Control, Cedex, France, September 1996. A relatively thorough overview of schedulability results for non-idling, fixed- vs. dynamic-priority, preemptive and non-preemptive scheduling. Schedulability analysis is based on the synchronous busy period.
[89] T. M. Ghazalie and T. P. Baker. Aperiodic servers in a deadline scheduling environment. Real-Time Systems, 9(1):31–67, July 1995. They present the EDD-based versions of the Deferrable, Sporadic and Priority Exchange servers.
[90] Ahmed Gheith and Karsten Schwan. CHAOSarc: Real-time objects and atomicity for multiprocessors. In Sang H. Son, editor, Advances in Real-Time Systems, chapter 3, pages 39–71. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1995.
[91] Donald W. Gillies and Jane W.-S. Liu. Greed in resource scheduling. In Proc. 10th IEEE Real Time Systems Symp., pages 285–294, 5–7 December 1989.

An examination is made of the worst-case performance of a class of scheduling algorithms commonly known as priority-driven or list-scheduling algorithms. These algorithms have anomalous, unpredictable performance when used to suboptimally schedule nonpreemptive tasks with precedence constraints. A general method is presented for deriving the worst-case performance of these algorithms. This method is easy to use, yet powerful enough to yield tight performance bounds for many classes of scheduling problems. It is demonstrated that the method has wide applicability. Several task systems are presented for which list-scheduling algorithms have worst-case performance, and the general characteristics of these task systems are discussed. It is believed that these task systems are sometimes ignored in simulation studies; consequently, the results of these studies may be overly optimistic.
[92] Berny Goodheart and James Cox. The Magic Garden Explained: The Internals of UNIX, System V Release 4, (An Open System Design). Prentice Hall, 1994. A very good reference to the internals of SVr4.
[93] P. Goyal, G. Xingang, and H. Vin. A hierarchical CPU scheduler for multimedia operating systems. In Proc. of Second USENIX Symposium on Operating System Design and Implementation, pages 107–121, October 28–31 1996.
[94] R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. In Hammer et al. [98], pages 287–326. Another survey on scheduling problems.
[95] Jim Gray. The transaction concept: Virtues and limitations. In Proc. 7th International Conference on Very Large Data Bases, pages 144–154, September 1981.
[96] Moncef Hamdaoui and Parameswaran Ramanathan. A service policy for real-time customers with (m, k)-firm deadlines. In FTCS-24, Twenty-Fourth International Symposium on Fault-Tolerant Computing Digest of Papers, pages 196–205, 15–17 June 1994.
[97] Moncef Hamdaoui and Parameswaran Ramanathan. A dynamic priority assignment technique for streams with (m, k)-firm deadlines. IEEE Trans. on Computers, 44(12):1443–1451, December 1995. Introduces (m, k)-firm tasks to model periodic activities that must meet at least m out of any k consecutive deadlines in order to maintain acceptable quality of service. A stream's priority is boosted if it is close to violating the (m, k) requirement. Priorities are adjusted dynamically in order to minimize the number of violations of the (m, k) requirement.
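A brief sketch of the kind of per-stream (m, k)-firm bookkeeping implied above follows, using a sliding window of the last k outcomes; the class layout and the "near violation" heuristic used to trigger a priority boost are illustrative assumptions, not the distance-based priority formula of [97].

    # Track the last k instance outcomes of a stream and flag it for a priority boost
    # when one more miss would violate "at least m met deadlines in any k consecutive instances".
    from collections import deque

    class MKFirmMonitor:
        def __init__(self, m, k):
            self.m, self.k = m, k
            self.window = deque(maxlen=k)   # True = deadline met, False = missed

        def record(self, met):
            self.window.append(met)

        def violated(self):
            return len(self.window) == self.k and sum(self.window) < self.m

        def near_violation(self):
            """One additional miss in the current window would drop below m hits."""
            hits = sum(self.window) + (self.k - len(self.window))  # unseen slots count as hits
            return hits - 1 < self.m

    mon = MKFirmMonitor(m=3, k=5)
    for met in [True, False, True, True, False]:
        mon.record(met)
    print(mon.violated(), mon.near_violation())   # False True: boost the stream's priority now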

[98] P. L. Hammer, E. L. Johnson, and B. H. Korte, editors. Discrete Optimization II: Proceedings of the Advanced Research Institute on Discrete Optimization and Systems Applications, volume 5 of Annals of Discrete Mathematics, Banff, Alta. and Vancouver, B.C., Canada, August 1977. North-Holland Publishing Co., Amsterdam. A collection of discrete optimization papers.
[99] Michael Gonzalez Harbour and Lui Sha. An application-level implementation of the sporadic server. Technical Report CMU/SEI-91-TR-26, Carnegie Mellon University, Software Engineering Institute, Pittsburgh, PA, 1991. 54 pages. The purpose of this paper is to introduce a sporadic server algorithm that can be implemented as an application-level task, and that can be used when no runtime or operating system level implementation of the sporadic server is available. The sporadic server is a simple mechanism that both limits and guarantees a certain amount of execution power dedicated to servicing aperiodic requests with soft or hard deadlines in a hard real-time system. The sporadic server is event-driven from an application viewpoint but appears as a periodic task for the purpose of analysis and, consequently, allows the use of analysis methods such as rate monotonic analysis to predict the behavior of the real-time system. When the sporadic server is implemented at the application level, without modification to the runtime executive or the operating system, some of its requirements cannot be met strictly and, therefore, some simplifications need to be assumed. We show that even with these simplifications, the application-level sporadic server proposed in this paper has the same worst-case performance as the full-featured runtime sporadic server algorithm, although the average-case performance is slightly worse. The implementation requirements are a runtime prioritized preemptive scheduler and system calls to change a task's or thread's priority. Two implementations are introduced in this paper, one for Ada and the other for POSIX 1003.4a, Threads Extension to Portable Operating Systems.
[100] C. A. R. Hoare. The emperor's old clothes. Communications of the ACM, 24(2), February 1981.
[101] Nima Homayoun and Parameswaran Ramanathan. Dynamic priority scheduling of periodic and aperiodic tasks in hard real-time systems. Real-Time Systems, 6(3):207–232, 1994. EDD for the periodic tasks and a deadline-based version of the Deferrable Server for soft aperiodics. They propose a method that selects the period and capacity of the DS in order to minimize the response time of the aperiodic tasks.
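As background for the server mechanics referenced above, the following is a generic sketch of deferrable-server budget bookkeeping; it does not reproduce the period/capacity selection method of [101], and the (Cs, Ts) values in the example are arbitrary assumptions.

    # Generic deferrable-server budget bookkeeping.
    class DeferrableServer:
        def __init__(self, Cs, Ts):
            self.Cs, self.Ts = Cs, Ts     # server capacity and replenishment period
            self.budget = Cs
            self.next_replenish = Ts

        def tick(self, now):
            """Restore the full budget at every multiple of Ts; between replenishments the
            budget is preserved while no aperiodic work is pending (the defining DS property)."""
            while now >= self.next_replenish:
                self.budget = self.Cs
                self.next_replenish += self.Ts

        def serve(self, demand):
            """Serve aperiodic demand up to the remaining budget; return the time granted."""
            granted = min(demand, self.budget)
            self.budget -= granted
            return granted

    ds = DeferrableServer(Cs=2, Ts=10)
    print(ds.serve(3))   # 2 units granted now, 1 unit must wait
    ds.tick(now=10)      # budget replenished at t = 10
    print(ds.serve(1))   # remaining unit served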

[102] D. Hull, W. Feng, and J. W. S. Liu. Enhancing the performance and dependability of hard real-time systems. In IEEE Computer Performance and Dependability Symposium, pages 174–182, Erlangen, Germany, April 1995.
[103] IEEE Portable Applications Standards Committee. IEEE Std 1003.1b-1993, Real Time Extensions. IEEE, 1993.
[104] IEEE Portable Applications Standards Committee. IEEE Std 1003.1c-1995, Threads Extensions. IEEE, 1995.
[105] Damir Isović and Gerhard Fohler. Handling sporadic tasks in off-line scheduled distributed real-time systems. In Proc. of the 11th Euromicro Conf. on Real-Time Systems, pages 60–67, June 9–11 1999. The on-line admission algorithm is explained more clearly in this paper. Pseudo-polynomial admission control, micro-management of spare capacity, discretized time-line, potentially large storage overhead Θ(N), where N is the total number of idle periods within the interval of interest.
[106] Damir Isović and Gerhard Fohler. Efficient scheduling of sporadic, aperiodic, and periodic tasks with complex constraints. In Proc. 21st IEEE Real-Time Systems Symp., pages 207–216, 27–30 November 2000. Joint scheduling of tasks with: (1) simple constraints: a task parameter such as release time, deadline, or period, and (2) complex constraints: synchronization (precedence order; distributed systems where tasks have dependencies with tasks across PEs, e.g., holistic scheduling by Tindell), jitter (the maximum variation of start and finish times is given), non-periodic constraints (minimum task inter-activation times for consecutive or non-consecutive task instances, multiple-instance tasks [iterations]), non-temporal constraints (reliability, performance, etc.), and application-specific constraints (pinned-PE task allocation, etc.). The paper concerns systems with (a) periodic tasks (C_i, D_i, T_i, no release times), (b) aperiodic tasks: (i) firm tasks (r_a, C_a, D_a, with unpredetermined arrival times), (ii) soft aperiodic (no D_a), (iii) sporadic tasks (C_s, D_s, T_s, where T_s is the minimum inter-arrival period). The system is distributed, but no task migration takes place. It assumes a discrete time model, with time monotonically increasing in discrete, fixed-size increments, the same model as [81] above. Periodic tasks are preprocessed off-line, where schedules of time intervals are generated with start and end times for each one of the nodes. At run time, EDD dispatching of statically and dynamically guaranteed tasks takes place, so that the deadline constraints are preserved. Let M be the number of idle periods and N the number of deadlines in one hyperperiod, respectively. During off-line preparation, the scheduling time-line is

divided into execution intervals and the start, finish, and execution times are recorded into a table, for each one of these intervals. The table size is Θ(N). In essence, the deadlines mark the ends of intervals and, in principle, we can find the idle capacity within [t, d_k) of a statically prepared schedule, where t is the current time and d_k is the end of one of these intervals; this simply involves summing up the idle capacities of all intervals within [t, d_k). However, finding the exact amount of idle capacity within arbitrary intervals [r_f, d_f) requires pseudo-polynomial effort, since a schedule simulation (construction) needs to take place within [r_f, d_f), which takes time O(N), as all intervals need to be examined at each task deadline point. During on-line operation the dispatcher is invoked at the beginning of each clock tick to select the next task to run. The sc(I_c) variables are continuously adjusted to reflect the current availability of slack. (What is the difference from the EDL method by Silly and Chetto?) When spare capacity reaches 0, an off-line guaranteed task must be dispatched; otherwise, pending on-line tasks are dispatched (if any). Off-line tasks can execute as soon as the previous task finishes, and thus are "borrowing" idle capacity from the previous task, in which case the borrowed capacity is recorded, adjusting sc(I_c) by a negative amount. In the absence of aperiodic tasks, idle capacities need to be continuously adjusted. The paper does not provide sufficient information on the way idle capacities are updated, making it very hard to determine the overhead. The on-line acceptance test hunts for spare capacity by simulating EDD service for the aperiodics. To test A_i, let G_a := {A_j | dl(A_j) < dl(A_i)}, which means that there may be n = |G_a| aperiodic tasks queued ahead of A_i, arranged in EDD order. For each A_j ∈ G_a, the algorithm sums up the spare capacities until dl(A_j) and checks whether the finish times satisfy ft(A_j) < dl(A_j) for all tasks A_j, after A_i is introduced into the schedule. If for some task k, dl(A_k) < ft(A_k), then A_i is rejected. The finish time ft(A_j) of task A_j is found as follows: ft(A_{j−1}) is found first, then the spare capacity in all intervals I_c within [ft(A_{j−1}), t), t > ft(A_{j−1}), is accumulated until the t for which this sum exceeds C_j, and then ft(A_j) := t. Therefore, if there are O(m) idle periods within [ft(A_{j−1}), ft(A_j)) and N_a pending aperiodic tasks in G_a, guaranteeing a single aperiodic A_i takes Θ(m × N_a) time, which is pseudo-polynomial, since the total number M of idle periods within arbitrary intervals in schedules of n periodic tasks is pseudo-polynomial in n. The algorithm has a number of sources of high overhead.
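A condensed sketch of this acceptance test follows: serve the pending aperiodics (including the candidate) in EDD order inside the precomputed idle capacity and reject if any computed finish time exceeds its deadline. Modeling the idle capacity as disjoint (start, end) slots is an assumption made to keep the sketch short; it does not reproduce the paper's interval tables.

    # EDD-simulation acceptance test over precomputed idle slots.
    def accept(t, idle_slots, aperiodics):
        """aperiodics: list of (C, dl) sorted by deadline (EDD), candidate included.
        idle_slots: disjoint (start, end) intervals of spare capacity, sorted by start."""
        slots = [(max(s, t), e) for (s, e) in idle_slots if e > t]
        i = 0
        for C, dl in aperiodics:
            remaining = C
            finish = t
            while remaining > 0:
                if i >= len(slots):
                    return False                    # ran out of spare capacity
                s, e = slots[i]
                used = min(remaining, e - s)
                remaining -= used
                finish = s + used
                if remaining == 0:
                    slots[i] = (s + used, e)        # keep the unused tail of the slot
                else:
                    i += 1
            if finish > dl:
                return False                        # this task would miss its deadline
        return True

    slots = [(2, 5), (8, 10), (14, 20)]
    print(accept(t=0, idle_slots=slots, aperiodics=[(2, 6), (4, 16)]))   # True
    print(accept(t=0, idle_slots=slots, aperiodics=[(2, 6), (4, 12)]))   # False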

[107] J. R. Jackson. Scheduling a production line to minimize maximum tardiness. Research Report 53, Management Science, University of California, Los Angeles, CA, 1955. The first time EDD is proposed and proven to minimize the maximum lateness/tardiness in a single-machine environment.
[108] P. Jacquet and P. Muhlethaler. A very simple algorithm for flow control on high speed networks via La Palice queueings. In Eleventh Annual Joint Conference of the IEEE Computer and Communications Societies, IEEE INFOCOM, Vol. 1, pages 313–321, 4–8 May 1992. Flow control algorithms specially designed for high-speed networks are introduced. They are based on a new queuing model called the La Palice queue. An algorithm that is simply an extrapolation of the classic flow control algorithm, with a request to the destination and an answer to the source, is presented. It is shown that the overflow occurrence is lowered to a certain probability by the application of the algorithm. An intermediate algorithm is presented that cancels overflow occurrence, but allows repetition of requests with a certain probability per multipacket message.
[109] Kevin Jeffay. Scheduling sporadic tasks with shared resources in hard real-time systems. In Proc. 13th IEEE Real-Time Systems Symp., pages 89–99, Phoenix, AZ, December 1992. Sporadic tasks sharing single-unit, reusable resources. They show the correctness of their approach and they present an EDD-based algorithm.
[110] Kevin Jeffay, D. F. Stanat, and C. U. Martel. On non-preemptive scheduling of periodic and sporadic tasks. In Proc. 12th IEEE Real-Time Systems Symp., pages 129–139, San Antonio, TX, December 1991. Feasibility analysis for EDD-based, non-preemptive scheduling of a set of periodic or sporadic tasks on a uniprocessor without inserted idle time. The time-line is discrete; r_i = 0, T_i = D_i, ∀i. Tasks can be concrete, i.e., with r_i > 0, or general (synchronous). The authors give a necessary and sufficient set of conditions C for a set of periodic or sporadic tasks to be schedulable for arbitrary release times of the tasks. They show that any set of periodic or sporadic tasks that satisfies C can be scheduled with an earliest-deadline-first (EDF) scheduling algorithm. The authors present the scheduling model, briefly review the literature in real-time scheduling, prove that the non-preemptive EDF algorithm is universal for sets of tasks, whether periodic or sporadic, and demonstrate the absence of a universal algorithm for periodic tasks with specified release times. It is proved that the problem of deciding schedulability of a set of concrete periodic tasks is intractable.
[111] Kevin Jeffay and Donald J. Stone. Accounting for the interrupt handling costs in dynamic priority systems. In Proc. 14th IEEE Real-Time Systems Symp., pages 212–221, Raleigh-Durham, NC, December 1993.

Feasibility and schedulability problems for periodic tasks that must compete for the processor with interrupt handlers — tasks that are assumed to always have priority over application tasks. The analysis is for deadline-driven scheduling methods. They develop conditions that solve the feasibility and schedulability problems. The periodic schedule is preemptive; interrupt handlers are non-preemptive. Time complexity of the feasibility decision is T(n) = O(n^2 + m + min{T_j}), where m is the number of interrupt handlers.
[112] M. Jones, J. S. Barrera, A. Forin, P. J. Leach, D. Rosu, and M. C. Rosu. An overview of the Rialto real time architecture. In Proc. of the Seventh ACM SIGOPS European Workshop, pages 249–256, Connemara, Ireland, September 1996.
[113] M. Joseph and P. Pandya. Finding the response times in a real-time system. Computer Journal, 29(5):390–395, 1986. The first paper to propose a fixed-point iterative algorithm to compute the busy-period lengths. They use the critical-instant busy period. Their method, recursive or iterative, was relied upon by several authors.
[114] Alexander H. G. Rinnooy Kan. Machine Scheduling Problems: Classification, Complexity and Computation. Martinus Nijhoff, The Hague, 1976. A concise presentation of 1/2/3-machine and general job-shop and flow-shop problems. A summary of computation procedures up until 1976.
[115] D. I. Katcher, H. Arakawa, and J. K. Strosnider. Engineering and analysis of fixed priority schedulers. IEEE Trans. on Software Engineering, 19:920–934, 1993. The CMU FPS kernel engineering analysis.
[116] Sandeep Khanna, Michael Sebree, and John Zolnowski. Real-time scheduling in SunOS 5.0. In Proc. of the Winter 1992 USENIX Conf., pages 375–390, San Francisco, CA, 1992.
[117] Hyungill Kim, Sungyoung Lee, and Jongwon Lee. Alternative priority scheduling in dynamic priority systems. In Proc. Second IEEE International Conf. on Engineering of Complex Computer Systems, pages 239–246, 21–25 October 1996. EDD scheduling; soft aperiodic and hard periodic tasks. This is the soft version of [119], where they omit the AC check. All other disadvantages hold.
[118] Hyungill Kim, Sungyoung Lee, and Jongwon Lee. A near-optimal algorithm for scheduling soft-aperiodic requests in dynamic priority systems. In Proc. of the Eighth Euromicro Workshop on Real-Time Systems, pages 249–254, 12–14 June 1996.

EDD scheduling; soft aperiodic and hard periodic tasks. Motivation: all currently proposed EDD-based aperiodic service algorithms are impractical due to high computation and/or storage costs. They modify the CTI-Dynamic in [127] to store the CTI as the computation requirements of the periodic tasks up to their next deadline, called zone periods, Z_k, k = 1, ..., N. They apply clock-by-clock scheduling, at each step checking for the presence of aperiodic tasks and of sufficient slack by EDL; the only speed-up is in the slack computation, using the zones. Time is T(n) = O(n) for each zone, until the aperiodics finish, basically the idle time within [t, d_(k)), where d_(k) is the next periodic deadline. Therefore, O(n) scheduling decisions are made at preemption points (at least) or clock ticks (at most), where the next task is selected. The space overhead is still very high, S(n) = Θ((n + 1)H) or S(n) = Θ(n Σ_i n_i(H) + H).
[119] Hyungill Kim, Sungyoung Lee, Jongwon Lee, and Dougyoung Suh. Scheduling hard-aperiodic requests in dynamic priority systems. In Proc. Third International Workshop on Real-Time Computing Systems and Applications, pages 314–319, 30 October – 1 November 1996. Joint scheduling of both hard-deadline periodic and aperiodic tasks in dynamic priority systems. Uses Alternative Priority Scheduling to develop an acceptance/rejection procedure for hard aperiodic tasks. They use the CTI and alternate between EDF and Critical Execution Time First. They dynamically assign higher priority to the task with max_i {Cr_i[d_(k)] − Cp_i[t]}, i.e., the maximum deficit in computation time until the next deadline d_(k). Basically, let δ_i := Cr_i[d_(k)] − Cp_i[t]; then at each time t compute (d_(k) − t) − Σ_i δ_i, which is Z^L(d_(k)) − Z^L(t), for EDL only, though. At decision points with aperiodic tasks, periodics are serviced if δ_j > 0 for some j. There are several gaps in their method: (1) To dynamically compute the CET they need Cp_i(t) and Cr_i(t), ∀i, ∀t ∈ [0, H]; however, they can pre-compute Cr_i(t) only, and they do not show how they obtain the rest in their on-line algorithm. (2) The on-line feasibility check algorithm A(t) clearly takes O(nm), but they claim it takes O(m), with m the number of admitted aperiodics. (3) They claim the off-line pre-computation algorithm is O(n log n), but it is clearly Ω(N), if not O(H). Space for the CTI and Cr_i[t_n] is still very high, S(n) = O(H) + Θ(nN).
[120] M. H. Klein, T. Ralya, B. Pollak, and M. G. Harbour. A Practitioner's Handbook for Real-Time Analysis. Kluwer Academic Publishers, Boston, MA, 1993.
[121] C. M. Krishna and Kang G. Shin. Real-Time Systems. McGraw-Hill, New York, NY, 1997.
[122] Tei-Wei Kuo, Yu-Hua Liu, and Kwei-Jay Lin. Efficient on-line schedulability tests for priority driven real-time systems. In Proc. 6th IEEE Real-Time Technology and Applications Symp., pages 4–13, May 2000.

Many computer systems, such as those for open system environments or multimedia services, need an efficient schedulability test for online admission control of new jobs. Although various polynomial-time schedulability tests have been proposed, they often fail to decide the schedulability of the system precisely when the system is heavily loaded. This paper presents efficient online schedulability tests which are shown to be more precise and efficient than any existing polynomial-time schedulability tests. Moreover, our proposed tests can be used for the multiframe model, where a task may have different computation times in different periods. We show the performance of the proposed schedulability tests in several simulation experiments.
[123] J. Labettoulle. Some theorems on real-time scheduling. In E. Gelenbe, editor, Computer Architecture and Networks, pages 285–293. North-Holland, Amsterdam, 1974. Showed EDD to be optimal for preemptive, deadline-related scheduling problems on a single processor.
[124] B. W. Lampson. Hints for computer system design. In Proc. of the 9th ACM Symp. on Operating Systems Principles (SOSP-83), pages 33–48, St. Malo, France, October 1983.
[125] E. L. Lawler. Recent results in the theory of machine scheduling. In Bachem et al. [15], pages 202–234. A good survey paper. Includes discussion of problems with deadline-related optimization criteria.
[126] Chung-Yee Lee, Lei Lei, and Michael Pinedo. Current trends in deterministic scheduling. Annals of Operations Research, 70:1–41, 1997. A survey paper of the recent trends and techniques in deterministic scheduling theory. Specifically, they review local search techniques and discuss the 1-job-on-r-machine pattern and the problem of scheduling with machine availability constraints.
[127] Sungyoung Lee, Hyungill Kim, and Jongwon Lee. A soft aperiodic task scheduling algorithm in dynamic-priority systems. In Proc. Second International Workshop on Real-Time Computing Systems and Applications, pages 68–72, October 1995. They generate the CTI table, the discrete-time EDL schedule of the periodic tasks, off-line, and possibly also a table CR[i, t] storing c_i(t), ∀i and ∀t ∈ [0, H). Storage is S(n) = Θ((n + 1)H). They can determine whether S_n(t) = 0, or whether CP[CTI[t]] − CR[CTI[t], t] > 0, in T(n) = Θ(1), as expected. They support clock-by-clock scheduling of tasks, at each tick checking for the presence of aperiodic tasks. When a new soft aperiodic task arrives, they allocate the next clock tick of an idle period under

EDL immediately, if either CTI[t] = 0 or CP[CTI[t]] − CR[CTI[t], t] > 0. The biggest problem is the storage overhead Θ((n + 1)H), which is unrealistic in most situations. Secondly, the schedule is inflexible and they cannot reclaim unused time from the periodics. Also, there is no mention of how the aperiodics are serviced within the time they are given. Many details are missing that can potentially impose heavy run-time overhead and/or space requirements.
[128] S. J. Leffler, M. K. McKusick, M. J. Karels, and J. S. Quarterman. The Design and Implementation of the 4.3BSD Unix Operating System. Addison-Wesley, Reading, MA, 1989.
[129] J. Lehoczky. Fixed priority scheduling of periodic task sets with arbitrary deadlines. In Proc. 11th IEEE Real Time Systems Symp., pages 201–209, 5–7 December 1990. Consideration is given to the problem of fixed-priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst-case bounds are given which generalize the C. L. Liu and J. W. Layland (1973) bound. The results are shown to provide a basis for developing predictable distributed real-time systems.
[130] J. Lehoczky and S. Ramos-Thuel. An optimal algorithm for scheduling soft-aperiodic tasks in fixed-priority preemptive systems. In Proc. 13th IEEE Real Time Systems Symp., pages 110–123, Phoenix, AZ, December 1992.
[131] J. Lehoczky, L. Sha, and Y. Ding. The rate-monotonic scheduling algorithm – Exact characterization and average case behavior. In Proc. 10th IEEE Real Time Systems Symp., pages 166–171, December 1989. The first paper to propose an iterative algorithm that checks schedulability at the deadlines for fixed-priority based systems. An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set is presented. In addition, a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets is presented. It is shown that as the task set size increases, the task computation times become of little importance, and the breakdown utilization converges to a constant determined by the task periods. For uniformly distributed tasks, a breakdown utilization of 88% is a reasonable characterization. A case is shown in which the average-case breakdown utilization reaches the worst-case lower bound of C. L. Liu and J. W. Layland (1973). The worst-case analysis is based on the critical-instant initial busy period of the periodic schedule. Asynchronous tasks are not considered. They define V_i^-(t) := Σ_{j=1}^{i} C_j ⌈t/T_j⌉, L_i(t) := V_i^-(t)/t,

L_i := min{L_i(t) : t ∈ [0, T_i)}, and L := max{L_i : i = 1, ..., n}. They verify schedulability of the set by examining a collection of points S_i in the IBP of a schedule, S_i := {kT_j : j = 1, ..., i; k = 1, ..., ⌊T_i/T_j⌋}, that is, the release times of tasks with higher priorities.
[132] J. Lehoczky, L. Sha, and J. K. Strosnider. Enhanced aperiodic responsiveness in hard real-time environments. In Proc. 8th IEEE Real Time Systems Symp., pages 261–270, December 1987.
[133] J. Lehoczky, Lui Sha, J. K. Strosnider, and Hide Tokuda. Fixed priority scheduling theory for hard real-time systems. In André M. van Tilborg and Gary M. Koob, editors, Foundations of Real-Time Computing: Scheduling and Resource Management, chapter 1, pages 1–30. Kluwer Academic Publishers, Norwell, MA, 1991.
[134] John P. Lehoczky and Sandra Ramos-Thuel. Scheduling periodic and aperiodic tasks using the slack stealing algorithm. In Sang H. Son, editor, Advances in Real-Time Systems, chapter 8, pages 175–198. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1995. The (hopefully) final version of Thuel's slack stealing algorithms.
[135] Joseph Y. Leung, Tommy W. Tam, C. S. Wong, and Gilbert H. Young. Minimizing mean flow time with error constraint. In Proc. 10th IEEE Real Time Systems Symp., pages 2–11, 5–7 December 1989. In a new model of task systems studied by J. W. S. Liu et al. (1987) each task is logically decomposed into two subtasks, mandatory and optional. The authors discuss preemptively scheduling this kind of task system on p ≥ 1 identical processors so as to minimize the mean flow time. Given a task system and an error threshold K, the goal is to find a preemptive schedule such that each task is executed in the interval of its release time and deadline, the average error is no more than K, and the mean flow time of the schedule is minimized. Such a schedule is called an optimal schedule. It is shown that the problem of finding an optimal schedule is NP-hard for each p ≥ 1, even if all tasks have the same ready time and deadline. For a single processor, a pseudo-polynomial time algorithm and polynomial time algorithms for various special cases of the problem are given.
[136] Joseph Y. Leung and C. S. Wong. Minimizing the number of late tasks with error constraint. In Proc. 11th IEEE Real Time Systems Symp., pages 32–40, 5–7 December 1990. The problem of minimizing the number of late tasks in the imprecise computation model is considered. Each task consists of two subtasks, mandatory and optional. A task is said to be on-time if its mandatory part is completed

by its deadline; otherwise, it is said to be late. An on-time task incurs an error if its optional part is not computed by the deadline, and the error is simply the execution time of the unfinished portion. The authors consider the problem of finding a preemptive schedule for a set of tasks on p ≥ 1 identical processors, such that the number of on-time tasks is maximized (or equivalently, the number of late tasks is minimized), and the total error of the on-time tasks is no more than a given threshold K. Such a schedule is called an optimal schedule. It is shown that the problem of finding an optimal schedule is NP-hard for each fixed p ≥ 1, even if all tasks have the same ready time and the same deadline.
[137] Joseph Y.-T. Leung. A new algorithm for scheduling periodic, real-time tasks. Algorithmica, 4(2):209–219, 1989. Preemptive scheduling of real-time tasks on a multiprocessor. They show that the slack-time (ST) algorithm is more effective on an MP system than EDD; slack time is the least-laxity-next algorithm. When there are x < m active tasks, they share x/m of each processor. A task set is feasible on an m-processor system if the ST algorithm produces a schedule within [0, B): if τ is synchronous, B = H, and if it is asynchronous, B = r + 2H. Their main result states that a task set τ is feasible if (i) Σ_{i=1}^{n} C_i/T_i ≤ m, (ii) in the ST schedule for m ≥ 1 processors all deadlines are met in [0, B], where B = r + 2H, and (iii) c(t_1) = c(t_2), with t_1 = r + H, t_2 = r + 2H. The feasibility problem is co-NP hard by reduction of SGP to this problem. Note: in the paper the term "fixed-priority" scheduling is used inconsistently, as the ST and the EDD algorithms are both dynamic-priority algorithms.
[138] Joseph Y.-T. Leung. Research in real-time scheduling. In André M. van Tilborg and Gary M. Koob, editors, Foundations of Real-Time Computing: Scheduling and Resource Management, chapter 2, pages 31–62. Kluwer Academic Publishers, Norwell, MA, 1991. Algorithms to optimize various measures in RT systems on m ≥ 1 processors. Reference to the difficulty of on-line guaranteed scheduling.
[139] Joseph Y.-T. Leung and M. L. Merrill. A note on preemptive scheduling of periodic, real-time tasks. Information Processing Letters, 11(3):115–118, 1980. Schedulability of a periodic task set when scheduled by the EDD algorithm. Tasks are asynchronous. Shows that the configurations c(t), t ∈ [r + kH, r + (k + 1)H), are identical for each k ≥ 1. Proves the schedulability problem to be co-NP hard. To establish feasibility we construct an initial schedule within [0, B) and check whether all deadlines are met. For synchronous sets they propose B = H, and

for asynchronous ones, B = r + 2H. Their basic result states that a task set τ is feasible on one processor iff, in its EDD schedule, (i) all deadlines are met within [0, B) and (ii) c(t) = c(t + H), t ≥ r + H. They reduce KSCP to the infeasibility of an EDD schedule. They leave open the question of feasibility on m > 1 processors and the existence of optimal scheduling algorithms for m > 1.
[140] Joseph Y.-T. Leung and J. Whitehead. On the complexity of fixed-priority scheduling of periodic real-time tasks. Performance Evaluation, 2:237–250, 1982. The famous Deadline-Monotonic (DM) fixed-priority scheduling rule. For synchronous task sets, the DM fixed-priority rule is optimal, i.e., if S(k) is a valid schedule for τ and, for some k, D_k > D_{k+1}, then we obtain a valid schedule S′(k) if we interchange the priorities of τ_k and τ_{k+1}; with a finite number of interchanges we can convert any valid schedule into a DM schedule. A feasibility procedure involves verifying that the response time R_i ≤ D_i for each τ_i, that is, Σ_{k=1}^{i} n_k(D_i−)·C_k = V_i(D_i−) ≤ D_i. Time is T(n) = O(n log n) + O(n^2). Leung claims that a pseudo-polynomial algorithm is necessary, namely one that checks all {d_ij}; however, this is not necessary. In multi-processor systems, DM is not optimal, and partitioning and non-partitioning methods are not superior to each other. Feasibility on m processors is NP-hard in the strong sense, by reduction to 3-PARTITION, and thus there cannot be any pseudo-polynomial time algorithm to answer it. DM is no longer optimal for asynchronous systems. In asynchronous systems it is not immediately obvious whether the tasks are ever released at the same time instant, which is the SCP problem; the algorithm has to verify a longer schedule. For m = 1, an FP schedule is feasible if (i) all the deadlines in [0, r + 2H) are met and (ii) c(t) = c(t + H) for t ≥ r + H. The feasibility problem is NP-hard in the strong sense, by reduction of K-SCP to it. K-SCP is shown NP-complete by reduction of CLIQUE to K-SCP. Then a K-SCP instance could be resolved if there existed a polynomial-time solution to the feasibility problem of K tasks under a particular priority assignment, by letting infeasibility imply that the K pairs have a common congruence. The existence of a priority assignment with a feasible schedule is NP-hard, as all n! permutations of priorities have to be evaluated. When m > 2 the problem is NP-hard in the strong sense. They leave open the following questions. (1) The complexity of obtaining an optimal fixed-priority assignment on uniprocessors for asynchronous sets. (2) Feasibility checking for m > 1 of a particular priority assignment for asynchronous sets. (3) The complexity of deciding feasibility of a particular priority assignment for synchronous sets with m > 1.
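A compact sketch of the scheduling-point style feasibility check discussed for [131] and implied by the response-time condition above follows; it assumes synchronous release and D_i = T_i, and the task sets in the example are arbitrary.

    # Time-demand check at the scheduling points {k*T_j : j higher priority, k*T_j <= T_i}:
    # task i is schedulable iff its cumulative demand W_i(t) fits within t at some such point.
    from math import ceil

    def feasible(tasks):
        """tasks: list of (C_i, T_i) sorted by priority (rate-monotonic order, highest first)."""
        for i, (Ci, Ti) in enumerate(tasks):
            # Candidate points: multiples of higher-priority periods up to T_i, plus T_i itself.
            points = {Ti}
            for (_, Tj) in tasks[:i]:
                points.update(k * Tj for k in range(1, int(Ti // Tj) + 1))
            ok = any(
                Ci + sum(ceil(t / Tj) * Cj for (Cj, Tj) in tasks[:i]) <= t
                for t in sorted(points)
            )
            if not ok:
                return False
        return True

    print(feasible([(1, 4), (2, 6), (3, 12)]))   # True: demand fits at a scheduling point
    print(feasible([(2, 4), (3, 6), (3, 12)]))   # False: total utilization exceeds 1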

[141] H. M. Levy and P. H. Lipman. Virtual memory management in the VAX/VMS operating system. IEEE Computer, 15(3):35–41, March 1982.
[142] C. Liu and J. Layland. Scheduling algorithms for multiprogramming in a hard real-time environment. Journal of the ACM, 20:46–61, 1973.
[143] J. W. S. Liu, K. J. Lin, W. K. Shih, A. C. S. Yu, J. Y. Chung, and W. Zhao. Algorithms for scheduling imprecise computations. IEEE Computer, 24(5):58–68, May 1991. The imprecise computation technique, which prevents timing faults and achieves graceful degradation by giving the user an approximate result of acceptable quality whenever the system cannot produce the exact result in time, is considered. Different approaches for scheduling imprecise computations in hard real-time environments are discussed. Workload models that quantify the tradeoff between result quality and computation time are reviewed. Scheduling algorithms that exploit this tradeoff are described. These include algorithms for scheduling to minimize total error, scheduling periodic jobs, and scheduling parallelizable tasks. A queuing-theoretical formulation of the imprecise scheduling problem is presented.
[144] J. W. S. Liu, K. J. Lin, W. K. Shih, A. C. S. Yu, J. Y. Chung, and W. Zhao. Algorithms for scheduling imprecise computations. In André M. van Tilborg and Gary M. Koob, editors, Foundations of Real-Time Computing: Scheduling and Resource Management, chapter 8, pages 203–249. Kluwer Academic Publishers, Norwell, MA, 1991.
[145] Jane W. S. Liu. Real-Time Systems. Prentice-Hall, Upper Saddle River, New Jersey 07458, 2000. A recent overview of real-time system algorithms and techniques.
[146] Jane W. S. Liu, Wei-Kuan Shih, Kwei-Jay Lin, Riccardo Bettati, and Jen-Yao Chung. Imprecise computations. Proceedings of the IEEE, 82(1):83–94, January 1994. The imprecise computation technique has been proposed as a way to handle transient overload and to enhance fault tolerance of real-time systems. In a system based on this technique, each time-critical task is designed in such a way that it can produce a usable, approximate result in time whenever a failure or overload prevents it from producing the desired, precise result. This paper describes ways to implement imprecise computations, models to characterize them and algorithms for scheduling them. An imprecise mechanism for the generation and use of approximate results can be integrated in a natural way with a traditional fault-tolerance mechanism. An architectural framework for this integration is described.
[147] Douglas C. Locke. Software architecture for hard real-time applications: Cyclic executives vs. fixed priority executives. The Journal of Real-Time Systems, 4(1):37–53, March 1992.

September 4, 2008 DRAFT

REFERENCES

REFERENCES

A practical comparison between cyclic and FPS executives, in favor of FPS. [148] Silvano Martello, Gilbert Laporte, Michel Minoux, and Celso Ribeiro, editors. Surveys in Combinatorial Optimization, volume 132 of Annals of Discrete Mathematics. Elsevier Science Publishers B. V., North-Holland–Amsterdam, July 8–19 1987. A collection of discrete optimization survey papers. [149] Anmol Mathur, Ali Dasdan, and Rajesh K. Gupta. Rate analysis for embedded systems. ACM Trans. Des. Autom. Electron. Syst., 3(3):408–436, 1998. [150] Charlie McElhone. Adapting and evaluating algorithms for dynamic schedulability testing. Technical Report YCS–94–225, Department of Computer Science, University of York, UK, 1994. This TR explains the famous dynamic version of the schedulability tests by Audsley and Davis. The aperiodic tasks are assumed to have deadlines dk ≤ dij for some active periodic job τij, and thus the algorithms are trying to locate slack at different priority levels. There is no mention of the case where Dak > Di, ∀i. Two algorithms are proposed: an exact pseudopolynomial (PP) one and an approximate O(n^2) one. Several heuristic variations on the PP are proposed in order to make its average case run faster. [151] Charlie McElhone, A. Burns, and R. Davis. Hybrid algorithms for dynamic schedulability testing. In Proc. 7th Euromicro Workshop on Real-Time Systems, pages 254–261, 14–16 June 1995. The proceedings version of [150]. Two basic algorithms: (1) a pessimistic/approximate one with T(n) = O(n^2), and (2) an exact, “optimal”, pseudopolynomial algorithm. A few hybrids are proposed to trade off computation cost vs. schedulability decision quality. [152] J. M. Moore. An n–job, one machine sequencing algorithm for minimizing the number of late jobs. Management Science, 15:102–109, 1968. An EDD-based sequencing algorithm that reduces the maximum completion time Cj over the pending jobs j; it minimizes the total number of late jobs. [153] Robert Morris and Ken Thompson. Password security: A case history. Communications of the ACM, 22(11), November 1979. [154] F. Mueller, V. Rustagi, and T. P. Baker. MiThOS – A micro-kernel threads operating system. Technical Report 94–091, Comp. Science Department, Florida State University, Tallahassee, FL, 1994.
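As a concrete companion to [152], the following Python sketch implements the classical Moore–Hodgson procedure as it is usually stated: jobs are taken in EDD order and, whenever the running completion time overshoots a deadline, the longest job accepted so far is moved to the late set. The job data and names are illustrative, not taken from the paper.

def moore_hodgson(jobs):
    # jobs: list of (p, d) pairs, processing time and deadline (illustrative).
    # Returns the on-time sequence and the late jobs; the number of late jobs
    # is minimized for the single-machine problem.
    on_time, late, t = [], [], 0
    for p, d in sorted(jobs, key=lambda job: job[1]):  # EDD order
        on_time.append((p, d))
        t += p
        if t > d:  # a deadline is missed: drop the longest accepted job
            longest = max(on_time, key=lambda job: job[0])
            on_time.remove(longest)
            late.append(longest)
            t -= longest[0]
    return on_time, late

print(moore_hodgson([(2, 3), (4, 5), (1, 6), (3, 7)]))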

[155] P. Muhlethaler and K. Chen. Two classes of effective heuristics for time value functions based scheduling. In Proc. of the Third Workshop on Future Trends of Distributed Computing Systems, pages 354–361, 14–16 April 1992. In a real-time computing system each task is associated with a function (time value function, TVF) whose value at time t gives the award that the system receives if the corresponding task is achieved by this time. The authors investigate the NP-hard scheduling problem which consists in maximizing the sum of these awards. First, from a theoretical point of view, they define an optimal decomposition: the set of the tasks to be scheduled is divided into a ranked collection of subsets so that an optimal scheduling has to respect the ranks of subsets. They also introduce two classes of polynomial scheduling algorithms which provide sequences respecting this optimal decomposition. From a practical point of view, simulation results show that these algorithms yield optimal or nearly optimal sequences in many cases. [156] Guido Nerjes, Peter Muth, M. Paterakis, P. Triantafillou, and G. Weikum. Scheduling strategies for mixed workloads in multimedia information servers. In Proc. 8th Int’l Workshop on Research Issues in Data Engineering: Continuous-Media Databases and Applications, pages 121–128, 23–24 February 1998. In contrast to pure video servers, advanced applications such as digital libraries or teleteaching exhibit a mixed workload with massive access to conventional, “discrete” data such as text documents, images and indexes as well as requests for “continuous” data. In addition to the service quality guarantees for continuous data requests, quality-conscious applications require that the response time of the discrete data requests stay below some user-tolerance threshold. We study the impact of different disk scheduling policies on the service quality for both continuous and discrete data. We identify a number of critical issues, present a framework for describing the various policies in terms of few parameters and finally provide experimental results, based on a detailed simulation testbed, that compare different scheduling policies. [157] Guido Nerjes, Peter Muth, and Gerhard Weikum. Stochastic performance guarantees for mixed workloads in a multimedia information system. In Proc. 7th Int’l Workshop on Research Issues in Data Engineering, pages 131–140, 7–8 April 1997. We present an approach to stochastic performance guarantees for multimedia servers with mixed workloads. Advanced multimedia applications such as digital libraries or teleteaching exhibit a mixed workload with accesses to both “continuous” and conventional, “discrete” data, where the fractions of continuous data and discrete data requests vary over time. We assume that a server shares all disks among continuous and discrete data, and we develop a
stochastic performance model for the resulting mixed workload, using a combination of analytic and simulation based modeling. Based on this model we devise a round based scheduling scheme with stochastic performance guarantees: for continuous data requests, we bound the probability that “glitches” occur and for discrete data requests, we bound the probability that the response time exceeds a certain tolerance threshold. We present early results of simulation studies. [158] J. Nieh, J. G. Hanko, J. D. Northcutt, and G. A. Wall. SVR4 unix scheduler: Unacceptable for multimedia applications. In Proc. of the 4th Int’l Workshop on Network and Operating Systems Support for Digital Audio and Video, pages 35–68, Lancaster, U.K., November 1993. [159] J. Nieh and M. Lam. Integrated processor scheduling for multimedia. In Proc. of the 5th Int’l Workshop on Network and Operating System Support for Digital Audio and Video, 1995. [160] J. Nieh and M. Lam. The design of SMART: a scheduler for multimedia applications. Technical Report CSL-TR-96-697, Computer Systems Laboratory, Stanford University, June 1996. [161] J. Nieh and M. Lam. The design, implementation and evaluation of SMART: a scheduler for multimedia applications. In Proc. of the 16th ACM Symp. on Operating Systems Principles (SOSP-97), pages 184–197, St. Malo, France, October 1997. [162] J. Nieh and M. Lam. SMART unix SVR4 support for multimedia applications. In Proc. of the IEEE Int’l Conference on Multimedia Computing and Systems, pages 404–414, Ottawa, Canada, 1997. [163] Michael Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice-Hall, Englewood Cliffs, NJ, 1995. A good reference textbook for deterministic scheduling and a decent introduction to stochastic scheduling. [164] M. L. Powell, S. R. Kleiman, S. Barton, D. Shah, D. Stein, and M. Weeks. SunOS multi-thread architecture. In Proc. of the Winter 1991 USENIX Conf., pages 65–80, Dallas, TX, January 21–25 1991. [165] Ragunathan Rajkumar, Lui Sha, and John P. Lehoczky. Real-time synchronization protocols for multiprocessors. In Proc. 9th IEEE Real Time Systems Symp., pages 259–269, December 1988.
The problem of uncontrolled priority inversion becomes sharper in MP systems. Semaphores, monitors or Ada rendezvous mechanisms cannot be directly applied. They study priority inheritance protocols for MP systems. The MP priority ceiling protocol guarantees bounded blocking and deadlock prevention. [166] K. Ramamritham, J. Stankovic, and W. Zhao. Distributed scheduling of tasks with deadlines and resource requirements. IEEE Trans. on Computers, 38(8):1110–1123, August 1989. A set of four heuristic algorithms is presented to schedule tasks that have deadlines and resource requirements in a distributed system. When a task arrives at a node, the local scheduler at that node attempts to guarantee that the task will complete execution on that node before its deadline. If the attempt fails, the scheduling components on individual nodes cooperate to determine which other node in the system has sufficient resource surplus to guarantee the task. Simulation studies are performed to compare the performance of these algorithms with respect to each other as well as to two baselines. The first baseline is the noncooperative algorithm where a task that cannot be guaranteed locally is not sent to any other node. The second is an (ideal) algorithm that behaves exactly like the bidding algorithm but incurs no communication overheads. The simulation studies examine how communication delay, task laxity, load differences on the nodes, and task computation times affect the performance of the algorithms. The results show that distributed scheduling is effective even in a hard real-time environment and that the relative performance of these algorithms is a function of the system state. [167] Sandra Ramos-Thuel. Enhancing Fault Tolerance of Real-Time Systems through Time Redundancy. PhD thesis, Dep’t of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, May 1993. [168] Sandra Ramos-Thuel and John P. Lehoczky. On-line scheduling of hard deadline aperiodic tasks in fixed-priority preemptive systems. In Proc. 14th IEEE Real Time Systems Symp., pages 160–171, Raleigh-Durham, NC, December 1993. [169] Sandra Ramos-Thuel and John P. Lehoczky. Algorithms for scheduling hard aperiodic tasks in fixed-priority systems using slack stealing. In Proc. 15th IEEE Real Time Systems Symp., pages 22–33, San Juan, Puerto Rico, December 1994. A version of Thuel’s slack-stealing algorithms for hard aperiodic tasks. It is not certain if it can truly guarantee the firm aperiodic tasks. [170] Ola Redell and Martin Törngren. Calculating exact worst case response times for static priority scheduled tasks with offsets and jitter. In Proc. 8th IEEE Real-Time
and Embedded Technology and Applications Symposium, pages 164–172, San Jose, CA, 25–27 September 2002. A method to perform exact worst case response time analysis for fixed priority tasks with offsets and release jitter is described. Available methods are either pessimistic or inefficient as they incorporate numerous time consuming schedule simulations in order to find the exact response times of tasks. The method presented here is based on the creation of the worst case conditions for the execution of each individual task instance within a hyperperiod. The worst case is built by choosing release jitter of higher priority tasks appropriately. Given these conditions, the corresponding task instance response time is calculated using partly iterative algorithms. Experiments comparing the efficiency of the proposed method to an alternative exact method based on schedule simulation show that the new method outperforms the latter. The analysis is expected to be particularly useful when analysing response times and schedulability of tasks that form transactions in distributed systems. [171] Ismail Ripoll, Alfons Crespo, and Ana García-Fornes. An optimal algorithm for scheduling soft aperiodic tasks in dynamic-priority preemptive systems. IEEE Trans. on Software Engineering, 23(6):388–400, June 1997. Soft aperiodic task scheduling under EDF for periodic task dispatching. FIFO service for aperiodic tasks. Approximates WnL(t) by the H(t) curve. Requires heavy adjustments for slack maintenance when the idle periods (IP) of the approximate S^EL schedule are used by periodics. Time T(n) = O(Ca · (Σ_{i=1}^{n} Ci)/(1 − Uτ)); Space S(n) = O(N). Any other service besides FIFO for aperiodics will impose heavier overhead. [172] Ismail Ripoll, Alfons Crespo, and Aloysius K. Mok. Improvement in feasibility testing for real-time tasks. Real-Time Systems, 11(1):19–39, July 1996. Provides several upper bounds on the maximum length of a synchronous Busy Period for feasibility testing of periodic tasks under an optimal, preemptive scheduling method. [173] Ismail Ripoll, Ana García-Fornes, and Alfons Crespo. Optimal aperiodic scheduling for dynamic-priority preemptive systems. Proc. Third Int’l Workshop on Real-Time Computing Systems and Applications, pages 294–300, 1996. Soft aperiodic task scheduling under EDF for periodic task dispatching. FIFO service for aperiodic tasks. Approximates WnL(t) by the H(t) curve. Requires heavy adjustments for slack maintenance when the idle periods (IP) of the approximate S^EL schedule are used by periodics. Time T(n) = O(Ca · (Σ_{i=1}^{n} Ci)/(1 − Uτ)); Space S(n) = O(N). Any other service besides FIFO for aperiodics will impose heavier overhead.
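The quantity bounded in [172] is the length of the initial (synchronous) busy period; under the usual assumptions it is the smallest fixed point of L = Σ_{i=1}^{n} ⌈L/Ti⌉ Ci. A minimal Python sketch of this iteration (task data illustrative, total utilization assumed below one):

from math import ceil

def synchronous_busy_period(tasks):
    # tasks: list of (C, T) pairs; assumes sum(C/T) < 1 so the iteration converges.
    L = sum(C for C, T in tasks)  # demand of the first releases
    while True:
        nxt = sum(ceil(L / T) * C for C, T in tasks)
        if nxt == L:
            return L
        L = nxt

print(synchronous_busy_period([(1, 4), (2, 6), (3, 10)]))  # -> 10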

[174] Dennis M. Ritchie and Ken Thompson. The UNIX time-sharing system. Communications of the ACM, 17(7), July 1974. [175] Sven Gestegård Robertz, Dan Henriksson, and Anton Cervin. Memory-aware feedback scheduling of control tasks. In Proceedings of the 11th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA06), Prague, Czech Republic, September 2006. [176] Manas Saksena, James da Silva, and Ashok Agrawala. Design and implementation of Maruti-II. In Sang H. Son, editor, Advances in Real-Time Systems, chapter 4, pages 73–102. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1995. [177] Karsten Schwan and Hongyi Zhou. Dynamic scheduling of hard real-time tasks and real-time threads. IEEE Trans. on Software Engineering, 18(8):736–748, August 1992. The time complexity of the acceptance test is T(n) = O(M + N), where M is the total number of BPs (continuous, preempted fragments of jobs) and N the total number of scheduled jobs in a hyper-period H. They maintain a (job fragment) slot list generated off-line and subject periodic and aperiodic tasks to the admission test. They also maintain an EL list in deadline order for the next eligible job and a balanced tree for searching the slot list (ordered by start times). They use a Static algorithm to manipulate the BP list when a new task is added, with T(n) = O(N^2) + O(N log N). A Dynamic algorithm is used to schedule dynamically incoming tasks; it applies the static algorithm in the interval [rak, dak). Time T(n) = O(N) + O(N log N), where N is the number of tasks in the EL. With precedence-constrained tasks, the admission test takes T(n, m) = O((m + N) log(m + N)), with m tasks in the group. Disadvantage: high run-time overhead from the admission tests. [178] L. Sha, R. Rajkumar, and S. Sathaye. Generalized rate-monotonic scheduling theory: A framework for developing real-time systems. Proceedings of the IEEE, 82(1):68–82, January 1994. [179] L. Sha, R. Rajkumar, and J. P. Lehoczky. Priority inheritance protocols: An approach to real-time synchronizations. IEEE Trans. on Computers, 39(9):1175–1185, 1990. [180] L. Sha, R. Rajkumar, S. H. Son, and C. H. Chang. A real-time locking protocol. IEEE Trans. on Computers, 40(7):793–800, 1991. [181] Lui Sha, Tarek Abdelzaher, Karl-Erik Årzén, Anton Cervin, Theodore Baker, Alan Burns, Giorgio Buttazzo, Marco Caccamo, John Lehoczky, and Aloysius K. Mok. Real-time scheduling theory: A historical perspective. Real-Time Systems, 28(2–3):101–155, November 2004. A good overview of real-time systems concerns.
[182] Lui Sha, Mark H. Klein, and John B. Goodenough. Rate monotonic analysis for real-time systems. In André M. van Tilborg and Gary M. Koob, editors, Foundations of Real-Time Computing: Scheduling and Resource Management, chapter 5, pages 129–155. Kluwer Academic Publishers, Norwell, MA, 1991. [183] Lui Sha, Mark H. Klein, and John B. Goodenough. Rate monotonic analysis for real-time systems. Technical Report CMU/SEI–91–TR–6, Carnegie Mellon University, Software Engineering Institute, Pittsburgh, PA, 1991. [184] Wei-Kuan Shih and Jane W.-S. Liu. On-line scheduling of imprecise computations to minimize error. In Proc. 13th IEEE Real Time Systems Symp., pages 280–289, Phoenix, AZ, 2–4 December 1992. Three algorithms for scheduling preemptive, imprecise tasks on a processor to minimize the total error are described. Each imprecise task consists of a mandatory task followed by an optional task. Some of the tasks are online; they arrive after the processor begins execution. The algorithms assume that when each new online task arrives, its mandatory task and the portions of all the mandatory tasks yet to be completed at the time can be feasibly scheduled to be computed by their deadlines. The algorithms produce for such tasks feasible schedules whose total errors are as small as possible. The three algorithms are designed for three types of task systems: (1) when every task is online and is ready upon its arrival; (2) when every task is online and is ready upon arrival but there are also offline tasks with arbitrary ready times; and (3) when online tasks have arbitrary ready times. Their running times are O(n log n), O(n log n), and O(n log^2 n), respectively. [185] Wei-Kuan Shih and Jane W. S. Liu. Algorithms for scheduling imprecise computations with timing constraints to minimize maximum error. IEEE Trans. on Computers, 44(3):466–471, March 1995. [186] Wei-Kuan Shih, Jane W. S. Liu, and Jen-Yao Chung. Fast algorithms for scheduling imprecise computations. In Proc. 10th IEEE Real Time Systems Symp., pages 12–19, Phoenix, AZ, 12–19 December 1989. Consideration is given to the problem of scheduling tasks each of which is logically decomposed into a mandatory subtask and an optional subtask. The mandatory subtask must be executed to completion. If the available processor time is insufficient, the optional subtask can be left incomplete. The error in the result of a task is equal to the processing time of the unfinished portion of the optional subtask. A description is given of a preemptive algorithm for scheduling n dependent tasks with rational ready times, deadlines, and processing times on a uniprocessor system. This algorithm determines
whether feasible schedules that meet the timing constraints of all tasks exist; when feasible schedules exist, it finds one that has the minimum total error. The complexity of this algorithm is O(n log n). A schedule is said to satisfy the 0/1 constraint when every optional subtask is either completed or discarded. The problem of finding an optimal feasible schedule that satisfies the 0/1 constraint and minimizes the number of discarded optional subtasks is NP-complete. Two algorithms are presented for finding optimal schedules of dependent tasks on a uniprocessor system for the special case when all optional subtasks have identical processing times. [187] Kang G. Shin and Yi-Chieh Chang. A reservation-based algorithm for scheduling both periodic and aperiodic real-time tasks. IEEE Trans. on Computers, 44(12):1405–1419, December 1995. This paper considers the problem of scheduling both periodic and aperiodic tasks in real-time systems. A new algorithm, called reservation-based (RB), is proposed for ordering the execution of real-time tasks. This algorithm can guarantee all periodic-task deadlines while minimizing the probability of missing aperiodic-task deadlines. Periodic tasks are scheduled according to the rate monotonic priority algorithm (RMPA), and aperiodic tasks are scheduled by utilizing the processor time left unused by periodic tasks in each unit cycle. The length, u, of a unit cycle is defined as the greatest common divisor of all task periods, and a task is assumed to be invoked at the beginning of a unit cycle. For a set S of periodic tasks, the RB algorithm reserves a fraction Rs of processor time in each unit cycle for executing aperiodic tasks without missing any periodic-task deadline. The probability of meeting aperiodic-task deadlines is proved to be a monotonic increasing function of Rs. We derive the value of Rs that maximizes the processor time reservable for the execution of aperiodic tasks without missing any periodic-task deadline. We also show that if the ratio of the computation time to the deadline of each aperiodic task is bounded by Rs, the RB algorithm can meet the deadlines of all periodic and aperiodic tasks. Our analysis and simulation results show that the RB algorithm outperforms all other scheduling algorithms in meeting aperiodic-task deadlines. [188] Kang G. Shin, Dilip D. Kandlur, Daniel L. Kiskis, Paul S. Dodd, Harold A. Rosenberg, and Atri Indiresan. A software overview of HARTS: A distributed real-time system. In Sang H. Son, editor, Advances in Real-Time Systems, chapter 1, pages 3–22. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1995. [189] K. G. Shin and Xianzhong Cui. Computing time delay and its effects on real-time control systems. IEEE Transactions on Control Systems Technology, 3(2):218–224, June 1995.
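For the RB scheme of [187], the unit cycle u is the greatest common divisor of the periods and Rs is the fraction of each unit cycle reserved for aperiodic work; the annotation above also quotes the sufficient admission condition Ca/Da ≤ Rs. The small Python fragment below only illustrates these two computations, with made-up numbers and a hypothetical helper name.

from math import gcd
from functools import reduce

def rb_unit_cycle_and_admit(periods, Rs, Ca, Da):
    # u: unit cycle = gcd of all task periods; Rs * u: reserved time per cycle.
    u = reduce(gcd, periods)
    admissible = (Ca / Da) <= Rs  # condition as stated in the annotation of [187]
    return u, Rs * u, admissible

print(rb_unit_cycle_and_admit([20, 30, 50], 0.25, 2, 10))  # -> (10, 2.5, True)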

[190] Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne. Applied Operating System Concepts. John Wiley and Sons, first edition, 2000. ISBN 0-471-36508-4; URL: http://jws-edcv.wiley.com/college/tlp/0,9842,ECSC-CXC-CXXCC-CXB0C 0471365084 BKS,00.html.

[191] Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. John Wiley and Sons, sixth edition, 2002. ISBN 0-471-41743-2. [192] Silicon Graphics. IRIX 6.2 technical report. Technical report, Silicon Graphics Inc., Mountain View, CA, 1995. [193] Silicon Graphics. REACT real-time programmer’s guide. Silicon Graphics Inc., Mountain View, CA, 1996. [194] Silicon Graphics. Topics in IRIX programming. Silicon Graphics Inc., Mountain View, CA, 1996. [195] Silicon Graphics. REACT in IRIX 6.4 technical report. Technical report, Silicon Graphics Inc., Mountain View, CA, 1997. [196] Maryline Silly. The EDL server for scheduling periodic and soft aperiodic tasks with resource constraints. Real-Time Systems, 17(1):87–111, July 1999. The EDL algorithm keeps track of the IP and BP in a hyper-period H. It goes hunting for IP in the EDL schedule, in order to provide shorter response times to the aperiodic tasks. Time T(n) = O(n · max_i Di / min_i Ti) for a single aperiodic. Space S(n) = O(N), N := the total number of job releases within a hyper-period H. It updates the IP variables as the schedule actually proceeds in the non-idling fashion. [197] Maryline Silly, Houssine Chetto, and Nadia Elyounsi. An optimal algorithm for guaranteeing sporadic tasks in hard real-time systems. In Second IEEE Symposium on Parallel and Distributed Processing, pages 578–585, 9–13 December 1990. Scheduling of hard deadline periodic and sporadic tasks on the uniprocessor model. Hard aperiodic and periodic tasks are guaranteed. On-line schedulability test based on EDD. Storage complexity S(n) = O(N), and admission test time T(n) = O(N + m(τ)), m(τ) := number of pending aperiodics. Synchronous task model, Di = Ti for all i. EDS and EDL schedules, described by the vectors E = (e0, . . . , ep), D = (d0, . . . , dp) (EDS) and D* = (d0*, . . . , dp*) (EDL). Basically, uses EDL while aperiodics are active. [198] W. E. Smith. Various optimizers for single stage production. Naval Research Logistics Quarterly, 3:59–66, 1956.
The first proof that the Weighted Shortest-Processing Time (WSPT) rule is optimal for the 1 || Σ wj Cj scheduling problem. [199] Sang H. Son, editor. Advances in Real-Time Systems. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1995. A collection of papers on real-time task scheduling and communication scheduling. [200] B. Sprunt, J. Lehoczky, and L. Sha. Exploiting unused periodic time for aperiodic service using the extended priority-exchange algorithm. In Proc. 9th IEEE Real Time Systems Symp., pages 251–258, December 1988. Unused periodic time is given to aperiodic tasks at their initial priority levels. Aperiodic response times are improved. [201] B. Sprunt and L. Sha. Scheduling sporadic and aperiodic events in a hard real-time system. Technical Report CMU/SEI-89-TR-11 ADA211344, Software Engineering Institute (Carnegie Mellon University), Pittsburgh, PA, 1989. [202] B. Sprunt and L. Sha. Implementing sporadic servers in Ada. Technical Report CMU/SEI-90-TR-6 ADA26723, Software Engineering Institute (Carnegie Mellon University), Pittsburgh, PA, 1990. [203] B. Sprunt, L. Sha, and J. Lehoczky. Aperiodic task scheduling for hard real-time systems. The Journal of Real-Time Systems, 1:27–60, 1989. The Sporadic Server algorithm provides improved response time compared to the poll server and the background execution methods for aperiodic tasks. It can guarantee a hard sporadic task τs only if Tss ≤ Ts. When Ds < Ts we can consider the interference to periodic tasks τj, j ∈ hp(ss), as their blocking times Bj and carry out analysis much like the priority inversion blocking time. The SS replenishes server time only after some portion has been consumed. The next replenishment time is Ri = si + Ti, if the SS executes at priority level i, where si is the time the SS started serving the sporadic. The replenishment amount is equal to the amount consumed. While the SS is idle (no sporadic tasks) no replenishment takes place. The schedulability analysis is based on the blocking time Bi that a periodic task τi, i < s, can experience when the SS task τs is executing, and Bi = Cs. For τi, i ≥ s, we can use the regular schedulability conditions as if τs were a regular periodic task. [204] Marco Spuri. Analysis of deadline scheduled real-time systems. Rapport de Recherche 2772, INRIA, The French National Institute for Research in Computer Science and Control, Cedex, France, January 1996.
A relatively thorough overview of schedulability results for non-idling, dynamic priority, preemptive scheduling. Schedulability analysis based on the synchronous busy period. It covers sporadically-periodic tasks, shared resources, arbitrary deadlines, release jitter and accounts for the overhead of tick-schedulers. The analysis computes the maximum response time of each task, on an EDD schedule. Soft aperiodics are included by means of the Total BW Server. [205] Marco Spuri. Holistic analysis of deadline scheduled real-time distributed systems. Rapport de Recherche 2873, INRIA, The French National Institute for Research in Computer Science and Control, Cedex, France, April 1996. An overview of schedulability results for non-idling, dynamic priority, preemptive scheduling. Schedulability analysis based on the synchronous busy period. Distributed tasks communicate over some medium that uses the Timed-Token MAC protocol. Local message buffers are ordered EDD. They set the message jitter equal to the response time of its transmitter, and they add the max communication delay to find the jitter of the receiver. [206] Marco Spuri and Giorgio C. Buttazzo. Efficient aperiodic service under earliest deadline scheduling. In Proc. 15th IEEE Real-Time Systems Symp., pages 2–11, 1994. They present the EDD version of five aperiodic service algorithms, previously proposed under FPS scheduling. (1) DPE: Dynamic Priority Exchange. (2) DSS: Dynamic Sporadic Server. (3) TBS: Total Bandwidth Server; when Jk arrives set dk = max{dk−1, rk} + Ck/Utbs, thus allocating the total BW of the server to Jk. (4) EDL Algorithm: by Chetto et al. in [55]. Set Uedl := 1 − Uτ, but no mention of how to guarantee the periodics. (5) IPE: Improved Priority Exchange. Pre-compute the idle times ej under EDL, and schedule server replenishments equal to the IPj at those times (with a good design this seems promising). The storage complexity may become high, especially when the schedule is sparsely populated. There are open problems in the schedule switching EDS↔EDL, and in how the periodic tasks receive service individually. [207] Marco Spuri, Giorgio C. Buttazzo, and Fabrizio Sensini. Robust aperiodic scheduling under dynamic priority systems. In Proc. 16th IEEE Real-Time Systems Symp., pages 210–219, December 1995. EDD based scheduling. Hard periodic and firm aperiodic. They expand TBS to be able (1) to guarantee a firm aperiodic Jk, and (2) to reclaim unused time from aperiodics, by setting d′k = max{rk, d̄k−1, fk−1} + Ck/Utbs, and at Jk's completion setting d̄k = r̄k + C̄k/Utbs. Under overload conditions some Jk have to be rejected.
They introduce RED [46] to verify the schedulability of Jk and decide if a least-value task subset should be rejected in order to guarantee Jk. In overload situations, RED+TB+reclaim offers the highest total system value, compared to plain TB and guaranteed TB. [208] Marco Spuri and John A. Stankovic. How to integrate precedence constraints and shared resources in real-time scheduling. IEEE Trans. on Computers, 43(12):1407–1412, December 1994. Provides schedulability conditions for tasks with precedence constraints (DAG) and shared resources (priority ceiling protocol, PCP, and stack resource policy, SRP). [209] J. A. Stankovic and K. Ramamritham. The Spring Kernel: a new paradigm for real-time systems. IEEE Software, 8(3):62–72, May 1991. A real-time operating system kernel, called the Spring kernel, that provides some of the basic support required for large, complex, next-generation real-time systems, especially in meeting timing constraints, is presented. The approach meets the need to build predictable yet flexible real-time systems. Most current real-time operating systems contain the same basic paradigms found in time-sharing operating systems and often use a basic priority-scheduling mechanism that provides no direct support for meeting timing constraints. Spring uses two criteria to classify tasks’ interaction with and effects on the environment: importance and timing requirements. Implementation experience with Spring is described. [210] John Stankovic. Strategic directions in real-time and embedded systems. ACM Computing Surveys: Special Issue Proceedings of the ACM Workshop on Strategic Directions in Computing Research, 28(4):751–763, December 1996. [211] John A. Stankovic. A serious problem for next-generation systems. IEEE Computer, 22(10):10–19, October 1988. [212] John A. Stankovic and K. Ramamritham. What is predictability for real-time systems? Real-Time Systems, 2(4):247–254, 1990. Discusses several RT design issues at a higher, survey-style level. [213] John A. Stankovic and K. Ramamritham. A reflective architecture for real-time operating systems. In Sang H. Son, editor, Advances in Real-Time Systems, chapter 2, pages 23–38. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1995. [214] John A. Stankovic, Marco Spuri, Krithi Ramamritham, and Giorgio C. Buttazzo. Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms. Series in Engineering and Computer Science. Kluwer Academic Publishers, Boston, MA, 1998.
A rather comprehensive and concise introduction to EDD-based scheduling results for hard periodic tasks, and a discussion of the “holistic” distributed-systems approach. Guaranteed aperiodic tasks and hard real-time communications are glaringly missing. The dynamic versions of PES, SS, EDL, and TBS are discussed. [215] D. Stein and D. Shah. Implementing lightweight threads. In Proc. of the Summer 1992 USENIX Conf., pages 1–10, San Antonio, TX, 1992. [216] W. Richard Stevens. Advanced Programming in the UNIX Environment. Addison-Wesley, May 1992. ISBN 0-201-56317-7. A classic reference for system-level programming for Unix and POSIX.1. [217] I. Stoica, H. Abdel-Wahab, K. Jeffay, S. Baruah, J. Gehrke, and C. G. Plaxton. A proportional share resource allocation algorithm for real-time, time-shared systems. In Proc. of the 17th IEEE Real Time Systems Symp., pages 288–299, Washington, DC, December 4–6 1996. [218] J. K. Strosnider, J. P. Lehoczky, and L. Sha. The deferrable server algorithm for enhanced aperiodic responsiveness in hard real-time environments. IEEE Trans. on Computers, 44(1):73–91, January 1995. Most existing scheduling algorithms for hard real-time systems apply either to periodic tasks or aperiodic tasks but not to both. In practice, real-time systems require an integrated, consistent approach to scheduling that is able to simultaneously meet the timing requirements of hard deadline periodic tasks, hard deadline aperiodic (alert-class) tasks, and soft deadline aperiodic tasks. This paper introduces the Deferrable Server (DS) algorithm which will be shown to provide improved aperiodic response time performance over traditional background and polling approaches. Taking advantage of the fact that, typically, there is no benefit in early completion of the periodic tasks, the Deferrable Server (DS) algorithm assigns higher priority to the aperiodic tasks up until the point where the periodic tasks would start to miss their deadlines. Guaranteed alert-class aperiodic service and greatly reduced response times for soft deadline aperiodic tasks are important features of the DS algorithm, and both are obtained with the hard deadlines of the periodic tasks still being guaranteed. The results of a simulation study performed to evaluate the response time performance of the new algorithm against traditional background and polling approaches are presented. In all cases, the response times of aperiodic tasks are significantly reduced (often by an order of magnitude) while still maintaining guaranteed periodic task deadlines.
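To make the Deferrable Server behaviour summarized in [218] concrete, the sketch below keeps a budget that aperiodic work consumes at the server's priority and that is restored to its full capacity at every server period. All names and parameters are illustrative, the periodic tasks themselves are not simulated, and this is only a bookkeeping sketch, not the authors' implementation.

class DeferrableServer:
    # Minimal budget bookkeeping for a Deferrable Server (illustrative sketch).
    def __init__(self, capacity, period):
        self.capacity = capacity      # Cs: budget restored every period
        self.period = period          # Ts
        self.budget = capacity
        self.next_refill = period

    def tick(self, now, aperiodic_pending):
        # Advance one time unit; return True if aperiodic work is served.
        if now >= self.next_refill:   # full replenishment at each period boundary
            self.budget = self.capacity
            self.next_refill += self.period
        if aperiodic_pending and self.budget > 0:
            self.budget -= 1          # budget is preserved (deferred) while idle
            return True
        return False

ds = DeferrableServer(capacity=2, period=5)
print([ds.tick(t, aperiodic_pending=(t in (3, 4, 6))) for t in range(10)])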

[219] SunSoft. SunOS Reference Manual, Section 2. Sun Microsystems Inc., Mountain View, CA, 1994. [220] SunSoft. Systems Services Guide. Sun Microsystems Inc., Mountain View, CA, 1994. [221] M. Thomadakis and J. C. S. Liu. Development of portable middleware services in POSIX environments. In Proc. of IEEE Workshop on Middleware for Distributed Real-Time Systems and Services, pages 146–153, held in conjunction with the 18th IEEE RTSS, December 1997. [222] M. Thomadakis and J. C. S. Liu. Development of portable middleware services in POSIX environments. Technical Report CS-TR-97-012, Computer Science Department, Texas A&M University, College Station, TX, November 25 1997. [223] Michael E. Thomadakis. Algorithms and Techniques for Joint Task Scheduling in Open Computing Systems. PhD thesis, Texas A&M University, College Station, TX 77843-3112, December 1999. Emerging and future open real-time systems are faced with the Joint-Task Scheduling Problem (JTSP), where tasks with mixed timing constraints and workload parameters must be guaranteed timely access to system resources. This dissertation investigates the JTSP focusing on preemptive, priority-driven scheduling methods. We decompose the JTSP into a number of subproblems, all of which require the guaranteed scheduling of critical periodic tasks along with different types of aperiodic tasks. First, we address the firm aperiodic scheduling, in which the scheduler must guarantee on-line the timely execution of firm tasks. Second, we address the soft aperiodic scheduling, where the scheduler must guarantee to service soft tasks at the earliest possible time. Both problems are challenging, with present state-of-the-art methods requiring pseudo-polynomial complexity to provide these guarantees to each single aperiodic task. In this research we develop linear-time algorithms for these two problems, capitalizing on our Extreme-Schedule (ExS) framework and Workload-Matrix (WM) methods. The ExS characterizes quantitatively the two unique, fixed-priority schedules of a periodic task set, the Fixed-Priority First (FPF) and the Latest-Deadline Last (LDL). FPF and LDL are non-idling and idling schedules, respectively, which capture the dynamic temporal relocation limits of the periodic tasks. We prove that switching between them, at the appropriate time, enables the optimal allocation of service to non-periodic tasks, without violating deadlines. The WM method quantifies the idle processor capacity precisely within arbitrary schedule intervals, in time Θ(n). It is the first published method to accomplish this in linear time, with previous methods requiring pseudo-polynomial time.
For interactive (IA) tasks we propose a two-level capacity reservation to guarantee acceptable response times to fine-grained tasks and minimum progress of the overall IA workload. For each task type we provide several scheduling algorithms optimizing their service with different criteria. Finally, we discuss the details of a scheduler for our algorithms that could be implemented efficiently in the OS kernel. Experiments show that the WM computation and the proposed algorithms incur minimal actual execution time overhead. [224] Michael E. Thomadakis and J-C. S. Liu. On the efficient scheduling of non-periodic tasks in hard real-time systems. In Proc. 20th IEEE Real Time Systems Symp., Phoenix, AZ, December 1–3 1999. The proceedings version of the paper that derives the first linear-time guarantee algorithm for hard aperiodic tasks in real-time systems scheduled with fixed-priority schedulers. [225] Michael E. Thomadakis and J-C. S. Liu. On the efficient scheduling of non-periodic tasks in hard real-time systems. Technical Report TR-99-012, Dept. of Computer Science, Texas A&M University, College Station, TX 77843-3112, May 10 1999. The full version of the [224] paper. [226] Michael E. Thomadakis and J-C. S. Liu. Linear time on-line feasibility testing algorithms for fixed-priority, hard real-time systems. Technical Report TR-2000-006, Dept. of Computer Science, Texas A&M University, College Station, TX 77843-3112, January 21 2000. The feasibility-testing algorithms based on the workload-matrix method, subsequent to the [224] paper. [227] T. S. Tia, W.-S. Liu, and M. Shankar. Aperiodic request scheduling in fixed–priority preemptive systems. Technical Report UIUCDCS-R-94-1859, Comp. Science Dep’t, Univ. of Illinois, Urbana-Champaign, July 1994. The full version of [228]. The slack stealing methods by Thuel and Davis are greedy in that, when there is an amount of slack available, it is given immediately to the pending aperiodic, which is scheduled at the highest priority. Tia’s method differs from Thuel’s in that it varies the priorities of the soft tasks according to the priority of the next periodic job τij that can be delayed by ∆ij. It is similar to Davis’s method in this respect (What’s the difference? The effective deadlines? They are mentioned by Davis already?). Again, this method requires a number of tedious updates to keep the slack variables current and incurs high overheads for storage (static SS) or computation (dynamic SS). Tia proposes two service methods for aperiodics: the
local and global. The local minimizes the response time of the HOL aperiodic in a FIFO queue. Apparently, it is continuously looking for the next slack pocket to allocate to the first aperiodic in the queue, ignoring the others. The global method uses SRPT to service the aperiodics. The performance results claim that the difference between these two methods is marginal (5%) and that the local behaves exactly as Thuel’s SS, outperforming it only occasionally. The global method underperforms due to its heavier overhead and the marginal performance improvement over the local one. It is suggested not to use the global one. What were really the new things we learned from this paper? There is no discussion of the time and space complexity of the algorithms and the algorithms themselves are not presented in any detail. Storage complexity is S(n) = O(2N) for the (Sij(tH), ∆ij) points. [228] T. S. Tia, W.-S. Liu, and M. Shankar. Algorithms and optimality of scheduling soft aperiodic requests in fixed-priority preemptive systems. The Journal of Real-Time Systems, 10(1):23–43, January 1996. See notes on [227]. [229] T. S. Tia, W.-S. Liu, J. Sun, and R. Ha. A linear–time optimal acceptance test for scheduling of hard real-time tasks. Never published, 1994. Proposes an O(n + m) admission-control test for EDD-scheduled hard periodic and aperiodic tasks. It is not clear how slack variables are dynamically maintained while slack is consumed by aperiodic tasks, or when periodic ones execute at their non-idling schedule. The space complexity is prohibitive, S(n) = O(N^2 + N), which makes it unrealistic, especially for asynchronous task sets. It requires slack maintenance, costing O(m), after the admission of each aperiodic task. Unpublished. [230] Too-Seng Tia. Utilizing Slack Time for Aperiodic and Sporadic Requests Scheduling in Real-Time Systems. PhD thesis, University of Illinois at Urbana-Champaign, Urbana, IL, April 1995. Available as Technical Report UIUCDCS-R-95-1906. [231] K. Tindell, A. Burns, and A. J. Wellings. An extendible approach for analyzing fixed priority hard real-time tasks. Real-Time Systems, 6(2):131–151, 1994. Fixed-priority schedulability analysis of hard periodic task sets. Analysis is based on the maximum response time of a task, and includes (1) arbitrary deadlines (Di can be larger or smaller than Ti), (2) maximum response time calculation, (3) release jitter, (4) sporadically periodic tasks, (5) overhead from tick-scheduling. It is based on the critical instant of the schedule. They employ the iterative version of the method by Joseph and Pandya to compute the maximum response times. For Di > Ti, they verify the schedulability of task τi by examining a level-i
BP, with q pending jobs of τi. R∗ := max_{q=0,1,...}{wi(q) − qTi}, where wi(q) is the length of the level-i BP with q active τi jobs. During a BPi there can be at most one priority inversion by any lower priority task. Jitter time is really blocking time and it is included as such in the BP calculations. In tick scheduling, a periodic clock interrupt executes a scheduler which checks for sporadic task arrivals. A sporadic may suffer jitter equal to at most Tclk. At each clock tick, at most the entire set of pending (forthcoming) sporadic tasks can be moved to the run-queue. [232] K. Tindell, A. Burns, and A. J. Wellings. Analysis of hard real-time communications. Real-Time Systems, 9(2):147–171, September 1995. Fixed-priority analysis for communications between hard tasks. MAC protocols considered: (1) the simple timed-token protocol and (2) an RT priority broadcast bus. [233] K. W. Tindell, A. Burns, and A. J. Wellings. Mode changes in priority pre-emptively scheduled systems. In Proc. 13th IEEE Real Time Systems Symp., pages 100–109, Raleigh-Durham, NC, December 2–4 1992. It is noted that in many hard real-time systems, the set of functions that a system is required to provide may change over time. One way of providing this change is to allow currently running hard real-time tasks to be deleted or changed, or new tasks to be added. The authors define this change as a mode change, and seek to guarantee a priori the timing constraints of all tasks across the change from one mode to another. The authors derive a scheduling theory for static priority preemptive scheduling that can be used to make such guarantees. The schedulability test discussed could easily be incorporated into engineering support tools. The authors also discuss some of the approaches that could be taken to extend the analysis to cope with more complex and interesting scheduling problems, and to handle distributed hard real-time systems. [234] U. Vahalia. UNIX Internals: The New Frontiers. Prentice-Hall, Upper Saddle River, NJ, 1996. [235] André M. van Tilborg and Gary M. Koob, editors. Foundations of Real-Time Computing: Scheduling and Resource Management. Kluwer Academic Publishers, Norwell, MA, 1991. [236] C. A. Waldspurger. Lottery and Stride Scheduling: Flexible Proportional-Share Resource Management. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, September 1995. Also appears as Technical Report MIT/LCS/TR-667.
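Returning to [231]: with arbitrary deadlines, the worst-case response time is obtained from the level-i busy period by computing wi(q) = (q + 1)Ci + Σ_{j∈hp(i)} ⌈wi(q)/Tj⌉ Cj for q = 0, 1, . . . until wi(q) ≤ (q + 1)Ti, and taking Ri = max_q {wi(q) − qTi}. The Python sketch below omits blocking and jitter, and the task data are illustrative.

from math import ceil

def response_time_arbitrary_deadline(Ci, Ti, hp):
    # Ci, Ti: execution time and period of the task under analysis.
    # hp: list of (C, T) pairs of higher-priority tasks.
    # Assumes total utilization below one; blocking and jitter omitted.
    R, q = 0, 0
    while True:
        w = (q + 1) * Ci
        while True:  # inner fixed point for w_i(q)
            nxt = (q + 1) * Ci + sum(ceil(w / Tj) * Cj for Cj, Tj in hp)
            if nxt == w:
                break
            w = nxt
        R = max(R, w - q * Ti)        # response time of the (q+1)-th instance
        if w <= (q + 1) * Ti:         # the level-i busy period has closed
            return R
        q += 1

print(response_time_arbitrary_deadline(Ci=4, Ti=7, hp=[(2, 5)]))  # -> 8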

[237] Peter Wegner and Jon Doyle, editors. ACM Workshop on Strategic Directions in Computing Research, Laboratory for Computer Science, MIT, 14–15 June 1996. [238] Qin Zheng and Kang G. Shin. Mixed time-constrained and non-time-constrained communications in local area networks. IEEE Trans. on Communications, 41(11):1668–1676, November 1993. Fixed-priority scheduling in Token-Ring and Token-Bus, with an attempt to provide better service to non-time-constrained tasks. [239] Qin Zheng and Kang G. Shin. On the ability of establishing real-time channels in point-to-point packet-switched networks. IEEE Trans. on Communications, 42(2/3/4):1096–1105, February/March/April 1994. EDD feasibility conditions for preemptive and non-preemptive scheduling. Uses the initial synchronous busy period. Provides bounds on the length of the initial BP that needs to be checked for schedulability of the periodic tasks, and proposes the natural subset of points to verify schedulability, namely the deadlines dij of the jobs. Well written, clear, succinct, and to the point.
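The style of test summarized in [239] checks the processor demand at the absolute deadlines inside the initial synchronous busy period: the synchronous set is EDF-feasible iff h(t) = Σ_i (⌊(t − Di)/Ti⌋ + 1) Ci ≤ t at every such deadline. The Python sketch below uses the busy-period length as the checking bound; the task data are illustrative and the code is a hedged rendering of this standard test, not the paper's own algorithm.

from math import ceil, floor

def edf_demand_test(tasks):
    # tasks: list of (C, T, D) tuples; assumes total utilization < 1.
    L = sum(C for C, T, D in tasks)              # initial busy-period fixed point
    while True:
        nxt = sum(ceil(L / T) * C for C, T, D in tasks)
        if nxt == L:
            break
        L = nxt
    points = sorted({D + k * T for C, T, D in tasks if D <= L
                     for k in range(floor((L - D) / T) + 1)})
    for t in points:                              # check h(t) <= t at each deadline
        h = sum((floor((t - D) / T) + 1) * C for C, T, D in tasks if t >= D)
        if h > t:
            return False
    return True

print(edf_demand_test([(1, 4, 3), (2, 6, 5), (2, 8, 8)]))  # -> True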

A Notation

Table 1: Brief Definition of Relevant Quantities

Notation     Definition
τ            Task system with n tasks τi(Ti, Ci, Di, ri)
τij          Job τij, the jth instance of task τi
Ja           Aperiodic job sequences
vi(t)        Total workload for task τi in [0, t]
Vi(t)        Aggregate released workload for tasks τk, k = 1, 2, . . . , i in [0, t]
S(t)         Schedule mapping S : IR −→ {0, 1, . . . , n}
si(t)        Indicator function for task τi in schedule S(t)
wi(t)        Total serviced workload for task τi in [0, t)
Wi(t)        Aggregate serviced workload for tasks τk, k = 1, 2, . . . , i in [0, t)
Zi(t)        Idle processor capacity in [0, t), after tasks τk, k = 1, 2, . . . , i have been serviced
sij, fij     Start and finish time for τij, with rij ≤ sij and fij ≤ dij
ni(t)        ⌊(t − ri)/Ti⌋ + 1, the current period number of τi at time t
mi(t)        (t − ri) (mod Ti), the “translated” remainder
Rij          Response time of τij, with Rij := fij − rij
[x]+         max{x, 0}