Pfair Scheduling of Fixed and Migrating Periodic Tasks on Multiple Resources

Mark Moir, Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, [email protected]

Srikanth Ramamurthy, Transarc Corporation, 707 Grant Street, Pittsburgh, PA 15219, [email protected]
Abstract

This paper concerns the problem of scheduling sets of preemptable, periodic tasks on multiple resources. We consider a task model that allows arbitrary mixes of fixed and migratable tasks, and prove the existence of an optimal pfair scheduler in this model. Fixed tasks must always be scheduled on a given resource, while migratable tasks can be scheduled on different resources at different times. A pfair scheduler produces a periodic schedule in which the times at which each task is allocated a processor are approximately evenly spread throughout its period. This work extends the work of Baruah et al., who proved a similar result for systems in which all tasks are migratable.
1 Introduction

In the past decade, a strong real-time scheduling theory for preemptive, priority-driven uniprocessor systems has been developed [2, 12, 13, 15]. However, the outlook is not as promising when periodic tasks must be scheduled on multiple processors. Two approaches have emerged for scheduling periodic tasks on multiple processors, namely partitioning and global scheduling [9]. In a partitioning scheme, each task is fixed to a particular processor. Finding an optimal assignment of tasks to processors is known to be NP-hard [14], so task distribution for partitioning schemes has typically been done manually, or by non-optimal heuristics. These heuristics typically use a bin-packing algorithm [8, 17] combined with uniprocessor bounds for admission control on each processor [2, 7, 11, 12]. Furthermore, because partitioning requires each task to be scheduled on the same resource each time, some task sets cannot be scheduled on a set of resources using this approach, even if their total utilization is less than the number of resources available.

(This work was supported in part by an NSF CAREER Award, CCR 9702767, and by an Oak Ridge Associated Universities Junior Faculty Development Award.)
With global scheduling, all task instances are stored in a single queue and processors repeatedly execute the highest priority instances in the queue. However, it has been shown that using this approach with common scheduling schemes such as global rate monotonic scheduling (RMS) and global earliest deadline first (EDF) can give arbitrarily low processor utilization [9]. Thus, neither partitioning nor global scheduling has yielded an optimal scheme for scheduling periodic tasks on multiple processors. Baruah, Cohen, Plaxton, and Varvel proved the existence of an optimal scheduler for scheduling periodic, migratable tasks on multiple processors [4]. In fact, they considered a stronger definition of a periodic schedule, namely pfair schedules. As described in detail below, a pfair scheduler produces schedules in which the time slots allocated to each task are "evenly spread" throughout that task's period. To achieve this, Baruah et al. showed how to specify a flow graph based on a task set such that if the flow graph has an integer solution, then a periodic schedule for the specified task set can be constructed from the solution. They then showed that a fractional flow for the graph must exist, which by results of Ford and Fulkerson [10] implies that an integer solution exists. Therefore, using the Ford-Fulkerson algorithm [10], a schedule can be found. (Baruah, Howell, and Rosier previously proved the existence of an optimal scheduling algorithm for periodic tasks on a single processor using similar techniques [6].) While this result demonstrates the existence of an optimal scheduler for periodic tasks, the implied solution takes time that is exponential in the number of tasks to compute a schedule (because the size of the constructed flow graph may be exponential in the number of tasks). This prompted Baruah et al. [4] to also present a pfair scheduling algorithm that makes each scheduling decision in polynomial time. (The advantage of such an algorithm is that it can schedule tasks online, without needing to do an exponential amount of computation before making the first scheduling decision.) They used their existence result to prove this algorithm correct. Later, Baruah, Gehrke, and Plaxton
presented a more efficient algorithm, with time complexity $O(\min(n, m \log n))$ per time slot, where $m$ is the number of processors and $n$ is the number of tasks [5]. The pfair scheduling algorithms presented in [4, 5] are based on the assumption that tasks (or parts thereof) can be executed on any processor and that tasks can be suspended on one processor and resumed on another at any time; in other words, tasks must be able to migrate between processors if necessary. However, a task model in which all tasks are able to migrate may be unrealistic, because some tasks may need to run on a particular processor. Suppose, for example, that we wish to schedule tasks on the processors of a set of workstations in a LAN. If some task reads data from a device that is physically connected to one of the machines, then that task must execute in its entirety on that particular machine and cannot be migrated to another processor. We are therefore motivated to investigate the possibility of extending the results of Baruah et al. [4, 5] to accommodate "fixed" tasks. As described below, we study this problem in the more general context of scheduling periodic, preemptive tasks (a mix of fixed and migrating tasks) on multiple "resources".

We now describe in more detail the problem considered in this paper. We consider a set $T$ of $n$ periodic tasks and a set of $m$ resources labeled $0, \ldots, m-1$. Each task $x$ has an integer period $p_x > 0$ and an integer execution requirement $e_x \le p_x$. A periodic schedule schedules each task $x$ for exactly $e_x$ time units in the interval $[p_x k, p_x(k+1))$ for all $k \in \mathbb{N}$. Each time a task is scheduled, it is allocated some resource. For some $F \subseteq T$, each $x \in F$ is "fixed" to a specific resource $r$, which means that task $x$ can only be allocated resource $r$. We denote the weight of task $x$ by $w_x = e_x/p_x$.
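To make the task model concrete, the following minimal sketch (our own illustration; the class and field names are not from the paper) represents periodic tasks with integer periods, integer execution requirements, and an optional fixed resource. The example task set matches the one used later in the paper's Figure 1 example.

```python
from dataclasses import dataclass
from fractions import Fraction
from typing import Optional

@dataclass(frozen=True)
class Task:
    name: str
    period: int                      # p_x > 0
    execution: int                   # 0 < e_x <= p_x
    fixed_to: Optional[int] = None   # resource index for a fixed task, None if migratable

    @property
    def weight(self) -> Fraction:
        # w_x = e_x / p_x, kept exact to avoid floating-point drift
        return Fraction(self.execution, self.period)

# Example: two fixed tasks on resource 0 and two migratable tasks (m = 2 resources).
tasks = [
    Task("v", 2, 1, fixed_to=0),   # w_v = 1/2
    Task("x", 6, 1, fixed_to=0),   # w_x = 1/6
    Task("y", 2, 1),               # w_y = 1/2, migratable
    Task("z", 6, 5),               # w_z = 5/6, migratable
]
```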
We make the following assumptions.

Assumption 1: The total weight of all tasks in $T$ is $m$.

Assumption 2: For each resource $r$, the total of the weights of the tasks fixed to $r$ is at most 1.

Clearly, if either of these bounds is exceeded, then no schedule exists. Also, if the total weight of all tasks in $T$ is less than $m$, then we can create a set of "artificial" tasks of appropriate weights to bring the total weight up to $m$ without violating Assumption 2. Therefore, an algorithm that produces a periodic schedule for any task set that satisfies Assumptions 1 and 2 is optimal.

Following Baruah et al., we achieve a periodic schedule by using pfairness. A schedule is pfair iff for any time $t > 0$, each task $x$ has been scheduled either $\lfloor w_x t \rfloor$ or $\lceil w_x t \rceil$ times. (Observe that, for any $k \in \mathbb{N}$, if $t = p_x k$, then $w_x t$ is an integer, so $\lfloor w_x t \rfloor = \lceil w_x t \rceil = e_x k$. Thus, every pfair schedule is also a periodic schedule.)

Our approach is similar to that of Baruah et al. [4]. In particular, we show in two steps that a pfair schedule exists for any task set that satisfies our Assumptions 1 and 2. First, for a given task set, we define a network flow graph based on it. Next we show that, if a unitary flow of a certain size through that graph exists (in a unitary flow, each edge carries a flow of either 0 or 1), then a pfair schedule for the task set exists. We then show that a flow of the same size — but possibly with some fractional flow assignments — exists. This implies that a unitary flow exists [10], which in turn implies that a pfair schedule exists.

Allowing fixed tasks complicates the proof of our result in two ways. First, previous work has assumed that $w_x < 1$ for each task $x$. This is justified because if $w_x = 1$ for some task $x$ (i.e., task $x$ must be scheduled at all times), then one can simply dedicate one resource to that task, and consider a version of the problem with one fewer resource and task $x$ removed. However, if some tasks may be fixed to particular resources, then this approach may not work. For example, suppose we have three tasks $x_0$, $x_1$, and $x_2$, and two resources. Further suppose that tasks $x_0$ and $x_1$ both have weight $1/2$, and are fixed to resources 0 and 1 respectively, and that task $x_2$ has weight 1 (but is not fixed to any resource). This task set satisfies Assumptions 1 and 2, and must therefore be scheduled by our algorithm. However, there is clearly no schedule in which task $x_2$ is always allocated the same resource. We could eliminate a task $x$ with weight 1 from the task set, schedule the remaining tasks on the original set of resources, leaving one resource free in each slot (not necessarily the same resource each time), and then allocate the remaining resource to $x$. However, we thought it cleaner to make our proof more general by allowing tasks of weight 1.

Second, because tasks must be allocated to specific resources, the flow graph used in our existence proof (in Section 3) must be structured to ensure that (a) two tasks cannot be allocated the same resource at the same time, and (b) fixed tasks are always allocated the correct resource. Because previous results did not consider fixed tasks, they simply needed to ensure that at most $m$ tasks are scheduled at a time; they did not need to assign specific tasks to specific resources.

Having established that an optimal algorithm for pfair scheduling of fixed and migrating tasks exists, we next turn our attention to finding an algorithm that schedules tasks online, taking polynomial time to make the scheduling decisions for each time slot. We first discuss an approach based on "supertasks", in which the set of tasks fixed to each resource is combined into a single supertask. The idea is to use an algorithm that schedules only migrating task sets to schedule the migrating tasks and the supertasks, and to then schedule the subtasks of each supertask within the slots allocated to the supertask. Unfortunately, we show that such an approach does not work in general because in some cases it is impossible to schedule the subtasks in a pfair manner, even though the supertask has been scheduled in a pfair manner. We also show that this situation does indeed arise with all known pfair algorithms. Nonetheless, it is conceivable that a pfair scheduling algorithm can be found that guarantees that supertasks are scheduled in such a way that the subtasks can always be scheduled. We also present a very simple algorithm for scheduling subtasks within the slots allocated to the supertask, and show that it correctly schedules subtasks provided it is possible to do so. However, because we know of no pfair algorithm that can guarantee to schedule supertasks in such a way that their subtasks can be scheduled, we leave open the problem of online pfair scheduling of fixed and migrating periodic tasks.

The remainder of this paper is organized as follows. In Section 2, we formally define the problem, and introduce notation that is used in the rest of the paper. Then, in Section 3, we prove that a pfair schedule exists for any task set that satisfies Assumptions 1 and 2. In Section 4, we describe the supertask approach, and show that it does not work in general. Then, in Section 5, we present our algorithm for scheduling the subtasks of a supertask, and prove that it schedules all the subtasks in a pfair manner, provided it is possible to do so. Concluding remarks appear in Section 6. Throughout the paper, proofs have been abbreviated and summarized for the sake of space. Full, formal proofs of our results can be found in [16].
2 Preliminaries

In this section, we formally state the problem, and introduce definitions, notation, and properties that are used in the rest of the paper. We begin by defining an instance of the problem.

Definition 2.1: An instance is a tuple $(F, M, m, R)$, where $F$ is a set of fixed tasks, $M$ is a set of migrating tasks ($M$ is disjoint from $F$), $m$ is the number of resources, and $R$ is a function $R : F \rightarrow [0, m)$ such that for $x \in F$, $R(x) = r$ iff task $x$ is fixed to resource $r$. We use $T$ to denote $F \cup M$, and $n$ to denote $|T|$. Each task $x$ has an associated integer period $p_x$ and an integer execution requirement $e_x \in (0, p_x]$. We define the weight of task $x$ as $w_x = e_x/p_x$.

We formalize the assumptions from Section 1 below. We first introduce some definitions: $FU(r)$ is the total utilization of the tasks fixed to resource $r$, $MU$ is the total utilization of all migrating tasks, and $MU(r)$ is resource $r$'s fraction of the total utilization available over all resources for migrating tasks.

Definition 2.2: For each $r \in [0, m)$, $FU(r) = \sum_{\{x \mid x \in F \wedge R(x) = r\}} w_x$.

Definition 2.3: $MU = \sum_{x \in M} w_x$.

Definition 2.4: For each $r \in [0, m)$,
$$MU(r) = \begin{cases} 0 & \text{if } MU = 0 \\ \frac{1 - FU(r)}{MU} & \text{if } MU \neq 0. \end{cases}$$

Given the above definitions, the assumptions stated in Section 1 can be formalized as follows.

Assumption 1: $\sum_{x \in T} w_x = m$.

Assumption 2: For each $r \in [0, m)$, $0 \le FU(r) \le 1$.
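As a sanity check on a concrete task set, the following sketch (our own illustration, not code from the paper) computes $FU(r)$, $MU$, and $MU(r)$ and tests Assumptions 1 and 2 with exact rational arithmetic.

```python
from fractions import Fraction

# Each task: (name, execution e_x, period p_x, fixed resource or None if migratable).
tasks = [("v", 1, 2, 0), ("x", 1, 6, 0), ("y", 1, 2, None), ("z", 5, 6, None)]
m = 2  # number of resources

weight = {name: Fraction(e, p) for name, e, p, _ in tasks}

# FU(r): total weight of the tasks fixed to resource r (Definition 2.2).
FU = {r: sum(weight[name] for name, _, _, fixed in tasks if fixed == r)
      for r in range(m)}
# MU: total weight of the migrating tasks (Definition 2.3).
MU = sum(weight[name] for name, _, _, fixed in tasks if fixed is None)

assert sum(weight.values()) == m, "Assumption 1 violated"
assert all(0 <= FU[r] <= 1 for r in range(m)), "Assumption 2 violated"

# MU(r): resource r's share of the capacity left over for migrating tasks (Definition 2.4).
MUr = {r: (Fraction(0) if MU == 0 else (1 - FU[r]) / MU) for r in range(m)}
print(FU, MU, MUr)
```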
Throughout the rest of the paper, we assume an arbitrary instance $(F, M, m, R)$ that satisfies Assumptions 1 and 2. Below we formally define a schedule. The intuition behind this definition is as follows. All scheduling decisions are made at integral values of time, starting at zero. For $i \in \mathbb{N}$, we refer to the period of time between time $i$ and $i+1$ as slot $i$. For a schedule $S$, $S(x, i, r)$ is 1 if task $x$ is scheduled on resource $r$ during slot $i$, and 0 otherwise. Note that the definition of a schedule requires that at most one task is scheduled on each resource at a time, each task is scheduled on at most one resource at a time, and fixed tasks are scheduled only on the resources to which they are fixed.

Definition 2.5: A schedule $S$ is a function $S : T \times \mathbb{N} \times [0, m) \rightarrow \{0, 1\}$, where
$$(\forall x, y \in T,\; i \in \mathbb{N},\; r, s \in [0, m) ::\; (x \neq y \wedge S(x,i,r) = 1 \Rightarrow S(y,i,r) \neq 1) \;\wedge\; (r \neq s \wedge S(x,i,r) = 1 \Rightarrow S(x,i,s) \neq 1) \;\wedge\; (x \in F \wedge S(x,i,r) = 1 \Rightarrow R(x) = r)).$$

For convenience, we define the following notation. Informally, $S(x, i)$ is true iff task $x$ is scheduled on some resource in slot $i$ by $S$.

Definition 2.6: For any schedule $S$,
$$S(x, i) = \begin{cases} 1 & \text{if } (\exists r \in [0, m) :: S(x, i, r) = 1) \\ 0 & \text{otherwise.} \end{cases}$$

Given this notation, a schedule $S$ is periodic iff, for all $x \in T$ and $k \in \mathbb{N}$, $\sum_{i = p_x k}^{p_x(k+1) - 1} S(x, i) = e_x$. That is, task $x$ must be scheduled in exactly $e_x$ slots out of every $p_x$ slots.
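The three conjuncts of Definition 2.5 and the periodicity condition are easy to check mechanically for a finite schedule prefix. The sketch below is our own illustration; representing a schedule as a set of (task, slot, resource) triples with $S(x, i, r) = 1$ is an assumption, not the paper's notation.

```python
from math import lcm

# name -> (e_x, p_x, fixed resource or None)
tasks = {"v": (1, 2, 0), "x": (1, 6, 0), "y": (1, 2, None), "z": (5, 6, None)}
L = lcm(*(p for _, p, _ in tasks.values()))

def is_schedule(triples):
    """Check the three conjuncts of Definition 2.5."""
    for (x, i, r) in triples:
        for (y, j, s) in triples:
            if (x, i, r) == (y, j, s):
                continue
            if i == j and r == s and x != y:
                return False          # two tasks on one resource in the same slot
            if x == y and i == j and r != s:
                return False          # one task on two resources in the same slot
        fixed = tasks[x][2]
        if fixed is not None and r != fixed:
            return False              # fixed task on the wrong resource
    return True

def is_periodic_prefix(triples):
    """Each task x is scheduled exactly e_x times in each of its periods within [0, L)."""
    for x, (e, p, _) in tasks.items():
        for k in range(L // p):
            count = sum(1 for (y, i, _) in triples if y == x and p * k <= i < p * (k + 1))
            if count != e:
                return False
    return True

print(is_schedule({("v", 0, 0), ("y", 0, 1)}))   # True for this two-entry prefix
```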
As discussed earlier, we prove the existence of a periodic schedule by considering a stronger fairness property, namely pfairness, which we define below. The definition of pfairness is based on the concept of a task's lag. Because the goal of pfairness is to spread each task's execution evenly over its period, the ideal is for tasks to progress at a perfectly uniform rate, so that at time $t$, each task $x$ would have executed for $w_x t$ time units. However, because tasks are scheduled in slots, it is only possible to approximate this behaviour. The lag of a task $x$ at time $t$ in a schedule $S$ denotes the magnitude of $x$'s deviation from the uniform rate at time $t$, as follows.

Definition 2.7: The lag of a task $x$ at time $t$ in a schedule $S$ is defined by
$$\mathit{lag}(S, x, t) = w_x t - \sum_{i \in [0, t)} S(x, i).$$

Pfairness requires that the absolute value of the lag of each task is always less than one. This is captured by the following definition. Informally, $\mathit{pf}(S, t)$ is true iff schedule $S$ is pfair at least until time $t$. Formally,

Definition 2.8: $\mathit{pf}(S, t) \equiv (\forall x, t' : t' \in [0, t] : -1 < \mathit{lag}(S, x, t') < 1)$.

We say that a schedule $S$ is pfair at time $t$ iff $\mathit{pf}(S, t)$ holds. We use $\mathit{pf}(S, \infty)$ as shorthand for $(\forall t \in \mathbb{N} :: \mathit{pf}(S, t))$, and we say that a schedule $S$ is pfair iff $\mathit{pf}(S, \infty)$ holds.

It is easy to see that these definitions imply that in any pfair schedule, at time $t$, task $x$ has been scheduled either $\lfloor w_x t \rfloor$ or $\lceil w_x t \rceil$ times. It is also easy to see that any pfair schedule is periodic. In the following section, we prove that a pfair schedule exists for any instance that satisfies Assumptions 1 and 2.

In the rest of the paper, we denote the least common multiple of $\{p_x \mid x \in T\}$ by $L$. Also, unless otherwise specified, $S$ is universally quantified over all schedules, $x$ and $y$ are universally quantified over $T$, $r$ and $s$ are universally quantified over $[0, m)$, $i$, $j$, and $j'$ are universally quantified over $\mathbb{Z}$, and $t$ is universally quantified over $\mathbb{N}$.
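As an illustration of Definitions 2.7 and 2.8 (ours, not the paper's), the lag and the predicate $\mathit{pf}$ can be computed directly for a finite schedule prefix; exact rational arithmetic keeps the strict bounds $-1 < \mathit{lag} < 1$ meaningful.

```python
from fractions import Fraction

def lag(weight, scheduled_slots, t):
    """lag(S, x, t) = w_x * t - number of slots in [0, t) in which x is scheduled."""
    return weight * t - sum(1 for i in scheduled_slots if i < t)

def pf_prefix(weights, schedule, horizon):
    """pf(S, horizon): -1 < lag(S, x, t') < 1 for every task x and every t' in [0, horizon]."""
    return all(-1 < lag(weights[x], schedule.get(x, set()), t) < 1
               for x in weights for t in range(horizon + 1))

# Example: a task of weight 1/2 scheduled in slots {0, 2, 4} is pfair up to time 6.
weights = {"a": Fraction(1, 2)}
schedule = {"a": {0, 2, 4}}
print(pf_prefix(weights, schedule, 6))   # True
```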
3 Existence of a Pfair Schedule

In this section, we prove the existence of a pfair schedule for any instance that satisfies Assumptions 1 and 2. We first define a digraph $G$ and prove that if $G$ has a unitary flow of size $mL$, then the instance has a pfair schedule. (More accurately, we use the unitary flow to prove the existence of a sub-schedule from time 0 until time $L$. We also prove that the lag of each task is zero at time $L$ in this sub-schedule, so it can be repeated infinitely to obtain a complete schedule.) We later prove that such a unitary flow exists. The definition of the digraph depends on the following definitions. Intuitively, $\mathit{earliest}(x, j)$ ($\mathit{latest}(x, j)$) is the earliest (latest) slot in which task $x$ can be scheduled for the $(j+1)$-st time without violating pfairness. (For example, $\mathit{earliest}(x, 1)$ is the first slot in which task $x$ may be scheduled for the second time.)

Definition 3.1: $\mathit{earliest}(x, j) = \lfloor j / w_x \rfloor$.

Definition 3.2: $\mathit{latest}(x, j) = \lceil (j + 1) / w_x \rceil - 1$.

Definition of $G$. We define the digraph $G = (V, E)$ as follows. $V = V_0 \cup V_1 \cup V_2 \cup V_3 \cup V_4 \cup V_5$ and $E = E_0 \cup E_1 \cup E_2 \cup E_3 \cup E_4$, where:

$V_0 = \{\mathit{source}\}$
$V_1 = \{\langle 1, x, j \rangle \mid x \in T,\; j \in [0, w_x L)\}$
$V_2 = \{\langle 2, x, i \rangle \mid x \in T,\; i \in [0, L)\}$
$V_3 = \{\langle 3, x, i \rangle \mid x \in T,\; i \in [0, L)\}$
$V_4 = \{\langle 4, r, i \rangle \mid r \in [0, m),\; i \in [0, L)\}$
$V_5 = \{\mathit{sink}\}$

$E_0 = \{(\mathit{source}, \langle 1, x, j \rangle) \mid x \in T,\; j \in [0, w_x L)\}$
$E_1 = \{(\langle 1, x, j \rangle, \langle 2, x, i \rangle) \mid x \in T,\; j \in [0, w_x L),\; i \in [\mathit{earliest}(x, j), \mathit{latest}(x, j)]\}$
$E_2 = \{(\langle 2, x, i \rangle, \langle 3, x, i \rangle) \mid x \in T,\; i \in [0, L)\}$
$E_3 = \{(\langle 3, x, i \rangle, \langle 4, r, i \rangle) \mid x \in F,\; i \in [0, L),\; r = R(x)\} \cup \{(\langle 3, x, i \rangle, \langle 4, r, i \rangle) \mid x \in M,\; i \in [0, L),\; r \in [0, m)\}$
$E_4 = \{(\langle 4, r, i \rangle, \mathit{sink}) \mid r \in [0, m),\; i \in [0, L)\}$

Below we show the graph for an example task set, and describe the intuition behind the structure of the graph. First, however, we present several definitions regarding flow graphs and flow functions.

Definition 3.3: For a digraph $G = (V, E)$, a flow function for $G$ is a function $f : \{(v, w) \mid v, w \in V\} \rightarrow \mathbb{R}$ such that for all $v, w \in V$, if $(v, w) \notin E$, then $f((v, w)) = 0$, and if $(v, w) \in E$, then $0 \le f((v, w)) \le 1$.

Definition 3.4: For a digraph $G = (V, E)$, a unitary flow function for $G$ is a flow function $f$ for $G$ such that, for each $e \in E$, $f(e) \in \{0, 1\}$.

Definition 3.5: Given a flow function $f$ for a digraph $G = (V, E)$, for $v \in V$, $\mathit{out}(f, v) = \sum_{\{w \mid (v, w) \in E\}} f((v, w))$.

Definition 3.6: Given a flow function $f$ for a digraph $G = (V, E)$, for $v \in V$, $\mathit{in}(f, v) = \sum_{\{w \mid (w, v) \in E\}} f((w, v))$.

Definition 3.7: Given a digraph $G = (V, E)$ with two distinguished nodes labeled $\mathit{source}$ and $\mathit{sink}$, a (unitary) flow function $f$ for $G$ is a (unitary) flow of size $H$ in $G$ iff $\mathit{in}(f, \mathit{source}) = 0$, $\mathit{out}(f, \mathit{source}) = H$, $\mathit{in}(f, \mathit{sink}) = H$, $\mathit{out}(f, \mathit{sink}) = 0$, and for each $v \in V - \{\mathit{source}, \mathit{sink}\}$, $\mathit{in}(f, v) = \mathit{out}(f, v)$. We say $G = (V, E)$ has a (unitary) flow of size $H$ iff there exists a (unitary) flow $f$ of size $H$ for $G$.

Example: Figure 1 shows the graph $G$ for a task set consisting of four tasks $v$, $x$, $y$, and $z$, where $w_v = 1/2$, $w_x = 1/6$, $w_y = 1/2$, and $w_z = 5/6$. In the example, we have two resources (0 and 1) and tasks $v$ and $x$ are both fixed to resource 0, while tasks $y$ and $z$ are migratable. As discussed later, the bold lines depict an integer flow of size 12 ($m = 2$ and $L = 6$) through this graph, which represents a pfair schedule for this task set.
[Figure 1. Example flow graph for a task set consisting of four tasks $v$, $x$, $y$, and $z$, where $w_v = 1/2$, $w_x = 1/6$, $w_y = 1/2$, and $w_z = 5/6$. There are two resources (0 and 1); tasks $v$ and $x$ are both fixed to resource 0, while tasks $y$ and $z$ are migratable. The bold lines depict an integer flow of size 12 ($m = 2$ and $L = 6$) through this graph, which represents a pfair schedule for this task set.]
The intuition for the structure of the graph $G$ is as follows. $V_1$ contains one node for each time each task must be scheduled, and $V_2$ contains one node for each possible time each task could be scheduled (recall that we are constructing a schedule for $t \in [0, L)$, which can be repeated to produce a complete schedule). In any flow of size $mL$ in $G$, each node in $V_1$ gets a flow of 1 from $\mathit{source}$. (Each task $x$ has $w_x L$ nodes in $V_1$, and the sum over all $x$ of $w_x$ is $m$.) Thus, each node $\langle 1, x, j \rangle$ in $V_1$ will have an outgoing flow of 1 to exactly one node $\langle 2, x, i \rangle$ in $V_2$. This flow corresponds to task $x$ being scheduled for the $(j+1)$-th time (when $j = 0$, $x$ is scheduled for the first time). As described so far, the potential exists for a task to get scheduled twice at the same time (if, say, $\mathit{earliest}(x, j) = \mathit{latest}(x, j-1)$). By putting an edge from $\langle 2, x, i \rangle$ to $\langle 3, x, i \rangle$, we avoid this possibility because this edge can only carry a flow of 1, and all flow into $\langle 2, x, i \rangle$ must go onto this edge. $V_4$ contains one node for every time slot $i \in [0, L)$ for every resource. If task $x$ is scheduled at time $i$, then because there is a flow of size 1 into node $\langle 3, x, i \rangle$, there must also be a flow of size 1 into $\langle 4, r, i \rangle$ for some resource $r$; this corresponds to task $x$ being scheduled at time $i$ on resource $r$. Because fixed tasks have edges only to the resource to which they are fixed, any flow in this graph must respect the fixed tasks. Finally, because each node $\langle 4, r, i \rangle$ has an edge that can carry a flow of at most 1 to $\mathit{sink}$, we are assured that the same resource is not allotted to two tasks at the same time.

We now show how a schedule can be defined based on a unitary flow of size $mL$ through $G$, and then prove that the defined schedule is pfair. We later show that a unitary flow of size $mL$ exists.
Definition 3.8: Given a unitary flow $g$ of size $mL$ in $G$, we define $S_g$ as follows. For $x \in T$, $i \in \mathbb{N}$, $r \in [0, m)$,
$$S_g(x, i, r) = \begin{cases} 1 & \text{if } i \in [0, L) \wedge (\exists j \in [0, w_x L) ::\; g((\langle 1, x, j \rangle, \langle 2, x, i \rangle)) = 1 \;\wedge\; g((\langle 2, x, i \rangle, \langle 3, x, i \rangle)) = 1 \;\wedge\; g((\langle 3, x, i \rangle, \langle 4, r, i \rangle)) = 1) \\ 0 & \text{otherwise.} \end{cases}$$
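The construction in Definitions 3.1-3.8 can be exercised with an off-the-shelf max-flow routine. The sketch below is our own illustration and uses the networkx library, which the paper does not mention; it builds $G$ for the Figure 1 task set, computes a maximum flow (which is integral, hence unitary, because all capacities are 1), and extracts $S_g$ as in Definition 3.8.

```python
from fractions import Fraction
from math import lcm, floor, ceil
import networkx as nx

# name -> (e_x, p_x, fixed resource index or None for migratable tasks)
tasks = {"v": (1, 2, 0), "x": (1, 6, 0), "y": (1, 2, None), "z": (5, 6, None)}
m = 2
L = lcm(*(p for _, p, _ in tasks.values()))
w = {t: Fraction(e, p) for t, (e, p, _) in tasks.items()}

def earliest(x, j):                 # Definition 3.1
    return floor(Fraction(j) / w[x])

def latest(x, j):                   # Definition 3.2
    return ceil(Fraction(j + 1) / w[x]) - 1

G = nx.DiGraph()
for x, (e, p, fixed) in tasks.items():
    for j in range(int(w[x] * L)):                    # level-1 nodes: j-th scheduling of x
        G.add_edge("source", (1, x, j), capacity=1)
        for i in range(earliest(x, j), latest(x, j) + 1):
            G.add_edge((1, x, j), (2, x, i), capacity=1)
    for i in range(L):
        G.add_edge((2, x, i), (3, x, i), capacity=1)
        for r in ([fixed] if fixed is not None else range(m)):
            G.add_edge((3, x, i), (4, r, i), capacity=1)
for r in range(m):
    for i in range(L):
        G.add_edge((4, r, i), "sink", capacity=1)

size, flow = nx.maximum_flow(G, "source", "sink")
assert size == m * L

# Extract S_g per Definition 3.8: x runs on r in slot i iff edge (<3,x,i>, <4,r,i>) is saturated.
S = {(x, i, r): 1
     for x in tasks for i in range(L) for r in range(m)
     if flow.get((3, x, i), {}).get((4, r, i), 0) == 1}
for i in range(L):
    print(i, sorted((r, x) for (x, s, r) in S if s == i))
```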
In the remainder of this proof, we assume a unitary flow $g$ of size $mL$ through $G$. Observe that if $g((\langle 1, x, j \rangle, \langle 2, x, i \rangle)) = 1$, then $g((\langle 2, x, i \rangle, \langle 3, x, i \rangle)) = 1$ (because node $\langle 2, x, i \rangle$ must conserve flow and because all edge capacities are 1). Similarly, if $g((\langle 2, x, i \rangle, \langle 3, x, i \rangle)) = 1$, then for some $r$, $g((\langle 3, x, i \rangle, \langle 4, r, i \rangle)) = 1$. Thus, by the above definition, task $x$ is scheduled in slot $i$ iff $g((\langle 1, x, j \rangle, \langle 2, x, i \rangle)) = 1$ for some $j$. By Definition 2.5, the following three lemmas show that $S_g$ is a schedule.

Lemma 3.1: At most one task is scheduled on each resource at a time.

Proof: Suppose two tasks $x$ and $y$ are both scheduled on resource $r$ in slot $i$. Then, by Definition 3.8, $g((\langle 3, x, i \rangle, \langle 4, r, i \rangle)) = 1$ and $g((\langle 3, y, i \rangle, \langle 4, r, i \rangle)) = 1$. Thus, the flow into node $\langle 4, r, i \rangle$ is at least 2. Therefore, the flow out of this node is at least 2. However, this node has only one outgoing edge, which contradicts the assumption that $g$ is a unitary flow.

Lemma 3.2: Each task is scheduled on at most one resource at a time.

Proof: Suppose task $x$ is scheduled on both resources $r$ and $s$ in slot $i$. Then, by Definition 3.8, $g((\langle 3, x, i \rangle, \langle 4, r, i \rangle)) = 1$ and $g((\langle 3, x, i \rangle, \langle 4, s, i \rangle)) = 1$. Thus, the total flow out of node $\langle 3, x, i \rangle$ is at least 2. However, this node has only one incoming edge, which contradicts the assumption that $g$ is a unitary flow.
Lemma 3.3: Fixed tasks are scheduled only on the resources to which they are fixed.

Proof: Suppose a task $x$ is fixed to resource $r$, and is scheduled on resource $s \neq r$ in slot $i$. Then, by Definition 3.8, $g((\langle 3, x, i \rangle, \langle 4, s, i \rangle)) = 1$. However, by the definition of $G$, no such edge exists in $G$, so this cannot be true.
The previous three lemmas show that $S_g$ is a schedule. It remains to show that it is also a pfair schedule. As noted above, task $x$ is scheduled in slot $i$ iff $g((\langle 1, x, j \rangle, \langle 2, x, i \rangle)) = 1$ for some $j$. Thus, we need to show that, for each task $x$ and for each $t \in (0, L]$, the lag of task $x$ at time $t$ is greater than $-1$ and less than 1. Because $g$ is a unitary flow of size $mL$, and because there are $\sum_{x \in T} w_x L = mL$ nodes in level 1 of $G$ (i.e., nodes $\langle 1, \cdot, \cdot \rangle$), it follows that each node in level 1 has an inflow of exactly 1. Therefore, each node $\langle 1, x, j \rangle$ has a flow of 1 to exactly one node $\langle 2, x, i \rangle$, where $i \in [\mathit{earliest}(x, j), \mathit{latest}(x, j)]$. Furthermore, because each node in level 2 of $G$ has an outflow of at most 1, it is not possible for two different nodes in level 1 to have a flow of 1 to the same node in level 2. Therefore, to count the number of times $x$ is scheduled in slots 0 through $t - 1$, it suffices to count the number of nodes $\langle 1, x, j \rangle$ that have a flow to some node $\langle 2, x, i \rangle$ where $0 \le i < t$.
It is easy to show that, if $j \le \lfloor w_x(t-1) \rfloor - 1$, then $\mathit{latest}(x, j) \le t - 1$. Thus, for all such $j$, node $\langle 1, x, j \rangle$ has a flow of 1 to a node $\langle 2, x, i \rangle$ where $i \le t - 1$. Also, if $j \ge \lfloor w_x(t-1) \rfloor + 2$, then it is easy to show that $\mathit{earliest}(x, j) \ge t$. Thus, for all such $j$, node $\langle 1, x, j \rangle$ does not have a flow of 1 to a node $\langle 2, x, i \rangle$ where $i \le t - 1$. It remains to consider nodes $\langle 1, x, \lfloor w_x(t-1) \rfloor \rangle$ and $\langle 1, x, \lfloor w_x(t-1) \rfloor + 1 \rangle$. If exactly one of these nodes — call it $v$ — has a flow of 1 to a node $\langle 2, x, i \rangle$ where $i \le t - 1$, then the total number of times $x$ is scheduled before time $t$ is $\lfloor w_x(t-1) \rfloor + 1$ (one for each $j \le \lfloor w_x(t-1) \rfloor - 1$, plus node $v$). In this case, $\mathit{lag}(S_g, x, t) = w_x t - \lfloor w_x(t-1) \rfloor - 1$. Observe that $0 \le w_x(t-1) - \lfloor w_x(t-1) \rfloor < 1$. Therefore, $w_x - 1 \le \mathit{lag}(S_g, x, t) < w_x$. Thus, because $0 < w_x \le 1$, $-1 < \mathit{lag}(S_g, x, t) < 1$, as required. We must now consider the case in which both of the nodes $\langle 1, x, \lfloor w_x(t-1) \rfloor \rangle$ and $\langle 1, x, \lfloor w_x(t-1) \rfloor + 1 \rangle$ have a flow of 1 to a node $\langle 2, x, i \rangle$ with $i \le t - 1$, and the case in which neither of them does. If they both do, then $\mathit{earliest}(x, \lfloor w_x(t-1) \rfloor + 1) \le t - 1$, and if neither of them does, then $\mathit{latest}(x, \lfloor w_x(t-1) \rfloor) \ge t$. In the full paper, we use these facts to prove that $-1 < \mathit{lag}(S_g, x, t) < 1$ in each case. Thus, we have the following lemma.

Lemma 3.4: If $G$ has a unitary flow of size $mL$, then the instance has a pfair schedule.

We now define a flow function $f$ and show that $f$ is a flow of size $mL$ in $G$.

Definition of flow function $f$. For each edge $e \in E_0 \cup E_4$, $f(e) = 1$. For each edge $e \in E_2$, if $e = (\langle 2, x, i \rangle, \langle 3, x, i \rangle)$, then $f(e) = w_x$. For $x \in T$, $j \in [0, w_x L)$, $i \in [\mathit{earliest}(x, j), \mathit{latest}(x, j)]$,
$$f((\langle 1, x, j \rangle, \langle 2, x, i \rangle)) = \begin{cases} w_x - j + w_x \lfloor j / w_x \rfloor & \text{if } i = \mathit{earliest}(x, j) \wedge i = \mathit{latest}(x, j - 1) \\ j + 1 - w_x \lfloor (j + 1) / w_x \rfloor & \text{if } i = \mathit{latest}(x, j) \wedge i = \mathit{earliest}(x, j + 1) \\ w_x & \text{otherwise.} \end{cases}$$
For $i \in [0, L)$, $r \in [0, m)$,
$$f((\langle 3, x, i \rangle, \langle 4, r, i \rangle)) = \begin{cases} w_x \, MU(r) & \text{if } x \in M \\ w_x & \text{if } x \in F \wedge R(x) = r \\ 0 & \text{otherwise.} \end{cases}$$
For $v, w \in V$ such that $(v, w) \notin E$, $f((v, w)) = 0$.

Using Definitions 3.1 and 3.2 and the fact that $F$ and $M$ are disjoint, it is easy to show that $f$ is well defined. We now introduce two lemmas that are used to prove that $f$ is a flow of size $mL$ in $G$. The first lemma states that the total flow out of each node $\langle 1, x, j \rangle$ is 1, and the second states that the total flow into each node $\langle 2, x, i \rangle$ is $w_x$. The intuition behind these properties is as follows. From Definitions 3.1 and 3.2, it can be seen that $\mathit{latest}(x, j) - \mathit{earliest}(x, j)$ is roughly $1/w_x - 1$. Therefore, there are roughly $1/w_x$ outgoing edges from a node $\langle 1, x, j \rangle$. The flow function $f$ splits the incoming flow of 1 into such a node evenly amongst these $1/w_x$ edges, so each edge gets a flow of $w_x$. However, because there are not always exactly $1/w_x$ outgoing edges, and because in some cases $\mathit{earliest}(x, j) = \mathit{latest}(x, j - 1)$ and/or $\mathit{latest}(x, j) = \mathit{earliest}(x, j + 1)$, we need to make some adjustments at the boundaries in order to ensure that (a) the total outgoing flow from each node $\langle 1, x, j \rangle$ is 1, and (b) the total incoming flow to each node $\langle 2, x, i \rangle$ is $w_x$. In the full paper [16], we prove the following two lemmas. The proofs are not hard, but each one requires a detailed case analysis.

Lemma 3.5: For $j \in [0, w_x L)$, $\mathit{out}(f, \langle 1, x, j \rangle) = 1$.

Lemma 3.6: For $i \in [0, L)$, $\mathit{in}(f, \langle 2, x, i \rangle) = w_x$.
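Lemmas 3.5 and 3.6 can be spot-checked numerically. The sketch below is our own check, not part of the paper; it evaluates the flow function $f$ on the level-1/level-2 edges for one task and verifies the two conservation properties with exact arithmetic.

```python
from fractions import Fraction
from math import floor, ceil

w = Fraction(5, 6)   # weight of one task; with L = 6 it has w*L = 5 level-1 nodes
L = 6

def earliest(j):     # Definition 3.1
    return floor(Fraction(j) / w)

def latest(j):       # Definition 3.2
    return ceil(Fraction(j + 1) / w) - 1

def f(j, i):
    """Flow on edge (<1,x,j>, <2,x,i>) as in the definition of f."""
    if i == earliest(j) and i == latest(j - 1):
        return w - j + w * floor(Fraction(j) / w)
    if i == latest(j) and i == earliest(j + 1):
        return j + 1 - w * floor(Fraction(j + 1) / w)
    return w

# Lemma 3.5: the total flow out of each level-1 node <1,x,j> is 1.
for j in range(int(w * L)):
    assert sum(f(j, i) for i in range(earliest(j), latest(j) + 1)) == 1

# Lemma 3.6: the total flow into each level-2 node <2,x,i> is w_x.
for i in range(L):
    incoming = [f(j, i) for j in range(int(w * L)) if earliest(j) <= i <= latest(j)]
    assert sum(incoming) == w
```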
We now prove the final lemma needed to establish the existence of a pfair schedule.

Lemma 3.7: $f$ is a flow of size $mL$ in $G$.

Proof: It is easy to show that for every edge $e$, $0 \le f(e) \le 1$. Therefore, $f$ is a flow function for $G$. Thus, by Definition 3.7, the following properties — which are proved below — imply that the lemma holds.

P1a: $\mathit{in}(f, \mathit{source}) = 0$.
P1b: $\mathit{out}(f, \mathit{source}) = mL$.
P2a: $\mathit{in}(f, \mathit{sink}) = mL$.
P2b: $\mathit{out}(f, \mathit{sink}) = 0$.
P3a: For $v \in V_1$, $\mathit{in}(f, v) = 1$.
P3b: For $v \in V_1$, $\mathit{out}(f, v) = 1$.
P4a: For $v \in V_2$, if $v = \langle 2, x, i \rangle$, then $\mathit{in}(f, v) = w_x$.
P4b: For $v \in V_2$, if $v = \langle 2, x, i \rangle$, then $\mathit{out}(f, v) = w_x$.
P5a: For $v \in V_3$, if $v = \langle 3, x, i \rangle$, then $\mathit{in}(f, v) = w_x$.
P5b: For $v \in V_3$, if $v = \langle 3, x, i \rangle$, then $\mathit{out}(f, v) = w_x$.
P6a: For $v \in V_4$, $\mathit{in}(f, v) = 1$.
P6b: For $v \in V_4$, $\mathit{out}(f, v) = 1$.

P1a, P2a, P2b, P3a, P4b, P5a, and P6b follow directly from the definitions of $G$ and $f$. P3b and P4a follow directly from Lemmas 3.5 and 3.6, respectively. We prove the remaining properties below.

Proof of P1b: There are $L \sum_{x \in T} w_x$ edges out of $\mathit{source}$, and for each such edge $e$, $f(e) = 1$. Thus, by Assumption 1, $\mathit{out}(f, \mathit{source}) = mL$.

Proof of P5b: If $x \in F$, then for some $r \in [0, m)$, $f((\langle 3, x, i \rangle, \langle 4, r, i \rangle)) = w_x$ and for $s \neq r$, $f((\langle 3, x, i \rangle, \langle 4, s, i \rangle)) = 0$, so $\mathit{out}(f, v) = w_x$ as required. If $x \in M$, then because $0 < w_x \le 1$, Definition 2.3 implies that $MU \neq 0$. Thus, by Definition 2.4, $\mathit{out}(f, v) = w_x \sum_{r \in [0, m)} (1 - FU(r))/MU$. For $x \in F$, $R(x) \in [0, m)$. Thus, by Definition 2.2, $\sum_{r \in [0, m)} (1 - FU(r))/MU = (m - \sum_{x \in F} w_x)/MU$. Because $F$ and $M$ are disjoint and $T = F \cup M$, Assumption 1 implies $m - \sum_{x \in F} w_x = \sum_{x \in M} w_x$, which equals $MU$ (by Definition 2.3). Therefore, $\mathit{out}(f, v) = w_x$, as required.

Proof of P6a: Consider a node $\langle 4, r, i \rangle$. By Definition 2.2 and the definitions of $G$ and $f$, $\mathit{in}(f, \langle 4, r, i \rangle) = FU(r) + MU(r) \sum_{x \in M} w_x$. It is easy to show that if $MU = 0$, then $FU(r) = 1$. Thus, by Definition 2.4, $\mathit{in}(f, \langle 4, r, i \rangle) = 1$. If $MU \neq 0$, then by Definitions 2.3 and 2.4, $MU(r) \sum_{x \in M} w_x = 1 - FU(r)$, so $\mathit{in}(f, \langle 4, r, i \rangle) = 1$.

Lemma 3.7 implies the existence of a flow of size $mL$ in $G$. This implies the existence of a unitary flow of size $mL$ in $G$ [10]. Together with Lemma 3.4, this yields the following theorem.

Theorem 3.1: A pfair schedule exists for any instance that satisfies Assumptions 1 and 2.

4 The Supertasks Approach

In this section, we describe an idea for scheduling mixed task sets by breaking the problem into two easier problems — scheduling a migrating task set, and scheduling a single resource. We also show that this idea does not work in general, and discuss the implications of this observation. The idea is to replace all tasks that are fixed to a resource $i$ with a single task $X_i$ — called a supertask — whose weight is the sum of the weights of the tasks it replaces (hereafter referred to as the subtasks of $X_i$). We then schedule the new task set using any pfair scheduling algorithm for task sets that consist of only migrating tasks (we treat the supertasks as migrating tasks). Each time a supertask is scheduled, we schedule one of the fixed tasks associated with that supertask. As described in the next section, we have a very simple algorithm that correctly schedules the fixed tasks provided the supertask is scheduled in such a way that this is possible. Unfortunately, as we show next, there are task sets for which a schedule that is pfair for the supertask cannot be refined into one that is pfair for all of the subtasks that make up the supertask.

We show that the supertask approach does not work in general through a counterexample. Suppose we have three resources on which we wish to schedule a task set consisting of seven tasks, $T_0$ through $T_6$, with weights $9/12$, $8/12$, $7/12$, $3/12$, $3/12$, $3/12$, and $3/12$, respectively. The schedule depicted in Figure 2(a) is a pfair schedule for this task set. In fact, this is the schedule produced by the algorithm presented in [5].
Now, suppose that this task set was obtained by combining fixed tasks into supertasks as described above, and that task $T_2$ (with weight $7/12$) was obtained by combining two fixed tasks of weight $1/2$ and $1/12$, respectively (all other tasks are migrating tasks). Observe that task $T_2$ is not scheduled at time 2 or at time 3 in the schedule shown in Figure 2(a). Thus, using the scheduling approach described above, the subtasks (with weights $1/2$ and $1/12$) are also not scheduled at time 2 or at time 3. Also observe that at time 2, the subtask with weight $1/2$ must have been scheduled exactly once (because $\lfloor 1/2 \cdot 2 \rfloor = \lceil 1/2 \cdot 2 \rceil = 1$). Similarly, at time 4, this subtask must have been scheduled exactly twice. Therefore, the subtask with weight $1/2$ must be scheduled at either time 2 or time 3.

The above counterexample shows that guaranteeing that the supertasks are scheduled in a pfair manner is not sufficient to guarantee that it is possible to schedule the associated subtasks in a pfair manner. Nonetheless, it is conceivable that for a particular pfair scheduling algorithm, this situation would never arise. Unfortunately, this situation can arise with all known pfair scheduling algorithms [1, 4, 5]. In fact, the task set used in the counterexample above serves as a counterexample for all of these algorithms. In particular, Figures 2(b) and 2(c), respectively, depict the pfair schedules produced by the algorithms in [4] and [1]. Observe that in Figure 2(b), task $T_1$ (with weight $8/12$) is not scheduled at time 2 or at time 3, and in Figure 2(c), task $T_2$ (with weight $7/12$) is not scheduled at time 2 or at time 3. Thus, as in the first counterexample above, in each case we can split a task into subtasks — one of which has weight $1/2$ — in such a way that the subtasks cannot be scheduled within the slots assigned to the supertask.

In the next section, we show that if a pfair scheduling algorithm can be found that guarantees that each supertask is scheduled in such a way that its subtasks can be scheduled within its slots, then there is a very simple algorithm for scheduling the subtasks in a pfair manner within the slots assigned to the supertask.
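Before moving on, note that the pfair counting constraint driving the counterexample above is easy to verify mechanically. The fragment below is our own check; the only inputs are the subtask weight $1/2$ and the times 2 and 4 discussed in the text.

```python
from fractions import Fraction
from math import floor, ceil

w = Fraction(1, 2)          # weight of the heavier subtask of supertask T_2
required = {t: (floor(w * t), ceil(w * t)) for t in (2, 4)}
print(required)             # {2: (1, 1), 4: (2, 2)}

# By time 2 the subtask must have run exactly once and by time 4 exactly twice,
# so its second execution must fall in slot 2 or slot 3, precisely the slots
# in which the supertask is idle in Figure 2(a).
assert required[2] == (1, 1) and required[4] == (2, 2)
```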
5 Scheduling Subtasks

In this section, we present a simple Subtask Algorithm that can be used to schedule the subtasks of a supertask $X$ within the slots assigned to $X$. Given such an algorithm, we can use a pfair scheduler to schedule the migrating tasks and the supertasks, and then use the Subtask Algorithm to schedule a subtask of $X$ each time $X$ is assigned a slot. As shown in the previous section, guaranteeing that the supertask is scheduled in a pfair manner is not sufficient to guarantee that there is a pfair schedule for the subtasks. The Subtask Algorithm presented in this section is guaranteed to schedule the subtasks in a pfair manner, provided it is possible to do so. Thus, if a pfair scheduling algorithm can be found that guarantees to schedule supertasks in such a way that the subtasks of each supertask can be scheduled within the slots assigned to that supertask, then that algorithm can be composed with our Subtask Algorithm to solve the fixed/migrating pfair scheduling problem.

The Subtask Algorithm uses the following definitions.

Definitions:
$C(x, t)$: the number of times subtask $x$ was scheduled in $[0, t)$.
$\mathit{invalid}(x, t) \equiv C(x, t) \ge w_x (t + 1)$.
$\mathit{dl}(x, t) = \lceil (C(x, t) + 1) / w_x \rceil$.

We also use these definitions with a subscript $S$ to indicate that the definitions apply to a given schedule $S$. For example, $C_S(x, t)$ is the number of times $S$ schedules task $x$ in the interval $[0, t)$. Henceforth, we say $x$ is valid at time $t$ in schedule $S$ if $\neg \mathit{invalid}_S(x, t)$ holds, and $x$ is not valid (or is invalid) at time $t$ in schedule $S$ otherwise.

The intuition behind these definitions is as follows. A task $x$ is valid at time $t$ if scheduling $x$ at time $t$ will not violate pfairness for $x$ (although scheduling $x$ instead of another task $y$ might violate pfairness for task $y$). If a task $x$ is not valid at time $t$, it means that if $x$ were scheduled at time $t$, then $x$ would have been scheduled too many times in $[0, t]$, resulting in a violation of pfairness. Roughly speaking, each task $x$ must be scheduled once out of every $1/w_x$ slots. $\mathit{dl}(x, t)$ is the next "deadline" of task $x$ after time $t$. In the correctness proof for this algorithm, we show that, for any time $t$, $x$ must be scheduled in the interval $[t, \mathit{dl}(x, t))$ in order to maintain pfairness.

We now present our Subtask Algorithm.

Subtask Algorithm: If the supertask $X$ is assigned a slot at time $t$, then schedule the subtask $x$ of $X$ such that $\mathit{dl}(x, t)$ is minimal among all subtasks of $X$ that are valid at time $t$.
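A direct transcription of the Subtask Algorithm is only a few lines. The sketch below is our own; the data representation and the example supertask allocation are assumptions, not taken from the paper. Each time the supertask receives a slot, it picks the valid subtask with the smallest deadline $\mathit{dl}$.

```python
from fractions import Fraction
from math import ceil

def subtask_algorithm(weights, supertask_slots, horizon):
    """Schedule subtasks (name -> weight w_x) inside the slots assigned to the supertask.

    weights are Fractions; supertask_slots is the set of slots allocated to the
    supertask by some pfair scheduler.  Returns a mapping slot -> chosen subtask.
    """
    count = {x: 0 for x in weights}          # C(x, t)
    out = {}
    for t in range(horizon):
        if t not in supertask_slots:
            continue
        # valid(x, t): scheduling x now does not overshoot the pfair upper bound.
        valid = [x for x in weights if count[x] < weights[x] * (t + 1)]
        if not valid:
            continue                          # supertask slot left unused
        # dl(x, t) = ceil((C(x, t) + 1) / w_x): latest slot bound for x's next execution.
        x = min(valid, key=lambda x: ceil(Fraction(count[x] + 1) / weights[x]))
        out[t] = x
        count[x] += 1
    return out

# Example: subtasks of weights 1/2 and 1/12 sharing a supertask of weight 7/12,
# with a hypothetical supertask allocation over 12 slots.
weights = {"a": Fraction(1, 2), "b": Fraction(1, 12)}
slots = {0, 2, 4, 5, 7, 9, 10}
print(subtask_algorithm(weights, slots, 12))
```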
The correctness proof for our Subtask Algorithm is presented in full in [16]. Here, we give a high-level overview of the proof, and present some of the main lemmas used. Because our algorithm makes scheduling decisions only for slots assigned to the supertask, it does not affect the schedules of the migrating tasks. Therefore, the correctness proof for this algorithm concentrates on showing that our algorithm schedules the subtasks in a pfair manner (again, provided it is possible to do so). The proof is by induction on $t$. In particular, we inductively show that at every time instant $t$, there exists a pfair schedule for the original task set that extends the schedule produced by our algorithm up until time $t$. We achieve this by using several lemmas to show that if our algorithm makes a decision that is different from the decision made by the existing pfair schedule, we can manipulate the existing pfair schedule to produce a new pfair schedule that makes the same decision as our algorithm at each time before $t$ and at time $t$. By applying these lemmas inductively, we can conclude that our algorithm produces a pfair schedule for all subtasks. Thus, this proof has some similarity to the optimality proof of EDF on uniprocessors by Liu and Layland [15], in which it is shown that the optimal schedule can be inductively manipulated to be the same as the one produced by EDF without causing any task to miss its deadline. However, their proof is not directly applicable to our problem, because we must also show that this manipulation does not violate pfairness.
[Figure 2. Pfair schedules produced by the algorithms in [5], [4], and [1], respectively, for a task set consisting of tasks $T_0$ through $T_6$, with weights $9/12$, $8/12$, $7/12$, $3/12$, $3/12$, $3/12$, and $3/12$, respectively.]
We now present the main lemmas used in the correctness proof of our Subtask Algorithm. Lemmas 5.1 through 5.3 below capture some basic properties of pfair schedules, which are used in the proofs and applications of the other lemmas.

Lemma 5.1: If task $x$ is scheduled at time $t$ in a pfair schedule $S$, then $x$ is valid in $S$ at time $t$.

Lemma 5.2: In a pfair schedule $S$, for any time $t$, task $x$ is scheduled at least once in the interval $[t, \mathit{dl}(x, t))$.

Lemma 5.3: If task $x$ is scheduled at time $t$ in a pfair schedule $S$, then $x$ is not scheduled again in $S$ before time $\lceil (C_S(x, t) + 1)/w_x \rceil - 1$.

Lemmas 5.4 and 5.5 lay the groundwork for the inductive "swapping" proof technique we use by characterizing circumstances under which tasks can be scheduled later (Lemma 5.4) or earlier (Lemma 5.5) without violating pfairness.

Lemma 5.4: If task $x$ is scheduled at time $t$ in a pfair schedule $S$, then we can delay the scheduling of $x$ until any time in the interval $(t, \mathit{dl}(x, t))$ at which $x$ is not scheduled by $S$, without violating pfairness for task $x$.

Lemma 5.5: If a task $x$ is valid at time $t$ in a pfair schedule $S$ and $x$ is not scheduled at time $t$ in $S$, then we can schedule $x$ at time $t$ instead of at the next time $x$ is scheduled by $S$, without violating pfairness for task $x$.

Lemmas 5.6 and 5.7 relate the scheduling of subtasks and supertasks, which allows us to show that subtasks can be swapped within the slots allocated to the supertask.

Lemma 5.6: If a supertask $X$ is scheduled at time $t$ in a schedule that is pfair to $X$, then at least one of the subtasks of $X$ is valid at time $t$.

Lemma 5.7: Suppose there exists a pfair schedule $S$ for the subtasks of a supertask $X$ that allocates a slot for some subtask only when a slot is allocated to the supertask. Then $S$ schedules some subtask of $X$ in every slot assigned to $X$.

Lemma 5.8 is the main swapping lemma. We present the proof of this lemma here, as it captures the essence of the correctness proof of our Subtask Algorithm. This lemma can be applied inductively to yield the theorem that follows it.

Lemma 5.8: Suppose there exists a pfair schedule $S$ for the subtasks of a supertask $X$ that allocates slots for subtasks only when a slot is allocated to the supertask, and that $S$ is identical to the schedule produced by our algorithm until time $t$. Then, there exists another such schedule that is identical to the schedule produced by our algorithm until time $t + 1$.

Proof: If $X$ is not allocated a slot at time $t$, then $S$ does not schedule any subtask of $X$ at time $t$, and neither does our algorithm. Otherwise, by Lemma 5.7, $S$ schedules some subtask $y$ of $X$ at time $t$. Also, by Lemma 5.6, our algorithm schedules some subtask $x$ of $X$ at time $t$. If $x = y$, then $S$ is identical to the schedule produced by our algorithm until time $t + 1$, as required. Otherwise, $x \neq y$. By Lemma 5.1, task $y$ is valid at time $t$, and by the algorithm, task $x$ is valid at time $t$. (Note that $S$ is identical to the schedule produced by our algorithm until time $t$, so $x$ and $y$ are valid in both schedules.) By Lemma 5.2, $S$ schedules $x$ at some time in the interval $(t, \mathit{dl}(x, t))$. Let $t'$ be the first such time at which $S$ schedules $x$. Because our algorithm chooses $x$ over $y$ at time $t$, it follows that $\mathit{dl}(x, t) \le \mathit{dl}(y, t)$, so $t' < \mathit{dl}(y, t)$. Thus, by Lemma 5.4, $S$ can schedule $y$ at time $t'$ instead of time $t$, and by Lemma 5.5, $S$ can schedule $x$ at time $t$ instead of time $t'$, without violating pfairness. The modified pfair schedule is identical to the schedule produced by our algorithm until time $t + 1$, as required.

Theorem 5.1: Assume there exists a pfair schedule $S$ for the subtasks of a supertask $X$, such that $S$ schedules the subtasks only in slots assigned to the supertask. Then, our algorithm schedules the subtasks of $X$ within the slots assigned to $X$ such that each subtask is scheduled in a pfair manner.

6 Concluding Remarks

We have extended the results of Baruah et al. [4] to show the existence of an optimal pfair scheduler for periodic tasks on multiple resources, even if some tasks are fixed to specific resources. Because the implied solution relies on finding an integer solution to a flow graph whose size may be exponential in the number of tasks, the scheduling algorithm based on our proof must do an exponential amount of computation before making any scheduling decisions. In [4, 5], Baruah et al. present optimal pfair scheduling algorithms for periodic preemptive tasks on multiple resources that require only a polynomial amount of computation per scheduling decision. The correctness proofs for these algorithms rely on the existence of a pfair schedule. Our results may be similarly useful in the proof of a future online pfair scheduling algorithm for fixed and migrating tasks.

Sanjoy Baruah has pointed out that it is possible to treat some migrating tasks as fixed in order to reduce the number of migratable tasks [3]. In fact, it is easy to show that an arbitrary task set can be transformed into one that has at most $m - 1$ migrating tasks. This might be desirable if the per-task overhead of supporting migration is high. Note, however, that this approach does not necessarily reduce the number of migrations; indeed, arbitrarily restricting the scheduler from scheduling given pairs of tasks at the same time may in fact force it to cause more migrations in order to produce a pfair schedule. It would be interesting to investigate techniques for reducing or minimizing the number of task migrations.

Acknowledgements: This work was partially carried out while Mark Moir was visiting the School of Mathematical and Computing Sciences at Victoria University of Wellington, New Zealand. Mark is grateful to the school for their hospitality. The authors would also like to thank Jim Anderson, Sanjoy Baruah, Libin Dong, and Daniel Mossé for useful discussions. We would particularly like to thank Sanjoy Baruah for encouraging this line of work.
References

[1] J. Anderson and A. Srinivasan. A new look at pfair priorities. In submission, 1999.
[2] N. Audsley, A. Burns, M. Richardson, and K. Tindell. Applying new scheduling theory to static priority pre-emptive scheduling. Software Engineering Journal, 8(5):284-292, September 1993.
[3] S. Baruah. Personal communication, 1998.
[4] S. Baruah, N. Cohen, G. Plaxton, and D. Varvel. Proportionate progress: A notion of fairness in resource allocation. Algorithmica, 15:600-625, 1996.
[5] S. Baruah, J. Gehrke, and G. Plaxton. Fast scheduling of periodic tasks on multiple resources. In Proceedings of the 9th International Parallel Processing Symposium, pages 280-288, 1995.
[6] S. Baruah, R. Howell, and L. Rosier. Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor. Journal of Real-Time Systems, 2:301-324, 1990.
[7] A. Burchard, J. Liebeherr, Y. Oh, and S. H. Son. New strategies for assigning real-time tasks to multiprocessor systems. IEEE Transactions on Computers, 44(12):1429-1442, 1995.
[8] S. Davari and C. L. Liu. An on-line algorithm for real-time tasks allocation. In IEEE Real-Time Systems Symposium, pages 194-200, 1986.
[9] S. K. Dhall and C. L. Liu. On a real-time scheduling problem. Operations Research, 26(1):127-140, 1978.
[10] L. Ford and D. Fulkerson. Flows in Networks. Princeton University Press, 1962.
[11] S. Lauzac, R. Melhem, and D. Mossé. An efficient RMS admission control and its application to multiprocessor scheduling. In International Parallel Processing Symposium, pages 511-518, 1998.
[12] J. P. Lehoczky, L. Sha, and Y. Ding. The rate monotonic scheduling algorithm: Exact characterization and average case behavior. In IEEE Real-Time Systems Symposium, pages 166-171, December 1989.
[13] D. W. Leinbaugh. Guaranteed response time in a hard real-time environment. IEEE Transactions on Software Engineering, January 1980.
[14] J. Y.-T. Leung and J. Whitehead. On the complexity of fixed-priority scheduling of periodic real-time tasks. Performance Evaluation, 2:237-250, 1982.
[15] C. L. Liu and J. Layland. Scheduling algorithms for multiprogramming in a hard real-time environment. Journal of the ACM, 20(1):46-61, January 1973.
[16] M. Moir and S. Ramamurthy. Pfair scheduling of fixed and migrating periodic tasks on multiple resources. Technical Report TR-99-18, Department of Computer Science, University of Pittsburgh, 1999.
[17] Y. Oh and S. H. Son. Tight performance bounds of heuristics for a real-time scheduling problem. Technical Report CS93-24, University of Virginia, 1993.