TWO APPROXIMATION SCHEMES FOR ...

4th Reading October 10, 2012 2:51 WSPC/S0217-5959 APJOR 1250029.tex
Asia-Pacific Journal of Operational Research Vol. 29, No. 5 (2012) 1250029 (12 pages) c World Scientific Publishing Co. & Operational Research Society of Singapore  DOI: 10.1142/S0217595912500297

TWO APPROXIMATION SCHEMES FOR SCHEDULING ON PARALLEL MACHINES UNDER A GRADE OF SERVICE PROVISION

WEIDONG LI
Department of Atmospheric Science, Yunnan University, Kunming 650091, P. R. China
[email protected]

JIANPING LI∗
Department of Mathematics, Yunnan University, Kunming 650091, P. R. China
[email protected]

TONGQUAN ZHANG
School of Mathematics and Computer Science, Yunnan Nationalities University, Kunming 650031, P. R. China
[email protected]

We consider the offline scheduling problem of minimizing the makespan on m parallel identical machines with the following feature: each job and each machine is labeled with a grade of service (GoS) level, and a job can only be executed on a machine whose GoS level is no greater than that of the job. In this paper, we present an efficient polynomial-time approximation scheme (EPTAS) with running time O(n log n) for the special case where the GoS level is either 1 or 2. This partially solves an open problem posed in (Ou et al., 2008). We also present a simpler fully polynomial-time approximation scheme (FPTAS) with running time O(n) for the case where the number of machines is fixed.

Keywords: Makespan; EPTAS; FPTAS.

1. Introduction

Given a set M = {M1, . . . , Mm} of machines and a set J = {J1, . . . , Jn} of jobs, each job Jj has a processing time pj and is labeled with a grade of service (GoS) level g(Jj). Each machine Mi is also labeled with a GoS level g(Mi). Job Jj is allowed to be executed on machine Mi only when g(Jj) ≥ g(Mi). The goal is to partition the set J into m disjoint bundles, S1, . . . , Sm, such that max_{1≤i≤m} Ci

∗Corresponding author

is minimized, where Ci = ∑_{Jj∈Si} pj and Jj ∈ Si only if g(Jj) ≥ g(Mi). Using the three-field notation of (Graham et al., 1979), we denote this scheduling model as the problem P|GoS|Cmax.

Obviously, the problem P|GoS|Cmax is strongly NP-hard. Research therefore focuses on designing polynomial-time algorithms that produce feasible solutions close to the optimal one. The best possible result is a family of polynomial-time algorithms producing, for any ε > 0, a feasible solution with objective value within a factor of 1 + ε of the optimum. Such a family is called a polynomial-time approximation scheme (PTAS, for short). Moreover, an efficient PTAS (EPTAS) is an approximation scheme running in time O(f(1/ε)·n^c), where f is an arbitrary function and c is a constant independent of ε. In particular, when f(1/ε) is polynomial in 1/ε, the EPTAS is called a fully polynomial-time approximation scheme (FPTAS). Within the class PTAS, the problems admitting an EPTAS play an important role, as an EPTAS can produce a good approximate solution for an enormous instance in a reasonable amount of time. Some problems admit an EPTAS, e.g., the multiple knapsack problem (Jansen, 2009), and some do not, e.g., the two-dimensional knapsack problem, unless W[1] = FPT (Kulik and Shachnai, 2010). The classic problem P||Cmax, a special case of the problem P|GoS|Cmax, admits an EPTAS (Alon et al., 1998). This motivates us to design an EPTAS for the problem P|GoS|Cmax.

Denote by P|GoS2|Cmax the variant of the problem P|GoS|Cmax where g(Jj), g(Mi) ∈ {1, 2}, and by Pm|GoS|Cmax the variant where the number m of machines is fixed. Hwang et al. (2004) designed a strongly polynomial 2-approximation algorithm for the problem P|GoS|Cmax. Zhou et al. (2007) proposed a (4/3 + (1/2)^r)-approximation algorithm for the problem P|GoS2|Cmax, where r is the desired number of iterations. Jiang (2008) studied the online version of the problem P|GoS2|Cmax and proposed an online algorithm with competitive ratio (12 + 4√2)/7. Other related online algorithms can be found in (Tan and Zhang, 2011; Zhang et al., 2009). Ou et al. (2008) designed a PTAS with running time O(mn^{2+8/ε} log₂(4/ε) log P + m log m) for the problem P|GoS|Cmax, where P = ∑_{j=1}^{n} pj. They also posed the interesting question of whether there is a PTAS for P|GoS|Cmax with an improved running time. Note that the problem P|GoS|Cmax is a generalization of the identical parallel machine scheduling problem, which possesses an EPTAS with running time O(n) (Alon et al., 1998). Other related results can be found in the recent survey (Leung and Li, 2008). Ji and Cheng (2008) designed an FPTAS for the problem Pm|GoS|Cmax. Note that Pm|GoS|Cmax is a special case of the unrelated parallel machine scheduling problem with a fixed number of machines, which possesses many FPTASs, cf. Fishkin et al. (2008), Horowitz and Sahni (1976), Jansen and Porkolab (2001), Woeginger (2000, 2009).

In this paper, we design an EPTAS with running time O(n log n) to solve P|GoS2|Cmax, which improves the previous best result. This partially answers the


open question of whether there is a PTAS for P|GoS|Cmax with an improved running time posed in (Ou et al., 2008). We also design an FPTAS with running time O(n) for the problem Pm|GoS|Cmax. Our FPTAS is simpler than the previous approximation schemes.

2. An EPTAS for P|GoS2|Cmax

Hwang et al. (2004) proposed a simple 2-approximation algorithm for the problem P|GoS|Cmax, called lowest grade–longest processing time first (LG-LPT, for short), which runs in O(n log n) time (Ou et al., 2008). For a given instance I = (M, J) of the problem P|GoS2|Cmax, let L be the value of the solution produced by the algorithm LG-LPT; then L ≤ 2OPT, where OPT denotes the optimal value of the instance I = (M, J). Clearly, L is an upper bound on OPT, and L/2 is a lower bound on OPT. By dividing every processing time by L, we may assume that

    pj ≤ 1  and  1/2 ≤ OPT ≤ 1.    (1)
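As an illustration, the greedy rule underlying LG-LPT can be sketched as follows. This is only a hedged sketch of the rule "lowest grade first, and within a grade, longest processing time first, each job on the least-loaded eligible machine"; the exact tie-breaking of Hwang et al. (2004) is not reproduced, and the function name is ours.

```python
def lg_lpt(jobs, machine_gos):
    """jobs: list of (processing_time, gos); machine_gos: GoS level of each machine.

    Assumes every job has at least one eligible machine. Returns the makespan
    of the greedy schedule.
    """
    load = [0.0] * len(machine_gos)
    # lowest grade first; within a grade, longest processing time first
    for p, g in sorted(jobs, key=lambda jg: (jg[1], -jg[0])):
        eligible = [i for i, mg in enumerate(machine_gos) if mg <= g]
        i = min(eligible, key=lambda i: load[i])  # least-loaded eligible machine
        load[i] += p
    return max(load)
```

For instance, with machines of GoS levels [1, 2] and jobs [(3, 2), (3, 2), (2, 1)], the GoS-1 job is forced onto the first machine and the two GoS-2 jobs are balanced greedily.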

For convenience, we assume that g(M1) = g(M2) = · · · = g(Mm1) = 1 and g(Mm1+1) = g(Mm1+2) = · · · = g(Mm) = 2. Let J[i] denote the set of jobs with GoS level i, i.e., J[i] = {Jj | g(Jj) = i} (i = 1, 2). The jobs in J[2] can be processed by any machine, and the jobs in J[1] can only be executed on the first m1 machines with GoS level 1.

For any given constant ε > 0, let λ = ⌈5/ε⌉ and δ = 1/λ. We partition the jobs into two subsets: the large-job set J^L and the small-job set J^S, where J^L = {Jj | pj > δ} and J^S = {Jj | pj ≤ δ}. For any given instance I = (M, J) of the problem P|GoS2|Cmax and a given constant ε, we construct an auxiliary instance Î = (M, Ĵ^L ∪ Ĵ^A) as follows:

• Î includes m machines, which are the same as those in I;
• For each job Jj ∈ J^L, we round its processing time pj up to p̂j, where

    p̂j = ⌈pj/δ²⌉ δ² ≤ pj + δ² ≤ (1 + δ)pj.    (2)

  Let Ĵ^L denote the set of jobs with the scaled-up processing times;
• Let J^S_[i] = J^S ∩ J[i] (i = 1, 2); thus J^S_[i] contains the small jobs with GoS level i. We replace the jobs in J^S_[i] by ⌈∑_{Jj∈J^S_[i]} pj / δ⌉ auxiliary jobs with GoS level i, where each auxiliary job has a processing time of δ. Let Ĵ^A_[i] be the set of auxiliary jobs with GoS level i and Ĵ^A = Ĵ^A_[1] ∪ Ĵ^A_[2] be the set of all auxiliary jobs.
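The construction above can be sketched in a few lines. This is our own illustrative sketch (function name and data layout are assumptions, not from the paper); it takes jobs already scaled so that pj ≤ 1 and returns the rounded large jobs together with the auxiliary jobs of the instance Î.

```python
import math

def build_auxiliary_instance(jobs, eps):
    """jobs: list of (p_j, gos) pairs with p_j <= 1 after scaling by L.

    Returns the jobs of the auxiliary instance I-hat as (p, gos) pairs.
    """
    lam = math.ceil(5 / eps)   # lambda = ceil(5/eps)
    delta = 1 / lam            # delta = 1/lambda
    # round each large job (p > delta) up to the next multiple of delta^2
    large = [(math.ceil(p / delta**2) * delta**2, g) for p, g in jobs if p > delta]
    # replace the small jobs of each GoS level by ceil(total/delta) jobs of size delta
    aux = []
    for level in (1, 2):
        total = sum(p for p, g in jobs if p <= delta and g == level)
        aux += [(delta, level)] * math.ceil(total / delta)
    return large + aux
```

With ε = 1 (so λ = 5, δ = 0.2), the job (0.5, 1) is rounded up to 13·δ² = 0.52, and the small jobs of each level collapse into auxiliary jobs of size δ.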

Note that all jobs in the instance Î = (M, Ĵ^L ∪ Ĵ^A) have processing times of the form kδ², where k ∈ {λ, λ + 1, . . . , λ²}. Thus, Ĵ^L ∪ Ĵ^A contains jobs with at most λ² − λ + 1 different processing times. From now on, for a job in any instance,


we refer to it as a large job if its processing time is more than δ, and otherwise as a small job.

Lemma 2.1. The optimal value ÔPT of the instance Î is at most (1 + δ)OPT + δ ≤ 1 + 2δ.

Proof. In the optimal schedule (S1, S2, . . . , Sm) for the instance I, where Si denotes the set of jobs executed on the machine Mi, replace every large job Jj in the instance I by its corresponding job Ĵj in the instance Î. This may increase some machine completion times by a factor of at most 1 + δ, as p̂j ≤ (1 + δ)pj. Let si denote the total processing time of the small jobs processed by machine Mi in the solution (S1, S2, . . . , Sm), i.e., si = ∑_{Jj∈Si∩J^S} pj. We assign as many auxiliary jobs with GoS level 2 in Î as possible to the last m − m1 machines with GoS level 2, subject to the constraint that Mi is assigned at most ⌈si/δ⌉ auxiliary jobs. Then, we assign the remaining auxiliary jobs to the first m1 machines, subject to the same constraint. It is easy to see that every auxiliary job in Î can be assigned in this way. Hence, we conclude that the completion time of machine Mi is at most (1 + δ)OPT + δ ≤ 1 + 2δ.

Next, we show how to solve the instance Î optimally in linear time. The jobs in the instance Î can be represented as a set N = {n^i | n^i = (n^i_λ, n^i_{λ+1}, . . . , n^i_{λ²}); i = 1, 2}, where n^i_k denotes the number of jobs with GoS level i whose processing times are equal to kδ². An assignment to a machine is a vector v = (v_λ, v_{λ+1}, . . . , v_{λ²}), where v_k, k = λ, λ + 1, . . . , λ², is the number of jobs with processing time kδ² assigned to that machine. The length l(v) of assignment v is defined as ∑_{k=λ}^{λ²} k v_k δ². Denote by F = {v | l(v) ≤ 1 + 2δ} the set of all possible assignment vectors with length at most 1 + 2δ. Since the processing times of all jobs in instance Î are at least δ, each assignment in F contains at most λ + 2 jobs. Therefore |F| ≤ λ^{2λ+4} holds.
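To make the size of F concrete, the following brute-force enumerator (our own illustration, not part of the algorithm) lists all assignment vectors for a small λ; for λ = 2 it finds 15 vectors, comfortably below the bound λ^{2λ+4} = 256.

```python
from itertools import product

def enumerate_assignments(lam):
    """Enumerate all assignment vectors v with l(v) <= 1 + 2*delta.

    Processing times are k*delta^2 for k in {lam, ..., lam^2}, delta = 1/lam.
    Feasible only for tiny lam; shown purely to illustrate the definition of F.
    """
    delta = 1 / lam
    ks = range(lam, lam * lam + 1)
    max_jobs = lam + 2  # each job has length >= delta and l(v) <= 1 + 2*delta
    vectors = []
    for v in product(range(max_jobs + 1), repeat=len(ks)):
        length = sum(k * vk for k, vk in zip(ks, v)) * delta**2
        if sum(v) <= max_jobs and length <= 1 + 2 * delta:
            vectors.append(v)
    return vectors
```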
Denote by ψi = {u ∈ F | u ≤ ∑_{t=i}^{2} n^t} the set of all possible vectors that can be allocated to machines with GoS level i, i = 1, 2. For every u ∈ ψi, denote by x^u_i the number of machines with GoS level i that are assigned u. We use the following ILP formulation to obtain an optimal solution of the instance Î.

    min z    (3)
    s.t. ∑_{u∈ψ1} x^u_1 = m1;    (4)
         ∑_{u∈ψ2} x^u_2 = m − m1;    (5)
         ∑_{u∈ψ2} x^u_2 · u ≤ n²;    (6)
         ∑_{u∈ψ1} x^u_1 · u + ∑_{u∈ψ2} x^u_2 · u = n¹ + n²;    (7)
         x^u_1 ≤ y^u_1 · m1,  ∀ u ∈ ψ1;    (8)
         x^u_2 ≤ y^u_2 · (m − m1),  ∀ u ∈ ψ2;    (9)
         l(u) · y^u_i ≤ z,  ∀ u ∈ ψi, i = 1, 2;    (10)
         x^u_i ∈ {0, 1, . . . , m},  ∀ u ∈ ψi, i = 1, 2;    (11)
         y^u_i ∈ {0, 1},  ∀ u ∈ ψi, i = 1, 2.    (12)
Here, constraints (4) and (5) guarantee that each machine is assigned exactly one vector (a set of jobs). The set of constraints (6) guarantees that the machines with GoS level 2 are only assigned jobs with GoS level 2. The set of constraints (7) guarantees that each job is used exactly once. In the sets of constraints (8) and (9), the zero–one variables y^u_i indicate whether u ∈ ψi is used or not. In the set of constraints (10), in case the assignment u ∈ ψi is used, the term l(u) · y^u_i contributes to the objective function. The number of integer variables in the ILP is 2|ψ1| + 2|ψ2| + 1 ≤ 4λ^{2λ+4} + 1, and the number of constraints is at most 2 + 2(λ² − λ + 1) + 8λ^{2λ+4}. Both values are constants that do not depend on the input size, as λ is a fixed constant depending on the reciprocal of the desired precision ε. By utilizing Lenstra's algorithm (Lenstra, 1983), whose running time is exponential in the dimension of the program but polynomial in the logarithms of the coefficients, we can obtain the optimal solution of the ILP. As in the analysis in (Mastrolilli, 2003), since the coefficients in the ILP are at most max{n, 1 + 2δ}, the ILP can be solved within an overall time complexity of O(log^{O(1)} n), where the hidden constant depends exponentially on λ. An optimal solution for the instance Î is obtained by assigning jobs according to the optimal solution of the ILP. Hence, the following theorem holds.

Theorem 2.1. For any fixed constant ε and a given instance I = (M, J), an optimal solution for the corresponding auxiliary instance Î = (M, Ĵ^L ∪ Ĵ^A) can be computed in time O(n), where the hidden constant depends exponentially on ε.

Lemma 2.2. There exists a schedule for the instance I with maximum machine completion time at most ÔPT + δ.

Proof. In the optimal schedule (Ŝ1, Ŝ2, . . . , Ŝm) for the instance Î, replace every large job Ĵj in the instance Î by its corresponding job Jj in the instance I.
This can never increase the machine completion times, as pj ≤ p̂j. Let ŝi denote the number of auxiliary jobs processed by machine Mi in the solution (Ŝ1, Ŝ2, . . . , Ŝm), i.e., ŝi = |Ŝi ∩ Ĵ^A|. We assign as many small jobs with GoS level 2 in I as possible to the last m − m1


machines with GoS level 2, subject to the constraint that the total size of the small jobs assigned to each machine is no more than (ŝi + 1)δ. Then, we assign the remaining small jobs to the first m1 machines, subject to the same constraint. It is not hard to verify that every small job in I can be assigned in this way. Hence, we obtain a feasible solution for instance I with maximum machine completion time no more than ÔPT + δ.

Our approximation scheme for the problem P|GoS2|Cmax can be summarized as follows. For any given instance I, we first compute a bound on OPT using the algorithm LG-LPT (Hwang et al., 2004) in time O(n log n), and construct a corresponding instance Î in time O(n). Then, we compute an optimal solution for the instance Î in time O(n) by utilizing Lenstra's algorithm (Lenstra, 1983). Finally, we output a feasible solution for instance I as in Lemma 2.2.

Let OUT be the value of the output solution. By Lemma 2.2, we have OUT ≤ ÔPT + δ. By Lemma 2.1, we have ÔPT ≤ (1 + δ)OPT + δ. Hence, OUT ≤ (1 + δ)OPT + 2δ ≤ (1 + 5δ)OPT ≤ (1 + ε)OPT, as 1/2 ≤ OPT and δ = 1/⌈5/ε⌉ ≤ ε/5. Thus, we achieve our main result as follows.

Theorem 2.2. The problem P|GoS2|Cmax possesses an EPTAS with running time O(n log n), where the hidden constant depends exponentially on 1/ε.

3. A New FPTAS for Pm|GoS|Cmax

Although there exist many FPTASs for the problem Pm|GoS|Cmax, only two of them have running time O(n) (Jansen and Porkolab, 2001; Fishkin et al., 2008). As these two FPTASs are designed for a more general problem, they are complicated. In this section, we investigate the special structure of the problem Pm|GoS|Cmax and design a simpler FPTAS with running time O(n).

Assume that all the machines and jobs are sorted in nondecreasing order of their GoS levels, i.e., g(M1) ≤ g(M2) ≤ · · · ≤ g(Mm) and g(J1) ≤ g(J2) ≤ · · · ≤ g(Jn). For convenience, denote by P = ∑_{j=1}^{n} pj the total processing time.
For each job Jk, let vk = max{i | g(Mi) ≤ g(Jk)} be the machine index of the job Jk. We first present a standard dynamic program, which can also be found in (Woeginger, 2009). The dynamic program stores certain information about certain schedules for the first k jobs (1 ≤ k ≤ n): every such schedule is encoded by an m-dimensional load vector (P1, P2, . . . , Pm), where Pi specifies the total processing time of all jobs assigned to machine Mi, for i = 1, 2, . . . , m. The state space ψk consists of all m-dimensional load vectors of schedules for the first k jobs, where for k = 0, the state space ψ0 contains only the single element (0, 0, . . . , 0). For k ≥ 1, every schedule (P1, P2, . . . , Pm) in the state space ψk−1 can be extended in vk ways by placing job Jk on a machine Mi, 1 ≤ i ≤ vk. This yields the vk schedules:

    (P1, P2, . . . , Pm) + pk · ei,    i = 1, 2, . . . , vk,


where ei denotes the m-dimensional unit vector whose coordinates are 0 except for the ith coordinate, which is 1. We put these vk schedules into the state space ψk. In the end, the optimal objective value is

    min { z | ∃ (P1, P2, . . . , Pm) ∈ ψn with max_{1≤i≤m} Pi ≤ z }.
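The recurrence just described can be sketched directly; the following is our own illustration (not the trimmed version), so the state space may grow exponentially and it is only practical for tiny instances. It assumes machines are given in nondecreasing GoS order, so the eligible machines of a job form a prefix.

```python
def dp_makespan(jobs, machine_gos):
    """Exact dynamic program over load vectors (P1, ..., Pm).

    jobs: list of (p_j, gos); machine_gos: GoS levels, nondecreasing.
    Returns the optimal makespan by exhausting all reachable load vectors.
    """
    m = len(machine_gos)
    states = {(0,) * m}
    for p, g in jobs:
        # v_k = largest machine index whose GoS level is at most g(J_k)
        vk = max(i for i, mg in enumerate(machine_gos) if mg <= g) + 1
        # extend every partial schedule in the v_k possible ways
        states = {s[:i] + (s[i] + p,) + s[i + 1:] for s in states for i in range(vk)}
    return min(max(s) for s in states)
```

For example, with machines of GoS levels [1, 2], the GoS-1 job below can only go on the first machine, while the two GoS-2 jobs may go on either.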

The computational complexity of this dynamic programming formulation is clearly O(mnP^m), which is pseudo-polynomial. As mentioned in (Woeginger, 2009), following the framework in (Woeginger, 2000), the problem Pm|GoS|Cmax has an FPTAS; however, its running time is not linear. We will present a new FPTAS with running time O(n) by utilizing a new method of handling small jobs.

As in (Woeginger, 2000), we iteratively thin out the state space of the dynamic program and collapse solutions that are close to each other, thereby bringing the size of the state space down to polynomial size. The trimming parameter in our method is defined as Δ = ε²P/(4m²), where ε > 0 and P = ∑_{j=1}^{n} pj. Different from the definition of ε-domination in (Woeginger, 2000), we say that a state s' = (P'1, P'2, . . . , P'm) is Δ-dominated by the state s = (P1, P2, . . . , Pm) if and only if

    Pi − Δ ≤ P'i ≤ Pi + Δ,    for each i = 1, 2, . . . , m.    (13)

For each state space ψk, if a state s' ∈ ψk is Δ-dominated by another state s ∈ ψk, we remove the state s' from ψk. In the end, we obtain the trimmed state space ψ∗k. Whenever we compute a new state space in the dynamic program, we start from the trimmed state space ψ∗_{k−1} instead of the state space ψk−1 of the original dynamic program.

Lemma 3.1. |ψ∗k| = O(1), k = 1, 2, . . . , n, where |ψ∗k| is the cardinality of the trimmed state space ψ∗k and m is fixed.

Proof. It is easy to verify that the coordinates of every state are integers between 0 and P. Hence the trimmed state space ψ∗k contains at most one state (P1, P2, . . . , Pm) satisfying fiΔ ≤ Pi ≤ (fi + 1)Δ for each i = 1, 2, . . . , m, where fi ∈ {0, 1, 2, . . . , ⌈P/Δ⌉ − 1}. Thus, the cardinality of ψ∗k is bounded by

    (P/Δ)^m = (4m²P/(ε²P))^m = (2m/ε)^{2m} = O(1),

where the first equality comes from the definition of Δ-domination and the last equality comes from the fact that m and ε are fixed numbers.

Lemma 3.2. For each state s = (P1, P2, . . . , Pm) in the original state space ψk, there is a state s' = (P'1, P'2, . . . , P'm) in the trimmed state space ψ∗k satisfying

    Pi − kΔ ≤ P'i ≤ Pi + kΔ,    i = 1, 2, . . . , m.


Proof. The assertion follows by a straightforward induction on k.

Lemma 3.3. Let B = {Jj | pj > εP/(2m)}. Then B contains at most 2m/ε jobs, i.e., |B| ≤ 2m/ε.

Proof. If not, we have

    P ≥ ∑_{Jj∈B} pj > (εP/(2m)) · |B| > P.

A contradiction. Thus, the lemma holds.

Our FPTAS for the problem Pm|GoS|Cmax is described as follows.

Step 1. Relabel the jobs such that J1, J2, . . . , J|B| are the jobs from B.
Step 2. Use the dynamic program above to compute the trimmed state space ψ∗_{|B|} for B = {J1, J2, . . . , J|B|}.
Step 3. For every state (l̇1, l̇2, . . . , l̇m) ∈ ψ∗_{|B|}, assign each job Jj in J − B to the least-loaded machine Mi with g(Mi) ≤ g(Jj), considering the jobs in nondecreasing order of their GoS levels, and then obtain the corresponding feasible state (l1, l2, . . . , lm).
Step 4. Find the optimal state (l1, l2, . . . , lm) among the |ψ∗_{|B|}| feasible states. Then, obtain a feasible solution (S1, S2, . . . , Sm) corresponding to the state (l1, l2, . . . , lm), where Si denotes the set of jobs assigned to the machine Mi.
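The two ingredients specific to this FPTAS, the Δ-domination trimming and the greedy Step 3 extension, can be sketched as follows. This is our own illustrative sketch (function names are assumptions); the trimming greedily keeps a state only if no already-kept state Δ-dominates it.

```python
def trim(states, delta_param):
    """Keep a state only if no previously kept state Delta-dominates it,
    i.e. differs from it by at most delta_param in every coordinate."""
    kept = []
    for s in sorted(states):
        if not any(all(abs(a - b) <= delta_param for a, b in zip(s, t))
                   for t in kept):
            kept.append(s)
    return kept

def extend_with_small_jobs(base_loads, small_jobs, machine_gos):
    """Step 3: place each job of J - B on the least-loaded eligible machine,
    scanning the jobs in nondecreasing order of GoS level."""
    loads = list(base_loads)
    for p, g in sorted(small_jobs, key=lambda jg: jg[1]):
        i = min((i for i, mg in enumerate(machine_gos) if mg <= g),
                key=lambda i: loads[i])
        loads[i] += p
    return loads
```

For instance, with Δ = 1 the state (1, 0) is absorbed by (0, 0), while (3, 0) survives; and extending the empty load vector on machines [1, 2] with three unit jobs distributes them greedily.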

Theorem 3.1. The above algorithm is an FPTAS with running time O(n) for the problem Pm|GoS|Cmax.

Proof. Let (S∗1, S∗2, . . . , S∗m) and (l∗1, l∗2, . . . , l∗m) denote the optimal solution for the instance I and its corresponding load state, respectively. Clearly, OPT = max_i l∗_i. Let (l̇∗1, l̇∗2, . . . , l̇∗m) denote the load state of the optimal solution (S∗1, S∗2, . . . , S∗m) without the small jobs, i.e., l̇∗_i = ∑_{Jj∈S∗_i∩B} pj, i = 1, 2, . . . , m. By Lemma 3.2, there is a state (l̇1, l̇2, . . . , l̇m) ∈ ψ∗_{|B|} satisfying

    l̇i ≤ l̇∗_i + |B|Δ ≤ l̇∗_i + (2m/ε) · (ε²P/(4m²)) ≤ l̇∗_i + (ε/2)OPT,    i = 1, 2, . . . , m,    (14)

where the second inequality comes from Lemma 3.3, and the last inequality comes from the fact that P/m ≤ OPT.

Let (l1, l2, . . . , lm) denote the state obtained from (l̇1, l̇2, . . . , l̇m) after Step 3. Let Mτ be the most-loaded machine in the solution corresponding to the state (l1, l2, . . . , lm), i.e., lτ = max_i li.


If lτ = l̇τ, which implies that no job in J − B is assigned to the machine Mτ, then we have

    lτ = l̇τ ≤ l̇∗τ + (ε/2)OPT ≤ l∗τ + (ε/2)OPT ≤ max_i l∗_i + (ε/2)OPT ≤ (1 + ε)OPT.    (15)

If lτ > l̇τ, let Jk be the last job in J − B assigned to Mτ, and let S = {Jj | Jj ∈ J − B and j ≤ k}. Clearly, pk ≤ εP/(2m) ≤ (ε/2)OPT, and all jobs in S can only be assigned to the first vk machines. Hence,

    lτ = lτ − pk + pk
       ≤ (∑_{i=1}^{vk} l̇i + ∑_{Jj∈S} pj)/vk + (ε/2)OPT    (16)
       ≤ (∑_{i=1}^{vk} (l̇∗_i + (ε/2)OPT) + ∑_{Jj∈S} pj)/vk + (ε/2)OPT    (17)
       = (∑_{i=1}^{vk} l̇∗_i + ∑_{Jj∈S} pj)/vk + εOPT    (18)
       ≤ (∑_{i=1}^{vk} l∗_i)/vk + εOPT    (19)
       ≤ (1 + ε)OPT.    (20)

Here, the inequality (16) comes from the fact that the machine Mτ was the least-loaded eligible machine when we assigned the job Jk. The inequality (17) comes from (14). The inequality (19) comes from the fact that the jobs in S can only be processed by the first vk machines. The last inequality comes from the fact that the average load is no more than OPT.

As the state found in Step 4 is the optimal one among the |ψ∗_{|B|}| feasible states, its makespan is at most max_i li = lτ ≤ (1 + ε)OPT.

It is easy to verify that Step 1 can be done in O(n + |B| log |B|) = O(n + (2m/ε) log(2m/ε)) time. Step 2 can be done in O(|B| (2m/ε)^{2m}) = O((2m/ε)^{2m+1}) time. Step 3 can be done in O(|ψ∗_{|B|}| n) = O((2m/ε)^{2m} n) time. Step 4 can be done in O(|ψ∗_{|B|}|)


time. Since m and ε are fixed constants, the running time of the whole algorithm is

    O(n + |B| log |B| + |B||ψ∗_{|B|}| + |ψ∗_{|B|}| n) = O(n + (2m/ε)^{2m+1} + (2m/ε)^{2m} n) = O(n).

This completes the proof of the theorem.

4. Conclusion

In this paper, we design an EPTAS with running time O(n log n) for the problem P|GoS2|Cmax, which is a special case of the problem P|GoS|Cmax. An immediate open problem is whether there is an EPTAS for the problem P|GoS|Cmax. It is not clear whether the method in the current paper can be extended to P|GoS|Cmax; this may be a direction for future work. We also present a new, simpler FPTAS with running time O(n) for the problem Pm|GoS|Cmax. The technique of handling small jobs in our FPTAS is of independent interest and may find other applications in scheduling models under a grade of service provision.

Acknowledgments

We are grateful to an anonymous referee for valuable comments and suggestions that have helped us to improve the paper. The work is fully supported by the National Natural Science Foundation of China [No. 10861012, 61063011, 11101193], the Tianyuan Fund for Mathematics of the National Natural Science Foundation of China [No. 11126315, 11026204] and the Natural Science Foundation of Yunnan University [No. 2009F04Z].

References

Alon, N, Y Azar, GJ Woeginger and T Yadid (1998). Approximation schemes for scheduling on parallel machines. Journal of Scheduling, 1, 55–66.
Fishkin, A, K Jansen and M Mastrolilli (2008). Grouping techniques for scheduling problems: Simpler and faster. Algorithmica, 51, 183–199.
Graham, RL, EL Lawler, JK Lenstra and AHG Rinnooy Kan (1979). Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics, 5, 287–326.
Horowitz, E and S Sahni (1976). Exact and approximate algorithms for scheduling nonidentical processors. Journal of the ACM, 23, 317–327.
Hwang, HC, SY Chang and K Lee (2004). Parallel machine scheduling under a grade of service provision. Computers & Operations Research, 31, 2055–2061.
Jansen, K (2009). Parameterized approximation scheme for the multiple knapsack problem. In Proc. 20th ACM-SIAM Symposium on Discrete Algorithms, pp. 665–674; SIAM Journal on Computing, 39, 1392–1412.


Jansen, K and L Porkolab (2001). Improved approximation schemes for scheduling unrelated parallel machines. Mathematics of Operations Research, 26, 324–338.
Ji, M and TCE Cheng (2008). An FPTAS for parallel machine scheduling under a grade of service provision to minimize makespan. Information Processing Letters, 108, 171–174.
Jiang, Y (2008). Online scheduling on parallel machines with two GoS levels. Journal of Combinatorial Optimization, 16, 28–38.
Kulik, A and H Shachnai (2010). There is no EPTAS for two-dimensional knapsack. Information Processing Letters, 110, 707–710.
Lenstra, HW (1983). Integer programming with a fixed number of variables. Mathematics of Operations Research, 8, 538–548.
Leung, JYT and CL Li (2008). Scheduling with processing set restrictions: A survey. International Journal of Production Economics, 116, 251–262.
Mastrolilli, M (2003). Efficient approximation schemes for scheduling problems with release dates and delivery times. Journal of Scheduling, 6, 521–531.
Ou, J, JYT Leung and CL Li (2008). Scheduling parallel machines with inclusive processing set restrictions. Naval Research Logistics, 55, 328–338.
Tan, Z and A Zhang (2011). Online hierarchical scheduling: An approach using mathematical programming. Theoretical Computer Science, 412, 246–256.
Woeginger, GJ (2000). When does a dynamic programming formulation guarantee the existence of a fully polynomial time approximation scheme (FPTAS)? INFORMS Journal on Computing, 12, 57–74.
Woeginger, GJ (2009). A comment on parallel-machine scheduling under a grade of service provision to minimize makespan. Information Processing Letters, 109, 341–342.
Zhang, A, Y Jiang and Z Tan (2009). Online parallel machines scheduling with two hierarchies. Theoretical Computer Science, 410, 3597–3605.
Zhou, P, Y Jiang and Y He (2007). Parallel machine scheduling problem with two GoS levels. Applied Mathematics: A Journal of Chinese Universities (Series A), 22, 275–284 (in Chinese).

Weidong Li was born in Henan, China, on September 14, 1981. He received the Bachelor Degree (07/2004), the Master Degree in Computational Mathematics (07/2007) and the Doctoral Degree in Applied Mathematics (07/2010) from Yunnan University (Kunming, China). He is currently a teacher at the Department of Atmospheric Science of Yunnan University. His research interests focus on approximation algorithms, randomized algorithms, computational complexity and algorithmic game theory. He has published over fifteen articles in journals, books, and refereed conference proceedings in computer science research.

Jianping Li was born in Yunnan Province, China, in April 1965. He received his Bachelor Degree from the University of Science and Technology of China (1988), his Master Degree from the Institute of Systems Science, Chinese Academy of Sciences (1991), and his PhD from Université de Paris-Sud (1999). He currently works as a professor at Yunnan University. His main research interests focus on combinatorial optimization, approximation algorithm design, scheduling theory and graph theory.

Tongquan Zhang was born in Shandong, China, in February 1979. He received the Bachelor Degree from Ludong University in 2002, the Master Degree from Yunnan


University in 2005, and his PhD degree from Yunnan University in 2009. His main research interests include combinatorial optimization problems, approximation algorithms and algorithmic graph theory. He is currently an Associate Professor in the School of Mathematics and Computer Science at Yunnan Nationalities University.
