Computers & Operations Research 40 (2013) 1234–1242


Scheduling with an outsourcing option on both manufacturer and subcontractors
Hadi Mokhtari, Isa Nakhai Kamal Abadi*
Department of Industrial Engineering, Faculty of Engineering, Tarbiat Modares University, Tehran, Iran

Article info
Available online 14 December 2012
Keywords: Manufacturing; Outsourcing; Decomposition; Integer programming

Abstract
This paper addresses a single-stage scheduling problem in which outsourcing is allowed: each job can be either scheduled for in-house production or outsourced to one of the available outside subcontractors. The manufacturer operates an unrelated parallel machine system, and each subcontractor has its own single machine. Subcontractors are capable of processing all the jobs. Unlike most past research, our study considers the joint scheduling of in-house and outsourced jobs simultaneously. The objective is to minimize the sum of the total weighted completion time and the total outsourcing cost. An integer programming formulation is presented and then improved through an optimality property on job orders. A heuristic algorithm is also introduced to decompose the problem into smaller and easier subproblems and solve them to optimality. © 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Outsourcing is commonly used as a way to improve overall scheduling performance in various companies, especially in the electronics and motor industries [15]. The outsourcing of non-critical activities to subcontractors allows firms to focus more on high-value operations. Nowadays, many companies outsource their jobs to a third party instead of managing them directly. Manufacturers often face situations where the demand requested by customers is too large to be handled by internal capacity. In such situations, they transfer all or part of their activities to an external service provider so as to handle demand fluctuations without the need to store more inventory or design for high production capacity. A proper plan for outsourcing can improve lead times, reduce total costs, and make a company more competitive. As a result, recent interest has developed in finding an appropriate outsourcing policy that enables companies to compete with others and helps them emerge as a market winner [6]. While a manufacturer can benefit from outsourcing, the maximum potential benefit cannot be achieved unless there is an efficient production plan that can cope with the complexity of outsourcing [24]. To achieve this benefit, management needs to decide what quantity of each product to manufacture and what quantity to outsource to external subcontractors. For this purpose, a joint scheme between production and outsourcing plans is necessary within an efficient

* Corresponding author. Tel./fax: +98 21 82884387. E-mail addresses: [email protected], [email protected] (I.N.K. Abadi). http://dx.doi.org/10.1016/j.cor.2012.12.003

scheduling scheme. The complexity of this problem lies in having to decide on scheduling plans for the manufacturer and the subcontractors, as well as on their coordination. In the last three decades, the production planning problem known as aggregate production planning (APP) has been widely analyzed jointly with outsourcing scenarios; however, there are few studies on production scheduling with outsourcing decisions. APP involves determining the production quantity and the outsourcing quantity over multiple time periods, where the objective is often to minimize total costs, including production cost, inventory cost, and outsourcing cost [2,10,12]. In APP, the quantity of outsourcing is determined in each time period and the scheduling of individual jobs is not decided, whereas in joint scheduling and outsourcing models it must be determined which jobs to outsource and how to schedule the jobs.
In this paper, we investigate a single-stage scheduling problem with an outsourcing option, with the objective of minimizing the sum of the total weighted completion time and the total outsourcing cost. The manufacturer receives a set of jobs at the beginning of the planning horizon, each of which needs one-stage processing. A set of decisions should be made simultaneously: which set of jobs should be outsourced, which subcontractor is appropriate for outsourcing, and which sequence of jobs is most beneficial. Mathematically, this problem can be stated as follows. There is a set of n independent jobs N = {1, 2, ..., n}, m1 unrelated in-house machines M1 = {1, 2, ..., m1}, and m2 outside subcontractors M2 = {1, 2, ..., m2}, where each subcontractor has a single machine. We denote the set of all machines by M, where M = M1 ∪ M2. All machines in M are available over the whole planning horizon without breakdown. A job can be processed on only one machine, and a machine can process at most one job at a time. For


ease of presentation, we denote this class of scheduling problems by P. In problem P, each job j∈N has |M| processing times p_jk (k∈M), a weight factor w_j, and an outsourcing cost b_j^k for each machine k∈M2. All the jobs in N are available at time zero. As in the classical scheduling problem, pre-emption of jobs is prohibited and cancellation is not allowed. Since the machines are unrelated, the processing times p_jk, for all j∈N and k∈M, are arbitrary and may differ across machines. Let C_j denote the completion time of job j in a feasible schedule S. We define S_p as the set of outsourced jobs in schedule S, and O_j as the outsourcing cost of job j∈S_p once its processing machine has been determined. Then the total weighted completion time is Σ_{j∈N} w_j C_j, and the total outsourcing cost OC is Σ_{j∈S_p} O_j. Without loss of generality, it is assumed that the parameters p_jk and b_j^k are integer valued for all j∈N, k∈M (p_jk, b_j^k ∈ Z+). Using the conventional three-field classification notation for scheduling problems presented by Lawler et al. [13], we denote problem P by R||Σ w_j C_j + OC. Since problem R||Σ w_j C_j + OC is a generalization of the classical unrelated parallel machine scheduling problem, it is NP-hard.
The remainder of this paper is organized as follows. Section 2 discusses the related literature. The integer programming formulation of the problem is given in Section 3. A Lagrangian relaxation scheme is presented in Section 4, and Section 5 describes a dynamic programming algorithm for solving the subproblems. Section 6 gives details of the proposed Lagrangian heuristic for solving the dual problem, and Section 7 analyzes the performance of the solution procedure via computational experiments. Section 8 concludes the paper.

2. Literature review

In general, the problem of scheduling with an outsourcing option is a new research direction, and only a few researchers have studied it in the literature. Most of this research has focused on the single-stage environment. For example, Ruiz-Torres et al. [25] presented a bi-objective model for finding outsourcing strategies and considered trade-offs between the following measures: (i) the average tardiness T = Σ_{h∈M} T_h, in which T_h is the tardiness of the jobs on machine h and M is the set of machines, and (ii) the total outsourcing cost OC = Σ_{h∈K} f(MC_h), in which K is a subset of the machine set M, MC_h is the completion time of the last job on machine h, and f(x) represents the cost function of x. A single in-house machine and n one-operation jobs are given; each job can be completed either on the in-house machine or on one of m identical outsource machines. A heuristic algorithm was introduced in their work and evaluated by a lower bounding procedure. Moreover, Lee and Sung [15] studied a single-machine scheduling problem with outsourcing allowed. They considered minimizing the weighted sum of the outsourcing cost Σ_{i∈O_p} o_i, in which O_p is the set of outsourced jobs in schedule p, and the sum of completion times Σ_i C_i, subject to an outsourcing budget OC(p) ≤ K. In the analysis of the problem, which is denoted by 1|Budget|Σ(1−δ)C_j + δ·OC, some problem properties were characterized and two associated heuristics and a branch and bound algorithm were derived. A similar study on single-machine scheduling was presented by Lee and Sung [16], in which the scheduling measure is one of the following: maximum lateness, total tardiness, or total completion time. The authors showed that the problem is NP-hard in the ordinary sense. As another example of a single-stage problem, Qi [22] dealt with a general outsourcing model with transportation considerations. It was assumed that all outsourced jobs have to be transported back to the company in batches. Four commonly used objective functions were discussed, i.e., (i) the total completion time ΣC_j, (ii)


the makespan C_max = max_{1≤j≤n} C_j, (iii) the maximum lateness L_max = max_{1≤j≤n} (C_j − d_j), and (iv) the total number of tardy jobs ΣU_j, where C_j and d_j represent the completion time and due date of job j, and U_j is a binary indicator equal to 1 if job j is late (C_j > d_j) and 0 otherwise (C_j ≤ d_j). Optimal algorithms for all variants of the problem were developed. Besides, Chen and Li [7] dealt with a make-to-order model with identical parallel machines subject to a constraint on delivery performance. Each job j is processed in the manufacturer's own plant with processing time p_0j and processing cost q_0j dollars. In addition to in-house production, outsourcing of job j to subcontractor h is possible, with delivery time p_hj and processing cost q_hj dollars. The objective is to minimize the total sum of production and subcontracting costs Σ_i Σ_j q_0j x_ij + Σ_{j∈V} q_{s_j j} y_j, in which x_ij is a 0-1 variable indicating whether job j is assigned to machine i, y_j is a 0-1 variable indicating whether job j is assigned to subcontractor s_j, and V is the set of jobs that can be subcontracted. The problem is shown to be NP-hard even when there is a single machine at the manufacturer's plant and a single remote subcontractor. The computational results revealed that a significant reduction in production costs is achieved when the subcontracting option is used. Additionally, some researchers have dealt with the two-stage environment for the problem of production scheduling with an outsourcing option. Cai et al. [5] investigated a manufacturing model that involves two machines, one owned by the manufacturer and the other by a third party, where all outsourced jobs have to be transported back from the third party after completion. In order to minimize the total cost, including the lateness cost, the cost of using the third-party machine, and the overtime cost, the authors developed an integer nonlinear program, obtained some optimality properties, and designed an algorithm to find the optimal solution. In another study, Lee and Choi [18] considered a two-stage production problem in which each job requires two operations, denoted by (1,j) and (2,j), to be processed on machines 1 and 2, and there is one outsourcing option. The objective is to minimize the sum of the makespan C_max(σ) associated with sequence σ and the total outsourcing cost α_1 p_1j + α_2 p_2j, where α_i represents the unit outsourcing cost for operations at machine i and p_ij is the processing time of operation (i,j). In their work, the computational complexity was analyzed for all cases and an approximation algorithm was developed for the NP-hard cases. Besides, a study on a two-stage outsourcing model of production scheduling was presented by Qi [23], in which each job needs two sequential operations. In this model, there exist two in-house production facilities, each of which can process both operations, as well as an option of outsourcing stage-one operations to a remote outside supplier. The outsourcing cost is denoted by a·x, where a indicates the difference between the in-house and outsource unit costs and x represents the total outsourcing time. Batch transportation and a transportation delay t are also considered, and the makespan criterion is minimized. Recently, Qi [24] presented an outsourcing model for a scheduling problem in a two-stage flow shop and analyzed different scenarios for outsourcing. He assumed that the outsourcing cost is proportional to the job processing time with a constant coefficient. Research on this problem in multi-stage production environments is very limited.
In this class, Lee et al. [17] studied an advanced planning and scheduling problem in a supply chain which can be modeled as a flow shop, and devised a genetic algorithm to solve it. In another study, Chung et al. [9] studied a multi-stage job shop scheduling problem where an operation of a job can be performed either on an in-house machine or on an external machine at an extra cost. They presented a decomposition-based algorithm that converts the original problem into smaller subproblems, and improved the efficiency of their heuristic by executing the operation sequencing and picking-out steps repeatedly.


Besides, Mokhtari et al. [19] investigated a flow shop scheduling problem under four outsourcing scenarios and due date constraints, and developed a heuristic for solving the problem under the different scenarios. More recently, Tavares Neto and Godinho Filho [27] dealt with the permutation flow shop problem with the objective of minimizing the weighted sum of the makespan and the outsourcing costs under a budget constraint, F|prmu, Budget|(1−δ)M + δ·OC, and designed two sequential ant colony optimization algorithms to solve the problem. One major drawback of almost all previous studies is that they limit the number of available subcontractors to just one. However, in practice, there often exist several suppliers to choose from for each job. Allowing several suppliers makes the problem more realistic and, at the same time, more complicated to formulate and solve, because a supplier selection decision is then embedded into the traditional problem. The authors could find very few papers, such as [7,19], that consider several available subcontractors from which to select. Another drawback of some previous studies, such as [15,16], is that they assume unlimited capacity for the subcontractors and hence ignore the scheduling of outsourced orders, which should be considered in parallel with in-house scheduling. It is also notable that the problem of scheduling with outsourcing has not yet been treated in an unrelated parallel machine environment. As a remedy, the current paper supports several available suppliers from which to select for subcontracting and also considers the scheduling of outsourced jobs, as well as in-house jobs, in an unrelated parallel machine system. This problem has not been considered in the literature before. In the sequel, the details of the formulation and the solution approach are presented.

3. Problem formulation

In this section, we formulate the problem considered in the current paper as an integer program. To this end, a general version of the problem is considered first, in which there is a single set of machines such that any job j can be processed by any machine k with a processing time p_jk and a processing cost g_j^k. A special case of this problem arises when some of the machines are in-house and the others are out of house, as in our problem. In that case, the processing costs of in-house jobs are set to zero and those of outsourced jobs are interpreted as the outsourcing cost charged by the subcontractors. The following notation is used in the formulation of the general problem.

Sets, indices and parameters
n          number of jobs
m          total number of machines
m1         number of in-house machines
m2         number of subcontractors
i, j, l    job indices
k          machine index
w_j        weight of job j
p_jk       processing time of job j on machine k
g_j^k      processing cost of job j on machine k
N          set of jobs, N = {1, 2, ..., n}
M          set of all machines, M = {1, 2, ..., m1+m2}
M1         set of in-house machines, M1 = {1, 2, ..., m1}
M2         set of outsource machines, M2 = {1, 2, ..., m2}

Variables
C_jk         completion time of job j on machine k
x_ij^k       binary variable indicating whether job i is processed immediately before job j on machine k
x_0j^k       binary variable indicating whether job j is processed first on machine k
x_j,n+1^k    binary variable indicating whether job j is processed last on machine k

In terms of the above notation, the problem can be formulated as follows:

(Problem P)

min  Σ_{j∈N} Σ_{k∈M} ( w_j C_jk + g_j^k Σ_{i∈N∪{0}, i≠j} x_ij^k )                          (1)

subject to

Σ_{k∈M} Σ_{i∈N∪{0}, i≠j} x_ij^k = 1,   ∀ j∈N                                               (2)

Σ_{j∈N} x_0j^k ≤ 1,   ∀ k∈M                                                                (3)

Σ_{i∈N∪{0}, i≠j} x_ij^k = Σ_{i∈N∪{n+1}, i≠j} x_ji^k,   ∀ j∈N, k∈M                          (4)

C_jk = p_jk x_0j^k + Σ_{i∈N, i≠j} ( C_ik + p_jk ) x_ij^k,   ∀ j∈N, k∈M                     (5)

x_ij^k ∈ {0, 1},   ∀ i, j∈N, k∈M                                                           (6)

The objective (1) seeks to minimize the sum of the total weighted completion time and the total processing cost. Constraint (2) guarantees that each job is processed exactly once, and constraint (3) ensures that at most one job takes the starting position on each machine. Constraint (4), which plays the same role as a flow conservation condition in network flow problems, ensures that the jobs are correctly sequenced within a partial schedule on each machine k∈M and also ensures that at most one job takes the finishing position on each machine. Constraint (5) defines the completion time C_jk through the relationship between two adjacent jobs. The last constraint (6) states the binary requirement on the decision variables.
In the sequel, we improve the above formulation by taking into account a job-ordering restriction on the machines. The formulation can be improved if we consider only those schedules in which the partial schedule on each machine follows a given ordering pattern of the jobs. In our case, the well-known Smith's rule [26] is applicable. This rule states that in any optimal schedule of problem R||Σ w_j C_j, the jobs within each partial schedule follow the weighted shortest processing time (WSPT) order. Therefore it suffices to consider only those schedules in which the jobs on each machine follow the WSPT rule; that is, if jobs i and j are both processed on a certain machine k and p_ik/w_i ≤ p_jk/w_j, then we only need to consider partial schedules on machine k in which job i precedes job j. Let A_kj be defined as the set of jobs i that succeed job j in the WSPT sequence on machine k (job j precedes job i), and B_kj as the set of jobs i that precede job j in the WSPT sequence on machine k (job j succeeds job i). Using WSPT as a problem-specific job-ordering restriction, we obtain the following new formulation of problem P, denoted by P′.

(Problem P′)

min  Σ_{j∈N} Σ_{k∈M} ( w_j C_jk + g_j^k Σ_{i∈B_kj∪{0}} x_ij^k )                            (7)

subject to

Σ_{k∈M} Σ_{i∈B_kj∪{0}} x_ij^k = 1,   ∀ j∈N                                                 (8)

Σ_{j∈N} x_0j^k ≤ 1,   ∀ k∈M                                                                (9)

Σ_{i∈B_kj∪{0}} x_ij^k = Σ_{i∈A_kj∪{n+1}} x_ji^k,   ∀ j∈N, k∈M                              (10)

C_jk = p_jk x_0j^k + Σ_{i∈B_kj} ( C_ik + p_jk ) x_ij^k,   ∀ j∈N, k∈M                       (11)

x_ij^k ∈ {0, 1},   ∀ i, j∈N, k∈M                                                           (12)

As can be seen, the integer programming formulation (7)–(12) is a new, larger-size unrelated parallel machine scheduling problem with n one-operation jobs and m1+m2 machines. However, the structure of the objective function is a special case that cannot be found in any classical parallel machine scheduling problem. In the sequel, we exploit the special structure of problem P′ to design a decomposition scheme for solving it.
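To make the job-ordering restriction concrete, the following Python sketch (our own illustration; the helper names are not from the paper) builds the WSPT order on a machine and the sets B_kj and A_kj used in formulation (7)–(12), using the data of the example given later in Section 5.

```python
def wspt_order(p_k, w):
    """Return the jobs sorted by Smith's WSPT rule, i.e. by p_jk / w_j on machine k."""
    return sorted(p_k, key=lambda j: p_k[j] / w[j])

def build_wspt_sets(p_k, w):
    """Build B_kj (jobs that may precede j) and A_kj (jobs that may succeed j) on machine k."""
    order = wspt_order(p_k, w)
    pos = {j: r for r, j in enumerate(order)}
    B = {j: {i for i in order if pos[i] < pos[j]} for j in order}
    A = {j: {i for i in order if pos[i] > pos[j]} for j in order}
    return B, A

# Jobs 1, 2, 3 of the Section 5 example on a single machine.
p_k = {1: 2, 2: 1, 3: 1}
w = {1: 2, 2: 2, 3: 1}
B, A = build_wspt_sets(p_k, w)
print(wspt_order(p_k, w))   # [2, 1, 3]
print(B[3])                 # {1, 2}: both other jobs may precede job 3
```

Only sequences consistent with these sets need to be considered in P′, which is what constraints (10) and (11) exploit.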

4. A decomposition scheme for problem P′

As described before, problem P′ is a special kind of machine scheduling problem with unrelated parallel machines; hence we can conclude that it is NP-hard. In addition, it involves a nonlinear term in Eq. (11), which makes it hard to solve by traditional methods. We can also observe that the problem is composed of similar subsystems, represented by the m1+m2 machines. Interestingly, the technological coefficient matrix contains independent blocks linked by a coupling constraint. We can exploit this observation and apply a Lagrangian relaxation (LR) approach to decompose the problem into smaller and easier subproblems and solve it via the standard LR technique. Such an idea has been employed in several past publications to design efficient solution procedures for different scheduling problems [28,8,14]. The idea behind LR is that many hard problems can be viewed as smaller and easier problems complicated by a set of coupling constraints. Dualizing the coupling constraints yields a Lagrangian problem which is easier to solve and whose optimal solution gives a lower bound on the original problem. An upper bound is also obtainable from a Lagrangian heuristic in which the solution of the relaxed problem is modified. The interested reader is referred to Geoffrion [11] and Bazaraa and Goode [3] for a detailed discussion of the LR technique.
We consider the Lagrangian relaxation of problem P′ obtained by dualizing constraint (8), which couples the different machines. Using a multiplier λ_j for each j∈N, the LR problem is constructed as follows:

(LR)   L(λ) = min Z_LR

subject to

Z_LR = Σ_{j∈N} Σ_{k∈M} ( w_j C_jk + g_j^k Σ_{i∈B_kj∪{0}} x_ij^k ) + Σ_{j∈N} λ_j ( Σ_{k∈M} Σ_{i∈B_kj∪{0}} x_ij^k − 1 )
     = −Σ_{j∈N} λ_j + Σ_{j∈N} Σ_{k∈M} ( w_j C_jk + ( g_j^k + λ_j ) Σ_{i∈B_kj∪{0}} x_ij^k )          (13)

Eqs. (9)–(12) and

λ_j unrestricted in sign,   ∀ j∈N                                                          (14)

Problem LR can be decomposed into |M| single-machine subproblems (one for each machine) as follows:

(LR_k)   L^(k)(λ) = min Z_LR^(k)

subject to

Z_LR^(k) = Σ_{j∈N} ( w_j C_jk + ( g_j^k + λ_j ) Σ_{i∈B_kj∪{0}} x_ij^k ),                   (15)

Eqs. (9)–(12), and (14). These subproblems can be solved independently, and their solutions are then used to obtain the solution of the Lagrangian problem LR, whose objective function value is calculated by Z_LR = Σ_{k∈M} Z_LR^(k) − Σ_{j∈N} λ_j. In the subsequent section, we describe a dynamic programming algorithm for solving the constructed subproblems.
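As a minimal sketch of how the decomposed bound is assembled (assuming a per-machine solver solve_subproblem, which the next section provides via dynamic programming), the relation Z_LR = Σ_k Z_LR^(k) − Σ_j λ_j can be coded directly:

```python
def lagrangian_bound(machines, jobs, lam, solve_subproblem):
    """Assemble the Lagrangian lower bound Z_LR(lambda) from the machine subproblems.

    solve_subproblem(k, lam) is assumed to return the optimal value Z_LR^(k)
    of subproblem (15) for machine k.
    """
    z_sub = {k: solve_subproblem(k, lam) for k in machines}
    return sum(z_sub.values()) - sum(lam[j] for j in jobs), z_sub
```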

5. Dynamic programming for solving subproblems

Dynamic programming (DP) is a useful general optimization technique for making a sequence of dependent decisions. In general, when the performance measure of a single-machine problem has an additive form, as in our subproblems, an optimal solution can be found through a dynamic programming approach [1]. In the first stage, DP starts with a small part of the whole problem and finds the optimal solution for that small part. It then slightly enlarges this part and finds the optimal solution of the new problem using the preceding one. This procedure is repeated until the current problem encompasses the original problem, which is solved optimally at the last stage of the algorithm. Then, tracking back from the original problem to the smaller parts, DP determines the optimal variables. Here, we design a forward DP algorithm for solving the single-machine subproblems presented in Eq. (15). Let us first give a brief description of these subproblems. We have a set of independent jobs N = {1, 2, ..., n}, each with a processing time p_jk and a weight factor w_j, both of which are positive integers. The only decision is to find a subset of jobs A ⊆ N such that the total cost Σ_{j∈A} ( w_j C_jk + λ_j + g_j^k ) is minimized. We construct the DP algorithm in combination with the WSPT rule. Using this rule as a job-ordering restriction, only jobs in B_kj can be sequenced before job j on machine k. We therefore first re-number the jobs so that p_1k/w_1 ≤ p_2k/w_2 ≤ ... ≤ p_nk/w_n. Let R_k = {1, 2, ..., n} denote the new order of the jobs on machine k, and H = {1, 2, ..., l} denote a subset of jobs (H ⊆ R_k) which follows the WSPT order. At each stage l (l = 1, 2, ..., n), it is decided whether job l is included in the partial schedule or not. Let G(l, t) represent the minimum cost over all partial schedules that consist of jobs from the set H and whose last job completes at time t. The algorithm starts with G(0, t) = 0 for t ≥ 0, and G(l, t) = +∞ for t < 0 and l = 0, ..., n. In the subsequent stages, the minimum objective value of the partial schedules is calculated via the recursive relation

G(l, t) = min{ G(l−1, t−p_lk) + w_l t + λ_l + g_l^k ,  G(l−1, t) }

for all l = 1, ..., n and t = 0, ..., P_k. Note that P_k represents the planning horizon on machine k and is equal to Σ_{j∈N} p_jk. If the partial schedule contains job l, we add it to the best solution over the first l−1 jobs that completes at time t−p_lk; the value of this solution is G(l−1, t−p_lk) + w_l t + λ_l + g_l^k. If job l is not included in the partial schedule, we select the best schedule over the first l−1 jobs that completes at time t; the value of this solution is G(l−1, t). After executing all stages, the current subproblem is solved by computing the minimum value of G(l, t) over all t = 0, ..., P_k at the last stage.

Example. Assume that a manufacturer receives three jobs (1, 2, 3) with weight factors w = (2, 2, 1), processing times p = (2, 1, 1) and processing costs g = (5, 10, 5) on a certain in-house machine k. The Lagrangian multipliers are given as λ = (−7, 5, −9). The aim of the single-machine subproblem LR_k is to find the best set of jobs from {1, 2, 3} with the minimum objective value Z_LR^(k). According to the WSPT rule, the new order of jobs is R_k = (2, 1, 3). After calculating the initial values of the DP algorithm, the recursive relation is used to obtain the objective values G(l, t) for all l = 1, 2, 3 and t = 0, 1, 2, 3, 4 as follows:

First stage, on job 2:
G(1,0) = min{ G(0,0−1) + 2·0 + 5 + 10, G(0,0) } = 0
G(1,1) = min{ G(0,1−1) + 2·1 + 5 + 10, G(0,1) } = 0
G(1,2) = min{ G(0,2−1) + 2·2 + 5 + 10, G(0,2) } = 0
G(1,3) = min{ G(0,3−1) + 2·3 + 5 + 10, G(0,3) } = 0
G(1,4) = min{ G(0,4−1) + 2·4 + 5 + 10, G(0,4) } = 0

Second stage, on job 1:
G(2,0) = min{ G(1,0−2) + 2·0 + (−7) + 5, G(1,0) } = 0
G(2,1) = min{ G(1,1−2) + 2·1 + (−7) + 5, G(1,1) } = 0
G(2,2) = min{ G(1,2−2) + 2·2 + (−7) + 5, G(1,2) } = 0
G(2,3) = min{ G(1,3−2) + 2·3 + (−7) + 5, G(1,3) } = 0
G(2,4) = min{ G(1,4−2) + 2·4 + (−7) + 5, G(1,4) } = 0

Third stage, on job 3:
G(3,0) = min{ G(2,0−1) + 1·0 + (−9) + 5, G(2,0) } = 0
G(3,1) = min{ G(2,1−1) + 1·1 + (−9) + 5, G(2,1) } = −3
G(3,2) = min{ G(2,2−1) + 1·2 + (−9) + 5, G(2,2) } = −2
G(3,3) = min{ G(2,3−1) + 1·3 + (−9) + 5, G(2,3) } = −1
G(3,4) = min{ G(2,4−1) + 1·4 + (−9) + 5, G(2,4) } = 0

According to the above calculations, the best set of jobs to be scheduled on machine k is H = {3}. The value of this solution is −3. A similar approach can be employed for all other in-house and outsource machines.
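A compact implementation of the forward recursion, written here only for illustration and checked against the example above, is sketched below; it returns both the optimal value of subproblem (15) on one machine and the selected job set.

```python
import math

def solve_subproblem_dp(p, w, lam, g):
    """Forward DP for subproblem (15) on one machine.

    p, w, lam, g : dicts indexed by job, giving processing time, weight,
    Lagrangian multiplier and processing cost on this machine.
    G[l][t] is the minimum cost over the first l jobs in WSPT order whose
    last selected job completes at time t.
    """
    order = sorted(p, key=lambda j: p[j] / w[j])          # WSPT re-numbering
    P_k = sum(p.values())                                  # planning horizon
    G = [[0] * (P_k + 1) for _ in range(len(order) + 1)]   # G(0,t) = 0 for t >= 0
    choose = [[False] * (P_k + 1) for _ in range(len(order) + 1)]
    for l, j in enumerate(order, start=1):
        for t in range(P_k + 1):
            skip = G[l - 1][t]
            take = math.inf                                # G(l-1, t-p) is +inf for t-p < 0
            if t - p[j] >= 0:
                take = G[l - 1][t - p[j]] + w[j] * t + lam[j] + g[j]
            G[l][t] = min(take, skip)
            choose[l][t] = take < skip
    # optimal value: minimum of G(n, t) over all t in the last stage
    t_best = min(range(P_k + 1), key=lambda t: G[len(order)][t])
    best, selected, t = G[len(order)][t_best], set(), t_best
    for l in range(len(order), 0, -1):                     # backtrack the recorded decisions
        if choose[l][t]:
            selected.add(order[l - 1])
            t -= p[order[l - 1]]
    return best, selected

# Data of the worked example: jobs 1, 2, 3 on one in-house machine.
p = {1: 2, 2: 1, 3: 1}
w = {1: 2, 2: 2, 3: 1}
g = {1: 5, 2: 10, 3: 5}
lam = {1: -7, 2: 5, 3: -9}
print(solve_subproblem_dp(p, w, lam, g))   # (-3, {3}), matching the calculations above
```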

6. A Lagrangian heuristic procedure

In this section, we present a Lagrangian heuristic to solve the original problem. The Lagrangian dual (LD) is formulated as follows:

(LD)   max_λ L(λ) = min Z_LR

subject to

Z_LR = Σ_{k∈M} Z_LR^(k) − Σ_{j∈N} λ_j                                                      (19)

(9)–(12), and (14). The solution of the dual problem (19) provides a lower bound for the original problem. The general steps of the Lagrangian heuristic for scheduling problems, as introduced by Nishi et al. [20], are as follows.

Step 0: Initialization. Set the number of iterations and the Lagrange multipliers.
Step 1: Solving the relaxed problem. Each subproblem is solved to optimality, and the lower bound of the original problem is calculated.
Step 2: Construction of a feasible solution. Generate a feasible solution from the solution of the relaxed problem by means of a heuristic, and calculate the upper bound.
Step 3: Evaluation of convergence. If the gap GAP = UB − LB(λ) is equal to zero (or less than a specified threshold), or the lower bound has not been updated for a predetermined number of iterations, the algorithm terminates.
Step 4: Solving the dual problem. Update the Lagrangian multipliers, and then return to Step 1.

A crucial step of the LR approach is to optimize the dual function, and its success depends greatly on the ability to generate good Lagrangian multipliers. The subgradient method is commonly utilized to obtain good multiplier values. In this method, the multiplier is updated according to

λ^(h+1) = λ^h + s^h g^h                                                                    (20)

where h is the iteration number and s^h is the step size at the h-th iteration, calculated by

s^h = α ( L* − L^h ) / ||g^h||²,   0 < α < 2                                               (21)

in which L* represents the optimal value estimated by the best feasible solution achieved so far, L^h is the value of L at the h-th iteration, and g^h = g(λ^h) is the subgradient of the Lagrangian function at λ^h, which is equal to Σ_{k∈M} Σ_{i∈B_kj∪{0}} x_ij^k − 1. The variables x_ij are calculated by

x_ij = arg min ( Σ_{k∈M} Z_LR^(k) )   subject to (9)–(12) and (14).                        (22)

The initial value of α is usually set to 0.5. When the duality gap GAP is less than a predefined small number or a fixed CPU time is reached, the algorithm terminates. According to (21) and (22), the subgradient method needs to solve all subproblems at each iteration. However, the solution of the Lagrangian dual problem obtained at each iteration of the subgradient method is not necessarily identical to the solution of the original problem, owing to the existence of duality gaps in scheduling problems [20]. Thus the dual solution is generally associated with an infeasible schedule, and some of the relaxed constraints may be violated. In this regard, there are two sets of jobs that may make the solution infeasible. The first set, UF1, includes jobs that are scheduled on more than one machine, and the second set, UF2, contains jobs that are not chosen in any partial schedule at all. To repair such solutions, we present a Lagrangian heuristic that constructs a feasible solution from the solution of the relaxed problem. In this algorithm, the jobs that have been scheduled exactly once remain fixed in terms of both the assigned machine and the starting time. For the two other types of jobs, in sets UF1 and UF2, the heuristic selects the strategy that leads to the minimum cost s_jk = w_j C_jk + d_jk for job j on machine k, where d_jk is g_j^k Σ_{i∈B_kj∪{0}} x_ij^k. For a job j in UF1, let F_j represent all the machines on which job j would be processed in the infeasible schedule. The cost of job j is calculated for all machines k∈F_j, and job j is then assigned to the machine k with the minimum cost, i.e., the k with s_jk = min{s_jd : d∈F_j}. This procedure is repeated until every job in UF1 is scheduled exactly once. By the end of this stage, we may have a schedule that involves machine idle time. Let Y_k represent the set of all jobs assigned to machine k at this point, and let m_kj be the value of p_jk/w_j for each unselected job j∈UF2 and k∈M. Under Smith's rule, all jobs should be scheduled in WSPT order; therefore, the potential position of an unselected job j on machine k can be obtained by comparing m_kj with p_ik/w_i for all jobs i in Y_k. In this way, the potential position of every unselected job on every machine is determined under Smith's rule. Clearly, assigning an unselected job to a machine under the WSPT rule may require shifting the subsequent jobs to the right, which incurs an additional cost in terms of total weighted completion time. Hence, the unselected job is assigned to the machine k for which the total additional cost incurred by the shifted jobs plus the weighted completion time of the inserted job is minimized. This procedure is applied to all jobs in UF2 in ascending order of their processing times, in the hope that the gaps of idle time are filled as much as possible without further shifting. At the end of this stage, every job is contained in the schedule exactly once and thus the feasibility of the solution is assured. However, we may still have a schedule with machine idle time at this point.

Definition 1. A performance measure Z is regular if (a) the scheduling objective is to minimize Z, and (b) Z can increase only if at least one of the completion times in the schedule increases [1].

It can be shown that in an optimal solution of a scheduling problem with a regular measure, as in our problem, there is no machine idle time. Using this property, we simply shift all the jobs in the current schedule to the left in such a way that the machine idle time becomes zero and there is no overlap between successive jobs. The value of the resulting schedule gives an upper bound for the original problem.
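The multiplier update (20)–(21) and the first repair step (reassigning jobs that appear on more than one machine, set UF1) can be sketched as follows; these are our own illustrative helpers, and the WSPT insertion of unselected jobs and the final left-shift are not shown.

```python
def subgradient_update(lam, violation, L_star, L_h, alpha=0.5):
    """One subgradient step, Eqs. (20)-(21).

    lam       : dict job -> current multiplier lambda_j^h
    violation : dict job -> g_j = sum_k sum_{i in B_kj U {0}} x_ij^k - 1
    L_star    : estimate of the optimal value (best feasible value found so far)
    L_h       : Lagrangian value at the current iteration
    alpha     : step-size parameter, 0 < alpha < 2 (0.5 is the usual initial value)
    """
    norm_sq = sum(v * v for v in violation.values())
    if norm_sq == 0:                       # constraint (8) is already satisfied
        return dict(lam)
    step = alpha * (L_star - L_h) / norm_sq
    return {j: lam[j] + step * violation[j] for j in lam}

def resolve_duplicates(candidate_machines, s_cost):
    """Repair step for UF1: keep each duplicated job only on its cheapest machine.

    candidate_machines : dict job -> set of machines F_j used in the relaxed solution
    s_cost             : dict (job, machine) -> s_jk = w_j*C_jk + d_jk
    """
    return {j: min(F_j, key=lambda k: s_cost[(j, k)])
            for j, F_j in candidate_machines.items()}
```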


Table 1
Performance of the solution procedure.

                             Duality gap                CPU (s)
Set   |N|   |M1|   |M2|   Best    Worst   Mean       Best    Worst   Mean
1     20    4      2      0.69    1.04    0.84       0.51    0.84    0.64
2     20    4      5      1.12    2.75    1.61       0.64    0.77    0.95
3     20    6      2      2.86    4.45    3.56       0.61    0.81    0.73
4     20    6      5      1.92    3.93    2.84       0.79    0.87    0.83
5     30    4      2      2.51    2.85    2.69       1.34    1.65    1.45
6     30    4      5      2.26    3.01    2.81       1.51    1.78    1.63
7     30    6      2      2.32    2.74    2.49       1.49    1.87    1.74
8     30    6      5      1.44    1.88    1.58       2.01    2.54    2.31
9     60    6      2      2.24    3.34    3.17       8.65    10.21   9.49
10    60    6      5      2.96    3.81    3.29       9.54    11.84   10.68
11    60    8      2      2.29    3.03    2.83       9.27    11.46   10.91
12    60    8      5      2.78    3.64    3.54       10.54   12.21   11.94
13    80    6      2      2.99    3.51    3.41       16.32   19.02   18.13
14    80    6      5      1.58    2.64    2.11       28.31   32.58   29.77
15    80    8      2      1.94    3.19    2.27       26.98   31.87   30.07
16    80    8      5      3.31    3.63    3.49       29.09   33.10   32.07
17    100   8      2      3.19    4.66    3.96       35.21   39.14   38.32
18    100   8      5      2.67    3.14    2.94       46.21   51.27   48.45
19    100   10     2      2.94    3.47    3.19       40.95   48.45   46.14
20    100   10     5      3.96    4.58    4.29       51.05   56.51   53.37

Table 2
Comparison between Lagrangian and LP-relaxation lower bounds.

                                          CPU (s)
ID        |N|    |M1|   |M2|   L-gap     LR procedure   LP-relaxation
1         20     6      5      5.23      0.83           53.26
2         30     6      5      8.32      2.31           97.32
3         60     8      5      10.95     11.94          154.62
4         80     8      5      11.21     32.07          397.54
5         100    10     5      14.46     53.37          695.71
Average                        10.034    20.104         279.69

7. Computational experiments

This section describes the computational experiments designed to evaluate the performance of the proposed solution procedure. First, the processing cost g_j^k is set to zero for in-house jobs and to the outsourcing cost for outsourced jobs, in order to concentrate on the original problem (scheduling with an outsourcing option). In order to capture a wide range of problem structures, 20 problem sets were considered and 10 problems were randomly generated in each set, resulting in a total of 200 test problems used in the experiments. Summary statistics of the results are computed in order to enhance the confidence in the assessment of the solution procedure. The data required for our problem consist of the number of jobs, the number of in-house machines, the number of subcontractors, the processing times, and the outsourcing costs. The number of jobs, number of in-house machines, and number of subcontractors vary from 20 to 100, 4 to 10, and 2 to 5, respectively. The processing times and job weights were generated from discrete uniform distributions over (1, 20) and (1, 10), respectively. The outsourcing cost is drawn from a uniform distribution between 10 and 30. The results of the test experiments are shown in Table 1. These results are described by the number of jobs |N|, the number of in-house machines |M1|, the number of outsource machines |M2|, and the percentage gap (gap %) between the value of the obtained feasible solution and the lower bound. The gap %, which is used to evaluate the quality of the solutions, is defined as gap % = 100 × (value of obtained feasible solution − lower bound)/lower bound. The average CPU time is also reported in seconds. In all experiments, the algorithm terminates when a predefined number of iterations (300) or a value of the average percentage gap (0.5%) is reached. The results reported in Table 1 show that the solution procedure does well for a wide range of problem sizes, with an average gap of 2.85% over all 200 test problems. The number of jobs has a significant effect on the computational time of the solution procedure. It also seems that, for problems with a given number of machines, the gaps approximately increase as the number of jobs increases.
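A sketch of the instance generator implied by this description (our own helper, using the stated distributions) is:

```python
import random

def generate_instance(n_jobs, n_inhouse, n_subcontractors, seed=None):
    """Random test instance as described above: processing times ~ U{1..20},
    job weights ~ U{1..10}, outsourcing costs ~ U{10..30}; g_j^k is zero on
    in-house machines and equals the outsourcing cost on subcontractor machines."""
    rng = random.Random(seed)
    jobs = range(1, n_jobs + 1)
    machines = range(1, n_inhouse + n_subcontractors + 1)
    p = {(j, k): rng.randint(1, 20) for j in jobs for k in machines}   # unrelated machines
    w = {j: rng.randint(1, 10) for j in jobs}
    g = {(j, k): 0 if k <= n_inhouse else rng.randint(10, 30)
         for j in jobs for k in machines}
    return p, w, g

p, w, g = generate_instance(n_jobs=20, n_inhouse=4, n_subcontractors=2, seed=1)
```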

Additionally, we compared the tightness of the lower bound obtained by the Lagrangian relaxation (LR) presented in the current paper with the lower bound obtained by the commonly used linear programming relaxation (LP-relaxation). To this end, we designed additional computational experiments on problem sets with |N| = {20, 30, 60, 80, 100}. The LP-relaxation lower bounds were generated using the well-known commercial solver LINGO 8.0. The comparisons were carried out through the lower bound gap (L-gap), defined as L-gap = 100 × (LR bound − LP bound)/LP bound. The average values of the results are shown in Table 2. The lower bound gap is 10.034% on average, which shows the better performance of the Lagrangian bound with respect to the LP-relaxation bound. It is also revealed that the computational time required by the LR procedure is on average less than the CPU time required to obtain the LP bound. Since the resulting LP-relaxation problem is still a nonlinear program, the LP-relaxation procedure is more time consuming than our LR approach, as shown in Table 2. Furthermore, additional experiments were carried out to investigate the quality of the feasible solution (upper bound) obtained by the Lagrangian heuristic proposed in our paper. For this purpose, the optimal/best solution of the original problem obtained by LINGO is recorded. The average comparison results are reported in Table 3 in terms of solution quality and CPU time. The quality of the solutions is measured by Q-gap = 100 × (LR solution value − LINGO value)/LINGO value.
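All three gap measures used in Tables 1–3 share the same relative-deviation form; a one-line helper (ours) makes this explicit:

```python
def percentage_gap(value, reference):
    """gap % (feasible value vs. lower bound), L-gap (LR bound vs. LP bound) and
    Q-gap (LR solution vs. LINGO solution) are all 100 * (value - reference) / reference."""
    return 100.0 * (value - reference) / reference
```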


Table 3
Comparison between Lagrangian upper bounds and LINGO solutions.

                                          CPU (s)
ID    |N|    |M1|   |M2|   Q-gap      Lagrangian heuristic   LINGO
1     25     6      5      0.026      1.13                   5853
2     30     6      5      0.042      2.31                   10,854
3     45     6      5      0.093      6.08                   24,243
4     70     8      5      0.016 a    18.91                  >43,200 b
5     90     8      5      0.014 a    41.61                  >43,200 b
6     120    10     5      0.019 a    59.74                  >43,200 b

a Comparisons were carried out against the best solution obtained by LINGO after running for 12 h.
b LINGO failed to obtain the optimal solution after running for 12 h.

The results show that our Lagrangian procedure is capable of finding good feasible solutions compared to the optimal/best solutions generated by the LINGO solver. As shown in Table 3, LINGO failed to generate the optimal solution for three out of the six problem sets after running for 12 h. It is also shown that the average CPU time required by LINGO is significantly greater than that of our LR heuristic.
In addition to the above comparisons, a new set of experiments was conducted to analyze the sensitivity of the results to variability in g_j^k, p_jk and w_j. In order to create each class of problems, the values of some of the input data (g_j^k, p_jk, w_j) are changed. The pattern of changes should correspond to a valid range of the parameters (g_j^k, p_jk, w_j). For this purpose, we discuss an industrial interpretation of the parameters w_j and g_j^k, which appear in the objective function Σ_{j∈N} Σ_{k∈M} ( w_j C_jk + g_j^k Σ_{i∈B_kj∪{0}} x_ij^k ). The weight w_j may be interpreted as a holding cost of job j per unit time [21]. In that case, the objective function is the sum of holding and processing costs. It is assumed that the processing cost of jobs charged by the subcontractors, g_j^k, is drawn from the discrete uniform distribution over the interval [20, 100], and the holding cost w_j is calculated as u·ḡ_j, where u ∈ {0.2, 1, 2} is a transformation factor and ḡ_j, the mean of g_j^k over the machines k, is the mean outsourcing cost. These values of u represent the three scenarios described below.
Scenario I. The holding cost of a job is two times the processing cost of the job charged by the subcontractors (u = 2). This scenario can occur for hazardous goods such as explosives, flammables, poisons, corrosives, and radioactive substances. These materials must be packaged, labeled, handled and transported according to strict regulations from several agencies, and mitigating the risks associated with hazardous materials may require safety precautions during their transport and storage. All these circumstances lead to a holding cost that is greater than the processing cost.
Scenario II. The holding cost of a job is the same as the processing cost of the job charged by the subcontractors (u = 1). This scenario can occur for sensitive goods, which can be categorized into three classes: (i) temperature-sensitive goods, (ii) stress-sensitive goods, and (iii) light-sensitive goods. Pharmaceuticals, foodstuffs and chemicals are examples of sensitive goods. Holding costs of sensitive goods are generally lower than those of hazardous goods, and hence the holding cost of a sensitive good can be assumed to equal its production cost.
Scenario III. The holding cost of a job is 0.2 times the production cost of the job charged by the subcontractors (u = 0.2). This scenario can occur for regular goods; a good that does not belong to the hazardous or sensitive categories is called a regular good.
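The scenario construction can be sketched as follows (our own helper; ḡ_j is taken as the average of g_j^k over the machines, which is one reading of the "mean outsourcing cost" above):

```python
import random

def scenario_parameters(jobs, machines, u, seed=None):
    """Draw subcontracting costs g_j^k ~ U{20..100} and set the holding cost
    w_j = u * mean_k g_j^k, with u = 2, 1, 0.2 for scenarios I, II, III."""
    rng = random.Random(seed)
    g = {(j, k): rng.randint(20, 100) for j in jobs for k in machines}
    g_bar = {j: sum(g[(j, k)] for k in machines) / len(machines) for j in jobs}
    w = {j: u * g_bar[j] for j in jobs}
    return g, w

g, w = scenario_parameters(jobs=range(1, 21), machines=range(1, 7), u=2)   # scenario I
```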

Table 4
Results of sensitivity analysis for scenario I (w_j = 2·ḡ_j).

ID   p_jk        g_j^k        Optimal value    Duality gap   Q-gap   L-gap
1    U[5, 10]    U[20, 40]    9225.15          0.818         2.81    0.0141
                 U[40, 60]    16,373.50        0.793         2.95    0.0135
                 U[60, 80]    23,113.10        0.835         3.16    0.0150
                 U[80, 100]   27,155.20        0.785         3.08    0.0147
2    U[10, 20]   U[20, 40]    19,312.10        0.846         3.14    0.0159
                 U[40, 60]    32,039.00        0.836         3.07    0.0170
                 U[60, 80]    44,488.10        0.707         3.18    0.0167
                 U[80, 100]   57,445.00        0.783         3.27    0.0181
3    U[20, 40]   U[20, 40]    35,710.90        1.000         3.13    0.0163
                 U[40, 60]    62,140.20        0.876         3.34    0.0189
                 U[60, 80]    85,972.40        1.118         3.22    0.0155
                 U[80, 100]   109,376.00       0.872         3.16    0.0178
4    U[40, 60]   U[20, 40]    68,998.60        0.955         3.27    0.0185
                 U[40, 60]    120,038.00       1.072         3.46    0.0184
                 U[60, 80]    1,767,771.00     0.962         3.52    0.0190
                 U[80, 100]   220,023.00       1.015         3.49    0.0162

Table 5
Results of sensitivity analysis for scenario II (w_j = ḡ_j).

ID   p_jk        g_j^k        Optimal value    Duality gap   Q-gap   L-gap
1    U[5, 10]    U[20, 40]    4675.29          0.699         2.43    0.0132
                 U[40, 60]    8313.94          0.659         2.49    0.0135
                 U[60, 80]    11,749.90        0.695         2.67    0.0138
                 U[80, 100]   13,767.20        0.731         2.49    0.0145
2    U[10, 20]   U[20, 40]    9706.30          0.717         2.47    0.0148
                 U[40, 60]    16,180.70        0.733         2.53    0.0138
                 U[60, 80]    22,378.80        0.682         2.64    0.0155
                 U[80, 100]   22,169.20        0.725         2.58    0.0142
3    U[20, 40]   U[20, 40]    18,295.70        0.772         2.59    0.0138
                 U[40, 60]    30,808.30        0.749         2.67    0.0148
                 U[60, 80]    43,881.50        0.978         2.75    0.0145
                 U[80, 100]   56,348.60        0.755         2.53    0.0151
4    U[40, 60]   U[20, 40]    35,026.30        0.812         2.73    0.0158
                 U[40, 60]    60,683.90        0.988         2.76    0.0134
                 U[60, 80]    89,555.00        0.799         2.49    0.0156
                 U[80, 100]   111,551.00       0.776         2.84    0.0150

Table 6
Results of sensitivity analysis for scenario III (w_j = 0.2·ḡ_j).

ID   p_jk        g_j^k        Optimal value    Duality gap   Q-gap   L-gap
1    U[5, 10]    U[20, 40]    993.34           0.607         2.24    0.0124
                 U[40, 60]    1780.60          0.622         2.11    0.0102
                 U[60, 80]    2504.56          0.695         2.05    0.0105
                 U[80, 100]   2884.74          0.614         2.09    0.0111
2    U[10, 20]   U[20, 40]    2019.83          0.712         2.24    0.0115
                 U[40, 60]    3383.68          0.631         2.13    0.0109
                 U[60, 80]    4685.63          0.685         2.30    0.0116
                 U[80, 100]   6114.06          0.724         2.28    0.0114
3    U[20, 40]   U[20, 40]    3782.79          0.653         2.31    0.0108
                 U[40, 60]    6545.33          0.723         2.36    0.0128
                 U[60, 80]    9055.87          0.734         2.16    0.0123
                 U[80, 100]   11,702.70        0.690         2.27    0.0106
4    U[40, 60]   U[20, 40]    7275.49          0.728         2.39    0.0124
                 U[40, 60]    12,659.90        0.708         2.30    0.0131
                 U[60, 80]    18,622.60        0.659         2.42    0.0118
                 U[80, 100]   23,077.60        0.895         2.29    0.0127


We carried out the sensitivity analysis on a sample problem with n = 20, m1 = 4 and m2 = 2 for the three aforementioned scenarios separately. Tables 4–6 show the results for scenarios I, II and III. As the results show, the optimal objective value in all three scenarios increases as the processing time p_jk or the processing cost g_j^k increases. As another observation, all the gap indices, i.e., the duality gap, L-gap and Q-gap, increase when the processing times increase. At a given level of processing time, the duality gap, L-gap and Q-gap decrease from scenario I to scenario III. These observations are also depicted in Figs. 1–3. Note that the average duality gap, L-gap and Q-gap at a given processing time and scenario are defined as the average of the results obtained for that processing time and scenario over all processing costs g_j^k.

Fig. 1. The effect of processing time on average duality gap in scenarios I–III.

Fig. 2. The effect of processing time on average L-gap in scenarios I–III.

Fig. 3. The effect of processing time on average Q-gap in scenarios I–III.


8. Conclusion

Because of the increasing importance of on-time delivery of jobs to customers, this paper proposes a single-stage production scheduling model with an outsourcing option. Unlike most past research, our model allows for multiple available subcontractors and also supports a joint decision on the scheduling of in-house and outsourced jobs on the manufacturer and the subcontractors simultaneously. We designed an integer programming model and presented a dynamic programming based Lagrangian relaxation procedure for solving the problem. The results of extensive computational experiments revealed the high performance of the proposed approach for a wide range of problem sizes.

References

[1] Baker KR, Trietsch D. Principles of sequencing and scheduling. Hoboken, NJ: John Wiley & Sons; 2009.
[2] Baykasoglu A. MOAPPS 1.0: aggregate production planning using the multiple objective tabu search. International Journal of Production Research 2001;39:3685–702.
[3] Bazaraa MS, Goode JJ. A survey of various tactics for generating Lagrangian multipliers in the context of Lagrangian duality. European Journal of Operational Research 1979;3:322–8.
[5] Cai X, Lee C-Y, Vairaktarakis GL. Optimization of processing and delivery decisions involving third-party machines. Nonlinear Analysis 2005;63(5–7):2269–78.
[6] Chan FTS, Kumar V, Tiwari MK. The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling model. International Journal of Production Research 2009;47(1):119–42.
[7] Chen Z-L, Li C-L. Scheduling with subcontracting options. IIE Transactions 2008;40(12):1171–84.
[8] Chen H, Luh PB. An alternative framework to Lagrangian relaxation approach for job shop scheduling. European Journal of Operational Research 2003;149(3):499–512.
[9] Chung D, Lee K, Shin K, Park J. A new approach to job shop scheduling problems with due date constraints considering operation subcontracts. International Journal of Production Economics 2005;98(2):238–50.
[10] Da Silva CG, Figueira J, Lisboa J, Barman S. An interactive decision support system for an aggregate production planning model based on multiple criteria mixed integer linear programming. Omega 2006;34:167–77.
[11] Geoffrion AM. Lagrangian relaxation and its uses in integer programming. Mathematical Programming Study 1974;2:82–114.
[12] Jamalnia A, Soukhakian MA. A hybrid fuzzy goal programming approach with different goal priorities to aggregate production planning. Computers & Industrial Engineering 2009;56:1474–86.
[13] Lawler EL, Lenstra JK, Rinnooy Kan AHG, Shmoys DB. Sequencing and scheduling: algorithms and complexity. In: Graves SC, Rinnooy Kan AHG, Zipkin PH, editors. Logistics of production and inventory. Amsterdam: North-Holland; 1993.
[14] Li Z, Ierapetritou MG. Integrated production planning and scheduling using a decomposition framework. Chemical Engineering Science 2010;65(22):5887–900.
[15] Lee IS, Sung CS. Single machine scheduling with outsourcing allowed. International Journal of Production Economics 2008;111(2):623–34.
[16] Lee IS, Sung CS. Minimizing due date related measures for a single machine scheduling problem with outsourcing allowed. European Journal of Operational Research 2008;186(3):931–52.
[17] Lee YH, Jeong CS, Moon C. Advanced planning and scheduling with outsourcing in manufacturing supply chain. Computers & Industrial Engineering 2002;43(1–2):351–74.
[18] Lee K, Choi B-C. Two-stage production scheduling with an outsourcing option. European Journal of Operational Research 2011;213(3):489–97.
[19] Mokhtari H, Abadi INK, Amin-Naseri MR. Production scheduling with outsourcing scenarios: a mixed integer programming and efficient solution procedure. International Journal of Production Research 2012;50(19):5372–95.
[20] Nishi T, Hiranaka Y, Inuiguchi M. Lagrangian relaxation with cut generation for hybrid flowshop scheduling problems to minimize the total weighted tardiness. Computers & Operations Research 2010;37(1):189–98.
[21] Pinedo M. Scheduling: theory, algorithms, and systems. 3rd ed. Springer; 2008.
[22] Qi X. Coordinated logistics scheduling for in-house production and outsourcing. IEEE Transactions on Automation Science and Engineering 2008;5(1):188–92.
[23] Qi X. Two-stage production scheduling with an option of outsourcing from a remote supplier. Journal of Systems Science and Systems Engineering 2009;18(1):1–15.
[24] Qi X. Outsourcing and production scheduling for a two-stage flow shop. International Journal of Production Economics 2011;129:43–50.
[25] Ruiz-Torres AJ, Lopez FJ, Ho JC, Wojciechowski PJ. Minimizing the average tardiness: the case of outsource machines. International Journal of Production Research 2008;46(13):3615–40.
[26] Smith WE. Various optimizers for single-stage production. Naval Research Logistics Quarterly 1956;3:59–66.
[27] Tavares Neto RF, Godinho Filho M. An ant colony optimization approach to a permutational flowshop scheduling problem with outsourcing allowed. Computers & Operations Research 2011;38(9):1286–93.
[28] Van den Akker M, Hoogeveen JA, Van de Velde SL. Parallel machine scheduling by column generation. Operations Research 1999;47(6):862–72.
