A Cutting Plane Approach for the Single-Product Assembly System Design Problem

Radu Gadidov
Emery Worldwide Airlines, Vandalia, Ohio 45377

Wilbert Wilhelm (communicating author; e-mail: [email protected])
Texas A&M University, College Station, Texas 77843-3131

April 24, 1999

Abstract.

This paper evaluates a new branch-and-cut approach, establishing a computational benchmark for the single-product assembly system design (SPASD) problem. Our approach, which includes a heuristic, preprocessing, and two cut-generating methods, outperformed OSL in solving a set of 102 instances of the SPASD problem. The approach is robust; test problems show that it can be applied to variations of the generic SPASD problem that we encountered in industry.

1. Introduction

The traditional (Type I) assembly line balancing (ALB) problem is to assign a set of tasks to stations, minimizing the number of stations required while observing task precedence relationships and a cycle time requirement. This paper deals with an extension of the ALB problem, the single-product assembly system design (SPASD) problem. The objective of the generic SPASD problem is to minimize the total cost of system design; in general, this consists of the fixed costs of activating stations and purchasing machines and the variable cost of assembly over the planning horizon. We assume that all of these costs are deterministic and known in advance. We also assume that the set of immediate predecessors for each task is known. The requirements are that each task must be assigned to some station and that assignments observe task precedences. Each task can be performed on any one of a set of alternative machines, and we assume that the processing time on each alternative machine is deterministic and also known in advance. Finally, we assume that all stations have the same cycle time c, which is also deterministic and known.

In addition, this paper deals with two actual SPASD problems we encountered in industry. The first allows parallel, identical machines to be located at each station. This configuration allows “long” jobs (i.e., with processing times larger than the cycle time) to be handled. It also increases station availability, helping to accommodate precedence relationships. The second imposes positional requirements so that tasks that require processing from the front side of the product cannot be assigned to the same station that processes tasks that require processing from the back side. We sought out these actual variations to evaluate the robustness of our solution approach in application to idiosyncrasies found in different plants.

The purpose of this paper is to evaluate a new branch-and-cut approach, establishing a computational benchmark for the SPASD problem. The approach consists of a heuristic, preprocessing methods and two types of cuts, including those identified by the Facet Generation Procedure (FGP) (cf. Gadidov and Wilhelm 1997) and some new types that are based on the particular structure of SPASD problems. Even though researchers have studied the SPASD and related ALB problems extensively and have developed elegant solution methods, it is well known that these methods have not been widely used in industry. We conjecture that one reason for this lack of technology transfer is that prior solution methods are specialized to a particular model and cannot be adapted easily to address variations found in each different plant. This paper demonstrates that our approach is sufficiently robust that, with only minor modifications, it can be applied successfully to variations of the SPASD problem.

We have organized the body of this paper in five sections. Section 2 reviews relevant literature, and Section 3 introduces our mathematical formulations. Section 4 presents the main components of our solution approach. In Section 5, we discuss our computational evaluation. Finally, we offer conclusions and recommendations for future research.

2. Literature Review

The objective of the (Type I) ALB problem is to minimize the number of identical stations required to process a set of tasks, subject to constraints that impose a fixed cycle time c and precedence relationships among tasks (Baybars 1986). ALB is an NP-hard problem (Karp 1972), but several specialized branch-and-bound algorithms have been shown to solve selected test problems effectively; for example, see Assche and Herroelen (1998), Johnson (1988), Hackman, Magazine and Wee (1989), Ugurdag, Papachristou and Rachmadugu (1997), and Scholl and Klein (1997). De Reyck and Herroelen (1995) surveyed model formulations and compared two effective algorithms, EUREKA (Hoffman 1992) and FABLE (Johnson 1988). Scholl (1995) overviewed the topic.


Rapid changes in manufacturing around the world have led researchers to study extended versions of the ALB problem. In particular, robots and other types of “machines” must now be prescribed, so that stations need not be identical as manual stations typically are in the traditional ALB problem. Extending ALB to prescribe the type of machine at each station results in the SPASD problem. The most appropriate objective for the SPASD problem is to minimize total cost (e.g., see Graves and Lamar (1983), Graves and Redfield (1988), and Ghosh and Gagnon (1989)) since the SPASD design with the minimum number of stations need not minimize cost. Wilhelm (1999) devised a column-generation approach to the SPASD problem, specifying assembly sequence to deal explicitly with tool changes, for example, in robotic assembly. Kimms (1997) proposed a column-generation method to design a flow line. He incorporated operation precedence relationships but required each operation to be performed by a specified type of machine rather than selecting one from a set of alternative machine types as we do.

We focus our review on the most relevant literature, which comprises papers that have applied strong cutting plane methods to SPASD problems. Pinnoi and Wilhelm (1998) devised a branch-and-cut approach for the SPASD problem, employing preprocessing at the root node. They showed that the node packing (NP) polytope is a relaxation of the ALB polytope and generated cuts based on this relationship. Pinnoi and Wilhelm (1997b) gave a formulation of the workload-smoothing problem, a variation of the ALB problem that minimizes the maximum idle time on any station, “smoothing” the workloads assigned to a predetermined number of stations. They proposed a related solution approach, which exploits the relationship between the NP and ALB polytopes. Kim and Park (1995) studied the SPASD problem associated with robotic lines, which is to assign tasks, parts and tools to robotic cells (i.e., stations) in order to minimize the total number of cells activated. They proposed a branch-and-bound algorithm, including preprocessing and adding most violated cover cuts (cf. Nemhauser and Wolsey (1988)) at the root node. Finally, Pinnoi and Wilhelm (1997a) established a family of hierarchical models for the SPASD problem, discussing in detail a variety of models and relationships between them. The generic SPASD model we study in this paper is based on Pinnoi and Wilhelm (1997b), but we present several new devices to facilitate solution. We now detail our SPASD models.

3. Model Formulation


In this section we present a mathematical formulation of the generic SPASD problem as well as two variations that represent actual industrial cases. First, however, we introduce notation that is common to all three models.

Indices
• m = machine type (m ∈ M)
• s = station (s ∈ S)
• t = task (t ∈ T)

Task information
• T = set of tasks (t ∈ T)
• IP_t = set of immediate predecessors of task t (t' ∈ IP_t)

Machine information
• M = set of machine types (m ∈ M)
• M_t = set of machine types that can perform task t (m ∈ M_t)
• p_mt = processing time of task t on machine type m (t ∈ T, m ∈ M_t)
• T_m = set of tasks that can be assigned to machine type m (m ∈ M)
• v_mt = variable cost of assigning task t to machine type m (t ∈ T, m ∈ M_t)
• c_m = fixed cost incurred if machine type m is prescribed (m ∈ M)

Station information
• c = cycle time
• S = set of stations (s ∈ S)
• f_s = fixed cost incurred if station s is activated (s ∈ S)

We now describe the generic SPASD problem.

3.1 The Generic SPASD Formulation.

The decision variables in this model are

Decision variables
• x_mst = 1 if task t is assigned to machine type m at station s (s ∈ S, t ∈ T, m ∈ M_t)
• y_ms = 1 if machine type m is assigned to station s (m ∈ M, s ∈ S)
• z_s = 1 if station s is activated (s ∈ S).

We now present our generic SPASD model.

Formulation


Model SPASD

$$\min\ z = \sum_{s \in S} f_s z_s + \sum_{s \in S} \sum_{m \in M} c_m y_{ms} + \sum_{s \in S} \sum_{t \in T} \sum_{m \in M_t} v_{mt} x_{mst}$$

subject to:

(1) $\sum_{m \in M_t} \sum_{s \in S} x_{mst} = 1, \quad t \in T$

(2) $\sum_{m \in M_{t'}} \sum_{s \in S} s\, x_{mst'} \le \sum_{m \in M_t} \sum_{s \in S} s\, x_{mst}, \quad t \in T,\ t' \in IP_t$

(3) $\sum_{t \in T} \sum_{m \in M_t} p_{mt} x_{mst} \le c\, z_s, \quad s \in S$

(4) $x_{mst} \le y_{ms}, \quad s \in S,\ t \in T,\ m \in M_t$

(5) $y_{ms} \le z_s, \quad m \in M,\ s \in S$

(6) $z_{s+1} \le z_s, \quad s \in S \setminus \{|S|\}$

(7) $x_{mst},\ y_{ms},\ z_s \in \{0,1\}, \quad s \in S,\ t \in T,\ m \in M_t.$
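To make the formulation concrete, here is a minimal sketch (ours, not part of the paper) that builds (1)-(7) for a tiny instance with the open-source PuLP modeling library and solves it with the bundled CBC solver; every data value below is a placeholder assumption.

```python
import pulp

# Tiny placeholder instance (all data are assumptions for illustration only).
S = [1, 2, 3]                                            # stations
T = ["t1", "t2", "t3"]                                   # tasks
M = ["m1", "m2"]                                         # machine types
Mt = {"t1": ["m1"], "t2": ["m1", "m2"], "t3": ["m2"]}    # alternative machines per task
IP = {"t1": [], "t2": ["t1"], "t3": ["t2"]}              # immediate predecessors
p = {("m1", "t1"): 3, ("m1", "t2"): 3, ("m2", "t2"): 2, ("m2", "t3"): 3}
v = {key: 10.0 for key in p}                             # variable assembly costs v_mt
f = {s: 200.0 for s in S}                                # station activation costs f_s
cm = {"m1": 2000.0, "m2": 2500.0}                        # machine purchase costs c_m
c = 5                                                    # cycle time

prob = pulp.LpProblem("SPASD", pulp.LpMinimize)
x = {(m, s, t): pulp.LpVariable(f"x_{m}_{s}_{t}", cat="Binary")
     for t in T for m in Mt[t] for s in S}
y = {(m, s): pulp.LpVariable(f"y_{m}_{s}", cat="Binary") for m in M for s in S}
z = {s: pulp.LpVariable(f"z_{s}", cat="Binary") for s in S}

# Objective: station activation + machine purchase + variable assembly cost.
prob += (pulp.lpSum(f[s] * z[s] for s in S)
         + pulp.lpSum(cm[m] * y[m, s] for s in S for m in M)
         + pulp.lpSum(v[m, t] * x[m, s, t] for t in T for m in Mt[t] for s in S))

for t in T:                                  # (1) each task is processed exactly once
    prob += pulp.lpSum(x[m, s, t] for m in Mt[t] for s in S) == 1
for t in T:                                  # (2) a predecessor's station index cannot exceed t's
    for tp in IP[t]:
        prob += (pulp.lpSum(s * x[m, s, tp] for m in Mt[tp] for s in S)
                 <= pulp.lpSum(s * x[m, s, t] for m in Mt[t] for s in S))
for s in S:                                  # (3) cycle-time capacity of an activated station
    prob += pulp.lpSum(p[m, t] * x[m, s, t] for t in T for m in Mt[t]) <= c * z[s]
for (m, s, t) in x:                          # (4) use machine type m at s only if it is located there
    prob += x[m, s, t] <= y[m, s]
for m in M:                                  # (5) machines only at activated stations
    for s in S:
        prob += y[m, s] <= z[s]
for s in S[:-1]:                             # (6) stations are activated consecutively
    prob += z[s + 1] <= z[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

Note how constraint (2) uses the station index s itself as a weight, exactly as in the model, so a predecessor can never be placed at a later station than its successor.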

The objective function minimizes the total cost of the system design, including the fixed costs of activating stations and prescribing machines and the variable cost of performing tasks. Equalities (1) require that each task be processed, and inequalities (2) invoke task precedence relationships. Constraints (3) impose the cycle time limitation, while inequalities (4) assure that task t is assigned to machine type m at station s only if a machine of that type is located at station s. Inequalities (5) are redundant for the integer program, but we included them in order to strengthen the linear programming relaxation; they require station s to be activated in order to assign a machine to it. Inequalities (6) require station s to be activated before station s+1 can be activated. We assume that an unlimited number of machines of each type m may be prescribed.

This model is similar to the one studied by Pinnoi and Wilhelm (1998), but there are important differences. Both models incorporate constraints (1) and (2). The Pinnoi and Wilhelm model did not incorporate z_s decision variables, so their cycle time constraint (3) involved c·y_ms instead of c·z_s and was enumerated over m ∈ M_t instead of summed over it. The Pinnoi and Wilhelm model was designed specifically to embed the ALB polytope (i.e., through constraints (1)-(3)), and they limited the number of machines assigned to station s using their inequality (4), Σ_{m∈M} y_ms ≤ 1. In comparison, our model incorporates several features to facilitate solution. First, we strengthen inequality (3) by summing over m ∈ M_t. Second, inequalities (4)-(6) are disaggregated forms that are known to strengthen the linear programming relaxation, making bounds more discerning. Finally, inequality (6) improves the formulation by avoiding symmetry. If S_0 stations are activated out of the |S| that are available, there are $\binom{|S|}{S_0}$ ways to make a given assignment of machine types and tasks to stations. Our model avoids this symmetry, so that we need not explore all alternative - but equivalent - ways of making the same assignment to different stations.

Now, we describe a variation of the generic SPASD problem that was posed by a manufacturer of assembly work stations.

3.2 Parallel Machines.

The application involved an assembly line in which determining the number of machines at each station was a crucial issue in the system design. Thus, we consider a variation of the generic SPASD problem in which identical machines (i.e., machines of the same type) can be used in parallel at each station. We need the following additional notation to describe each station:

Station information

• n_s = upper bound on the number of machines that can be assigned to station s
• c_mns = fixed cost of assigning n machines of type m to station s

The decision variables in this case are

Decision variables
• x_mst = 1 if task t is assigned to machine type m at station s (s ∈ S, t ∈ T, m ∈ M_t)
• y_mns = 1 if n machines of type m are assigned to station s (s ∈ S, m ∈ M, n = 1, …, n_s).

We now present our SPASDPM model.

Formulation of the SPASD Problem with Parallel Machines

Model SPASDPM

$$\min\ z = \sum_{s \in S} \sum_{n=1}^{n_s} \sum_{m \in M} c_{mns}\, y_{mns} + \sum_{s \in S} \sum_{t \in T} \sum_{m \in M_t} v_{mt}\, x_{mst}$$

subject to: (1), (2) and

(8) $\sum_{t \in T_m} p_{mt} x_{mst} \le c \sum_{n=1}^{n_s} n\, y_{mns}, \quad s \in S,\ m \in M$

(9) $\sum_{m \in M} \sum_{n=1}^{n_{s+1}} y_{mn(s+1)} \le \sum_{m \in M} \sum_{n=1}^{n_s} y_{mns}, \quad s \in S \setminus \{|S|\}$

(10) $\sum_{m \in M} \sum_{n=1}^{n_1} y_{mn1} \le 1$

(11) $x_{mst},\ y_{mns} \in \{0,1\}, \quad s \in S,\ t \in T,\ m \in M_t,\ n = 1, \ldots, n_s.$
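A small illustration of what constraint (8) buys (ours; the numbers are assumptions): n parallel machines of type m give a station n·c units of time per cycle, so the fewest machines that can accommodate a given workload is the ceiling of workload over cycle time.

```python
import math

def machines_needed(workload, cycle_time):
    """Smallest n with workload <= n * cycle_time, i.e. the fewest parallel
    machines for which constraint (8) can be satisfied at one station."""
    return math.ceil(workload / cycle_time)

# A "long" task of 11 time units with cycle time 5 needs 3 parallel machines.
print(machines_needed(11, 5))   # -> 3
```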

The objective is again to minimize the total cost of system design. New constraints (8) state that the availability (i.e., time capacity during each cycle) of a station is increased by assigning more than one machine to that station. Parallel machines allow “long” jobs (i.e., those with p_mt > c) to be handled and also provide a larger capacity, which helps to accommodate precedence relationships. Inequalities (9) replace (6), assuring that stations are activated consecutively. The sequence $\{\sum_{m \in M} \sum_{n=1}^{n_s} y_{mns}\}_s$ is non-increasing, so constraint (10) ensures that at most one machine type is assigned to each activated station.

Next, we introduce a variation of the generic SPASD problem that was posed by a manufacturer of assembly systems that are designed around an automatic conveyor transport.

3.3 Position Requirements.

The company supplies the conveyor and pallets that convey workpieces. In addition, the company designs workstations, installs and tests the system, and provides system support. The application involved an assembly system in which tasks can have positional requirements. That is, it may be necessary to perform some tasks from the front side of the product and other tasks from the back side. We assume that a station can do either “front” or “back” tasks but not both. Furthermore, we assume that an unrestricted task can be done on any station, whether it is assigned “front” or “back” tasks or not. In the application that motivated this case, a mechanism can be installed to rise automatically from under the conveyor to rotate a workpiece, orienting it appropriately, depending on whether front or back tasks are performed at the next station. This allows all operations to be performed from one side of the conveyor, facilitating access to machines for maintenance. The application involved prescribing a single machine at each station. We need the following additional notation to describe tasks.

Task information
• T_B = set of back tasks
• T_F = set of front tasks
• T_U = set of unrestricted tasks (T_U = T \ (T_B ∪ T_F))

The decision variables in this case are

Decision variables
• x_mst = 1 if task t is assigned to machine type m at station s (s ∈ S, t ∈ T, m ∈ M_t)
• y_ms = 1 if machine type m is assigned to station s (s ∈ S, m ∈ M)

We now present our SPASDPR model.

Formulation of the Assembly System Design Problem with Positioning Requirements

Model SPASDPR

$$\min\ z = \sum_{s \in S} \sum_{m \in M} c_{ms}\, y_{ms} + \sum_{s \in S} \sum_{t \in T} \sum_{m \in M_t} v_{mt}\, x_{mst}$$

subject to (1), (2) and:

(12) $\sum_{t \in T_m} p_{mt} x_{mst} \le c\, y_{ms}, \quad m \in M,\ s \in S$

(13) $x_{mst} + x_{mst'} \le 1, \quad m \in M,\ s \in S,\ t \in T_F,\ t' \in T_B$

(14) $\sum_{m \in M} y_{m(s+1)} \le \sum_{m \in M} y_{ms}, \quad s \in S \setminus \{|S|\}$

(15) $\sum_{m \in M} y_{m1} \le 1$

(16) $x_{mst},\ y_{ms} \in \{0,1\}, \quad s \in S,\ t \in T,\ m \in M_t.$
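The rule that constraints (13) enforce can be checked directly on a candidate assignment: a station may receive unrestricted tasks together with front tasks or with back tasks, but never front and back tasks together. A minimal sketch (ours; the task sets and the assignment are assumed placeholders):

```python
def positions_ok(assignment, front_tasks, back_tasks):
    """assignment maps task -> station.  Returns True iff no station receives
    both a front task and a back task, which is what constraints (13) forbid."""
    stations_with_front, stations_with_back = set(), set()
    for task, station in assignment.items():
        if task in front_tasks:
            stations_with_front.add(station)
        elif task in back_tasks:
            stations_with_back.add(station)
        # unrestricted tasks impose no positional restriction
    return not (stations_with_front & stations_with_back)

print(positions_ok({"t1": 1, "t2": 1, "t3": 2}, {"t1"}, {"t3"}))  # True: front/back separated
print(positions_ok({"t1": 1, "t3": 1}, {"t1"}, {"t3"}))           # False: mixed at station 1
```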

Inequalities (12) invoke cycle time restrictions, while the new constraints (13) assure that one cannot assign both front and back tasks to the same station. Constraints (14) and (15) replace (9) and (10), respectively, ensuring that stations are activated consecutively and that only one type of machine is located at each station.

4. Solution Method

This section describes our branch-and-cut approach. It consists of a heuristic, preprocessing methods, and two types of cutting planes: those generated by the Facet Generation Procedure and others based on machine capacities.


4.1 Heuristic.

Our heuristic yields an upper bound for the value of the optimal solution of the SPASD problem. It schedules the task that can fit on a station and will incur the least cost. Assume that, at a general iteration, we have already activated s stations. Task t is a candidate for assignment if its predecessors have all been assigned. We find the minimal cost to assign each candidate task t to a machine, m ∈ M_t, at station s. If no more tasks can be assigned to station s (i.e., the remaining candidate tasks all have processing times greater than the unused capacity of station s), we activate a new station. Otherwise, we assign the task incurring the minimal cost and continue.

By keeping track of the unused capacity (cycle time) at a station, this heuristic is able to prescribe “tight” solutions. For example, consider three tasks with precedence relationship 1 → 2 → 3. Each task has processing time 3, and the cycle time is 5. The traditional method for calculating the earliest station to which each task can be assigned yields E_1 = 1, E_2 = ⌈(3 + 3)/5⌉ = 2 and E_3 = ⌈(3 + 3 + 3)/5⌉ = 2. Our heuristic assigns task 1 to station 1, leaving unused capacity of 2, so that it assigns task 2 to station 2. Similarly, task 2 leaves capacity of 2 at station 2, so our heuristic assigns task 3 to station 3.

4.2 Preprocessing Methods.

Our preprocessing methods yield a bound on the optimal number of stations as well as the earliest and latest stations to which each task might be assigned. These bounds allowed us to fix some variables to zero to reduce problem size. We now define the notation we introduce in this section:

z_IP = the optimal solution value
N_s = the optimal number of stations.
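To make the rule of Section 4.1 concrete before deriving the bound, here is a minimal sketch of the heuristic as we read it (our illustration, not the authors' code); it assumes every task fits on an empty station, i.e., p_mt ≤ c for some m ∈ M_t, and that f is defined for every station it may open.

```python
def greedy_upper_bound(tasks, IP, Mt, p, v, f, cm, c):
    """Greedy heuristic of Section 4.1.  Returns (cost, assignment), where
    assignment maps task -> (machine type, station) and cost is an upper bound z_IPH."""
    assignment, done = {}, set()
    station, remaining = 1, c
    machines_here = set()                     # machine types already bought at this station
    cost = f[1]                               # the first station is activated
    while len(done) < len(tasks):
        best = None                           # cheapest candidate assignment that fits
        for t in tasks:
            if t in done or any(tp not in done for tp in IP[t]):
                continue                      # not a candidate: a predecessor is unassigned
            for m in Mt[t]:
                if p[m, t] <= remaining:
                    extra = v[m, t] + (0 if m in machines_here else cm[m])
                    if best is None or extra < best[0]:
                        best = (extra, m, t)
        if best is None:                      # nothing fits: activate the next station
            station += 1
            remaining, machines_here = c, set()
            cost += f[station]                # assumes f[station] exists for every station opened
            continue
        extra, m, t = best
        assignment[t] = (m, station)
        done.add(t)
        remaining -= p[m, t]
        machines_here.add(m)
        cost += extra
    return cost, assignment

# The example from Section 4.1: tasks 1 -> 2 -> 3, each of length 3, cycle time 5.
tasks = ["1", "2", "3"]
IP = {"1": [], "2": ["1"], "3": ["2"]}
Mt = {t: ["m"] for t in tasks}
p = {("m", t): 3 for t in tasks}
v = {("m", t): 0.0 for t in tasks}
f = {s: 0.0 for s in range(1, 10)}
cm = {"m": 0.0}
print(greedy_upper_bound(tasks, IP, Mt, p, v, f, cm, c=5)[1])
# -> {'1': ('m', 1), '2': ('m', 2), '3': ('m', 3)}, i.e. three stations, as in the text
```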

We derive our bound on the optimal number of stations by starting with the observation

(17) $\sum_{k=1}^{N_s} f_k + N_s \min\{c_m ;\, m \in M\} + \sum_{t} \min\{v_{mt} ;\, m \in M_t\} \le z_{IP}.$

The first term in (17) represents the cost to open N_s stations, the second term represents the lowest possible cost of purchasing N_s machines, and the third term represents the lowest possible cost of producing all the tasks. If z_IPH is any heuristic solution value, then z_IP ≤ z_IPH, so

(18) $\sum_{k=1}^{N_s} f_k + N_s \min\{c_m ;\, m \in M\} + \sum_{t} \min\{v_{mt} ;\, m \in M_t\} \le z_{IPH},$

or, equivalently,

(19) $\sum_{k=1}^{N_s} f_k + N_s \min\{c_m ;\, m \in M\} \le z_{IPH} - \sum_{t} \min\{v_{mt} ;\, m \in M_t\}.$

An upper bound for N_s can be obtained easily from (19).
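For illustration (ours, with assumed numbers), the largest N_s satisfying (19) can be found by a simple scan:

```python
def max_stations(f, c_min, v_min_total, z_iph):
    """Largest N_s satisfying (19): sum(f[0:N_s]) + N_s*c_min <= z_iph - v_min_total.
    f lists station fixed costs in station order; c_min = min machine cost;
    v_min_total = sum over tasks of the cheapest variable cost; z_iph = heuristic value."""
    rhs = z_iph - v_min_total
    total_f, n_max = 0.0, 0
    for n, f_k in enumerate(f, start=1):
        total_f += f_k
        if total_f + n * c_min > rhs:
            break
        n_max = n
    return n_max

# Toy numbers: stations cost 200 each, cheapest machine 1000, variable costs total 500,
# heuristic value 4100 -> at most 3 stations can be part of an optimal design.
print(max_stations([200] * 10, 1000, 500, 4100))   # -> 3
```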

Given this upper bound on N_s, we can define L_t, the latest station to which task t can be assigned:

$$L_t = N_s - \left\lceil \frac{\min\{p_{mt} ;\, m \in M_t\} + \sum_{\tilde t \,=\, \text{successor of } t} \min\{p_{m\tilde t} ;\, m \in M_{\tilde t}\}}{c} \right\rceil + 1.$$

Finally, the earliest station to which task t can be assigned, E_t, is given by

$$E_t = \left\lceil \frac{\min\{p_{mt} ;\, m \in M_t\} + \sum_{\tilde t \,=\, \text{predecessor of } t} \min\{p_{m\tilde t} ;\, m \in M_{\tilde t}\}}{c} \right\rceil.$$

The upper bound on N_s is, apparently, new, and our definitions of E_t and L_t extend those used in ALB by considering the different processing times that alternative machines in set M_t require to complete task t. We now describe two types of cutting planes we used to facilitate solution.

4.3 The Facet Generation Procedure (FGP).

We generated one type of cut using the FGP. Gadidov, Parija and Wilhelm (1997) reported fundamental technical details of the FGP. So that this paper is self-contained, we augment that earlier work, providing a brief intuitive description.

The FGP deals with the following separation problem: given a full-dimensional polytope Π in R^m and an m-dimensional vector f* such that f* ∉ Π, find a hyperplane H that separates f* from Π and represents a facet of Π. Underlying assumptions are that 0 ∈ Π and that it is possible to identify a set of m linearly independent extreme points of Π, E1, such that f* belongs to the convex cone generated by E1. Columns x ∈ E1 form the initial basis for the linear program (P):

Problem (P): $z^* = \min \{ \textstyle\sum_{x \in \text{ext}\,\Pi} \alpha_x \mid \sum_{x \in \text{ext}\,\Pi} \alpha_x\, x = f^*,\ \alpha_x \ge 0,\ x \in \text{ext}\,\Pi \}$

in which ext Π is the set of extreme points of Π. At each iteration, an oracle solves a subproblem, maximizing a linear function over Π, to generate a column x ∈ ext Π. This step prices out nonbasic columns for (P). If the LP optimality criterion is satisfied, the current basis, B*, is optimal. B* consists of m linearly independent vectors x ∈ ext Π. We determine H as the hyperplane that supports these m points by finding the vector n ∈ R^m that satisfies n^T x* = 1 for all x* ∈ B*; n is the normal to H = { x ∈ R^m : n^T x = 1 }, which separates f* from Π, and F = H ∩ Π represents a facet of Π.

The relationship of H and F to Π and f* is easy to interpret geometrically. Since f* ∉ Π and f* lies in the cone generated by the extreme points of Π, the ray 0 → f* intersects one of the facets of Π at the point (1/z*) f*, where z* is the value of the optimal solution to (P). If (P) has a unique optimal basis, the ray 0 → f* passes through exactly one facet of Π; this is the facet that the FGP identifies.
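The final step, recovering H from the optimal basis B*, amounts to one linear solve; a sketch (ours) with NumPy, stacking the m basis points as the rows of a matrix:

```python
import numpy as np

def facet_normal(B):
    """B is an (m x m) array whose rows are the m linearly independent extreme
    points in B*.  Returns the normal n of H = {x : n.T x = 1} through them."""
    return np.linalg.solve(B, np.ones(B.shape[0]))

# Toy illustration in R^2: the facet through the extreme points (1, 0) and (0, 2).
B = np.array([[1.0, 0.0], [0.0, 2.0]])
print(facet_normal(B))          # -> [1.  0.5], i.e. H: x1 + 0.5*x2 = 1
```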

Figure 1. (Geometric illustration: the ray from 0 through f* leaves Π through the point (1/z*) f*, which lies on the hyperplane H.)

If the ray 0 → f* passes through a face F of dimension k (0 ≤ k < m − 1), the solution to (P) is degenerate since the optimal solution expresses f* as a positive linear combination of k+1 extreme points of Π. In this case, the FGP identifies one of the facets defining the face F, depending on the optimal basis chosen by the column generation procedure.

Figure 2. (Geometric illustration of the degenerate case: the ray from 0 through f* meets a face of Π of dimension less than m − 1.)

4.4 New Cuts.

To further tighten the LP relaxation, we also added new valid inequalities that we describe in this section. We added the first type (i.e., inequalities (21)) at the root node and the second type (i.e., inequalities (22)) at other nodes in the branch-and-cut tree.

To derive the first type, let E_t1 = s1 be the earliest station for task t1 and let m1 (= arg min{p_mt1 ; m ∈ M_t1}) be the fastest machine type on which t1 can be assigned. Let

$$R_1 = \sum_{\tilde t \,=\, \text{predecessor of } t_1} \min\{p_{m\tilde t} ;\, m \in M_{\tilde t}\}$$

be the sum of the processing times of all predecessors of t1 (each assigned on the fastest machine type possible). Furthermore, let t2 be some other task that is not a predecessor of t1, with earliest station E_t2 = s2, s2 ≤ s1, and let m2 (= arg min{p_mt2 ; m ∈ M_t2}) be the fastest machine type on which t2 can be assigned. Let

$$R_2 = \sum_{\substack{\tilde t \,=\, \text{predecessor of } t_2 \\ \tilde t \,\ne\, \text{predecessor of } t_1}} \min\{p_{m\tilde t} ;\, m \in M_{\tilde t}\}$$

be the sum of the processing times of all the predecessors of t2 that are not predecessors of t1. If $R_1 + p_{m_1 t_1} + R_2 + p_{m_2 t_2} > c\, s_1$, there is not enough machine capacity to assign task t1 to s1 and task t2 to s2, so

(20) $x_{m_1 s_1 t_1} + \sum_{k=s_2}^{s_1} x_{m_2 k t_2} \le 1$


is a valid inequality. This can be improved by lifting all the variables x_mkt2, k = s2, …, s1, m ∈ M_t2. The lifted version of (20) is:

(21) $\sum_{m \in M_{t_1}} x_{m s_1 t_1} + \sum_{k=s_2}^{s_1} \sum_{m \in M_{t_2}} x_{m k t_2} \le 1.$
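A sketch of how a root-node cut of type (21) could be generated under the notation above (our illustration, not the authors' implementation; E, Mt, p and the transitive predecessor sets are assumed inputs, and the capacity test follows the condition stated for (20)):

```python
def type21_cut(t1, t2, E, Mt, p, preds, c):
    """Return the index set of the lifted cut (21) for the ordered pair (t1, t2),
    or None if the capacity test fails.  preds[t] is the set of all (transitive)
    predecessors of t; E[t] is the earliest station E_t."""
    s1, s2 = E[t1], E[t2]
    if t2 in preds[t1] or s2 > s1:
        return None

    def fastest(t):
        return min(p[m, t] for m in Mt[t])

    R1 = sum(fastest(tp) for tp in preds[t1])
    R2 = sum(fastest(tp) for tp in preds[t2] if tp not in preds[t1])
    m1 = min(Mt[t1], key=lambda m: p[m, t1])
    m2 = min(Mt[t2], key=lambda m: p[m, t2])
    if R1 + p[m1, t1] + R2 + p[m2, t2] <= c * s1:
        return None                           # enough capacity: no cut of this form here
    lhs = [(m, s1, t1) for m in Mt[t1]]                            # lifted over m in M_t1
    lhs += [(m, k, t2) for k in range(s2, s1 + 1) for m in Mt[t2]] # lifted over m and k
    return lhs                                # the x-variables indexed by lhs sum to at most 1
```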

To derive cuts of the second type, consider a node in the branch-and-cut tree at which task t1 is assigned to a machine of type m1, m1 ∈ M_t1, at station s1 (i.e., $x_{m_1 s_1 t_1} = 1$). Moreover, assume that predecessor t2 of t1 is assigned to a machine of type m2, m2 ∈ M_t2, at station s2 ≤ s1 (i.e., $x_{m_2 s_2 t_2} = 1$). Let

$$r_{s_1} = c - \sum_{x_{m s_1 t} = 1} p_{mt}$$

be the unused capacity of station s1,

$$r_{s_2} = c - \sum_{x_{m s_2 t} = 1} p_{mt}$$

be the unused capacity of station s2, and

$$\tilde r = \sum_{\substack{t \,=\, \text{successor of } t_2 \\ t \,=\, \text{predecessor of } t_1}} \min\{p_{mt} ;\, m \in M_t\}$$

be the minimum time to process the tasks that are successors of t2 and predecessors of t1. If

$$\tilde r > \begin{cases} r_{s_1} + r_{s_2} + (s_1 - s_2 - 1)\,c & \text{if } s_2 + 1 \le s_1\\ r_{s_1} & \text{otherwise,}\end{cases}$$

there is not enough time to assign the tasks that are successors of t2 and predecessors of t1, so we can prune the node by adding the valid inequality

(22) $x_{m_1 s_1 t_1} + x_{m_2 s_2 t_2} \le 1.$
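A sketch of the pruning test that triggers a cut of type (22) (ours; r_s1, r_s2 and r_tilde would be computed at the current node as defined above):

```python
def type22_cut_applies(r_s1, r_s2, s1, s2, r_tilde, c):
    """True if the tasks that are successors of t2 and predecessors of t1 cannot
    fit into the unused capacity of stations s2, ..., s1, so (22) may be added."""
    if s2 + 1 <= s1:
        # stations strictly between s2 and s1 contribute their full capacity c
        available = r_s1 + r_s2 + (s1 - s2 - 1) * c
    else:                                     # t1 and t2 share the same station
        available = r_s1
    return r_tilde > available

# Example: stations 2 and 4 each have 3 units of slack, c = 10, and the
# intermediate tasks need 17 units: 17 > 3 + 3 + 10, so the node is pruned.
print(type22_cut_applies(r_s1=3, r_s2=3, s1=4, s2=2, r_tilde=17, c=10))   # True
```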

We now describe our computational experience with the SPASD problem.

5. Computational Evaluation

To provide benchmarking computational experience, we ran a set of tests on an IBM RISC 6000 model 550 using the supernode routine of OSL release 3 to manage the cutting planes generated by our methods. We added the violated cuts of type (21) at the root node, and kept the other cuts in a pool and added them (if violated) at other nodes in the branch-and-cut search tree along with inequalities of type (22) and cuts generated by the FGP. To solve the generic SPASD instances, we applied the FGP to underlying knapsack polytopes, each defined by an inequality of type (3). Similarly, we applied it to inequalities of type (8) to solve SPASDPM instances and to inequalities of types (12) and (13) to solve SPASDPR instances. We made minor but straightforward adaptations in our heuristic and preprocessing methods to address SPASDPM and SPASDPR problems.

5.1 Test Instances.

Our tests of the SPASD problem involved four factors: (1) number of tasks, (2) task precedence relationships, (3) cycle time, and (4) number of machine types. We chose two levels for the first factor - 20 and 30 tasks - and defined three levels for each of the other factors. We generated task precedence relationships based on an 80-task instance in Arcus (1966), randomly dropping 60 or 50 tasks, respectively. We justify this approach by noting that other researchers can easily generate similar precedence relationships to confirm our results. In addition, the resulting networks have low order strength and are, therefore, quite challenging. The 20-task instances have order strengths of 12%-14%; and 30-task instances, 8%-9%, so that precedence relationships do not impose tight constraints and numerous combinations must be evaluated to prescribe an optimal assignment of tasks to stations. Figures 3-5 depict the resulting precedence relationships for 20-task instances; and Figures 6-8, for 30-task instances. We employed these task precedence relationships in tests of all three models (i.e., SPASD, SPASDPM and SPASDPR).

We selected three cycle times c1, c2, c3 (which are indexed in increasing order) and defined the largest (c3) as the original cycle time in Arcus' 80-task instance. To generate c1 and c2, we drew random numbers r^1_1 and r^1_2 from uniform distributions with ranges [0.6, 0.75] and [0.75, 0.95], respectively, and computed c1 = r^1_1 c3 and c2 = r^1_2 c3. Finally, we defined the number of machine types, |M|, to be 2, 4 or 6. We specified the set of alternative machines that can perform task t by generating random numbers {r^2_m}, m ∈ M, from the uniform distribution with range [0.0, 1.0] and assigning machine type m to set M_t if r^2_m ≤ 0.6. If this process assigned no machine type to set M_t, machine type m = 1 was assigned to M_t so that task t could be processed. Each test instance involved one level of each of these four factors. Time and resource limitations precluded us from replicating each test. However, experiments of this design are commonly used to evaluate mathematical programming algorithms.

We derived processing times from Arcus' 80-task instance. Let p_t, t ∈ T, be the processing times for the selected tasks from Arcus' 80-task instance. We drew a set of random numbers {r^3_m}, m ∈ M, from a uniform distribution with range [0.3, 1.0] and defined the processing time p_mt of task t ∈ T_m on machine type m ∈ M_t as p_mt = r^3_m p_t. We generated the fixed cost of activating station s, f_s, by drawing random numbers {f_s}, s ∈ S, from a uniform distribution with range [100, 300]. We generated the fixed cost of prescribing machine type m, c_m, by drawing random numbers {r^4_m}, m ∈ M, from the uniform distribution with range [0.3, 1.0] and using c_m = 2000 / r^4_m. To generate the variable cost of operations, v_mt, we drew random numbers {r^5_m}, m ∈ M, and {r^6_t}, t ∈ T, from uniform distributions with ranges [0.3, 1.0] and [800, 1200], respectively, and defined v_mt = r^5_m r^6_t.

We used factors (1), (2) and (4) in generating 24 SPASDPM test instances. In this case, we allowed more than one machine (all of the same type, however) at each station. Since cycle time is not a relevant issue in this case, we redefined factor (3) to be the maximum number of machines allowed at each station and chose two levels: 4 and 6. We generated costs c_mns using c_mns = n c_m + f_s, in which we generated c_m and f_s as described above.

We used factors (1)-(4) in generating 24 SPASDPR test instances. As in the generic SPASD instances, we allowed only one type of machine at each station. In the task precedence diagrams of Figures 3-8, U denotes an unrestricted task, and B (F) indicates that a task must be performed from the back (front) side of the assembly line. This task designation is relevant only to the SPASDPR model and has no meaning for the SPASD and SPASDPM models. Since allowing only two types of alternative machines for each task is very restrictive, we defined only two levels for the number of alternative machine types: 4 and 6, respectively.

distribution with range [0.3, 1.0] and using cm = 2000 / r 4m . To generate the variable cost of operations, vmt, we drew random numbers {r 5m }m ∈ M and {r t6 }t ∈ T from uniform distributions with ranges [0.3, 1.0] and [800, 1200], respectively, and defined vmt = r 5m r t6 . We used factors (1), (2) and (4) in generating 24 SPASDPM test instances. In this case, we allowed more than one machine (all of the same type, however) at each station. Since cycle time is not a relevant issue in this case, we redefined factor (3) to be the maximum number of machines allowed at each station and chose two levels: 4 and 6. We generated costs cmns, using cmns = n cm + fs in which we generated cm and fs as described above. We used factors (1)-(4) in generating 24 SPASDPR test instances. As in the generic SPASD instances, we allowed only one type of machine at each station. In the task precedence diagrams of Figures 3-8, U denotes an unrestricted task, and B (F) indicates that a task must be performed from the back (front) side of the assembly line. This task designation is relevant only to the SPASDPR model and has no meaning for the SPASD and SPASDPM models. Since allowing only two types of alternative machines for each task is very restrictive, we defined only two levels for the number of alternative machine types: 4 and 6, respectively.

15

Similarly, we also chose two levels for the cycle time: c2 and c3, respectively. We generated fixed costs, cms, using cms = cm + fs in which we generated cm and fs as described above. 5.2 Test Results. Tables 1-6 describe our 102 test instances. Column 1 defines the factor levels used in each test; the notation, for example, asdp1c1m2_30, represents a SPASD instance, task precedence relationships given in diagram 1, cycle time c1, two types of alternative machines and 30 tasks. Columns 2 and 3 give the number of constraints and number of binary variables for each instance. Columns 4, 5 and 6 give the optimal values for the initial LP relaxation, ZLP, the optimal binary solution, ZBP, and the percent gap [i.e., 100 * (ZBP

- ZLP ) / ZBP ],

respectively. The models are relatively tight; 80 instances have gaps less than 10% and the remaining 22 have gaps less than 16%. Relatively small gaps like these are often interpreted as reflecting a “good” model, that is, one that facilitates solution by making bounds discerning. Tables 7-12 give the results of our tests. Column "Cuts" lists the number of cuts generated during our Branch-and-Cut search, column "Nodes" gives the number of nodes required to find and verify an optimal solution, and column "CPU" gives the run time (in seconds) for solving the instance. The last two columns of Tables 7-12 demonstrate the results of using only OSL to solve each instance. These columns report “Nodes,” the number of nodes required to find and verify an optimal solution, and “CPU,” the run time (in seconds), respectively. The OSL implementation did not use our heuristic, pre-processing methods or cuts, but it did employ the supernode routine. We set time limits of one hour for OSL on instances that our approach solved in minutes and five hours for harder ones. Our branch-and-cut approach was able to solve all 102 instances within the time limits allotted.

However, the OSL implementation was not able to solve 54 of these 102 test

instances (i.e., 53%) within allotted limits. Even for the instances that were solved within the time limits, our branch-and-cut approach was significantly faster. In addition, except for one 30-task SPASDPM instance, our approach required substantially fewer nodes to find and confirm an optimal solution. 6. Conclusions This paper evaluates a new branch-and-cut approach, establishing a computational benchmark for the single-product assembly system design (SPASD) problem. The paper studies the


generic SPASD problem and two variations that are based on actual industrial cases. One variation allows parallel, identical machines to be located at each station, and the second imposes positional requirements, allowing each station to process tasks that must be accomplished from the front side or the back side of the product but not both. Our approach consists of a heuristic, preprocessing methods and cuts that are generated by the Facet Generation Procedure (FGP) (cf. Gadidov and Wilhelm 1997) and by a family of valid inequalities that are based on the particular structure of SPASD problems. The paper established computational benchmarks by testing our approach on a set of 102 instances.

The supernode routine, which allows OSL to conduct a branch-and-cut search, manages cuts in a manner that is poorly documented. For example, we observed that supernode may discard cuts that our methods identify after leaving the node at which they are generated. The same cut may be generated again at another node. For this reason, OSL does not provide a report that gives a good measure that might be used to understand the cut-generation process at a detailed level. The run time to solve an instance is the primary measure of efficacy, and the number of nodes required to find and confirm an optimal solution is a secondary measure. As test results indicate, our method performed particularly well in comparison to OSL. The large number of cuts generated in solving some of the instances may be attributed - at least in part - to the supernode routine discarding cuts and generating them again at other nodes.

The four factors ((1) number of tasks, (2) task precedence relationships, (3) cycle time, and (4) number of alternative machine types) interact in a complicated way to influence run time; no simple pattern of interaction appears evident. Tests show that, as expected, run time increases rapidly with the number of tasks. However, this was not true in the case of SPASDPM instances; thus, increasing station capacities facilitated solution. The models were relatively tight; the gap (i.e., the percent difference between the optimal values of the initial LP relaxation and the optimal integer solution) was less than 10% for most instances and less than 16% for all instances. However, many test instances were challenging, showing once more that, in general, the gap is not necessarily a good measure of a model. Gadidov, Parija and Wilhelm (1997) applied the FGP to the 16 binary instances in MIPLIB. The gaps for these instances range from 0% to over 96% and average 19.5%. The FGP solved 12 of these instances in less than 230 seconds; the remaining four required 660, 770, 4404 and 9623 seconds. OSL required significantly more time to solve each instance. These MIPLIB problems have been considered particularly challenging, but it appears that the SPASD problems are more so. Our experience confirms the importance of evaluating a solution approach specifically on SPASD instances, which are particularly challenging.

The assembly line balancing (ALB) problem is closely related to the SPASD problem (e.g., consider constraints (1)-(3)). Specialized branch-and-bound algorithms (e.g., Johnson (1988), Hoffman (1992), and Scholl and Klein (1997)) have been shown to be quite effective in solving the ALB problem, especially if the optimal number of stations is the same as, or one larger than, the theoretical minimal number of stations. Tests have described the effects of factors such as cycle time, number of tasks, precedence relationships (i.e., order strength), and processing times (i.e., ratio of maximum processing time to minimum). These same factors affect the assignment of tasks in the SPASD problem. However, the SPASD problem must also deal with the combinatorial feature of selecting from alternative machine types that can perform each task and assigning one machine type to each station. This additional feature makes the SPASD problem substantially more challenging.

We hope that these results will encourage application of our solution approach in industry. Results show that it provides a means of solving a variety of SPASD problems. A promising direction for future research would be to extend our solution approach to ASD problems that involve multiple products and other issues related to flexible assembly (a topic of wide interest to industry) by exploiting the special structures embedded in problems of that type. Our continuing research is focused in this direction.

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grants DDM-9114396 and DMI-9500211.

7. References

Arcus, A. L., 1966, COMSOAL: A Computer Method of Sequencing Operations for Assembly Lines, in Readings in Production and Operations Management, John Wiley & Sons, New York.

Assche, F. V., and Herroelen, W. S., 1998, An Optimal Procedure for the Single Model Deterministic Assembly Line Balancing Problems, European Journal of Operational Research, 3, 142-149.

Baybars, I., 1986, A Survey of Exact Algorithms for the Assembly Line Balancing Problem, Management Science, 32, 909-932.


De Reyck, B., and Herroelen, W., 1995, Assembly Line Balancing by Resource-Constrained Project Scheduling Techniques: A Critical Appraisal, Working Paper.

Gadidov, R., Parija, G., and Wilhelm, W., 1997, A Facet Generation Procedure for Solving 0/1 Integer Programs, to appear in Operations Research.

Ghosh, S., and Gagnon, R. J., 1989, A Comprehensive Literature Review and Analysis of the Design, Balancing and Scheduling of Assembly Systems, International Journal of Production Research, 27, 637-670.

Graves, S. C., and Lamar, B. W., 1983, An Integer Programming Procedure for Assembly System Design Problems, Operations Research, 31, 522-545.

Graves, S. C., and Redfield, C. H., 1988, Equipment Selection and Task Assignment for Multiproduct Assembly System Design, International Journal of Flexible Manufacturing Systems, 1, 31-50.

Hackman, S. T., Magazine, M. J., and Wee, T. S., 1989, Fast, Effective Algorithms for Simple Assembly Line Balancing Problem, Operations Research, 6, 916-924.

Hoffman, T. R., 1992, Eureka: A Hybrid System for Assembly Line Balancing, Management Science, 38, 39-47.

Johnson, R. V., 1988, Optimally Balancing Large Assembly Lines with 'Fable', Management Science, 34, 240-253.

Karp, R. M., 1972, Reducibility Among Combinatorial Problems, in Complexity of Computer Applications, R. E. Miller and J. W. Thatcher (eds.), Plenum, New York, 85-104.

Kim, H., and Park, S., 1995, A Strong Cutting Plane Method for the Robotic Assembly Line Balancing Problem, International Journal of Production Research, 33, 2311-2323.

Kimms, A., 1998, Minimal Investment Budgets for Flow Line Configuration, Working Paper 470, Lehrstuhl für Produktion und Logistik, Institut für Betriebswirtschaftslehre, Christian-Albrechts-Universität zu Kiel, Kiel, Germany.

Nemhauser, G. L., and Wolsey, L. A., 1988, Integer and Combinatorial Optimization, John Wiley & Sons, New York.

Pinnoi, A., and Wilhelm, W., 1997a, A Family of Hierarchical Models for the Assembly System Design, International Journal of Production Research, 35, 253-280.

Pinnoi, A., and Wilhelm, W., 1997b, A Branch-and-Cut Approach for Workload Smoothing on Assembly Lines, ORSA Journal on Computing, 9, 335-350.


Pinnoi, A., and Wilhelm, W., 1998, Assembly System Design: A Branch-and-Cut Approach, Management Science, 44, 103-118.

Scholl, A., 1995, Balancing and Sequencing of Assembly Lines, Physica-Verlag, Heidelberg, Germany.

Scholl, A., and Klein, R., 1997, SALOME: A Bidirectional Branch-and-Bound Procedure for Assembly Line Balancing, INFORMS Journal on Computing, 9 (4).

Ugurdag, H. F., Papachristou, C. A., and Rachamadugu, R., 1997, Designing Paced Assembly Lines with Fixed Number of Stations, European Journal of Operational Research, 102, 488-501.

Wilhelm, W. E., 1999, A Column Generation Approach for the Assembly System Design Problem with Tool Changes, International Journal of Flexible Manufacturing Systems, 11 (2), forthcoming.


Table 1: 20-task, SPASD test instances

Problem          Rows   Variables   ZLP        ZBP        Gap (%)
asdp1c1m2_20     710    643         38220.54   44553.71   14.21
asdp1c1m4_20     880    812         38609.42   43614.43   11.48
asdp1c1m6_20     1040   978         37888.36   42726.60   11.32
asdp1c2m2_20     745    678         39689.50   41534.40   4.44
asdp1c2m4_20     879    816         37099.12   41358.02   10.30
asdp1c2m6_20     1198   1089        36815.88   40164.75   8.34
asdp1c3m2_20     685    621         40792.05   44523.03   8.38
asdp1c3m4_20     879    816         40631.14   43343.31   6.26
asdp1c3m6_20     1257   1134        40031.45   42323.01   5.41
asdp2c1m2_20     696    628         37945.60   43991.31   13.74
asdp2c1m4_20     878    812         40592.12   45025.21   9.85
asdp2c1m6_20     1094   1031        39634.45   43925.69   9.77
asdp2c2m2_20     712    649         38392.98   41410.11   7.29
asdp2c2m4_20     883    817         41306.49   42269.53   2.28
asdp2c2m6_20     1261   1178        40298.29   41211.68   2.22
asdp2c3m2_20     679    612         37945.60   41450.11   8.46
asdp2c3m4_20     928    863         39066.64   43081.94   9.32
asdp2c3m6_20     1191   1124        38811.21   42102.05   7.82
asdp3c1m2_20     720    654         36392.34   43953.92   17.20
asdp3c1m4_20     922    860         40893.42   46103.65   11.30
asdp3c1m6_20     1165   1098        39864.06   46677.16   14.60
asdp3c2m2_20     670    618         38229.82   39559.30   3.36
asdp3c2m4_20     886    824         41410.28   43995.12   5.86
asdp3c2m6_20     1198   1130        39259.13   41968.35   6.46
asdp3c3m2_20     732    671         35093.18   40424.18   13.19
asdp3c3m4_20     929    867         40138.67   43953.92   8.68
asdp3c3m6_20     1246   1185        39560.71   41296.33   4.20

Table 2: 30-task, SPASD test instances

Problem          Rows   Variables   ZLP        ZBP        Gap (%)
asdp1c1m2_30     1575   1466        53222.49   57839.00   7.98
asdp1c1m4_30     1555   1655        52391.71   56812.33   7.78
asdp1c1m6_30     2078   1979        51653.43   55033.46   6.14
asdp1c2m2_30     1569   1470        52596.58   57020.97   7.76
asdp1c2m4_30     1770   1671        54152.84   56192.15   3.63
asdp1c2m6_30     2160   2061        47652.32   52293.95   8.88
asdp1c3m2_30     1611   1501        52099.90   54390.97   4.21
asdp1c3m4_30     1878   1679        53946.73   56231.57   4.06
asdp1c3m6_30     2181   2082        50935.41   53585.62   4.95
asdp2c1m2_30     1614   1523        49910.35   55438.12   9.97
asdp2c1m4_30     1795   1698        48271.33   53891.31   10.43
asdp2c1m6_30     1962   1866        47190.45   52612.44   10.31
asdp2c2m2_30     1606   1500        50925.98   54556.08   6.65
asdp2c2m4_30     1904   1808        48975.26   52954.55   7.51
asdp2c2m6_30     2094   1998        47390.54   52331.66   9.44
asdp2c3m2_30     1526   1427        51380.54   53256.98   3.52
asdp2c3m4_30     1827   1699        50922.88   52431.76   2.88
asdp2c3m6_30     2110   2014        50988.06   53927.59   5.45
asdp3c1m2_30     1679   1584        48763.66   53828.53   9.41
asdp3c1m4_30     1786   1691        48492.92   54814.18   11.53
asdp3c1m6_30     1946   1851        49420.37   58413.53   15.40
asdp3c2m2_30     1545   1440        48413.90   52075.62   7.03
asdp3c2m4_30     1889   1794        49555.93   51978.35   4.66
asdp3c2m6_30     2081   1986        47838.59   51668.75   7.41
asdp3c3m2_30     1614   1518        49892.82   53156.98   6.14
asdp3c3m4_30     1817   1722        51052.85   55466.30   7.96
asdp3c3m6_30     1953   2009        50774.77   54388.25   6.64

Table 3: 20-task, SPASDPM test instances

Problem            Rows   Variables   ZLP        ZBP        Gap (%)
asdpmp1m4n4_20     143    920         38747.26   42356.53   8.52
asdpmp1m4n6_20     143    1056        37614.73   41512.76   9.39
asdpmp1m6n4_20     183    1164        37306.96   41175.16   9.39
asdpmp1m6n6_20     183    1252        36704.92   40440.13   9.24
asdpmp2m4n4_20     146    911         38127.37   41386.80   7.88
asdpmp2m4n6_20     146    1056        36713.55   40445.31   9.23
asdpmp2m6n4_20     186    905         38607.17   41091.89   6.05
asdpmp2m6n6_20     186    1024        36697.65   39013.47   5.94
asdpmp3m4n4_20     143    994         38484.46   41949.07   8.26
asdpmp3m4n6_20     143    1077        35820.83   40017.48   10.5
asdpmp3m6n4_20     183    1081        38489.46   40571.90   5.13
asdpmp3m6n6_20     183    1125        36757.40   40008.04   8.12

Table 4: 30-task, SPASDPM test instances

Problem            Rows   Variables   ZLP        ZBP        Gap (%)
asdpmp1m4n4_30     190    1873        48344.10   53429.41   9.52
asdpmp1m4n6_30     190    2002        47548.78   52398.95   9.26
asdpmp1m6n4_30     240    2180        47586.45   51745.91   8.04
asdpmp1m6n6_30     240    2406        46124.78   51205.34   9.92
asdpmp2m4n4_30     190    1847        47108.30   52382.83   10.07
asdpmp2m4n6_30     190    1963        46220.61   50753.80   8.94
asdpmp2m6n4_30     240    2142        47435.51   51739.06   8.32
asdpmp2m6n6_30     240    2378        46195.00   50313.43   8.19
asdpmp3m4n4_30     195    1797        47910.90   50403.85   4.95
asdpmp3m4n6_30     195    1918        47263.21   49201.73   3.94
asdpmp3m6n4_30     245    2175        48059.86   50037.44   3.95
asdpmp3m6n6_30     245    2419        46009.43   49780.86   7.58

Table 5: 20-task, SPASDPR test instances

Problem           Rows   Variables   ZLP        ZBP        Gap (%)
asdprp1c1m4_20    945    789         40640.03   44242.45   8.14
asdprp1c3m4_20    978    811         38112.29   43008.50   11.38
asdprp1c1m6_20    1183   1020        38706.36   42596.98   9.13
asdprp1c3m6_20    1562   1088        37435.00   42477.00   11.87
asdprp2c1m4_20    841    741         42171.24   45557.33   7.43
asdprp2c3m4_20    902    795         38422.12   45122.04   14.85
asdprp2c1m6_20    1168   982         39215.80   43651.61   10.16
asdprp2c3m6_20    1178   1010        38614.51   44067.07   12.37
asdprp3c1m4_20    875    675         43959.97   48183.86   8.77
asdprp3c3m4_20    906    793         44636.51   48332.74   7.65
asdprp3c1m6_20    1207   1011        42642.26   47509.47   10.24
asdprp3c3m6_20    1200   1016        42086.94   46412.00   9.32

Table 6: 30-task, SPASDPR test instances

Problem           Rows   Variables   ZLP        ZBP        Gap (%)
asdprp1c1m4_30    2372   1625        54337.75   58652.31   7.36
asdprp1c3m4_30    2148   1642        55181.41   57755.11   4.46
asdprp1c1m6_30    2442   1979        53211.51   56694.22   6.14
asdprp1c3m6_30    2382   1908        53151.20   55163.89   3.65
asdprp2c1m4_30    2180   1649        52800.10   56091.42   5.87
asdprp2c3m4_30    2310   1695        52383.49   54801.31   4.41
asdprp2c1m6_30    2529   1988        52447.10   55243.45   5.06
asdprp2c3m6_30    2467   2007        52534.49   57150.61   8.08
asdprp3c1m4_30    2228   1615        53594.01   60194.97   10.97
asdprp3c3m4_30    2255   1737        54853.23   59763.86   8.22
asdprp3c1m6_30    2381   2012        54796.94   59852.55   8.45
asdprp3c3m6_30    2688   2096        53017.97   59298.54   10.59

Table 7: Test results for 20-task, SPASD instances

Problem          B&C Cuts   B&C Nodes   B&C CPU*   OSL Nodes   OSL CPU*
asdp1c1m2_20     734        19          984        88          1455
asdp1c1m4_20     3825       512         1482       623         3519
asdp1c1m6_20     715        16          721        79          2618
asdp1c2m2_20     2551       156         510        721         4233
asdp1c2m4_20     649        82          579        234         2934
asdp1c2m6_20     2141       334         1571       744         3544
asdp1c3m2_20     5709       878         3080       2033        4123
asdp1c3m4_20     644        83          784        1902        3241
asdp1c3m6_20     1055       103         675        **          **
asdp2c1m2_20     5579       783         1353       **          **
asdp2c1m4_20     210        2905        4789       6211        13344
asdp2c1m6_20     1617       170         636        761         3400
asdp2c2m2_20     1556       167         578        **          **
asdp2c2m4_20     917        117         359        188         2727
asdp2c2m6_20     1356       190         471        966         2596
asdp2c3m2_20     818        85          189        **          **
asdp2c3m4_20     372        31          124        433         3021
asdp2c3m6_20     275        52          138        **          **
asdp3c1m2_20     6722       1269        1527       **          **
asdp3c1m4_20     14215      811         3511       3988        9899
asdp3c1m6_20     1752       133         975        **          **
asdp3c2m2_20     383        29          176        244         3122
asdp3c2m4_20     2073       123         1198       **          **
asdp3c2m6_20     583        58          458        152         2030
asdp3c3m2_20     833        107         389        **          **
asdp3c3m4_20     1039       1269        1527       **          **
asdp3c3m6_20     157        13          121        398         2689

* All times are in seconds
** Time or memory over limit

Table 8: Test results for 30-task, SPASD instances

Problem          B&C Cuts   B&C Nodes   B&C CPU*   OSL Nodes   OSL CPU*
asdp1c1m2_30     9207       534         4151       **          **
asdp1c1m4_30     8435       321         5233       897         13421
asdp1c1m6_30     7114       224         6416       **          **
asdp1c2m2_30     685        22          977        233         2455
asdp1c2m4_30     1065       92          399        154         3122
asdp1c2m6_30     938        138         3364       **          **
asdp1c3m2_30     2167       198         947        478         3467
asdp1c3m4_30     11256      1122        11408      **          **
asdp1c3m6_30     1428       282         2187       1344        14578
asdp2c1m2_30     3456       423         3211       **          **
asdp2c1m4_30     455        32          578        **          **
asdp2c1m6_30     8456       356         7245       **          **
asdp2c2m2_30     4349       129         1309       532         5789
asdp2c2m4_30     180        1           540        98          3212
asdp2c2m6_30     23510      2529        16729      **          **
asdp2c3m2_30     1032       7           510        132         3423
asdp2c3m4_30     1634       15          1371       614         2819
asdp2c3m6_30     3344       366         3336       **          **
asdp3c1m2_30     4809       198         2837       1478        13967
asdp3c1m4_30     16707      2838        11541      **          **
asdp3c1m6_30     3157       194         4580       **          **
asdp3c2m2_30     3081       172         1468       837         7361
asdp3c2m4_30     4746       468         2842       **          **
asdp3c2m6_30     5355       522         15508      **          **
asdp3c3m2_30     221        4           288        213         2189
asdp3c3m4_30     2883       143         3706       **          **
asdp3c3m6_30     1953       307         8054       1349        16283

* All times are in seconds
** Time or memory over limit

Table 9: Test results for 20-task, SPASDPM instances

Problem            B&C Cuts   B&C Nodes   B&C CPU*   OSL Nodes   OSL CPU*
asdpmp1m4n4_20     410        20          307        54          3214
asdpmp1m4n6_20     1069       319         1385       **          **
asdpmp1m6n4_20     955        26          794        **          **
asdpmp1m6n6_20     429        12          270        **          **
asdpmp2m4n4_20     1027       8           481        43          1422
asdpmp2m4n6_20     405        73          1785       **          **
asdpmp2m6n4_20     118        12          148        64          2490
asdpmp2m6n6_20     221        24          291        **          **
asdpmp3m4n4_20     514        11          159        54          2600
asdpmp3m4n6_20     1072       41          1414       **          **
asdpmp3m6n4_20     847        25          332        43          1327
asdpmp3m6n6_20     500        36          336        **          **

Table 10: Test results for 30-task, SPASDPM instances

Problem            B&C Cuts   B&C Nodes   B&C CPU*   OSL Nodes   OSL CPU*
asdpmp1m4n4_30     3374       244         1943       **          **
asdpmp1m4n6_30     847        143         1491       120         4321
asdpmp1m6n4_30     5699       218         3058       311         5411
asdpmp1m6n6_30     28904      1234        15448      **          **
asdpmp2m4n4_30     15748      337         10841      **          **
asdpmp2m4n6_30     3051       116         3315       255         7988
asdpmp2m6n4_30     3220       78          2548       **          **
asdpmp2m6n6_30     585        6           226        **          **
asdpmp3m4n4_30     291        4           382        89          3217
asdpmp3m4n6_30     2709       42          3513       **          **
asdpmp3m6n4_30     437        6           407        56          1467
asdpmp3m6n6_30     5097       107         5851       **          **

* All times are in seconds
** Time or memory over limit

Table 11: Test results for 20-task, SPASDPR instances

Problem            B&C Cuts   B&C Nodes   B&C CPU*   OSL Nodes   OSL CPU*
asdprp1c1m4_20     9363       291         2795       925         13915
asdprp1c3m4_20     11495      321         4317       **          **
asdprp1c1m6_20     7980       243         3304       **          **
asdprp1c3m6_20     5955       309         3742       **          **
asdprp2c1m4_20     5450       220         2848       **          **
asdprp2c3m4_20     1487       74          1216       198         3612
asdprp2c1m6_20     7825       262         3757       **          **
asdprp2c3m6_20     1264       68          2013       **          **
asdprp3c1m4_20     883        33          817        108         3092
asdprp3c3m4_20     539        16          493        89          2590
asdprp3c1m6_20     2130       63          3407       481         8041
asdprp3c3m6_20     7412       219         4071       **          **

Table 12: Test results for 30-task, SPASDPR instances

Problem            B&C Cuts   B&C Nodes   B&C CPU*   OSL Nodes   OSL CPU*
asdprp1c1m4_30     24163      518         14166      **          **
asdprp1c3m4_30     3108       67          3954       **          **
asdprp1c1m6_30     2193       61          3465       **          **
asdprp1c3m6_30     463        17          989        61          3122
asdprp2c1m4_30     21913      621         13814      **          **
asdprp2c3m4_30     2861       88          3109       **          **
asdprp2c1m6_30     28130      462         12083      **          **
asdprp2c3m6_30     25456      782         13014      **          **
asdprp3c1m4_30     18126      331         9635       **          **
asdprp3c3m4_30     514        16          1712       112         4277
asdprp3c1m6_30     1026       24          1822       61          3014
asdprp3c3m6_30     34206      1374        14261      **          **

* All times are in seconds
** Time or memory over limit

Precedence diagram 1 for 20-task test problems (Figure 3): tasks 1-20, each labeled U = unrestricted task, B = back task, or F = front task, connected by precedence arcs.

Precedence diagram 2 for 20-task test problems (Figure 4).

Precedence diagram 3 for 20-task test problems (Figure 5).

Precedence diagram 1 for 30-task test problems (Figure 6): tasks 1-30, labeled as above.

Precedence diagram 2 for 30-task test problems (Figure 7).

Precedence diagram 3 for 30-task test problems (Figure 8).