Heuristics and metaheuristics for a parallel machine scheduling problem: a computational evaluation

Marc Sevaux, Philippe Thomin
University of Valenciennes, CNRS, UMR 8530, LAMIH – Production Systems, F-59313 Valenciennes cedex 9, France
{Marc.Sevaux, Philippe.Thomin}@univ-valenciennes.fr

Research Report 01-2-SP (extended version of report 01-1-SP), November 2001
Abstract. In scheduling theory, minimizing the weighted number of late jobs in the general case (non-equal release dates) is an objective function usually used as a performance indicator. This paper presents several approaches for the NP-hard parallel machine scheduling problem, including two MILP formulations for exact resolution and various heuristics and metaheuristics to solve large-size instances. An original data structure is developed that allows flexibility for computing starting times once the sequence is fixed. A simple neighborhood combined with this data structure is used in the heuristics and metaheuristics and gives promising results.

Keywords: Scheduling, parallel machines, late jobs, MILP, heuristics, metaheuristics.
1 Introduction

A parallel machine scheduling problem where the objective is to minimize the number of late jobs is often used as a performance indicator for similar objectives. This problem, noted Pm|rj|Σwj Uj in the standard classification, is NP-hard [Garey and Johnson, 1979].
The problem can be described as follows. A set of n jobs has to be scheduled on m parallel machines. Each job j cannot start before its release date rj and is either completed before its due date dj or is late. A job cannot be sequenced on more than one machine, and preemption is not allowed. The processing time of job j is denoted by pj and its weight by wj. The objective function minimizes the sum of the weights of the late jobs. The following remark is very important for the rest of the paper.

Remark 1 Late jobs can be scheduled arbitrarily after the on-time jobs; it is therefore not necessary to find a starting time for the late jobs.

When the jobs are available at time zero and the weights are all equal to one, the problem consists in minimizing the number of late jobs (noted Pm| |ΣUj) and remains NP-hard [Garey and Johnson, 1979]. [Ho and Chang, 1995] have proposed various heuristics to solve this problem, but without guarantees of optimality except for a set of ten-job, two-machine problems. [Liu et al., 1998] have developed a hybrid genetic algorithm, but their computational experiments are restricted to only 9 test problems.

When jobs with release dates and weights are to be sequenced on a single machine (noted 1|rj|Σwj Uj), the problem is also NP-hard but well studied today. First lower bounds and heuristics were suggested by [Dauzère-Pérès, 1995]. An approach developed by [Baptiste et al., 1998] uses constraint propagation techniques and is able to solve instances with up to 120 jobs optimally. The same performance is reached by [Dauzère-Pérès and Sevaux, 1999] with a Lagrangean relaxation algorithm based on an efficient mixed-integer linear programming formulation. To solve larger instances, [Sevaux and Dauzère-Pérès, 2000] have developed a genetic local search algorithm that performs well.

When jobs have to be scheduled on parallel machines subject to release dates and weights (i.e., the problem we want to solve), [Baptiste et al., 2000] proposed a constraint-based approach that solves the problem with up to 50 jobs on 6 machines. Due to the structure of their technique, if no solution is found in a reasonable amount of time, the quality of the best solution is very poor. They are currently improving their methods and better results should appear soon.

In this paper, Section 2 introduces two MILP formulations based on formulations developed for the single machine problem [Dauzère-Pérès and Sevaux, 1998]. A specific data structure that allows flexibility in job starting times is developed and a procedure called best insertion is described in Section 3. Then Section 4 presents a neighborhood that can be used by heuristics and metaheuristics (Section 5). Computational results are reported and discussed in Section 6.
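To make the problem data and the consequence of Remark 1 concrete, the objective can be sketched in a few lines of Python. This is only an illustrative sketch: the `Job` record and the `weighted_late` helper are our own names, not part of any cited implementation; a late job simply receives no starting time and contributes its weight.

```python
from collections import namedtuple

# Hypothetical job record: release date, processing time, due date, weight.
Job = namedtuple("Job", "r p d w")

def weighted_late(jobs, start):
    """Sum of the weights of the late jobs (the objective sum of wj*Uj).

    `start` maps a job index to its starting time; following Remark 1,
    a late job is simply absent from `start`.
    """
    total = 0
    for j, job in enumerate(jobs):
        s = start.get(j)
        if s is None or s < job.r or s + job.p > job.d:
            total += job.w  # job j is late
    return total

jobs = [Job(r=0, p=3, d=5, w=2), Job(r=1, p=4, d=9, w=1)]
# Job 0 is on time at t = 0; job 1 is left late (no starting time).
assert weighted_late(jobs, {0: 0}) == 1
```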
2 MILP formulations
Using mixed-integer linear programming formulations is an easy way to tackle this scheduling problem in a first step. It gives optimal results rapidly for small instances, but even with a powerful commercial code, large-size instances cannot be solved optimally within a reasonable CPU time limit (1 hour). This approach has been used as a basis for comparison.
In this section, two different models are developed. Some instances were initially solved with ILOG OPL Studio 2.1.3; since OPL Studio did not fit our requirements in terms of instance handling, we switched to Xpress-MP, a commercial software package from Dash Associates. For small-size instances, optimal solutions have been found in a reasonable amount of CPU time (10 minutes), but for the majority, one hour of CPU time is only sufficient for finding bounds on our problem. These results, poor but nevertheless interesting, are one of the motivations for using metaheuristics. The first model, developed in Section 2.1, uses a classical formulation with binary positional variables (ujkl = 1 if job j is sequenced at position k on machine l, 0 otherwise). The second formulation, developed in Section 2.2, is known to be less efficient; time-indexed variables are used in that model (vjt = 1 if job j starts at time instant t, 0 otherwise). One of the main advantages of this formulation is that it is very compact and easy to solve when the time horizon is small.
2.1 Positional variables
In that formulation, Uj = 1 if job j is late (completed after its due date) and 0 otherwise. A binary variable ujkl = 1 if job j is scheduled on time at position k on machine l, 0 otherwise. Positive real variable tkl denotes the starting time of the job at position k on machine l.
min  Σ_{j=1..n} wj Uj                                                    (1)

subject to

Σ_{k=1..n} Σ_{l=1..m} ujkl ≤ 1              ∀j = 1..n                    (2)
Σ_{j=1..n} ujkl ≤ 1                         ∀k = 1..n, ∀l = 1..m         (3)
tkl + Σ_{j=1..n} pj ujkl ≤ t(k+1)l          ∀k = 1..n−1, ∀l = 1..m       (4)
tkl − Σ_{j=1..n} rj ujkl ≥ 0                ∀k = 1..n, ∀l = 1..m         (5)
tkl + Σ_{j=1..n} (pj − dj) ujkl ≤ 0         ∀k = 1..n, ∀l = 1..m         (6)
Σ_{k=1..n} Σ_{l=1..m} ujkl + Uj = 1         ∀j = 1..n                    (7)
tkl ≥ 0                                     ∀k = 1..n, ∀l = 1..m         (8)
ujkl ∈ {0, 1}                               ∀j = 1..n, ∀k = 1..n, ∀l = 1..m  (9)
Uj ∈ {0, 1}                                 ∀j = 1..n                    (10)
The objective function (1) minimizes the sum of the weights of the late jobs. By Constraints (2), each job occupies at most one position and, according to Remark 1, a job may have no position at all (hence it will be late). Constraints (3) specify that each position on each machine cannot be occupied by more than one job. By Constraints (4), we ensure that the job sequenced at position k + 1 does not start before the completion of the job at position k on the same machine. Constraints (5) prevent jobs from starting before their release dates, and Constraints (6) assign a position on a machine to a job if and only if it can be completed before its due date. Constraints (7) set the binary variable Uj = 1 for each job not assigned a position on a machine.
2.2 Time-indexed variables
The same binary variables Uj are used in this time-indexed formulation to distinguish on-time and late jobs. Binary variable vjt equals 1 if job j starts at time t. Since the machines are all identical, it is not necessary to specify on which machine a job is executed, but only when it starts. Of course, for this model, one needs to discretize the time horizon; without loss of generality, only integer time instants are considered. The time limit can be roughly computed as T = max_{j=1..n} rj + Σ_{j=1..n} pj, but according to Remark 1 it is not necessary to find a starting time for every job. Hence the time horizon ends at the completion time of the last early job, which is bounded by the largest due date, so T = max_{j=1..n} dj.
min  Σ_{j=1..n} wj Uj                                                    (11)

subject to

Σ_{t=rj..dj−pj} vjt + Uj = 1                        ∀j = 1..n            (12)
Σ_{j=1..n} Σ_{s=t−pj+1..t, s∈[rj, dj−pj]} vjs ≤ m   ∀t = 0..T            (13)
vjt ∈ {0, 1}                                        ∀j = 1..n, ∀t = 0..T (14)
Uj ∈ {0, 1}                                         ∀j = 1..n            (15)
The objective function (11) is identical in both formulations. By Constraints (12), we ensure that each job is either executed within its time window or late. Constraints (13) check, for each time period, that no more than m jobs are running simultaneously; since jobs are scheduled on identical parallel machines, it is not necessary to know on which machine a job is scheduled. These models are solved using the Xpress-MP modeler and solver from Dash Associates, and the formulations can be found in Appendix A.
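As a small illustration of what Constraints (13) enforce, a brute-force check that no more than m jobs run at any integer instant can be written as follows. This is a plain Python sketch with names of our own choosing, not part of the Xpress-MP models.

```python
def machines_ok(p, start, m):
    """Check the analogue of constraint (13): at most m jobs are
    running at any integer time instant.

    `p` gives the processing times of the scheduled jobs and `start`
    their chosen starting times; machines are identical, so only the
    count of concurrent jobs matters, not the assignment.
    """
    if not start:
        return True
    horizon = max(s + pj for s, pj in zip(start, p))
    for t in range(horizon):
        running = sum(1 for s, pj in zip(start, p) if s <= t < s + pj)
        if running > m:
            return False
    return True

# Three jobs of length 3 all starting at t = 0 need three machines.
assert machines_ok([3, 3, 3], [0, 0, 0], m=3)
assert not machines_ok([3, 3, 3], [0, 0, 0], m=2)
```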
3 Best insertion procedure
For this problem, we use a specific data structure that allows us to handle a class of equivalent solutions instead of a unique solution. A job is represented by the classical quadruple (rj, pj, dj, wj), but, instead of assigning a job to a machine at a specific date tj, we use for each machine a partially ordered sequence of early jobs, ensuring that the sequence is always feasible. Due to Remark 1, late jobs are not sequenced but kept in an unordered list of late jobs. This type of data structure allows jobs to be inserted or removed easily at a negligible cost; insertion and removal operations will be used in the following as a basis for more complex operations. Using linked lists, we do not need to compute the starting time of each job. In the sequel we call this structure floating jobs, because jobs can move within a temporal window: the starting and ending dates of a job are defined either by its own data (rj, pj, dj) or by its neighbors (see Figure 1). Floating jobs can represent many identical solutions, since early jobs in the same order on a machine can start at many different dates. Moreover, the idle period between two consecutive jobs can be used efficiently.
Figure 1: Floating jobs.

Moving a job from an ordered list of early jobs to the unordered list of late jobs causes no problem: the resulting partial sequence remains feasible. When we insert a job on a machine, we must ensure that it will not overlap adjacent jobs, i.e., that it will be scheduled on time. We compute the maximum insertion interval [tmin, tmax] between the two adjacent jobs. As shown in Figure 2, job j can be inserted and will be on time if:
max(rj , tmin ) + pj ≤ min(dj , tmax )
To insert a job at the first (resp. last) position in a list, i.e., to sequence a job on time on a specific machine, the value of tmin (resp. tmax) is 0 (resp. +∞). Otherwise, the maximum insertion interval is obtained by computing the earliest ending time eet_i of the immediate predecessor job i and the latest starting time lst_k of the immediate successor job k.

Figure 2: Inserting a job between a predecessor Ji (with earliest ending time eet_i) and a successor Jk (with latest starting time lst_k).

We obtain these bounds with a recursive computation that ends either at the beginning or the end of the list, or when two consecutive jobs cannot overlap (dp ≤ rs). Denoting by pred(j) and succ(j) the immediate predecessor and successor of a job j, respectively, we can formally define eet and lst by:

eet_j = max(rj, eet_pred(j)) + pj
lst_j = min(dj, lst_succ(j)) − pj

Given the previous definitions, we are able to check whether a job can be inserted between two other jobs, or at the beginning or end of a sequence. For a job that is not yet sequenced, the best insertion is the insertion point where the remaining idle time is minimum. This type of insertion tends to fill an empty space between two jobs completely by pushing and pulling adjacent jobs to their limits.
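The recursive definitions of eet and lst, together with the insertion test, can be sketched as follows. A plain Python list of (r, p, d) triples stands in for the paper's linked list of floating jobs, and the function names are ours, not the authors'.

```python
def eet(seq, i):
    """Earliest ending time of the job at position i:
    eet_j = max(r_j, eet_pred(j)) + p_j, with 0 for a missing predecessor."""
    r, p, d = seq[i]
    prev = eet(seq, i - 1) if i > 0 else 0
    return max(r, prev) + p

def lst(seq, i):
    """Latest starting time of the job at position i:
    lst_j = min(d_j, lst_succ(j)) - p_j, with +inf for a missing successor."""
    r, p, d = seq[i]
    nxt = lst(seq, i + 1) if i + 1 < len(seq) else float("inf")
    return min(d, nxt) - p

def can_insert(seq, i, job):
    """Can `job` = (r, p, d) fit on time between positions i-1 and i?
    Implements max(r_j, tmin) + p_j <= min(d_j, tmax)."""
    r, p, d = job
    tmin = eet(seq, i - 1) if i > 0 else 0
    tmax = lst(seq, i) if i < len(seq) else float("inf")
    return max(r, tmin) + p <= min(d, tmax)

# Two floating jobs (r, p, d); a third one may fit in the gap between them.
seq = [(0, 2, 4), (6, 2, 10)]
assert can_insert(seq, 1, (2, 3, 8))      # fits: ends at 5, successor can wait
assert not can_insert(seq, 1, (2, 7, 8))  # too long for its time window
```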
4 Neighborhood
Due to Remark 1, late jobs are handled separately from early jobs. With the insertion procedure, we know that only on-time jobs are listed on each machine. Thus an obvious neighborhood consists in exchanging late jobs with early jobs.
Removing a job from a machine is always feasible: it simply consists in removing the job from the machine and adding it to the list of late jobs. One late job can be exchanged with 0 to l early jobs, where the maximum value of l is lmax = ⌈pmax/pmin⌉, pmax and pmin being respectively the largest and smallest processing times among all jobs. The neighborhood for a single late job is computed by examining all possible insertion points on each machine. This can be done for each job in the late list, and the acceptable points are kept in a list ordered by increasing values of the objective function modification. A complete neighborhood can be computed in O(n²).
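The two quantities this neighborhood relies on — the bound lmax and the set of candidate insertion points — are cheap to enumerate. A sketch, with illustrative names and machines represented as plain lists of early jobs:

```python
import math

def l_max(processing_times):
    """Upper bound on the number of early jobs one late job can
    displace: l_max = ceil(p_max / p_min)."""
    return math.ceil(max(processing_times) / min(processing_times))

def insertion_points(machines):
    """All (machine, position) pairs where a late job could be tried.

    Each machine holds an ordered list of early jobs, so a sequence of
    length k offers k + 1 candidate positions (including both ends).
    """
    return [(l, k) for l, seq in enumerate(machines)
            for k in range(len(seq) + 1)]

assert l_max([6, 2, 4]) == 3
# Two machines holding 2 and 1 early jobs: 3 + 2 = 5 candidate points.
assert len(insertion_points([["a", "b"], ["c"]])) == 5
```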
5 Heuristics and metaheuristics
Several heuristics and metaheuristics are used to solve the problem. Simple descent heuristics have been studied, but their results are always dominated by deepest descent heuristics; hence, simple descent heuristics are not presented here. Once the constructive insertion and the neighborhood are defined, little work is needed to develop descent heuristics, a multistart descent, a simulated annealing and a tabu search procedure.
5.1 Heuristics
Applying the best insertion method to specific orderings of the late jobs list gives simple heuristics and first results for our problem. Three heuristics are implemented, based on three orders:

1. natural order (as jobs are read from the input file),
2. weighted shortest processing time (increasing order of wj/pj),
3. random shuffle (the list is shuffled randomly).

The three heuristics will respectively be named BNO (Best insertion + Natural Order), BWS (Best insertion + WSPT rule) and BRS (Best insertion + Random Shuffle).
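The three orderings above can be expressed in a few lines of Python. The `(w, p)` pairs are illustrative data of our own; only BWS needs an actual sort key (increasing wj/pj, as stated in the text).

```python
import random

late = [(4, 2), (3, 6), (1, 1)]   # hypothetical (w_j, p_j) pairs

bno = list(late)                                # BNO: natural order
bws = sorted(late, key=lambda j: j[0] / j[1])   # BWS: increasing w_j / p_j
brs = random.sample(late, len(late))            # BRS: random shuffle

assert bws == [(3, 6), (1, 1), (4, 2)]  # ratios 0.5 < 1 < 2
assert sorted(brs) == sorted(late)      # same jobs, possibly reordered
```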
5.2 Descent heuristics
A set of heuristics and metaheuristics is used in this paper to tackle large-size instances. Appendix B gives a short, uniform description of these algorithms, so the reader can easily use them in another context.
Deepest descent Starting from an initial solution and using the neighborhood described in Section 4, a simple deepest descent is applied (see Appendix B.1). Since the neighborhood allows exchanging one late job with 0 to l early jobs, it is possible to start with all jobs late. Hence a formal comparison between using an initial solution and no initial solution can be made. Four runs of the descent heuristic (DS) are performed: one starting from an empty solution (denoted EMP) and three starting from each of the three initial heuristic solutions.
Multistart descent Starting from a randomly ordered list can be done many times. Since the computation time is negligible, 1000 runs are performed for each instance. With this technique, we hope that 1000 local optima can be reached, including the global optimum. This method is called MD (see Appendix B.2).
5.3 Simulated annealing
For a complete and detailed description of simulated annealing we refer the reader to Reeves' book [Reeves, 1993]; a general description of the SA method can be found in Appendix B.3. Since the neighborhood by itself leads to good results (see the descent heuristic results), we develop a simple simulated annealing procedure. The procedure is stopped after 10000 iterations without improvement of the best solution.
Initial temperature The initial temperature was chosen after some test runs; 20 is a satisfactory temperature for problems with 10 to 100 jobs. Every 100 iterations the temperature is decreased by 5%.
Reheating For some problems, after many iterations, only a few neighbors (compared to the total number of neighbors) can improve the current solution. Since the temperature is already low, the probability that an improving neighbor is chosen is too small; in some cases, no improving neighbor can be found at all. Hence, after 100 consecutive iterations with no improving neighbor, the temperature is increased to half the previous reheating level (10 for the first reheating, 5 for the second, etc.). All these parameters can easily be changed.
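The temperature schedule just described (initial temperature 20, 5% geometric decrease every 100 iterations, reheating to half the previous reheating level after 100 consecutive non-improving iterations) can be sketched as a small state machine. The class and method names are ours; this is a sketch of the schedule only, not of the full SA procedure.

```python
class Cooling:
    """Temperature schedule from the text: T0 = 20, multiply by 0.95
    every 100 iterations, reheat to T0/2, T0/4, ... after 100
    consecutive iterations without an improving neighbor."""

    def __init__(self, t0=20.0):
        self.t = t0
        self.reheat_level = t0
        self.iters = 0
        self.stuck = 0

    def step(self, improved):
        self.iters += 1
        self.stuck = 0 if improved else self.stuck + 1
        if self.stuck >= 100:
            self.reheat_level /= 2          # 10 first, then 5, 2.5, ...
            self.t = self.reheat_level
            self.stuck = 0
        elif self.iters % 100 == 0:
            self.t *= 0.95                  # cool every 100 iterations
        return self.t

c = Cooling()
for _ in range(100):
    c.step(improved=True)
assert abs(c.t - 19.0) < 1e-9   # one 5% decrease after 100 iterations
for _ in range(100):
    c.step(improved=False)
assert c.t == 10.0              # first reheating: half of T0
```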
5.4 Tabu search
For a good introduction to tabu search, see [Reeves, 1993]. A basic tabu search procedure (see Appendix B.4) is developed, extended with cycle detection and a dynamic tabu tenure. The same stopping conditions as for simulated annealing are used.
Tabu criterion The tabu criterion is one of the most difficult parameters to set before running tests. Many tabu criteria have been studied here, ranging from the complete schedule (the position of each job on each machine) to the simple value of the weighted number of late jobs. With the first criterion, many equivalent solutions (obtained by exchanging machines) can be found, and the tenure would have to be increased too much to avoid cycles. At the other extreme, the weighted number of late jobs alone is too restrictive: not enough different values can be found, and rapidly everything becomes tabu. Between these two criteria, we choose the pair (Σ wj Uj, Σ pj Uj), the sum of the weights and the sum of the processing times of the late jobs, excluding any information on early jobs.
Cycle detection and dynamic tenure The tabu tenure is critical for each instance. On the one hand, if the tenure is too small, cycles can occur during the search and the optimal solution may not be found. On the other hand, if the tenure is too large, a point can be reached where everything is tabu and the procedure stops because no neighbor can be chosen. To avoid cycles and blocking during the search, the tabu tenure is adjusted dynamically. Each time a blocking state is detected, the tabu tenure is decreased by one. Cycle detection is more complicated, since a history longer than the tabu tenure has to be recorded; the memory required for this history is rather small, since only a pair of integers has to be recorded at each iteration. If a cycle is detected, the tenure is increased to the length of the cycle and the forthcoming move is made tabu, so as to escape directly from the cycle.
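The chosen tabu attribute and the cycle detection on the recorded history can be sketched as follows. Names and data are illustrative; in the actual procedure the pair is recorded at every iteration.

```python
from collections import deque

def tabu_key(jobs, late):
    """Tabu attribute chosen in the text: the pair
    (sum of w_j U_j, sum of p_j U_j), computed over late jobs only.
    `jobs` holds (w_j, p_j) pairs, `late` the U_j flags."""
    w = sum(j[0] for j, u in zip(jobs, late) if u)
    p = sum(j[1] for j, u in zip(jobs, late) if u)
    return (w, p)

def cycle_length(history, key):
    """Distance back to the last occurrence of `key` in the history,
    or 0 if absent; the tenure is then raised to this length."""
    for back, k in enumerate(reversed(list(history)), start=1):
        if k == key:
            return back
    return 0

jobs = [(2, 5), (1, 3), (4, 7)]                    # (w_j, p_j) pairs
assert tabu_key(jobs, [True, False, True]) == (6, 12)

h = deque([(6, 12), (3, 8), (5, 9)], maxlen=50)    # recorded attributes
assert cycle_length(h, (3, 8)) == 2                # cycle of length 2
assert cycle_length(h, (9, 9)) == 0                # no cycle
```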
6 Computational experiments

6.1 Test instances
Thanks to [Baptiste et al., 2000], we have been able to test all these methods on a large set of instances for which part of the optimal solutions are known. Their method uses constraint propagation techniques to find optimal solutions. They set the time limit to 600 s and, unfortunately, when they cannot find the optimal solution, the quality of the best solution found so far is very poor. With this time limit, they are not able to find any solution with strictly more than 50 jobs.
The set of instances covers problems with 10, 20, 30, 40 and 50 jobs and 1, 3 and 6 machines. Release dates, processing times and due dates are generated according to a process that uses a Normal distribution and the overall load of the machines (see [Baptiste et al., 2000] for details). Weights are randomly generated between 1 and Wmax, where Wmax ∈ {1, 10, 99}.
6.2 MILP results
Other optimal solutions and lower bounds are given by the mixed-integer programming models described in Section 2. For these tests, the time-indexed formulation was used first, since it is well solved when the time horizon is not too large; the positional-variables formulation leads to poor results, since the time limit of 600 seconds is too short to find even a single integer solution. These MILP experiments were conducted on a PC Celeron 466 MHz. Most of the optimal solutions for n < 50 had been found by Baptiste et al., so we concentrated our efforts on finding the missing optimal solutions for n = 50. All optimal solutions were known for the single machine problem; hence, Table 1 reports the results for 3 and 6 machines and different values of Wmax. The first two columns describe the class of instances. The next column gives the percentage of optimal values found within the time limit of 600 seconds. When a solution is found but not proved optimal, the average deviation from the linear relaxation lower bound is computed. The last column reports the average CPU time over all instances (including instances stopped after 600 seconds). For some instances with 6 machines, 600 seconds is not enough to compute a first solution; in that case it is not possible to report the average deviation from the lower bound.
13
# of
Optimum
Avg. Dev.
Avg. CPU
hits (%)
from LB (%)
time (s)
1
63.3
0.0
429
10
36.7
14.4
442
99
33.3
13.2
481
1
50.0
(*)
406
10
46.7
(*)
438
99
40.0
(*)
446
machines Wmax 3
6
(*) no feasible solution found within 600s Table 1: MILP results.
6.3 Global results
Table 2 shows the global results for the procedures described in this paper. The descent method, simulated annealing and tabu search procedures are each run once without an initial solution and once with each of the three heuristics; MD denotes the multistart descent procedure. The best procedure is the tabu search method preceded by the BNO heuristic: it reaches the optimal solution in 88% of the cases, gives the best results among all methods for 80% of the instances, and has the smallest average gap to the optimal solution (1.89%). Moreover, its CPU time is rather small (10.17 s on average and less than 100 s at the maximum). The most surprising result is that the multistart descent method is better than simulated annealing on every criterion.
6.4 Comparison to other methods

Since the tabu search method gives the best results, it has been used for comparison with the existing method from [Baptiste et al., 2000].
Initial      Optimum     First      Gap to     Avg. CPU    Max. CPU
heuristic    hits (%)    pos. (%)   opt. (%)   time (s)    time (s)

Deepest descent heuristics
EMP          45          33         21.5       0.02        0.12
BWS          49          35         19.6       < .01       < .01
BNO          44          31         20.3       < .01       < .01
BRS          44          31         20.3       < .01       < .01

Multistart deepest descent heuristics
MD           78          63         3.9        4.58        41.1

Simulated annealing
EMP          70          55         6.72       9.10        60.3
BWS          70          54         6.59       7.68        53.2
BNO          71          56         6.03       9.01        64.9
BRS          70          55         5.98       8.87        < .01

Tabu search
EMP          89          79         1.94       10.63       101.9
BWS          88          76         2.00       9.42        112.0
BNO          88          80         1.89       10.17       93.7
BRS          88          79         1.95       10.16       107.5

Table 2: Global results.
Table 3 shows the results as the number of jobs increases. The second set of columns gives the number of instances for which the tabu search gives better (resp. equal or worse) results than Baptiste et al.'s. Of course, the number of optimal solutions found decreases, but the total number of known optimal solutions also decreases (only 146 out of 270 instances for n = 50). The maximum CPU time of the tabu search method is always less than the average CPU time of the competitor: for n = 50, Baptiste et al. need 308 s on average to compute a solution, whereas the tabu search finishes in less than 17 s on average. In fact, as the number of jobs increases, the number of solutions better than or equal to the best solution given by Baptiste et al. increases too. When the optimal solution is not found, the relative gap remains reasonable (less than 4%).
Number     Comp. to BJ solutions      Comp. to optimum        CPU time (s)
of jobs    Better   Equal   Worse     Found (%)   Gap (%)     Avg.     Max.

10         0        270     0         100         0           0.73     3.12
20         0        256     14        95.2        0.87        2.09     10.03
30         27       210     33        87.8        1.71        4.46     16.76
40         68       150     52        75.9        4.63        9.27     41.65
50         110      102     58        69.2        3.95        16.81    89.85

Table 3: Comparison to Baptiste et al. (2000)'s best results for n ≤ 50.
For more than 50 jobs, only CPU times are given in Table 4. These CPU times are very encouraging for tackling large-size instances.
7 Conclusion and future work

To solve the NP-hard problem Pm|rj|Σwj Uj, we have developed a set of heuristics and metaheuristics based on a powerful data structure for the objective function. By using
Number     CPU time (s)
of jobs    Avg.     Max.

60         27.7     93.7
70         39.1     166.1
80         56.8     262.2
90         77.0     241.7
100        106.5    416.1

Table 4: CPU time.
the same neighborhood, the comparisons are relevant. After the numerical experiments, we know that an initial solution is not necessary to reach the best solution, but it tends to speed up the procedure. Since the CPU times are very promising for instances with 100 jobs, we will focus our future work on finding good lower bounds. The two MILP formulations can be used through a linear relaxation of the model, but the quality of these bounds is often poor. The approach proposed in this paper can be extended to problems with non-identical parallel machines: since, during the exploration of the neighborhood, each machine is scanned independently from the others, a different processing time for each job could be used without modification. Hence this method solves Pm|rj|Σwj Uj, Qm|rj|Σwj Uj and also Rm|rj|Σwj Uj in the same manner.
Throughout this paper, we have seen that the time-indexed formulation gives good results for this problem, but only when the time horizon is very limited. This limitation could be addressed by developing a column generation technique to find either optimal solutions or lower bounds.
References

[Baptiste et al., 2000] Baptiste, P., A. Jouglet, C. Le Pape, and W. Nuijten: 2000, 'A Constraint-Based Approach to Minimize the Weighted Number of Late Jobs on Parallel Machines'. Technical Report 2000/228, UMR CNRS 6599, Heudiasyc, France.

[Baptiste et al., 1998] Baptiste, P., C. Le Pape, and L. Péridy: 1998, 'Global Constraints for Partial CSPs: A Case Study of Resource and Due-Date Constraints'. In: Proc. 4th Int. Conf. on Principles and Practice of Constraint Programming. Pisa, Italy.

[Dauzère-Pérès and Sevaux, 1998] Dauzère-Pérès, S. and M. Sevaux: 1998, 'An efficient formulation for minimizing the number of late jobs in single-machine scheduling'. Technical Report 98/9/AUTO, Ecole des Mines de Nantes.

[Dauzère-Pérès and Sevaux, 1999] Dauzère-Pérès, S. and M. Sevaux: 1999, 'Using Lagrangean relaxation to minimize the (weighted) number of late jobs'. In: National contribution for the 15th triennial conference, IFORS'99. Beijing, P.R. of China. Technical Report 99/8/AUTO, Ecole des Mines de Nantes, France.

[Dauzère-Pérès, 1995] Dauzère-Pérès, S.: 1995, 'Minimizing late jobs in the general one machine scheduling problem'. European J. Oper. Res. 81, 134–142.

[Garey and Johnson, 1979] Garey, M. and D. Johnson: 1979, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA, USA: Freeman.

[Guéret et al., 2000] Guéret, C., C. Prins, and M. Sevaux: 2000, Programmation linéaire. Eyrolles. ISBN 2-212-09202-4, 365 pages (in French).

[Ho and Chang, 1995] Ho, J. and Y. Chang: 1995, 'Minimizing the number of tardy jobs for m parallel machines'. European J. Oper. Res. 84, 343–355.

[Liu et al., 1998] Liu, M., C. Wu, and X. Jiang: 1998, 'Genetic algorithm method for minimizing the number of tardy jobs in identical parallel machine scheduling problem'. Chinese J. Electr. 7(2), 188–192.

[Reeves, 1993] Reeves, C. (ed.): 1993, Modern Heuristic Techniques for Combinatorial Problems. New York, NY, USA: John Wiley & Sons.

[Sevaux and Dauzère-Pérès, 2000] Sevaux, M. and S. Dauzère-Pérès: 2000, 'Genetic algorithms to minimize the weighted number of late jobs on a single machine'. Technical Report 2000/51, UMR CNRS 8530, LAMIH/SP, France.
A Models in Xpress-MP

A specific example with 50 jobs and 3 machines is used for these formulations. We refer the reader to the documentation of Xpress-MP or to [Guéret et al., 2000] for a description of the Xpress language syntax. Details of the constraints can be found in Section 2.
A.1 Formulation using positional variables

Except for some necessary declarations, the constraints section is very similar to the mathematical model (1)–(10).

MODEL Pos
LET n = 50
    m = 3
TABLES
  r(n) p(n) d(n) w(n)
DISKDATA -s
  r, p, d, w = f_3_1_1_p50_1.DAT
VARIABLES
  U(n) u(n,n,m) t(n,m)
CONSTRAINTS
  Obj: Sum(j=1:n) w(j)*U(j) $
  Jobs(j=1:n): Sum(k=1:n,l=1:m) u(j,k,l) =r(j) & .and. s