EXACT AND APPROXIMATE METHODS FOR THE DAILY MANAGEMENT OF AN EARTH OBSERVATION SATELLITE

J.C. Agnese, N. Bataille, D. Blumstein (y), E. Bensana, G. Verfaillie (z)

y: CNES, Centre Spatial de Toulouse, 18 av. E. Belin, 31055 Toulouse Cedex, France
z: CERT-ONERA, 2 av. E. Belin, BP 4025, 31055 Toulouse Cedex, France

Abstract

The daily management of a remote sensing satellite like Spot, which consists in deciding every day which photographs will be attempted the next day, is of great economic importance. But it is a large and difficult combinatorial optimization problem, for which efficient search methods have to be built and assessed. In this paper, we describe the problem in the framework of the future Spot5 satellite. Then we show that this problem can be viewed as an instance of the Valued Constraint Satisfaction Problem framework, which allows hard and soft constraints to be expressed and dealt with together. After that, we describe several methods which can be used in this framework: exact methods like Depth First Branch and Bound or Pseudo Dynamic Search to find an optimal solution, and approximate methods like Greedy Search or Tabu Search to find a good solution. Finally, we compare the results obtained with these methods on a set of representative problems. The conclusion addresses some lessons which can be drawn from this overview.

Keywords: Valued Constraint Satisfaction, Constrained Optimization, Branch and Bound, Greedy Search, Tabu Search.

1 Problem modeling

1.1 The Scheduling Problem

The Spot5 daily scheduling problem [1] can be informally described as follows:

- given a set S of photographs, mono or stereo, which can be attempted the next day w.r.t. the satellite trajectory;
- given a weight associated with each photograph, which is the result of the aggregation of several criteria like the client importance, the demand urgency, the meteorological forecasts, ...;
- given a set of possibilities associated with each photograph, corresponding to the different ways to achieve it: up to three for a mono photograph (because of the three instruments on the satellite) and only one for a stereo photograph (because such photographs require two trials: one with the front instrument and one with the rear one);
- given a set of hard constraints, which must be satisfied:
  - non-overlapping and respect of the minimal transition time between two successive trials on the same instrument;
  - limitation of the instantaneous data flow through the satellite telemetry;
  - limitation of the on-board recording capacity;
- the problem is to find a subset S' of S which is admissible (all hard constraints met) and which maximizes the sum of the weights of the photographs in S'.

This problem clearly belongs to the class of Discrete Constrained Optimization Problems.
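To make the input concrete, here is a minimal sketch of the problem data in Python (the class and field names are ours, for illustration only; the real data format is the one described in [6]):

from dataclasses import dataclass

@dataclass
class Photograph:
    name: str             # e.g. "S129703-1"
    weight: int           # aggregation of client importance, urgency, weather, ...
    mono: bool            # mono photographs may use any of the three instruments;
                          # stereo ones need the front and rear instruments together
    possibilities: tuple  # the different ways to achieve the photograph

# a mono photograph that any of the three instruments can take,
# and a stereo one with its single possibility (front + rear, noted 13 below)
mono_photo = Photograph("S129703-1", weight=1, mono=True, possibilities=(1, 2, 3))
stereo_photo = Photograph("S17302-1", weight=2, mono=False, possibilities=(13,))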

1.2 Modeling as a Valued Constraint Satisfaction Problem

1.2.1 The VCSP framework

The Valued Constraint Satisfaction Problem framework (VCSP) [2, 3] is an extension of the CSP framework, where each problem can be characterized by:

- a set V of variables: a finite domain of values is associated with each variable and defines its possible instantiations;
- a set C of constraints: each constraint links a subset V' of the variables and defines forbidden combinations of values for the variables in V';
- a valuation set E (to valuate constraints and assignments), with a total order ≼ (to compare two valuations), a minimal element ⊥ (to represent constraint satisfaction) and a maximal one ⊤ (to represent the violation of a hard constraint);
- a valuation function φ, associating with each constraint c in C an element φ(c) in E, which represents the importance of the satisfaction of c;
- an aggregation operator ⊛ (to aggregate constraint valuations), which respects:
  - commutativity and associativity;
  - monotonicity relative to the order ≼;
  - ⊥ as identity element;
  - ⊤ as absorbing element.

Given an assignment A of all the problem variables, the valuation of A is the aggregation, by the operator ⊛, of the valuations of all the constraints not satisfied by A:

v(A) = ⊛ { φ(c) | c ∈ C, c not satisfied by A }

The standard objective is to produce an assignment with a minimal valuation. It is an NP-hard problem, so, according to complexity theory, the worst-case running time of all known methods grows exponentially with the problem size. Table 1 describes some frameworks w.r.t. the combination of the valuation set E, the order ≼, the minimal element ⊥, the maximal element ⊤ and the aggregation operator ⊛.

Framework      Label     E              ≼             ⊥     ⊤      ⊛
Standard       ∧-VCSP    {true, false}  false ≺ true  true  false  ∧
Possibilistic  Max-VCSP  [0, 1]         ≤             0     1      max
Additive       Σ-VCSP    N ∪ {+∞}       ≤             0     +∞     +

Table 1: Some CSP frameworks

Other frameworks, like Π-VCSP (probabilistic CSP) or Lex-VCSP (lexicographic CSP), can be characterized in the same way.

1.2.2 Modeling

The modeling of the Spot5 scheduling problems within the VCSP framework [4] consists in:

- associating a variable v to each photograph p;
- associating to v a domain d of values corresponding to the different possibilities to achieve p:
  - a subset of {1, 2, 3} for a mono photograph (values 1, 2 and 3 corresponding to the possibility of using the front, middle or rear instrument to take the photograph);
  - the only value 13 for a stereo photograph (corresponding to the only possibility, using both the front and rear instruments);
- adding to d the special value 0, corresponding to the possibility of not selecting p in the schedule;
- associating to v a unary constraint forbidding the special value 0, with a valuation equal to the weight of p (the penalty for not selecting p);
- translating as binary constraints, with the maximal valuation ⊤, the constraints of non-overlapping and respect of the minimal transition time between two trials on the same instrument;
- translating as binary or ternary constraints, with the maximal valuation ⊤, the constraints of limitation of the instantaneous data flow;
- translating as an n-ary constraint, with the maximal valuation ⊤, the constraint of limitation of the recording capacity;
- using as valuation set the set of integers between 0 (⊥) and an integer greater than the sum of the weights of all the photographs (⊤);
- using as order the natural order on integers, and as aggregation operator the usual + operator.

With this modeling, the valuation of an assignment A is equal to ⊤ when a hard constraint is not satisfied, and equal to the sum of the weights of the rejected photographs when all the hard constraints are met. As it is always possible to produce an assignment where all the hard constraints are satisfied (for example, by rejecting all the photographs), finding an assignment of minimal valuation is equivalent to finding an assignment satisfying all the hard constraints and minimizing the sum of the weights of the rejected photographs.

About the resulting VCSP, we can note that the domains are at most of size 4 ({0, 1, 2, 3}) for mono photographs and of size 2 ({0, 13}) for stereo photographs. Except for the unary constraints associated with each variable (the only ones which can be violated), all the other constraints are hard (valuation equal to ⊤). The valuation set and aggregation operator induce an additive VCSP. Yet additive VCSPs are, together with probabilistic and lexicographic VCSPs, the most difficult ones to solve, much more difficult than classical and possibilistic VCSPs [2, 3].

Table 2 shows the variables and Table 3 the constraints of the VCSP corresponding to a toy problem.

Name        Domain
S129703-1   {1, 2, 3, 0}
S129702-1   {1, 2, 3, 0}
S129701-1   {1, 2, 3, 0}
S129701-2   {1, 2, 3, 0}
S17302-1    {13, 0}
S17301-1    {13, 0}
S17302-2    {13, 0}
S17302-3    {13, 0}

Table 2: Variables of the toy problem

Linked variables         Forbidden tuples     Valuation
(S129703-1)              (0)                  1
(S129702-1)              (0)                  1
(S129701-1)              (0)                  1
(S129701-2)              (0)                  1
(S17302-1)               (0)                  2
(S17301-1)               (0)                  2
(S17302-2)               (0)                  2
(S17302-3)               (0)                  2
(S17302-1 S17301-1)      (13 13)              ⊤
(S17302-2 S17301-1)      (13 13)              ⊤
(S129703-1 S129702-1)    (3 3) (2 2) (1 1)    ⊤
(S129702-1 S129701-1)    (3 3) (2 2) (1 1)    ⊤
(S129702-1 S129701-2)    (3 3) (2 2) (1 1)    ⊤
(S129703-1 S129701-1)    (3 3) (2 2) (1 1)    ⊤
(S129703-1 S129701-2)    (3 3) (2 2) (1 1)    ⊤

Table 3: Constraints of the toy problem

Table 4 summarizes information about the VCSPs corresponding to the biggest problems from our data set (see section 4.1 for a description of the data set):

- Pb is the problem number;
- Nval is the number of possible trials;
- Nvar is the number of possible photographs (variables);
- Nci is the number of constraints of arity i;
- Nct is the total number of constraints;
- Tcpu is the cpu time needed to create the VCSP.

Pb    Nval  Nvar  Nc1   Nc2    Nc3   Nct    Tcpu
11    874   364   364   5025   4719  10065  345
1021  2517  1057  1057  14854  5875  21786  1002

Table 4: VCSP characteristics for problems 11 and 1021
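To fix ideas, the toy problem of tables 2 and 3 can be written down directly as an additive VCSP. The following Python sketch (our own notation, not the original implementation; ⊤ is represented, as the modeling prescribes, by an integer greater than the sum of all the weights) builds its variables and constraints and evaluates a complete assignment:

# Domains (0 = photograph not selected, 13 = stereo trial on front+rear)
domains = {
    "S129703-1": {1, 2, 3, 0}, "S129702-1": {1, 2, 3, 0},
    "S129701-1": {1, 2, 3, 0}, "S129701-2": {1, 2, 3, 0},
    "S17302-1": {13, 0}, "S17301-1": {13, 0},
    "S17302-2": {13, 0}, "S17302-3": {13, 0},
}

weights = {"S129703-1": 1, "S129702-1": 1, "S129701-1": 1, "S129701-2": 1,
           "S17302-1": 2, "S17301-1": 2, "S17302-2": 2, "S17302-3": 2}

TOP = sum(weights.values()) + 1  # greater than the sum of the weights: plays ⊤

# each constraint: (scope, forbidden tuples, valuation)
constraints = [(("S17302-1", "S17301-1"), {(13, 13)}, TOP),
               (("S17302-2", "S17301-1"), {(13, 13)}, TOP)]
for pair in [("S129703-1", "S129702-1"), ("S129702-1", "S129701-1"),
             ("S129702-1", "S129701-2"), ("S129703-1", "S129701-1"),
             ("S129703-1", "S129701-2")]:
    constraints.append((pair, {(1, 1), (2, 2), (3, 3)}, TOP))
# unary constraints: rejecting a photograph costs its weight
for name, w in weights.items():
    constraints.append(((name,), {(0,)}, w))

def valuation(assignment):
    """Additive VCSP valuation: sum of the valuations of violated constraints."""
    total = 0
    for scope, forbidden, val in constraints:
        if tuple(assignment[v] for v in scope) in forbidden:
            total += val
    return min(total, TOP)  # ⊤ is absorbing

# rejecting every photograph is admissible and costs the sum of the weights
print(valuation({name: 0 for name in domains}))   # -> 12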

2 Exact methods

Exact methods are systematic tree search procedures. The root of the tree, the starting point of the search, is the empty assignment. At each node, the set of variables is partitioned into a set of instantiated variables and a set of uninstantiated variables. The children of a node correspond to all the possible extensions of the current assignment by instantiating a new variable. The leaves of the tree correspond to all the possible complete assignments. Variable instantiation ordering and value ordering can be used to guide the search. These methods are called exact because they are able to find an optimal solution, provided that no running time limit is set. To avoid producing and evaluating all the possible assignments, optimistic evaluations of the partial assignments are used:

Definition 1 A partial assignment valuation vp is said to be optimistic iff, for every partial assignment A and every complete extension A' of A over the set of variables, v(A') ≽ vp(A).

Consequently, the following property holds:

Property 1 Given an optimistic partial assignment valuation vp, a complete assignment A with valuation v(A) and a partial assignment A' with valuation vp(A'): if vp(A') ≽ v(A), then, for every complete extension A'' of A', v(A'') ≽ vp(A') ≽ v(A).

All methods based on Branch and Bound use this property to prune the search tree. It can easily be shown that the more realistic the partial assignment valuation is, the better the cuts are.

2.1 Strategies

2.1.1 Depth First Branch and Bound

The most frequently used algorithm is Depth First Branch and Bound, which can be viewed as an extension to the VCSP framework of the Backtrack algorithm widely used within the standard CSP framework [2]. Let us assume that the problem is to find an assignment with a minimal valuation, less than α_init and greater than or equal to β (we suppose that it is known by other means that no assignment with a valuation less than β exists). By default, α_init = ⊤ and β = ⊥. The mechanism consists in performing a depth first search to find a complete assignment with a valuation less than the current bound α. This bound, initialized to α_init, strictly decreases during search. Each time a complete assignment with a valuation less than the current bound is found, its valuation is used as the new bound. Each time a partial assignment with a valuation greater than or equal to the current bound is produced, a backtrack occurs (Property 1 is used to prune the search space). The algorithm stops when a complete assignment of valuation equal to β is found, or when no complete assignment of valuation less than the current bound can be found. Figure 1 describes this algorithm more precisely.

dfbb(p, α_init, β)
  ; search for an assignment of the variables of the problem p,
  ; of minimal valuation, less than α_init
  ; and greater than or equal to β; by default, α_init = ⊤ and β = ⊥
  let α = dfbb-variables(∅, variables(p), ⊥, α_init, β)
  if α = α_init then return failure else return α

dfbb-variables(A, V, α', α, β)
  ; A is the current assignment
  ; V is the set of uninstantiated variables
  ; α' is the valuation of the current assignment
  if V = ∅ then return α'
  else let v = variable-choice(V)
       return dfbb-variable(A, V, v, domain(v), α', α, β)

dfbb-variable(A, V, v, d, α', α, β)
  if d = ∅ then return α
  else let val = value-choice(d)
       let α'' = dfbb-value(A, V, v, val, α', α, β)
       if α'' = β then return β
       else return dfbb-variable(A, V, v, d − {val}, α', α'', β)

dfbb-value(A, V, v, val, α', α, β)
  let α'' = valuation(A, v, val, α')
  if α'' ≽ α then return α
  else return dfbb-variables(A ∪ {v = val}, V − {v}, α'', α, β)

Figure 1: Depth First Branch and Bound

This algorithm presents the following advantages:

- it only requires limited space (linear w.r.t. the number of variables);
- as soon as a first assignment with a valuation less than α_init is found, the algorithm behaves like an anytime algorithm: if interrupted, the best solution found so far can be returned, and its quality can only improve over time.

The main problem is that a depth first search can easily get stuck in a portion of the search space containing no optimal assignment, because of the first choices made during the search.
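For readers who prefer executable code, here is a minimal Python rendering of Figure 1 (a sketch only: naive static variable and value orderings, and the backward-checking valuation of section 2.2.1 as the optimistic evaluation; it reuses the (scope, forbidden tuples, valuation) constraint format of the toy-problem sketch above):

def vp_bc(assignment, constraints):
    """Backward-checking valuation: aggregate the valuations of the
    constraints whose variables are all instantiated and which are violated."""
    total = 0
    for scope, forbidden, val in constraints:
        if all(v in assignment for v in scope) and \
           tuple(assignment[v] for v in scope) in forbidden:
            total += val
    return total

def dfbb(variables, domains, constraints, alpha_init, beta=0):
    """Depth First Branch and Bound: best (valuation, assignment) with
    valuation < alpha_init, or None if no such assignment exists."""
    best = [alpha_init, None]

    def search(assignment, remaining):
        if not remaining:                     # complete assignment: new bound
            best[0], best[1] = vp_bc(assignment, constraints), dict(assignment)
            return best[0] <= beta            # lower bound reached: stop early
        v = remaining[0]                      # naive static variable ordering
        for val in domains[v]:                # naive value ordering
            assignment[v] = val
            if vp_bc(assignment, constraints) < best[0]:   # Property 1 cut
                if search(assignment, remaining[1:]):
                    return True
            del assignment[v]
        return False

    search({}, list(variables))
    return None if best[0] == alpha_init else (best[0], best[1])

On the toy problem sketched in section 1.2.2, dfbb(list(domains), domains, constraints, TOP) returns an optimal assignment together with its valuation.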

2.1.2 Pseudo Dynamic Search

The second algorithm is a generalization to the VCSP framework of a Spot5-specific algorithm developed by D. Blumstein and J.C. Agnese for finding optimal solutions. As it can be seen as a hybridization of Dynamic Programming and Branch and Bound, we called it Pseudo Dynamic Search. Given a problem p with n variables, the method, which assumes a static variable ordering, consists in performing n searches, each one solving, with the standard Depth First Branch and Bound algorithm, a subproblem of p limited to a subset of the variables. The i-th problem is limited to the i last variables. Each problem is solved using the same variable ordering, i.e. from variable (n − i + 1) to variable n. The optimal valuation is recorded, as well as the corresponding assignment. They are used when solving the next problems, to improve the valuation of the partial assignments and thus to provide better cuts. Figure 2 describes this algorithm.

pds(p, α_init)
  ; search for an assignment of the variables of the problem p
  ; with a minimal valuation less than α_init; by default, α_init = ⊤
  let β = pds-variables(∅, variables(p), α_init, ⊥)
  if β = α_init then return failure else return β

pds-variables(V, V', α_init, β)
  ; V ∪ V' = variables(p)
  ; β is the optimal valuation of the problem restricted to
  ; the variables in V, found in the previous search
  if V' = ∅ then return β
  else let v = first-variable(V')
       let β' = dfbb-variables(∅, V ∪ {v}, ⊥, α_init, β)
       if β' = α_init then return α_init
       else return pds-variables(V ∪ {v}, V' − {v}, α_init, β')

Figure 2: Pseudo Dynamic Search

This method, which can be surprising since it multiplies the number of searches by n, has proved to be very efficient. The main explanation lies in the quality of the valuation of the partial assignments provided by the previous searches. The loss of the anytime property of the standard Depth First Branch and Bound (one must wait until the n-th search to get a solution) is however an important drawback.
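Under the same assumptions as the previous sketch, the Pseudo Dynamic Search loop can be rendered as follows. This simplified version only reuses the optimal valuation of the previous subproblem as the lower bound β of the next search; the stronger vp_pds valuation of section 2.2.3 (adding vopt of the uninstantiated suffix to each partial valuation) is left out for brevity:

def pds(variables, domains, constraints, alpha_init):
    """Pseudo Dynamic Search: n successive DFBB searches over growing
    suffixes of a statically ordered variable list."""
    order = list(variables)
    beta, best = 0, {}
    for i in range(1, len(order) + 1):
        suffix = set(order[-i:])                 # the i last variables
        sub = [c for c in constraints            # subproblem limited to them
               if all(v in suffix for v in c[0])]
        result = dfbb(order[-i:], domains, sub, alpha_init, beta)
        if result is None:
            return None
        beta, best = result                      # optimal valuation so far
    return beta, best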

2.2 Valuation of a partial assignment

The efficiency of methods based on Branch and Bound mainly depends on the way partial assignments are evaluated. This evaluation must be optimistic and as realistic as possible.

2.2.1 Backward Checking

Within the VCSP framework, the first way to evaluate a partial assignment A is to aggregate the valuations of the constraints instantiated by A and not satisfied (a constraint is said to be instantiated when all the variables linked by it are instantiated):

vp_bc(A) = ⊛ { φ(c) | c ∈ C, c instantiated and not satisfied by A }

This evaluation is optimistic, since it only takes into account the constraints instantiated by A, but it is not very realistic when the number of uninstantiated constraints is high, i.e. at the beginning of the search.

2.2.2 Forward Checking

A way to improve this evaluation is to consider not only the instantiated constraints, but also the constraints for which all the variables but one are instantiated by A (for these, we obviously have to consider all the possible instantiations of the uninstantiated variable):

vp_fc(A) = vp_bc(A) ⊛ [ ⊛_{v ∈ V, v uninstantiated} min_{val ∈ domain(v)} ( vp_bc(A ∪ {v = val}) − vp_bc(A) ) ]

This evaluation is still optimistic. It is more realistic than Backward Checking (∀A, vp_fc(A) ≽ vp_bc(A)). However, it remains not very realistic at the beginning of the search. Forward Checking is also more costly in terms of the number of constraint checks at each node, but it is often observed that this cost is counterbalanced by more important pruning.
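In the additive case, the forward-checking valuation can be sketched on top of the vp_bc function from the DFBB sketch above (the aggregation is + and the "difference" is ordinary subtraction):

def vp_fc(assignment, variables, domains, constraints):
    """Forward checking: vp_bc plus, for each uninstantiated variable,
    the minimal valuation increase over its possible values."""
    base = vp_bc(assignment, constraints)
    total = base
    for v in variables:
        if v not in assignment:
            increases = []
            for val in domains[v]:
                assignment[v] = val
                increases.append(vp_bc(assignment, constraints) - base)
                del assignment[v]
            total += min(increases)
    return total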

2.2.3 Pseudo Dynamic Search

When the Pseudo Dynamic Search strategy is used, another valuation of the partial assignments can be computed, which takes into account the constraints linking instantiated variables and the ones linking uninstantiated variables, but ignores the ones linking instantiated and uninstantiated variables. Let V be the set of variables uninstantiated by A and vopt(V) be the optimal valuation of the problem limited to these variables (known from the previous searches):

vp_pds(A) = vp_bc(A) ⊛ vopt(V)

This valuation is optimistic, since it does not take into account the constraints linking instantiated and uninstantiated variables. It is more realistic than the previous two from the beginning of the search. If we compare this valuation with the one corresponding to Forward Checking, neither is systematically more realistic than the other. So both can be combined by using the max operator:

vp_pds-fc(A) = max(vp_pds(A), vp_fc(A))

2.3 Heuristics

Heuristics play a great role in this kind of search. They can be generic or specific (application dependent), static or dynamic. They can be used at three levels:

- Variable ordering: the first-fail principle is used; it consists in first instantiating the variables which will allow a failure to be detected sooner (no possible extension with a valuation less than the current bound α); some examples of generic heuristics: first choose the variables with the smallest domain, or those which maximize the number of constraints to check or the minimal valuation increase (when Forward Checking is used); some examples of specific ones: first choose the variables with the highest weight, or select them according to the chronological order of the corresponding photographs.
- Value ordering: the best-first principle is used; it consists in first choosing the values which will a priori lead to a solution; generic heuristics are few: first choose the values which minimize the valuation increase (when Forward Checking is used); some examples of specific heuristics: choose the special value 0 last, first choose the middle instrument for mono photographs (because stereo photographs use the others), or select values so as to balance the load over the three instruments.
- Constraint ordering: the order in which constraints are checked also follows the first-fail principle; it can be based on the constraint satisfiability (first the constraints with the lowest satisfiability) or on the constraint valuation (first the constraints with the highest valuation); but the role of this kind of heuristic is only to reduce the number of constraint checks; unlike variable and value orderings, it cannot reduce the size of the search tree.

From our experiments, it can be noted that:

- search is more sensitive to variable ordering than to value ordering: due to the small size of the domains, value ordering has little impact on search, and considering the special value 0 last is sufficient;
- the most interesting variable orderings are application dependent: choosing first the variables with the highest weight for the standard Depth First Branch and Bound, or using the chronological order for the Pseudo Dynamic Search to exploit the structure of the constraint graph (both are sketched below).
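As an illustration, the two specific variable orderings just mentioned can be written as simple choice functions (hypothetical names; the weight and rank tables are assumed precomputed):

def choose_by_weight(uninstantiated, weights):
    """Specific heuristic for DFBB: highest weight first."""
    return max(uninstantiated, key=lambda v: weights[v])

def choose_chronological(uninstantiated, rank):
    """Chronological ordering for PDS: rank maps each photograph to its
    position along the orbit, which structures the constraint graph."""
    return min(uninstantiated, key=lambda v: rank[v])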

3 Approximate methods

This section presents methods which aim at providing good solutions, but cannot prove optimality. They were originally developed by the CNES team and are dedicated to the Spot5 scheduling problem. Although they have been defined and implemented for this specific application, they are described within the VCSP framework to keep a unified presentation, and because their generalization to VCSPs is quite obvious.

3.1 Greedy Search

This algorithm works in two phases:

1. the first phase computes a feasible solution: trials are first heuristically sorted, then a solution is built by trying to insert each trial into the current solution, rejecting it if this is impossible;
2. the solution resulting from the first phase is then improved by a perturbation method based on an iterative inhibition of the selected trials: for each selected trial, it consists in rejecting it and computing a new schedule from this point (the portion of the schedule from the beginning up to the trial being kept); if a better solution is found, the trial is definitively rejected and the current solution updated; otherwise, it is definitively selected.

Figure 3 presents the translation of this algorithm within the VCSP framework. The procedure feasible, not detailed, performs the necessary constraint checks to decide whether adding the new instantiation {v = j} to the current assignment A is possible. This algorithm is an example of a combination of greedy search (first phase) and limited local search (second phase). The quality of the solutions found greatly depends on the sort performed at the beginning of the first phase. In the Spot5 problem framework, this sort uses some of the following criteria, whose aim is to maximize the solution quality and to limit conflicts:

1. first the trials with a high weight;
2. mono trials before stereo trials (because a stereo trial requires twice as many resources);
3. mono trials preferably assigned to the middle instrument (because stereo trials are realized with the front and rear instruments);
4. first the trials with a low data flow (to limit conflicts due to data flow and memory requirements);
5. first the trials in conflict with a small number of other trials;
6. chronological order.

In our implementation, several initial schedules (up to five) are computed, using different trial orderings. Each of them uses the trial weight as the most important criterion and the chronological order as the least important one. They only differ by the intermediate criteria, as shown in Table 5. A first version, called Greedy Algorithm, performs the improvement phase on the best schedule produced during the first phase. A second version, called Multi Greedy Algorithm, performs it on all of them. It is obviously more time consuming than the first one, but it sometimes succeeds in producing better quality solutions.

build-solution(A, V)
  for each variable v ∈ V
      Free = true; i = 1
      while Free
          j = i-th value of domain(v)
          if feasible(A, v, j)
          then A = A ∪ {v = j}; Free = false
          else i = i + 1
      end while
  end for
  return A

improve-solution(A, V)
  Part = ∅; Best = A; Free = V
  for each instantiation {v = j} ∈ A
      if j ≠ 0
      then New = build-solution(Part ∪ {v = 0}, V)
           if valuation(New) ≺ valuation(Best)
           then Best = New; k = 0
           else k = j
      else k = 0
      end if
      Free = Free − {v}; Part = Part ∪ {v = k}
  end for
  return Best

Figure 3: Greedy Search

Ordering   Criteria
1          [1, 6]
2          [1, 2, 6]
3          [1, 4, 6]
4          [1, 3, 6]
5          [1, 5, 6]

Table 5: Combinations of criteria used by the different orderings
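A compact Python rendering of Figure 3, under the same conventions as the previous sketches (feasible(A, v, j) stands for the hard-constraint checks and is left abstract; the value 0, i.e. rejection, is assumed always feasible):

def build_solution(assignment, variables, domains, feasible):
    """First phase: try to insert each trial; reject it (value 0) otherwise."""
    for v in variables:
        if v in assignment:
            continue
        # real values first, 0 last (a stand-in for the heuristic sort)
        for j in sorted(domains[v], reverse=True):
            if feasible(assignment, v, j):
                assignment[v] = j
                break
    return assignment

def improve_solution(assignment, variables, domains, feasible, valuation):
    """Second phase: try to reject each selected trial in turn and rebuild
    the rest of the schedule; keep the change only if it improves."""
    part, best = {}, dict(assignment)
    for v, j in list(assignment.items()):
        k = j
        if j != 0:
            new = build_solution(dict(part, **{v: 0}), variables, domains, feasible)
            if valuation(new) < valuation(best):   # lower valuation = better
                best, k = new, 0
        part[v] = k
    return best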

3.2 Tabu Search

Tabu search (see [5] for a detailed presentation), like any other local search method, is best introduced using a geometric analogy of the search process. Each assignment of all the problem variables can be considered as a point in an n-dimensional space (the search space, n being the number of variables). Each point p has a valuation v(p) characterizing the quality of the corresponding solution (v(p) = +∞ means that p does not represent a feasible solution). The search process can then be viewed as a moving process from point to point, trying to find points with a better valuation than the best point encountered so far. An initial point can always be found: either the void solution (all the variables instantiated to the value 0), or any good solution such as the one provided, for example, by the greedy algorithm. Completely arbitrary moves (changing several variable instantiations at the same time) are not considered, only the simplest ones (changing one variable instantiation). More precisely, we only consider two types of moves:

- adding a photograph to the current solution: instantiation changed from 0 to i;
- removing a photograph from the current solution: instantiation changed from i to 0.

This limitation means that many points in the search space cannot be directly reached from a given point p. The set of points that can be directly reached from p via feasible moves is called its neighborhood n(p). A move from a point p to a point p' can be given a value v(move(p, p')) = v(p') − v(p). Basically, the new point selected from the point p is the one in n(p) which has the best value. Note that, when no point better than p exists in n(p), a worse one can be selected. This allows the search to escape from local optima. Endless cycling is avoided by forbidding the reverse move, which becomes tabu, for a certain time after the move (whence the name of the method). In some circumstances, this tabu restriction can be overridden and a tabu move selected. The main example of such an aspiration is when a tabu move leads to a better point than the best one found so far. To keep a history of the search, the following data are recorded for each photograph:

 this description is a strong simpli cation of the algorithms really implemented; for example, the

procedure select-move does not really consider at each iteration all the moves in n(p), but only a limited subset of them;  the duration, during which a move is tabu, is not xed, but computed by generating a random number within a given range, dependent of the size of the problem : for this speci c application, ranges of [3,5] for problems of size less than 500 trials and [7,10] for problems of size greater than 500 trials have been found adequate;  usual strategies in the framework of tabu search, like path relinking and strategic oscillation, have not yet been implemented, despite the fact they may improve the eciency of the search.

select-move(p, i, c)
  ; returns the best move from the point p at iteration i
  ; c = 0: no penalty applied, c = 1: penalty applied
  bestval = −∞; v = v(p)
  for each move ∈ n(p) do
      if feasible(move) then
          δ = v(move)
          if tabu(move)
          then if v + δ ≤ bestval then δ = −∞   ; forbidden move
          else δ = δ + penalty(c, i, move)
          if v + δ > bestval then bestmove = move; bestval = v + δ
  end for
  return bestmove

diversification(p, imax)
  ; search for imax iterations from the point p
  i = 0
  repeat  ; search without penalty
      move = select-move(p, i, 0)
      p = apply-move(p, i, move)
      i = i + 1
  until (no improvement for the last imax/100 iterations)
  while i < imax  ; alternate search with and without penalty applied
      move = select-move(p, i, 0)
      while v(move) ≥ 0  ; search for a local optimum without penalty
          p = apply-move(p, i, move)
          i = i + 1
          move = select-move(p, i, 0)
      end while
      move = select-move(p, i, 1)
      while v(move) > 0  ; search for a local optimum with penalty
          p = apply-move(p, i, move)
          i = i + 1
          move = select-move(p, i, 1)
      end while
  end while

tabu-search(V, imax)
  ; main procedure
  p = init-solution(V); i = 0
  while i ≤ imax
      diversification(p, imax/10)
      p = restore-elite-solution
      reset-short-mid-term-memory
  end while

Figure 4: Tabu Search
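Finally, a heavily simplified Python reduction of Figure 4, keeping only the short-term memory and the aspiration criterion (penalties, elite solutions and restarts are omitted; admissible and valuation are the hard-constraint check and the VCSP valuation of the previous sketches, and the tabu tenure is drawn at random as described above):

import random

def tabu_search(variables, domains, admissible, valuation, max_iter, tenure=(3, 5)):
    """One add/remove move per iteration; the reverse move is tabu for a
    random number of iterations; a tabu move is allowed only if it beats
    the best solution found so far (aspiration)."""
    current = {v: 0 for v in variables}       # start from the void solution
    best = dict(current)
    tabu_until = {}                           # (variable, value) -> iteration
    for it in range(max_iter):
        candidates = []
        for v in variables:
            values = [0] if current[v] != 0 else [j for j in domains[v] if j != 0]
            for j in values:                  # add (0 -> j) or remove (j -> 0)
                move = dict(current, **{v: j})
                if not admissible(move):
                    continue
                val = valuation(move)
                if tabu_until.get((v, j), -1) >= it and val >= valuation(best):
                    continue                  # tabu, and no aspiration
                candidates.append((val, v, j, move))
        if not candidates:
            break
        val, v, j, move = min(candidates)     # best, possibly worsening, move
        tabu_until[(v, current[v])] = it + random.randint(*tenure)
        current = move
        if val < valuation(best):
            best = dict(current)
    return best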

4 Results

4.1 Data

The set of data provided by CNES [6] involves 498 scheduling problems. They correspond to:

- 384 problems, called "without limitation", corresponding to scheduling problems limited to one orbit, where the recording capacity constraint is ignored:
  - 362 basic problems generated with the simulator LOSICORDOF, numbered from 1 to 362;
  - 13 problems, built from the biggest of the 362 previous ones (number 11) by reducing the number of photographs, numbered from 401 to 413;
  - 9 problems, built from the same problem by reducing the number of conflicts between photographs, numbered from 501 to 509;
- 114 problems, called "with limitation", corresponding to problems over several consecutive orbits, between two data dumps, where the recording capacity constraint cannot be ignored:
  - 101 basic problems generated with the simulator, numbered from 1000 to 1101;
  - 6 problems, built from the biggest of the 101 previous ones (number 1021) by reducing the number of photographs, numbered from 1401 to 1406;
  - 7 problems, built from the same problem by reducing the number of conflicts between photographs, numbered from 1501 to 1507.

Because of the distribution of the photographs along the orbit, some problems can be decomposed into independent subproblems. This property is used to solve the subproblems in sequence.
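The decomposition into independent subproblems amounts to computing the connected components of the constraint graph; a sketch, using the constraint format of the previous code fragments:

def independent_subproblems(variables, constraints):
    """Group the variables into connected components of the constraint graph;
    each component can then be scheduled separately."""
    parent = {v: v for v in variables}
    def find(v):                              # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for scope, _forbidden, _valuation in constraints:
        root = find(scope[0])
        for v in scope[1:]:
            parent[find(v)] = root
    components = {}
    for v in variables:
        components.setdefault(find(v), []).append(v)
    return list(components.values())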

4.2 Partial results

To experiment with and compare the different methods, 20 representative problems have been selected:

- 13 problems without limitation: 54, 29, 42, 28, 5, 404, 408, 412, 11, 503, 505, 507 and 509;
- 7 problems with limitation: 1401, 1403, 1405, 1021, 1502, 1504 and 1506.

Tables 6 and 7 present the results obtained on these problems by the four following methods:

- DFBB: the standard Depth First Branch and Bound, with a time limit of 600 seconds per subproblem;
- PDS: the Pseudo Dynamic Search, without time limit;
- GR: the Multi Greedy Search;
- TS: the Tabu Search.

Pb is the problem number and Nv is the corresponding number of problem variables (number of photographs). For each problem-method pair, the first number is the best solution quality (the sum of the weights of the selected photographs; * indicates that optimality has been proved) and the second number is the cpu time in seconds. The results in column DFBB and the result for problem 5 in column PDS correspond to an implementation of the corresponding algorithms in Lucid Common Lisp 4.1.1 on a Sparc 1000 workstation; the other results correspond to an implementation in Fortran 77 running on a Sparc 20/50 workstation.

Pb   Nv   DFBB          PDS            GR            TS
54   67   70*      32   70*        3   69        4   70      253
29   82   12032*   12   12032*     1   12032     1   12032     1
42   190  104067 1201   108067*   14   108067   13   108067  634
28   230  53053   612   56053*   415   50053     4   56053  1416
5    309  112    1213   114*    1702   114      43   114     293
404  100  48      600   49*        2   47        3   49      237
408  200  3076    603   3082*    184   3078     19   3082    279
412  300  15078   611   16102*   255   16097    43   16101  1166
11   364  21096   646   22120*   419   22112    68   22116  1433
503  143  8094    611   9096*     38   9093     22   9096    272
505  240  12088   603   13100*   108   12102    39   13100  1269
507  311  13101   620   15137*   303   15129    54   15136  1385
509  348  19104   638   19125*   382   19116    63   19123  1384

Table 6: Without limitation problems

Pb    Nv    DFBB          PDS          GR            TS
1401  488   165058   648  -        -   167060    93  174058   846
1403  665   165133  1867  -        -   167143   279  174137  1324
1405  855   165154  1342  -        -   167182   692  174174  1574
1021  1057  165221  1988  -        -   167249  1241  174238  2197
1502  209   60155    601  61158*  13   61158     60  61158    454
1504  605   115228  1808  -        -   120239   405  124238  1011
1506  940   153226  1906  -        -   163244   897  165244  1945

Table 7: With limitation problems

Some comments:

- Concerning the problems without limitation:
  - PDS produces an optimal solution and proves its optimality on all the problems;
  - DFBB does the same on only two of them; on the others, the produced solutions are not very good;
  - GR is the fastest algorithm; it seldom produces an optimal solution, but the solutions it produces are often better than those produced by DFBB;
  - TS produces optimal or near-optimal solutions on all the problems, obviously without any proof of optimality; but it often takes more time than PDS.
- Concerning the problems with limitation:
  - PDS terminates on only one problem;
  - TS provides the best solutions and DFBB the worst ones.

4.3 Global results

Results obtained on the whole data set show the efficiency of both Pseudo Dynamic Search and Tabu Search.

4.3.1 Efficiency of Pseudo Dynamic Search

For optimality proof, the Pseudo Dynamic Search outranks the standard Depth First Branch and Bound. Its performance is summarized in Table 8 w.r.t. the size of the problem: the class i-j is the set of problems with a number of variables Nv such that i ≤ Nv ≤ j, Npb is the number of problems in the class, and Nopt the number of problems in the class optimally solved by PDS.

Class      Npb  Nopt  %
1-100      315  313   99.4
101-200    74   67    90.5
201-300    47   29    61.7
301-400    24   18    75.0
401-500    13   0     0.0
501-600    6    0     0.0
601-700    5    0     0.0
701-800    3    0     0.0
801-900    4    0     0.0
901-1000   4    0     0.0
1001-1100  3    0     0.0
Total      498  427   85.7

Table 8: Efficiency of Pseudo Dynamic Search

Globally, PDS optimally solves 85.7% of the problems (90.8% of the subset without limitation, 68.4% of the subset with limitation). But when problems become large (Nv > 400), efficiency falls. It should be noted that the main reason is not the number of variables itself, but the presence of the high-arity recording capacity constraint (the large problems are precisely those where the recording capacity limitation is present).

4.3.2 Efficiency of Tabu Search

Starting from the solution provided by the Greedy algorithm, the Tabu Search generally succeeds in substantially improving it: on 32% of the problems without limitation and on 56% of the problems with limitation. It often provides optimal solutions or one of the best known solutions: on 90% of the problems, without and with limitation (the best known solutions are either the optimal ones produced by exact methods like PDS, or the ones produced by other versions of TS, since several versions of TS, corresponding to different parameter settings, may provide different solutions).

5 Conclusion

Exact methods, like Depth First Branch and Bound or Pseudo Dynamic Search, have the advantage of providing optimal solutions and of proving this optimality, when no time limit is set. They succeed within a reasonable time on small and medium size problems, but fail on large ones or in the presence of high-arity constraints. When they fail, the systematic order they use to explore the search space prevents them from producing very good quality solutions.

Approximate methods, like Greedy Search or Tabu Search, have the advantage of providing good quality solutions within a limited time, thanks to their opportunistic way of exploring the search space. But they have the disadvantage of providing no guarantee about this quality, and of sometimes wasting a lot of time trying to improve solutions that are already optimal. They should be used when it is very likely, according to the problem characteristics (number of variables, domain sizes, constraint arities, ...), that the exact ones will fail.

But if this competition between exact and inexact methods is technically exciting, it might be very fruitful, from a practical point of view, to consider a cooperation between them: for example, running an efficient exact method and an efficient inexact one in parallel on the same problem, and using the partial results obtained by each of them to guide or cut the search of the other one.

References

[1] J.C. Agnese. Ordonnancement SPOT5 : définition du problème simplifié de l'ordonnancement des prises de vue de SPOT5 pour l'action de R&D Intersolve. Technical Report S5-NT-0-379-CN, 94-CT/TI/MS/MN/419, CNES, 1994.

[2] T. Schiex. Préférences et incertitudes dans les problèmes de satisfaction de contraintes. Technical Report 2/7899 DERA, CERT, 1994.

[3] T. Schiex, H. Fargier, and G. Verfaillie. Valued Constraint Satisfaction Problems: Hard and Easy Problems. In Proc. of IJCAI-95, Montréal, Canada, 1995.

[4] G. Verfaillie and E. Bensana. Évaluation d'algorithmes sur le problème de programmation journalière des prises de vue du satellite d'observation SPOT5 (étude CNES Intersolve, lot 1). Technical Report 1/3544/DERI, CERT, 1995.

[5] F. Glover and M. Laguna. Modern Heuristic Techniques for Combinatorial Problems, chapter Tabu Search. 1992.

[6] J.C. Agnese. Ordonnancement SPOT5 : fourniture de fichiers de données pour l'action de R&D Intersolve. Technical Report 94-CT/TI/MS/MN/467, CNES, 1994.