Acta Mechanica Sinica (2006) 22:307–314 DOI 10.1007/s10409-006-0014-9

RESEARCH PAPER

Multi-objective optimization strategies using adjoint method and game theory in aerodynamics

Zhili Tang

Received: 26 April 2005 / Revised: 17 March 2006 / Accepted: 3 April 2006 / Published online: 27 June 2006 © Springer-Verlag 2006

Abstract  There are currently three different game strategies originating in economics: (1) cooperative games (Pareto front), (2) competitive games (Nash game) and (3) hierarchical games (Stackelberg game). Each game achieves a different equilibrium with different performance, and the players play different roles in each game. Here, we introduce the game concept into aerodynamic design and combine it with the adjoint method to solve multi-criteria aerodynamic optimization problems. The performance distinction between the equilibria of these three game strategies is investigated by numerical experiments. We compute the Pareto front and the Nash and Stackelberg equilibria of the same optimization problem, with two conflicting and hierarchical targets, under different parameterizations, using a deterministic optimization method. The numerical results show clearly that the Nash and Stackelberg equilibrium solutions are inferior to the Pareto front. Non-dominated Pareto front solutions are obtained; however, the CPU cost of capturing a set of solutions makes the Pareto front an expensive tool for the designer.

Keywords  Multi-objective optimization · Pareto front · Nash game · Stackelberg game · Adjoint method

The project was supported by the National Natural Science Foundation of China (10372040) and the Scientific Research Foundation (SRF) for Returned Overseas Chinese Scholars (ROCS) (2003-091). The English text was polished by Yunming Chen.

Z. Tang
College of Aerospace Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
e-mail: [email protected]

1 Introduction

Multi-objective optimization is gaining importance in aeronautics as well as in other areas. In the literature, contributions to single-point design optimization are abundant. There are currently three different game strategies originating in economics for treating multi-objective optimization problems: (1) cooperative games (Pareto front [1, 2]), (2) competitive games (Nash game [3, 4]) and (3) hierarchical games (Stackelberg game [5, 6]). Each game achieves a different equilibrium with different performance, and the players play different roles in each game. In this paper, we introduce the game concept into aerodynamic design and combine it with the adjoint method to solve multi-criteria aerodynamic optimization problems. The performance distinction between the equilibrium solutions of the above three game strategies is investigated by numerical experiments. We solve the two-objective inverse problem in aerodynamics, a high-lift airfoil in the subsonic regime and a low-drag airfoil in the transonic regime [7], with all these game strategies, using a deterministic optimization method (the adjoint method [8, 9]), and obtain the equilibria of the games. The environment is modelled by the Euler equations, and the flows around the lifting airfoils are analyzed by Euler solvers [10].

The Pareto front is very useful to the designer because it represents a set which is optimal in the sense that no improvement can be achieved in one objective component that does not lead to degradation of at least one of the remaining components [1, 2, 7]. In Pareto front computing, the deterministic optimization method is implemented by incorporating weighting constants to reduce the multiple objective functions to a single objective function.


A different choice of weighting constants results in a different optimum shape. The optimum shapes should not dominate each other, and should therefore lie on the Pareto front, where no improvement can be achieved in one objective component that does not lead to degradation of at least one of the remaining components. Therefore, by varying the weighting constants, it is possible to compute the Pareto front [11]. Finally, we compare the performance of the equilibrium solutions of the three games. The numerical results show clearly that the performance of all the non-cooperative equilibria is inferior to the Pareto front. This is due to the fact that the best non-dominated solutions are obtained by the cooperative strategy; however, the CPU cost of capturing a set of solutions makes the Pareto front an expensive optimizer for the designer.

2 Game strategies for multi-objective optimization

A general multi-objective optimization problem consists of a number of objectives to be optimized simultaneously (for a theoretical background, see for example the book of Cohon [1]). Such a problem can be stated as follows:

\min \ (\text{or } \max) \quad f_i(x), \qquad i = 1, \ldots, N,  (1)

where the f_i are the cost functions, N is the number of objectives, and x is a vector whose p components are the design or decision variables.

2.1 Cooperative games: Pareto optimal front

In a multi-objective optimization problem there is no unique optimal solution but a whole set of potential solutions, since in general no solution is optimal with respect to all criteria simultaneously; instead, one identifies a set of non-dominated solutions, referred to as the Pareto front [1, 2, 7]. In a minimization problem, a vector x_1 is said to be partially less than another vector x_2 when

f_i(x_1) \le f_i(x_2) \quad \forall i \in [1, N], \quad \text{and there exists at least one } i \text{ such that } f_i(x_1) < f_i(x_2).  (2)

We then say that solution x_1 dominates solution x_2; the set of non-dominated solutions is known as the Pareto optimal front.
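As a concrete illustration of the dominance test (2), the following minimal Python sketch (not from the paper; the sample objective values are hypothetical) filters a finite set of candidate designs down to its non-dominated subset:

```python
# Illustrative sketch: the dominance test of Eq. (2) and a brute-force extraction
# of the non-dominated (Pareto) set from a finite sample of candidate designs.
# All objectives are assumed to be minimized.
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa dominates fb in the sense of Eq. (2)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return np.all(fa <= fb) and np.any(fa < fb)

def pareto_front(F):
    """Indices of the non-dominated rows of F (one row = one candidate design)."""
    F = np.asarray(F)
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]

# Example: two conflicting objectives sampled at a few hypothetical designs.
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(pareto_front(F))   # -> [0, 1, 3]; design 2 is dominated by design 1
```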

2.2 Competitive games: Nash game

Nash optima define a non-cooperative multiple-objective optimization approach first proposed by Nash [3]. For an optimization problem with N objectives defined in formulation (1), let X_i be the search space for the i-th criterion, X_i \subset X = X_1 \otimes \cdots \otimes X_i \otimes \cdots \otimes X_N. A strategy (\bar x_1, \bar x_2, \ldots, \bar x_N) \in X is said to be a Nash equilibrium if and only if

f_i(\bar x_1, \ldots, \bar x_i, \ldots, \bar x_N) = \inf_{x_i \in X_i} f_i(\bar x_1, \ldots, \bar x_{i-1}, x_i, \bar x_{i+1}, \ldots, \bar x_N), \qquad i = 1, 2, \ldots, N.  (3)

Alternatively, \bar x = (\bar x_1, \bar x_2, \ldots, \bar x_N) is a Nash equilibrium if

f_i(\bar x_1, \ldots, \bar x_{i-1}, \bar x_i, \bar x_{i+1}, \ldots, \bar x_N) \le f_i(\bar x_1, \ldots, \bar x_{i-1}, x_i, \bar x_{i+1}, \ldots, \bar x_N) \qquad \forall i, \ \forall x_i \in X_i.  (4)

This alternative formulation of the definition points to a (not necessarily efficient) method of finding a Nash equilibrium: compute the best choice of each player at the present step, then exchange the decisions and repeat the decision making [12].

2.3 Hierarchical games: Stackelberg game

The Stackelberg strategy can be summarized as follows: one player acts as a leader, and all the other agents (the followers) react independently and selfishly with respect to the leader's strategy [5, 6]. Mathematically, suppose f_1 is the leader and f_i, i = 2, \ldots, N, are the followers. At the Stackelberg equilibrium the leader optimizes his own design variables, while the other design variables come from the best solution of the followers (for example, a Nash equilibrium among the followers):

\min_{x_1 \in X_1} f_1(x_1, \bar x_2, \ldots, \bar x_i, \ldots, \bar x_N),  (5)

where (\bar x_2, \ldots, \bar x_i, \ldots, \bar x_N) is the Nash equilibrium of the followers, that is,

f_i(x_1, \bar x_2, \ldots, \bar x_i, \ldots, \bar x_N) = \inf_{x_i \in X_i} f_i(x_1, \bar x_2, \ldots, \bar x_{i-1}, x_i, \bar x_{i+1}, \ldots, \bar x_N), \qquad i = 2, \ldots, N.  (6)
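To make the two definitions concrete, the following sketch (my own toy example, not the aerodynamic problem of this paper) computes the Nash equilibrium of Eqs. (3)–(4) by best-response iteration and the Stackelberg equilibrium of Eqs. (5)–(6) by letting the leader anticipate the follower's best response. The quadratic objectives f1, f2 and the variable split are assumptions made purely for illustration.

```python
# Illustrative sketch: Nash and Stackelberg equilibria of a toy two-player game
# with coupled quadratic objectives (hypothetical; only the structure of
# Eqs. (3)-(6) is reproduced).
import numpy as np

def f1(x1, x2):            # player 1's criterion, controlled through x1
    return (x1 - 1.0) ** 2 + 0.5 * (x1 - x2) ** 2

def f2(x1, x2):            # player 2's criterion, controlled through x2
    return (x2 + 1.0) ** 2 + 0.5 * (x1 - x2) ** 2

def best_response_1(x2):   # analytic argmin of f1 over x1 for fixed x2
    return (2.0 + x2) / 3.0

def best_response_2(x1):   # analytic argmin of f2 over x2 for fixed x1
    return (x1 - 2.0) / 3.0

# Nash equilibrium, Eq. (3): iterate best responses until a fixed point is reached.
x1, x2 = 0.0, 0.0
for _ in range(100):
    x1_new, x2_new = best_response_1(x2), best_response_2(x1)
    if abs(x1_new - x1) + abs(x2_new - x2) < 1e-12:
        break
    x1, x2 = x1_new, x2_new
print("Nash equilibrium        :", (x1, x2))        # approx (0.5, -0.5)

# Stackelberg equilibrium, Eqs. (5)-(6): player 1 (leader) minimizes f1 knowing
# that player 2 (follower) always replies with its best response.
grid = np.linspace(-2.0, 2.0, 40001)                # simple 1-D search for the leader
leader_obj = f1(grid, best_response_2(grid))
x1_s = grid[np.argmin(leader_obj)]
x2_s = best_response_2(x1_s)
print("Stackelberg equilibrium :", (x1_s, x2_s))    # approx (0.636, -0.455)
```

Note that the two equilibria differ: anticipating the follower's reaction lets the leader do better on his own criterion than at the Nash point, which is the essence of the hierarchy.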

3 Adjoint method for aerodynamic design

The basic optimizer in this paper is a deterministic optimization method in which the gradient is computed by solving the adjoint equations [8–10]. The governing equations for inviscid compressible flow are the two-dimensional Euler equations [10]. Suppose that it is desired to achieve a specified pressure distribution p_d on the airfoil surface. Introduce the cost functional


I = \frac{1}{2} \int_{\Gamma_c} (p - p_d)^2 \,\mathrm{d}s = \frac{1}{2} \left\langle (p - p_d)^2, 1 \right\rangle_{\Gamma_c}.  (7)

According to control theory [9, 13], the adjoint equations and the gradient computation are

\frac{\partial \psi}{\partial t} - C_1^{\mathrm T} \frac{\partial \psi}{\partial \xi} - C_2^{\mathrm T} \frac{\partial \psi}{\partial \eta} = 0 \ \text{in } \Omega, \qquad (A^{\mathrm T} n_x + B^{\mathrm T} n_y)\,\psi = 0 \ \text{on } \Gamma_B, \qquad \psi_2 n_x + \psi_3 n_y = -(p - p_d) \ \text{on } \Gamma_c,  (8)

-\mathrm{Grad}^{\mathrm T} = \frac{1}{\delta b} \left( \int_{\Gamma_c} \psi^{\mathrm T} \, \delta \bar G_\xi \, \mathrm{d}\xi - \iint_{\Omega} \psi^{\mathrm T} \, \delta Res \, \mathrm{d}\xi \, \mathrm{d}\eta \right),  (9)

\delta Res = \frac{\partial}{\partial \xi} \left[ \delta\!\left(\frac{\xi_x}{J}\right) f + \delta\!\left(\frac{\xi_y}{J}\right) g \right] + \frac{\partial}{\partial \eta} \left[ \delta\!\left(\frac{\eta_x}{J}\right) f + \delta\!\left(\frac{\eta_y}{J}\right) g \right], \qquad \delta \bar G_\xi = \delta\!\left(\frac{\eta_y}{J}\right) \bar f + \delta\!\left(\frac{\eta_x}{J}\right) \bar g,  (10)

where \Omega is the flow-field domain, and \Gamma_B and \Gamma_c are the far-field and solid-wall boundaries of the domain, respectively. Once the parameterization of the airfoil is chosen and the gradient is established, the airfoil can be modified in the direction of the negative gradient.
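The same adjoint structure can be illustrated on a small discrete model. The sketch below is not the Euler/adjoint solver of Refs. [8–10]; it assumes a linear state equation A(b)u = f and a quadratic cost, computes the full design gradient with a single extra (adjoint) solve, and verifies it against finite differences:

```python
# Illustrative sketch of the discrete adjoint recipe behind Eqs. (8)-(10) on a
# small, hypothetical linear model:
#   state equation :  A(b) u = f          (A depends on the design variables b)
#   cost           :  I(u)  = 0.5 * ||u - u_d||^2    (analogue of Eq. (7))
# One adjoint solve, A^T psi = dI/du, gives the whole gradient dI/db at the cost
# of one extra linear solve, independently of the number of design variables.
import numpy as np

n, m = 5, 3                                   # state size, number of design variables
rng = np.random.default_rng(0)
A0 = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
Ak = [0.05 * rng.standard_normal((n, n)) for _ in range(m)]   # dA/db_k (constant here)
f = rng.standard_normal(n)
u_d = rng.standard_normal(n)                  # target state (analogue of p_d)

def A(b):
    return A0 + sum(bk * Mk for bk, Mk in zip(b, Ak))

def cost(b):
    u = np.linalg.solve(A(b), f)
    return 0.5 * np.sum((u - u_d) ** 2)

def adjoint_gradient(b):
    u = np.linalg.solve(A(b), f)              # state solve
    psi = np.linalg.solve(A(b).T, u - u_d)    # adjoint solve, A^T psi = dI/du
    # dI/db_k = -psi^T (dA/db_k) u    (f does not depend on b in this model)
    return np.array([-psi @ (Mk @ u) for Mk in Ak])

b = np.zeros(m)
g_adj = adjoint_gradient(b)
eps = 1e-6                                    # central finite-difference check
g_fd = np.array([(cost(b + eps * e) - cost(b - eps * e)) / (2 * eps) for e in np.eye(m)])
print(np.max(np.abs(g_adj - g_fd)))           # should be very small (~1e-7 or less)
```

The point of the check is structural: however many design variables there are, the gradient costs one state solve plus one adjoint solve, which is what makes the adjoint method attractive for shape design.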

4 Combination of game strategies with deterministic optimization method for multi-objective design

Deterministic optimization methods are mostly used in single-objective design problems. Evolutionary algorithms combined with the Pareto front concept are increasingly used instead for practical multi-objective optimization problems [7, 14, 15], but they are rather time consuming. Here we explain how the deterministic optimization method can be used, together with the above three game strategies, to treat the multi-objective design problem efficiently.

4.1 Pareto front capture by deterministic optimization method

Herein, the deterministic optimization method is implemented by incorporating weighting constants to reduce the multiple objective functions to a single objective function. A different choice of weighting constants results in a different optimum shape. The optimum shapes should not dominate each other, and should therefore lie on the Pareto front, where no improvement can be achieved in one objective component that does not lead to degradation of at least one of the remaining components. Therefore, by varying the weighting constants, it is possible to compute the Pareto front [11]. The Pareto front of the multi-objective optimization problem defined in Eq. (1) can thus be obtained by introducing a set of weighting constants and solving the single-objective problem

\min_{x} f(x) = \sum_{i=1}^{N} \lambda_i f_i(x), \qquad \sum_{i=1}^{N} \lambda_i = 1, \quad \lambda_i \in [0, 1] \ \text{for } i = 1, 2, \ldots, N.  (11)

For a given set of weighting constants, the minimizer of f(x) lies on the Pareto front of problem (1). Unfortunately, the deterministic optimization method can only capture convex, and some concave, parts of the Pareto front in this way [13].
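For illustration, the weighted-sum procedure of Eq. (11) can be sketched on a hypothetical pair of quadratic objectives; each weight set is minimized by steepest descent (standing in for the adjoint-driven optimizer) and yields one point of the front:

```python
# Illustrative sketch: tracing a convex Pareto front by sweeping the weights of
# Eq. (11) and minimizing the weighted sum by gradient descent.  The two
# objectives are hypothetical quadratics, not the aerodynamic criteria.
import numpy as np

def f1(x):  return np.sum((x - 1.0) ** 2)     # pulls the design towards x = (1, 1)
def f2(x):  return np.sum((x + 1.0) ** 2)     # pulls the design towards x = (-1, -1)
def g1(x):  return 2.0 * (x - 1.0)            # gradient of f1
def g2(x):  return 2.0 * (x + 1.0)            # gradient of f2

front = []
for lam in np.linspace(0.0, 1.0, 11):         # weights (lambda, 1 - lambda) of Eq. (11)
    x = np.zeros(2)
    for _ in range(200):                      # steepest descent on the weighted sum
        x -= 0.1 * (lam * g1(x) + (1.0 - lam) * g2(x))
    front.append((f1(x), f2(x)))              # one non-dominated point per weight set

for p in front:
    print("f1 = %.3f   f2 = %.3f" % p)
```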

4.2 Combination of Nash equilibrium with adjoint method

We will combine the adjoint method with the Nash strategy to give a complete description of how to build the Nash equilibrium between the confronted criteria in multi-objective aerodynamic optimization. Here, every player is an optimizer confronted with the others. First, the design variables are split into several subsets, the number of which is equal to the number of targets, and each subset is associated with a player; the splitting of the design variables should be guided by physical considerations of the aerodynamic problem. The design targets are then allocated to the players. According to the definitions in Eqs. (3) and (4), each player optimizes his own criterion by modifying his own subset of design variables while keeping the others' subsets unchanged, and then exchanges symmetric information with the others at regular intervals [16].

Let us consider N players optimizing a set of N objective functions (f_1, f_2, \ldots, f_N). The optimization variables are distributed among the players, in such a way that each player handles a subset of the set of optimization variables. Let (x_1, \ldots, x_N) be the optimization variables (each x_i can be a vector or a scalar), where x \in X, X = X_1 \otimes X_2 \otimes \cdots \otimes X_N and x_i \in X_i. We further assume, for convenience, that each of the targets is a minimization problem. Player i is responsible for f_i by modifying x_i, so the design problems can be stated as follows:

\text{Player } i: \quad \min_{x_i \in X_i} f_i(x_1, x_2, \ldots, x_N), \qquad i = 1, 2, \ldots, N,  (12)


where x_i is the free design variable of cost function f_i, and all x_k, k \ne i, are fixed for player i and come from the result of player k. The Nash/adjoint procedure then works from a common starting point, say x^0 = (x_1^0, \ldots, x_N^0). The first player optimizes x_1 using criterion f_1 while the other variables are fixed by the other players; the second player optimizes x_2 using criterion f_2 while the other variables are fixed by the other players, and so on. All the players send each other the information on their best result after every Nash design cycle. Say the starting point at step m is x^{m-1} = (x_1^{m-1}, \ldots, x_N^{m-1}), where x_i^{m-1} is the best design found by player i at step m-1. Then player i optimizes x_i starting from x_i^{m-1}, with the other variables frozen at x_k^{m-1}, k = 1, 2, \ldots, N, k \ne i, and the best solution of player i at step m satisfies

f_i(x_1^{m-1}, \ldots, x_{i-1}^{m-1}, x_i^m, x_{i+1}^{m-1}, \ldots, x_N^{m-1}) = \inf_{x_i \in X_i} f_i(x_1^{m-1}, \ldots, x_{i-1}^{m-1}, x_i, x_{i+1}^{m-1}, \ldots, x_N^{m-1}).  (13)

After this optimization process, each player sends his best solution to form the best global solution at step m, i.e. x^m = (x_1^m, \ldots, x_N^m). A Nash equilibrium is reached when no player can further improve his criterion.
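A minimal sketch of this design cycle, Eq. (13), is given below; the two quadratic objectives, the splitting of the design vector and the finite-difference gradient (standing in for the adjoint gradient) are assumptions made only for illustration:

```python
# Illustrative sketch of the Nash design cycle, Eq. (13): two players share one
# design vector x, player 1 owning x[:2] and player 2 owning x[2:].  The
# quadratic objectives are hypothetical stand-ins for the adjoint-based criteria.
import numpy as np

def f1(x):  return np.sum((x[:2] - 1.0) ** 2) + 0.1 * np.sum(x[2:] ** 2)
def f2(x):  return np.sum((x[2:] + 1.0) ** 2) + 0.1 * np.sum(x[:2] ** 2)

def grad(f, x, h=1e-6):                     # finite-difference stand-in for the adjoint gradient
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x); e[k] = h
        g[k] = (f(x + e) - f(x - e)) / (2 * h)
    return g

owned = {0: slice(0, 2), 1: slice(2, 4)}    # splitting of the design variables
x = np.zeros(4)                             # common starting point x^0
for cycle in range(50):                     # Nash design cycles (information exchange)
    proposals = []
    for i, f in enumerate((f1, f2)):        # each player restarts from the exchanged x
        xi = x.copy()
        for _ in range(10):                 # a few descent steps on the player's own subset
            xi[owned[i]] -= 0.2 * grad(f, xi)[owned[i]]
        proposals.append(xi[owned[i]])
    x = np.concatenate(proposals)           # exchange: assemble the new common point x^m
print(x)                                    # tends towards the Nash equilibrium (1, 1, -1, -1)
```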

4.3 Combination of Stackelberg equilibrium with adjoint method

The idea is to bring together the adjoint method and the Stackelberg strategy; in other words, we use the adjoint method to build the Stackelberg equilibria. Let us consider N players optimizing a set of N objective functions (f_1, f_2, \ldots, f_N), where player 1 is the leader and all the other players are followers, and let (x_1, \ldots, x_N) be the optimization variables (each x_i can be a vector or a scalar). The optimization variables are distributed so that player 1 optimizes his own design variable, say x_1, and each follower handles a subset x_i of the optimization variables x according to criterion f_i, i = 2, \ldots, N. The followers are symmetric with respect to each other, but asymmetric with respect to the leader. Thus, as required by the Stackelberg definition, the followers provide the leader with their best solution (say, a Nash equilibrium), and the leader then makes his decision based on the followers' decisions.

The Stackelberg game is hierarchical: the leader and the followers do not make their decisions simultaneously. Let x^{m-1} = (x_1^{m-1}, x_2^{m-1}, x_3^{m-1}, \ldots, x_N^{m-1}) be the best solution at step m-1, where (x_2^{m-1}, x_3^{m-1}, \ldots, x_N^{m-1}) is the Nash equilibrium between the followers based on the leader's decision at step m-2, say x_1^{m-2}. At step m, the followers achieve a Nash equilibrium based on the leader's decision x_1^{m-1}, which is kept constant within the Nash strategy of the followers; the sub-iteration procedure described in Sect. 4.2 is needed for the followers to reach their Nash equilibrium. That is,

f_i(x_1^{m-1}, x_2^m, \ldots, x_i^m, \ldots, x_N^m) = \inf_{x_i \in X_i} f_i(x_1^{m-1}, x_2^m, \ldots, x_{i-1}^m, x_i, x_{i+1}^m, \ldots, x_N^m), \qquad i = 2, \ldots, N.  (14)

Subsequently, the leader modifies the design variable x_1 using criterion f_1, based on the followers' decision (x_2^m, x_3^m, \ldots, x_N^m), i.e.

f_1(x_1^m, x_2^m, \ldots, x_N^m) = \inf_{x_1 \in X_1} f_1(x_1, x_2^m, \ldots, x_N^m).  (15)

Therefore, the best solution at step m is (x_1^m, x_2^m, x_3^m, \ldots, x_N^m). The Stackelberg equilibrium is reached when neither the leader nor the followers can further improve their criteria. The assignment of roles between the players (i.e. which player should be the leader and which the followers) depends on the structure of the problem, more precisely on the physical features of the optimization problem.
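The hierarchical loop of Eqs. (14)–(15) can be sketched in the same spirit: at each outer step the followers are first driven to their Nash equilibrium for the frozen leader variable, and the leader then takes one descent step. The objectives and analytic best responses below are hypothetical:

```python
# Illustrative sketch of the Stackelberg loop, Eqs. (14)-(15): the leader owns
# x[0] and two followers own x[1] and x[2]; at each outer step the followers'
# Nash sub-iteration (Sect. 4.2) is run before the leader moves.
import numpy as np

def followers_nash(x, iters=100):
    """Best-response iteration of the followers for a frozen leader variable x[0]."""
    x = x.copy()
    for _ in range(iters):
        x[1] = (x[0] - 2.0) / 3.0            # argmin over x[1] of (x1+1)^2 + 0.5*(x1-x0)^2
        x[2] = (4.0 + x[1]) / 3.0            # argmin over x[2] of (x2-2)^2 + 0.5*(x2-x1)^2
    return x

x = np.zeros(3)
for step in range(200):                      # outer Stackelberg iterations
    x = followers_nash(x)                    # Eq. (14): followers react to the leader
    # Eq. (15): leader's descent step on f_leader = (x0-1)^2 + 0.5*(x0-x1)^2,
    # with the followers' variables held fixed at their current values.
    x[0] -= 0.1 * (2.0 * (x[0] - 1.0) + (x[0] - x[1]))
print(x)                                     # converges to approx (0.5, -0.5, 1.167)
```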

5 Implementation of adjoint method in game strategies for multi-objective aerodynamic design

5.1 Implementation of adjoint method to capture the Pareto front

As stated in Eq. (11), the multiple criteria are reduced to a single objective by using weighting constants, so the variation of the reduced cost function is

\delta f(x) = \sum_{i=1}^{N} \lambda_i \, \delta f_i(x).  (16)

Consequently, the reduced gradient is

\mathrm{Grad} = \frac{\delta f(x)}{\delta x} = \sum_{i=1}^{N} \lambda_i \frac{\delta f_i(x)}{\delta x} = \sum_{i=1}^{N} \lambda_i \, \mathrm{Grad}_i, \qquad \mathrm{Grad}_i = \frac{\delta f_i(x)}{\delta x}.  (17)

This indicates that the gradient of the reduced cost function with respect to the design variables is the same linear combination as the reduced cost function itself. Therefore, in order to compute the reduced gradient, we should solve the state and the adjoint equations at each design point to calculate the gradient of each objective function with respect to the design variables (see Sect. 3).
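In other words, one state solve and one adjoint solve per objective yield the individual gradients Grad_i, after which Eq. (17) is a plain weighted sum. A minimal sketch, with hypothetical gradient routines standing in for the adjoint solver:

```python
# Illustrative sketch of Eq. (17): the gradient of the weighted cost is the same
# weighted combination of the per-objective gradients, each of which would come
# from one adjoint solve.  grad_f1 and grad_f2 are hypothetical placeholders.
import numpy as np

def grad_f1(x):  return 2.0 * (x - 1.0)       # would be returned by adjoint solve no. 1
def grad_f2(x):  return 2.0 * (x + 1.0)       # would be returned by adjoint solve no. 2

def reduced_gradient(x, weights, grads=(grad_f1, grad_f2)):
    return sum(lam * g(x) for lam, g in zip(weights, grads))   # Eq. (17)

x = np.array([0.3, -0.2])
print(reduced_gradient(x, (0.7, 0.3)))
```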


5.2 Implementation of adjoint method in Nash game strategies

Differing from the global gradient computation in the Pareto front calculation, the Nash game needs the partial gradient of a cost function with respect to a subset of the design variables. Section 3 gives the global gradient computed by the adjoint method; the partial gradient is simply the projection of the global gradient onto the corresponding subspace, and the projection matrix is the same one that projects the global design-variable space onto the subspace of the partial design variables. For example, let the n-dimensional global design-variable space be X and its m-dimensional subspace be X', with m \le n. The projection matrix from X to X' is A_{m \times n}, which satisfies x' = A_{m \times n} x. Then the relation between the global gradient \mathrm{Grad} and the partial gradient \mathrm{Grad}' of a cost function f is

\mathrm{Grad}' = A_{m \times n} \, \mathrm{Grad}, \qquad \mathrm{Grad}' = \frac{\delta f}{\delta x'}, \ x' \in X', \qquad \mathrm{Grad} = \frac{\delta f}{\delta x}, \ x \in X, \ X' \subset X.  (18)
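Equation (18) amounts to selecting the components of the global gradient owned by the player, as in the following sketch (the subset of indices is hypothetical):

```python
# Illustrative sketch of Eq. (18): the partial gradient used by one player is the
# projection of the global adjoint gradient onto that player's subspace of design
# variables, realised here by a 0/1 selection matrix A of size m x n.
import numpy as np

n = 6                                          # total number of design variables
idx_player = [0, 1, 2]                         # hypothetical subset owned by this player
A = np.zeros((len(idx_player), n))             # projection (selection) matrix, x' = A x
for row, col in enumerate(idx_player):
    A[row, col] = 1.0

grad_global = np.arange(1.0, n + 1.0)          # stand-in for the global adjoint gradient
grad_partial = A @ grad_global                 # Eq. (18): Grad' = A Grad
print(grad_partial)                            # -> [1. 2. 3.]
```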

5.3 Implementation of adjoint method in Stackelberg game strategies

As stated in Sect. 4.3, partial gradients are needed in the implementation of the adjoint method within the Stackelberg strategy. The partial gradient calculation is the same as in Sect. 5.2.

6 Numerical optimization examples and results

6.1 Optimization problem

The optimization problem is a two-objective inverse design problem in aerodynamics [7]:

\min_{\Gamma_c} f_1(\Gamma_c) = \int_{\Gamma_c} (p - p_{d1})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.2 \ \text{and} \ \alpha = 10.8^\circ,
\min_{\Gamma_c} f_2(\Gamma_c) = \int_{\Gamma_c} (p - p_{d2})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.77 \ \text{and} \ \alpha = 1^\circ,  (19)

where \Gamma_c denotes the airfoil shape, p_{d1} is the pressure distribution on a "high-lift" profile (see Fig. 1a) at M_\infty = 0.2, \alpha = 10.8^\circ, and p_{d2} is the pressure distribution on a "low-drag" profile (see Fig. 1b) at M_\infty = 0.77, \alpha = 1^\circ. We need to find all profiles existing between the low-drag profile and the high-lift profile.

Fig. 1 High-lift and low-drag profiles. (a) High-lift airfoil in the subsonic regime; (b) low-drag airfoil in the transonic regime

6.2 Parameterization of the airfoil shape

Two different kinds of parameterization are used in the experiments. The global-control parameterization is a Bézier curve of degree 9 for the upper and the lower surface, respectively [16]. The local-control parameterization uses Hicks–Henne functions, with 14 Hicks–Henne design variables for the upper and for the lower surface, respectively [16].

6.3 Nash and Stackelberg strategies

In the Nash game, the splitting is front/rear. Player 1 designs the leading edge, marked in red (see Fig. 2), to improve the subsonic performance, and player 2 designs the rest of the airfoil, marked in green, to improve the transonic performance. The mathematical description is as follows:

\min_{\Gamma_c^{\mathrm{red}}} f_1(\Gamma_c^{\mathrm{red}}, \Gamma_c^{\mathrm{green}}) = \int_{\Gamma_c} (p - p_{d1})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.2 \ \text{and} \ \alpha = 10.8^\circ,  (20a)

\min_{\Gamma_c^{\mathrm{green}}} f_2(\Gamma_c^{\mathrm{red}}, \Gamma_c^{\mathrm{green}}) = \int_{\Gamma_c} (p - p_{d2})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.77 \ \text{and} \ \alpha = 1^\circ.  (20b)


In the Stackelberg strategy, if f_1 is the leader and f_2 is the follower, then player 1 optimizes the red portion of the airfoil based on the well-converged solution of player 2. Similarly, if f_2 is the leader and f_1 is the follower, the optimization strategy is just the opposite. The mathematical descriptions are as follows.

(i) Player 1 is the leader, player 2 is the follower:

\min_{\Gamma_c^{\mathrm{red}}} f_1(\Gamma_c^{\mathrm{red}}, \bar\Gamma_c^{\mathrm{green}}) = \int_{\Gamma_c} (p - p_{d1})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.2 \ \text{and} \ \alpha = 10.8^\circ,  (21a)

where \bar\Gamma_c^{\mathrm{green}} comes from the solution of

\min_{\Gamma_c^{\mathrm{green}}} f_2(\Gamma_c^{\mathrm{red}}, \Gamma_c^{\mathrm{green}}) = \int_{\Gamma_c} (p - p_{d2})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.77 \ \text{and} \ \alpha = 1^\circ.  (21b)

(ii) Player 2 is the leader, player 1 is the follower:

\min_{\Gamma_c^{\mathrm{green}}} f_2(\bar\Gamma_c^{\mathrm{red}}, \Gamma_c^{\mathrm{green}}) = \int_{\Gamma_c} (p - p_{d2})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.77 \ \text{and} \ \alpha = 1^\circ,  (22a)

where \bar\Gamma_c^{\mathrm{red}} comes from the solution of

\min_{\Gamma_c^{\mathrm{red}}} f_1(\Gamma_c^{\mathrm{red}}, \Gamma_c^{\mathrm{green}}) = \int_{\Gamma_c} (p - p_{d1})^2 \,\mathrm{d}\Gamma_c \quad \text{at } M_\infty = 0.2 \ \text{and} \ \alpha = 10.8^\circ.  (22b)
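Before turning to the results, the local (Hicks–Henne) parameterization of Sect. 6.2 can be made concrete. The sketch below uses a hypothetical baseline thickness distribution, bump locations and width exponent; the paper does not specify these settings:

```python
# Illustrative sketch of a Hicks-Henne "bump" parameterization (Sect. 6.2).  Each
# design variable a_k scales one bump added to the baseline upper surface; the
# exponent m_k places the bump maximum at x_k and w controls its width.  The
# baseline shape, bump locations and width are hypothetical, not the paper's.
import numpy as np

def hicks_henne_bump(x, xk, w=3.0):
    mk = np.log(0.5) / np.log(xk)              # bump peaks at x = xk
    return np.sin(np.pi * x ** mk) ** w

def perturbed_surface(x, y_base, a, x_peaks):
    y = y_base.copy()
    for ak, xk in zip(a, x_peaks):             # add one weighted bump per design variable
        y += ak * hicks_henne_bump(x, xk)
    return y

x = np.linspace(1e-6, 1.0, 201)                # chord-wise coordinate
y_base = 5 * 0.12 * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                     + 0.2843 * x**3 - 0.1015 * x**4)   # NACA0012-like upper surface
x_peaks = np.linspace(0.05, 0.95, 14)          # 14 bumps per surface, as in Sect. 6.2
a = 0.001 * np.ones(14)                        # small hypothetical bump amplitudes
y_new = perturbed_surface(x, y_base, a, x_peaks)
print(float(np.max(np.abs(y_new - y_base))))   # maximum geometric change
```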

Fig. 2 Airfoil splitting and game strategies

6.4 Optimization results

Here, we have solved the optimization problem defined in Eq. (19) with all three games. Different parameterizations have also been used to compare the influence of the parameterization on the Pareto front and on the equilibrium solutions. It turns out that the converged Pareto front representations remain the same under the different parameterizations, whereas the Nash and Stackelberg equilibrium solutions are quite different even with the same splitting and optimization strategy, see Fig. 3. This is associated with the design space, which is modified only partly within each player's optimization; the modifications of the design space therefore differ from player to player owing to the splitting of territory in the competitive and hierarchical game strategies.

The numerical results show clearly that all the non-cooperative equilibrium solutions are inferior to the Pareto front, see Fig. 3, but they are still satisfactory solutions of the multi-objective optimization problem, see Figs. 4, 5 and 6. This is due to the fact that the best non-dominated solutions are obtained by the cooperative strategy, in which the design space is modified globally during the optimization, whereas the Nash and Stackelberg equilibria are non-cooperative games in which the design space is modified only partly within each player's optimization. However, the CPU cost of capturing a set of solutions makes the Pareto front an expensive optimizer for the designer.

In Pareto front capture, all the players make decisions cooperatively and mutually dependently, because there is only one decision maker in the game, so they play equivalent roles [1, 2]. In the Nash game, all the players make decisions simultaneously and independently; no player is informed of the choice of any other player prior to making his own decision, and each player is concerned only with his instantaneous payoff, ignoring the effect of his current action on the other players' future behaviour. The players are therefore competitive and conflicting [3, 4]. In the Stackelberg game, the players are not symmetric: one player plays a centrally controlling role, i.e. he is the leader, and the other players make their decisions following him, i.e. they are the followers [5, 6]. The players are therefore hierarchical and non-cooperative.

Fig. 3 Performance comparison of the Pareto front with Nash and Stackelberg equilibria under different parameterizations. (a) Parameterization is the Hicks–Henne function; (b) parameterization is a Bézier curve, degree = 9

Fig. 4 Comparison of the Nash equilibrium with its closest points on the Pareto front (parameterization is the Hicks–Henne function)

Fig. 5 Comparison of the Stackelberg equilibria with their closest points on the Pareto front (parameterization is the Hicks–Henne function). (a) f1 is the leader; (b) f2 is the leader

Fig. 6 Comparison of the Stackelberg equilibria with their closest points on the Pareto front (parameterization is a Bézier curve, degree = 9). (a) f1 is the leader; (b) f2 is the leader

7 Conclusion and future work

1. Game strategies were introduced into aerodynamic design and combined with the adjoint method to solve multi-criteria aerodynamic optimization problems successfully. The detailed numerical implementation was given in this paper.

2. The Nash and Stackelberg equilibrium solutions are inferior to the Pareto front, but they are still satisfactory solutions of the multi-objective optimization problem and are efficient to achieve.


3. The game strategies investigated in this paper will be generalized and applied to multi-objective robust optimization in aerodynamics, to treat design problems with uncertain input parameters.

Acknowledgment The author would like to acknowledge Association Franco-Chinoise pour la Recherche Scientifique et Technique (AFCRST) for their partial support.


References

1. Cohon, J.L.: Multiobjective Programming and Planning. Academic Press, New York (1978)
2. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, New York (2001)
3. Nash, J.F.: Non-cooperative games. Ann. Math. 54(2), 286–295 (1951)
4. Basar, T., Olsder, G.J.: Dynamic Noncooperative Game Theory. Academic Press, Bodmin (1995)
5. Ehtamo, H.: Incentive strategies and equilibria for dynamic games with delayed information. J. Optim. Theory Appl. 63(3), 355–370 (1989)
6. Ehtamo, H.: A cooperative incentive equilibrium for a resource management problem. J. Econ. Dynam. Control 17(4), 659–678 (1993)
7. Marco, N., Désidéri, J.A., Lanteri, S.: Multi-objective optimization in CFD by genetic algorithms. INRIA Report RR-3686, INRIA Sophia Antipolis (1999)
8. Lions, J.L.: Contrôle optimal des systèmes gouvernés par des équations aux dérivées partielles. Gauthier-Villars, Paris (1969)
9. Jameson, A.: Aerodynamic design via control theory. J. Sci. Comput. 3(3), 233–260 (1988)
10. Tang, Z.L.: The research on optimum aerodynamic design using CFD and control theory. Ph.D. Thesis, NUAA, Nanjing (2000)
11. Leoviriyakit, K., Kim, S., Jameson, A.: Viscous aerodynamic shape optimization of wings including planform variables. In: 21st Applied Aerodynamics Conference, Orlando, AIAA Paper 2003-3498 (2003)
12. Osborne, M.J., Rubinstein, A.: A Course in Game Theory. MIT Press, Cambridge (1997)
13. Tang, Z.L., Désidéri, J.A., Périaux, J.: Multi-objective design strategies using deterministic optimization with different parameterization in aerodynamics. In: Neittaanmäki, P., Rossi, T., Korotov, S., Oñate, E., Périaux, J., Knörzer, D. (eds.) Proceedings of the European Congress on Computational Methods in Applied Sciences and Engineering, Jyväskylä (2004)
14. Whitney, E.J., Gonzalez, L.F., Srinivas, K., Périaux, J.: Adaptive evolution design without specific knowledge: UAV application. In: Bugeda, G., Désidéri, J.A., Périaux, J., Schoenauer, M., Winter, G. (eds.) Proceedings of the International Congress on Evolutionary Methods for Design, Optimization and Control with Application to Industrial Problems, CIMNE, Barcelona (2003)
15. Padovan, L., Pediroda, V., Poloni, C.: Multi-objective robust design optimization of airfoils in the transonic field (M.O.R.D.O.). In: Bugeda, G., Désidéri, J.A., Périaux, J., Schoenauer, M., Winter, G. (eds.) Proceedings of the International Congress on Evolutionary Methods for Design, Optimization and Control with Application to Industrial Problems, CIMNE, Barcelona (2003)
16. Tang, Z.L., Désidéri, J.A., Périaux, J.: Virtual and real game strategies for multi-objective optimization in aerodynamics. INRIA Report RR-4543, INRIA Sophia Antipolis (2002)
