Algorithms and Complexity in Combinatorial Optimization Games

Xiaotie Deng
Dept. of Computer Science, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong
[email protected]

Toshihide Ibaraki and Hiroshi Nagamochi
Dept. of Applied Mathematics and Physics, Graduate School of Engineering, Kyoto University, Kyoto, Japan 606-01
[email protected]
Abstract
We introduce an integer programming formulation for a class of combinatorial optimization games, which includes many interesting problems on graphs. We prove a simple (but very useful) theorem that the core, an important solution concept in cooperative game theory requiring that no subset of players be able to gain advantage by breaking away from the collective decision of all players [40, 45], is nonempty if and only if the associated linear program has an integer optimal solution. Based on this, we completely clarify the relationship, in terms of total balancedness, between the pair of games defined by a primal/dual pair of linear programs [40, 26]. This also reveals an interesting connection of these conditions with balanced matrices, which guarantee integer solutions for a certain class of similar linear programs [39]. These theorems open the door to applying techniques developed in combinatorial optimization to a large class of cooperative games. For example, they immediately allow us to extend the algorithmic result of Kalai and Zemel [26] for the maximum flow game of edge players on unit directed networks to undirected networks and to games of vertex players. We further apply these theorems to several other games on graphs related to connectivity, matching, vertex cover, edge cover, independent set, chromatic number, and so on.
1 Introduction

Game theory has a profound influence on the methodologies of many different branches of science, especially economics, operations research and management science. In these application areas, a joint project very often requires resolving the problem of cost allocation or revenue distribution among many participants. Many criteria may come into consideration, in particular fairness and stability. The concept of the core lays down an important principle for a collective decision: no subgroup of the players should be able to do better by breaking away from the decision of all players and forming its own coalition. Emphases on other principles lead to different solution concepts for cooperative games [43]. In addition, the thesis of bounded rationality has been introduced as a crucial concept for game theoretic solutions to have practically meaningful implementations in real-life situations [44, 33, 38]. Informally, this principle of bounded rationality states that players would not spend an unbounded amount of resources to gain a small improvement in the outcome. We are particularly interested in the computational resources required for questions related to a solution. There have been more and more studies on the computational aspects of game theory problems, though early works can be traced back two decades and more [29, 20, 33, 36, 24, 31, 37, 6, 28, 38, 14, 32, 27]. An extensive discussion can be found in a review by Kalai on the interplay of operations research, game theory, and theoretical computer science [23].
There is another important observation that applying computational complexity to game theory problems can be fruitful and enlightening. Like many other branches of the mathematical sciences, a necessary and sufficient condition is often sought to characterize concepts in game theory. In many cases, this leads to, in complexity terminology, a proof that a problem is in NP $\cap$ co-NP, if these conditions are checkable in polynomial time. Very often, this results in a polynomial time algorithm. However, a good complete characterization may not be easy to find. Tighter and tighter sufficient or necessary conditions are sought toward the goal that these efforts may finally lead to a beautiful characterization theorem. In such cases, an NP-completeness proof clarifies the situation, in the sense that there may never be any succinct necessary and sufficient condition, or that we can only have conditions that are computationally equivalent restatements of the original problem [18]. Collective optimization problems of a combinatorial nature have long attracted the attention of researchers. An important feature of these games is that the value of each subset of players can be represented succinctly as the optimal solution of a combinatorial optimization sub-problem for these players. Therefore, the input size may be polynomial in the number of players, instead of exponential as in the general case. Shapley and Shubik studied a market in which players start with a vector for the amount of commodities they own and wish to redistribute the commodities so as to maximize their utility functions [41]. Shapley and Shubik also studied an assignment game for which whether an imputation is in the core can be tested efficiently [42]. Claus and Kleitman initiated the discussion of the cost allocation problem of communication networks shared by many users and introduced several cost allocation criteria [3]. Bird [1], and independently Claus and Granot [2], formulated it as a minimum cost spanning tree game (a terminology coined by Granot and Huberman [20]). Megiddo introduced an alternative formulation with Steiner trees [30]. Kalai and Zemel studied games of network flows [25, 26]. Tamir studied a traveling salesman cost allocation game [46] and network synthesis games [47]. Deng and Papadimitriou discussed a game for which the game value for any subset of players is the total weight of the edges in the subgraph induced by them [6]. Faigle et al. studied a Euclidean TSP game and a matching game [13, 14]. Nagamochi et al. studied a minimum base game on matroids [32]. In another direction, Owen introduced a linear production game in which each player $j$ controls a certain resource vector $b^j$ [34]. Jointly, the players maximize a linear objective function $cx$ subject to the resource constraints $Ax \le \sum_{j \in N} b^j$. The value a subset $S$ of players can achieve on its own is the maximum it can achieve with the resources collectively owned by this subset: $\max\{cx : Ax \le \sum_{j\in S} b^j\}$. Owen showed that the core of a linear production game is always nonempty, and that one imputation in the core can be immediately obtained from his proof using the variables of the dual linear program [34]. Dubey and Shapley studied games related to some nonlinear programs which result in totally balanced games, that is, games for which the cores of all subgames are nonempty [9]. Kalai and Zemel considered a class of combinatorial optimization games associated with the maximum
flow from a source to a sink in a network, where each player controls one arc of the network. The maximum flow game is totally balanced, and, on the other hand, every non-negative totally balanced game is a maximum flow game [25, 26]. Curiel proved that the class of linear programming games is equivalent to the class of totally balanced games [4]. These reductions for the equivalence proof, however, involve exponential time and space in the number of players [4]. Therefore, the complexities of computational problems for the cores of these models are not necessarily the same. Notice that combinatorial optimization problems can usually not be formulated as linear programs. The above mentioned work of Dubey and Shapley studied games related to nonlinear programs. By introducing a resource vector $b(S)$ for each subset $S$ of players (thus leading to an exponential number of linear inequality constraints), Granot formulated, as a generalized linear production model, several combinatorial optimization games which cannot be handled by Owen's model. These include the minimum cost spanning tree game, its directed version, a network synthesis
game, and a weighted matching game [19]. In fact, the introduction of an exponential number of linear inequalities may allow the corresponding linear program to have integer solutions for some combinatorial optimization problems [10]. Tamir studied several combinatorial optimization games which, in full generality or in special cases, can be reduced to Owen's linear production game model. For these cases, an integer solution always exists for the linear program, and the nonemptiness of the core of the corresponding combinatorial optimization game follows from Owen's result [46, 47]. The study by Megiddo on the minimum cost spanning tree game is the earliest work we know of which explicitly takes computational complexity into consideration and explicitly requires polynomial time algorithms, as good algorithms in the sense of Jack Edmonds, for solutions of combinatorial optimization games [29]. Kalai and Zemel studied a linear production game formulation for the maximum flow game on unit networks, which results in a polynomial time algorithm for computing an imputation in the core through the corresponding dual linear program [25]. Granot and Huberman designed a polynomial time algorithm to generate additional points in the core and applied it to find a polynomial time algorithm for the nucleolus in some special cases [20]. Deng and Papadimitriou took a step further to use computational complexity as a rationality measure to characterize solution concepts of cooperative game theory [6]. Nagamochi et al. applied the same approach to compare different solution concepts for the minimum base game on matroids [32]. Faigle et al. studied the complexity of testing membership in the core of minimum cost spanning tree games [14]. In general, these studies in algorithms and complexity for combinatorial optimization games require techniques specific to the problems involved. The motivation of our study is to design a general model which allows general mathematical and computational methods to deal with computational issues for combinatorial optimization games. As a natural alternative to Granot's model, we make the integrality condition explicit in Owen's model. As discussed above, Tamir noticed that several combinatorial optimization games, or their special cases, can be reduced to Owen's model because the corresponding linear programming formulations always have integer solutions. In this paper, we focus on a class of combinatorial optimization games whose game values are defined by integer programs of packing type, $\max\{cx : Ax \le b,\ x \ge 0,\ x\ \mathrm{integral}\}$, for revenue distribution problems (and by similar minimization formulations of covering type for cost allocation problems), where the matrix $A$ has 0-1 entries and the vector $b$ is all ones. We are interested in the algorithmic and complexity issues put forward in [6, 32]: decide whether the core is nonempty; if the core is nonempty, find an imputation in the core; given an imputation $x$, test whether $x$ is in the core. In Section 2, we introduce the definitions of our combinatorial optimization games of packing and covering types, as well as some necessary notation. In Section 3, we show that the core of such a game is nonempty if and only if the corresponding linear programming relaxation has an integer optimal solution. The sufficiency of this condition follows immediately from Owen's work [34] and, according to Owen, can be traced further back to Debreu and Scarf [5]. Since the proof is short in our case, we provide proofs in both directions for the benefit of the reader.
This result opens the door for techniques central to combinatorial optimization to be applied to our cooperative game problems. In the network flow game on unit networks [26], it turns out that the linear programming relaxation always has an integer solution, as guaranteed by Menger's Theorem [21]. We also discuss the relationship between two games which are dual in the sense of the corresponding linear programming relaxations, and relate this duality to algorithmic design issues. We obtain surprisingly tight results in terms of totally balanced games, which reveal an asymmetry between the games defined by minimization problems and the games defined by maximization problems. We further discuss some integrality results for linear programs, studied in matrix and polyhedral theory, and discuss the implications of such results for our computational game theory problems.
After the publication of the preliminary version of this paper [7], it was brought to our attention that Faigle and Kern [16] defined a class of partition games and obtained results very similar to Theorem 2 (and hence to Theorem 1) of ours, saying that the core of the game is nonempty if and only if the relaxed linear program has an optimal integer solution. Since they restrict the corresponding combinatorial optimization problems to be partitions, however, applications of their results are often limited. As an example, their model can formulate neither the maximum flow game on unit networks studied by Kalai and Zemel, nor many of the above mentioned combinatorial optimization games and the games on graphs studied in this paper. In Section 4, we apply our new cooperative game model to study several fundamental optimization problems related to graphs. For the single-source single-sink maximum flow game, the algorithm of Kalai and Zemel only works for edge players in a directed unit network. As an advantage of our integer programming formulation, this can be immediately extended to an undirected graph with the game value being the s-t edge connectivity, to vertex players with the game value being the s-t vertex connectivity, as well as to the matching game on bipartite graphs. This approach can also be applied to solve several other games. Of course, the integrality condition does not always hold. An especially interesting case is the pair of matching and vertex cover games on general graphs. Though one integer program of this pair of graph optimization problems is polynomially solvable and the other is NP-hard, the linear programming relaxations of the pair are dual to each other, and the condition for the nonemptiness of the core is polynomially checkable for both games. This is not necessarily true for all NP-hard combinatorial optimization problems. There are cases in which neither integrality nor polynomial time solvability holds. For the graph coloring game, it is NP-hard to decide whether the core is empty and to decide whether an imputation is in the core. For the graph coloring game, we also show the equivalence between perfect graphs and totally balanced games. Closely related to the maximum flow game on unit networks is the edge-connectivity game on undirected graphs. We have complexity results similar to the above games, though it does not have a natural integer programming formulation of the type introduced in this paper. In summary, our main results are as follows:
1. simple and tight theorems for the nonemptiness of the core for a large class of combinatorial optimization games;
2. tight relationships in the total balancedness of the pairs of combinatorial optimization games corresponding to primal/dual linear programs;
3. interesting connections between totally balanced games and balanced matrices for these problems;
4. applications to the study of algorithmic and computational complexity issues for the cores of many combinatorial optimization games;
5. and, as a by-product, a full characterization of when some linear programming relaxations of several graph problems have integer optimal solutions.
In Section 5, we conclude the paper with a discussion of interesting problems opened up by our work.
2 Preliminaries

For a cooperative game $(N, v)$, we have a set $N$ of players and a value function $v : 2^N \to R$: for each subset $S \subseteq N$ of players, $v(S)$ is the revenue the subset can obtain by forming a coalition of the players in $S$ only. The question is how to distribute the total value $v(N)$ to the players, i.e., to find an imputation $x : N \to R_+$ such that $\sum_{i\in N} x(i) = v(N)$. Usually we denote $\sum_{i\in S} x(i)$ by $x(S)$. Then, the above condition can be written as $x(N) = v(N)$. The concept of the core introduces a principle to resolve this problem. An imputation $x$ is in the core if and only if $\forall S \subseteq N : x(S) \ge v(S)$. That is, no subset of players can gain advantage by breaking away from the collective decision and forming their own coalition. The above formulation works only for the revenue distribution problem. For cost allocation, the definition is similar, with the above inequalities reversed. In general, the domain of the revenue function $v$ is of size $2^{|N|}$. For many combinatorial optimization games, however, the revenue is the optimal solution of a combinatorial optimization problem. Thus, the input is a structure related to the set $N$ of players, and its size is usually polynomial in the number $|N|$ of players. In Owen's linear production game model, each player $j$ controls a certain resource vector $b^j$. The total revenue is the maximum value of $cx$ under the resource constraints $Ax \le \sum_{j\in N} b^j$. The revenue of a subset $S$ of players is $\max\{cx : Ax \le \sum_{j\in S} b^j\}$. Therefore, the input size is that of the matrix $A$ and the vectors $b^j$, $j \in N$, not necessarily exponential in the number $|N|$ of players. Many combinatorial optimization games can be represented by Owen's model by introducing an integrality constraint. Thus, we define the total revenue to be the maximum value of $cx$ under the resource constraints $Ax \le \sum_{j\in N} b^j$ and the constraint that $x$ is integral. Similarly, the revenue of a subset $S$ of players is $\max\{cx : Ax \le \sum_{j\in S} b^j,\ x\ \mathrm{integral}\}$.
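To make these definitions concrete, the following small sketch (ours, not part of the paper) tests whether a given allocation is an imputation in the core by brute force over all coalitions. The 3-player value function is a hypothetical toy example, and the exponential enumeration is only meant to illustrate the definitions, not to be efficient.

```python
# A minimal sketch (not from the paper): brute-force test of core membership
# for an explicitly given value function v on a small player set.
from itertools import combinations

def in_core(v, x, players):
    """Check that x(N) = v(N) and x(S) >= v(S) for every coalition S."""
    if abs(sum(x[i] for i in players) - v(frozenset(players))) > 1e-9:
        return False                       # x is not even an imputation
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            if sum(x[i] for i in S) < v(frozenset(S)) - 1e-9:
                return False               # coalition S could gain by breaking away
    return True

# A hypothetical 3-player revenue game, for illustration only.
values = {frozenset(): 0, frozenset('a'): 0, frozenset('b'): 0, frozenset('c'): 0,
          frozenset('ab'): 4, frozenset('ac'): 4, frozenset('bc'): 4, frozenset('abc'): 6}
v = lambda S: values[S]
print(in_core(v, {'a': 2, 'b': 2, 'c': 2}, 'abc'))   # True
print(in_core(v, {'a': 4, 'b': 1, 'c': 1}, 'abc'))   # False: coalition {b, c} gets only 2 < 4
```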
2.1 Packing and Covering Games
In particular, we are interested in the following special subclass of combinatorial optimization games. We restrict $A$ to be an $m \times n$ $\{0,1\}$-matrix. Let $1_k$ and $0_k$ denote the column vectors of dimension $k$ with all ones and all zeros, respectively. We may denote these vectors by $1$ and $0$ for simplicity. Let $M = \{1, 2, \ldots, m\}$ and $N = \{1, 2, \ldots, n\}$ be the corresponding index sets, and let $t$ denote the transposition operation. Consider the following linear program,
$$LP(c, A, \max):\quad \max\ y^t c \quad \mathrm{s.t.}\ \ y^t A \le 1_n^t,\ \ y \ge 0_m,$$
and its dual,
$$DLP(c, A, \max):\quad \min\ 1_n^t x \quad \mathrm{s.t.}\ \ Ax \ge c,\ \ x \ge 0_n,$$
where $c \in R^m$ is an $m$-dimensional column vector, $y$ is an $m$-dimensional column vector of variables and $x$ is an $n$-dimensional column vector of variables. We denote the corresponding integer programming version of $LP(c, A, \max)$ by $ILP(c, A, \max)$. Since $A$ is a $\{0,1\}$-matrix, the integrality constraint is equivalent to requiring $y$ to have $\{0,1\}$ values. We define the packing game $Game(c, A, \max)$ as follows, where $\bar S = N - S$:
1. The player set is $N$.
2. For each subset $S \subseteq N$, $v(S)$ is defined as the value of the following integer program:
$$\max\ y^t c \quad \mathrm{s.t.}\ \ y^t A_{M,S} \le 1_{|S|}^t,\ \ y^t A_{M,\bar S} \le 0_{n-|S|}^t,\ \ y \in \{0,1\}^m,$$
where $A_{T,S}$ is the submatrix of $A$ with row set $T$ and column set $S$, and $v(\emptyset)$ is defined to be 0. Since this is a maximization problem, we may as well assume that $c_j > 0$ for every $j$ with $A_j \ne 0$. Otherwise, we can always choose $y_j = 0$. We then introduce a covering game $Game(d, A, \min)$ for cost minimization problems in a similar manner:
1. The player set is $M$.
2. For each subset $T \subseteq M$, $v(T)$ is defined as the value of the following integer program:
$$\min\ d^t x \quad \mathrm{s.t.}\ \ A_{T,N}\, x \ge 1_{|T|},\ \ x \in \{0,1\}^n,$$
where $v(\emptyset)$ is defined to be 0. Again we can assume $d_j > 0$ for all $j$. Otherwise we may always choose $x_j = 1$ to simplify the problem. Since the value of the game is defined by a solution to a minimization problem, this is in fact a problem of sharing the cost of the game. Thus, we revise the definitions of imputation and core accordingly. A vector $w : M \to R_+$ is an imputation if $w(M) = v(M)$, and an imputation is in the core if $w(T) \le v(T)$ holds for all $T \subseteq M$. From the definitions, we easily see that both the packing game and the covering game are monotone (i.e., $v(S') \le v(S)$ for $S' \subseteq S \subseteq N$ holds in $Game(c, A, \max)$, and $v(T') \le v(T)$ for $T' \subseteq T \subseteq M$ holds in $Game(d, A, \min)$). As an application of this general model, the maximum flow game on a digraph $D = (V, E)$ with source $s$ and sink $t$, studied in [26], can be formulated as $Game(1_m, A, \max)$ if we take $A$ to be the path-arc incidence matrix: $A_{ij} = 1$ if arc $j$ is in the $i$-th directed s-t path, and 0 otherwise. The constraint $y^t A \le 1_n^t$ requires that, for each arc $j$, at most one path chosen by $y$ goes through the arc (i.e., the arc capacity is 1). Another example of a packing game is found in [15], which studies a minimum cost matching game on an edge-weighted complete graph $K_n$. This game can be formulated as $Game(d, A, \max)$, where $A$ is the edge-vertex incidence matrix of $K_n$ and $d$ is the vector of edge weights. We point out that these two examples can also be formulated by Granot's extension of Owen's model. In Section 4, we introduce a number of optimization games on graphs which can be formulated as the above packing and/or covering games but may not be formulated by Granot's model, since the facet constraints for these problems may not be completely known.
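As an illustration of the packing-game formulation of the maximum flow game (a sketch of ours, assuming the NetworkX library is available; the digraph is a made-up toy instance), the following code builds the path-arc incidence matrix $A$ and evaluates $v(S)$ by brute force over 0-1 vectors $y$, exactly as in the integer program above.

```python
# Sketch (our illustration): the maximum flow game on a small unit digraph,
# formulated as the packing game Game(1, A, max) with the path-arc incidence
# matrix A, and v(S) evaluated by brute force over 0-1 choices of paths.
from itertools import combinations
import networkx as nx

D = nx.DiGraph([('s', 'a'), ('s', 'b'), ('a', 't'), ('b', 't'), ('a', 'b')])
arcs = list(D.edges())                                   # the player set N
paths = [list(zip(p, p[1:])) for p in nx.all_simple_paths(D, 's', 't')]
A = [[1 if e in p else 0 for e in arcs] for p in paths]  # path-arc incidence matrix

def v(S):
    """Game value of coalition S (a set of arcs): max number of pairwise
    arc-disjoint s-t paths using arcs of S only."""
    usable = [i for i, p in enumerate(paths) if all(e in S for e in p)]
    best = 0
    for r in range(1, len(usable) + 1):
        for rows in combinations(usable, r):
            used = [e for i in rows for e in paths[i]]
            if len(used) == len(set(used)):              # chosen paths are arc-disjoint
                best = max(best, r)
    return best

print(v(set(arcs)))                       # v(N) = 2: {s-a-t, s-b-t}
print(v({('s', 'a'), ('a', 't')}))        # a single usable path gives value 1
```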
2.2 Issues Related to Core
We study the following properties of and questions concerning their cores.
1. Nonemptiness: Is the core of the game always nonempty?
2. Convex characterization: Can any imputation in the core be represented as a convex combination of some well-defined dual objects (such as minimum s-t cuts)?
3. Testing nonemptiness: Can it be tested in polynomial time whether a given instance of the game has a nonempty core?
4. Checking membership: Can it be checked in polynomial time whether a given imputation belongs to the core?
5. Finding a core member: Is it possible to find an imputation in the core in polynomial time?
For the discussion of games on graphs, polynomiality is measured in terms of the input size of the graph (i.e., the number of vertices $|V|$ and the number of arcs or edges $|E|$). Even though the sizes of the constraint matrices $A$ in the above formulations are sometimes exponential in $|V|$ and $|E|$ (e.g., the matrix $A$ for the maximum flow game has exponentially many rows), we would like to obtain algorithms that run in time polynomial in the input size of the graphs. We note at this point that two games $Game(c, A, \max)$ and $Game(d, A, \min)$ with $c = d$ are not dual in the sense of the underlying linear programs, since the roles of the objective function and the right hand side of the constraints are not interchanged. In the case of $c = d = 1$, however, the corresponding linear relaxations become dual to each other. We will study several problems with this feature.
3 Properties of the Core

In this section, we introduce several mathematical theorems for the core of packing/covering games.
3.1 Nonemptiness of the Core
We first simplify the conditions for an imputation to be in the core in our model.
Lemma 1 For $Game(c, A, \max)$, define $S_i = \{j \in N \mid A_{ij} = 1\}$ for $i \in M$. Then $v(S_i) \ge c_i$ holds for all $i \in M$.

Proof: By definition, $v(S_i)$ is the optimal value of the following integer program:
$$\max\ y^t c \quad \mathrm{s.t.}\ \ y^t A_{M,S_i} \le 1_{|S_i|}^t,\ \ y^t A_{M,\bar S_i} \le 0_{n-|S_i|}^t,\ \ y \in \{0,1\}^m.$$
Set $y_i := 1$ and $y_k := 0$ for $k \in \{1, 2, \ldots, m\} - \{i\}$. This vector $y$ is an integer feasible solution with objective value $c_i$. □
Lemma 2 A vector $z : N \to R_+$ is in the core of $Game(c, A, \max)$ if and only if
1. $z(N) = v(N)$ (i.e., $z$ is an imputation), and
2. $z(S_i) \ge c_i$ for all $i \in M$, where $S_i = \{j \in N \mid A_{ij} = 1\}$ (i.e., $z$ is feasible to the dual $DLP(c, A, \max)$ of $LP(c, A, \max)$).

Proof: The necessity follows from Lemma 1. To prove sufficiency, we show that $z(S) \ge v(S)$ holds for all $S \subseteq N$. Consider an optimal solution $y^*$ to the integer program
$$\max\ y^t c \quad \mathrm{s.t.}\ \ y^t A_{M,S} \le 1_{|S|}^t,\ \ y^t A_{M,\bar S} \le 0_{n-|S|}^t,\ \ y \in \{0,1\}^m,$$
which yields $v(S)$. Let $I = \{i \in M \mid y^*_i = 1\}$; i.e., $v(S) = \sum_{i\in I} c_i$. From $(y^*)^t A_{M,\bar S} \le 0_{n-|S|}^t$, it follows that $S_i \subseteq S$ for all $i \in I$. Furthermore, $(y^*)^t A_{M,S} = \sum_{i\in I} A_{i,S} \le 1_{|S|}^t$ implies that the sets $S_i$, $i \in I$, are pairwise disjoint. Therefore, $z(S) \ge \sum_{i\in I} z(S_i)$. On the other hand, $v(S) = \sum_{i\in I} c_i \le \sum_{i\in I} z(S_i)$ by assumption 2 on $z$. Hence $v(S) \le z(S)$. □

Lemma 2 leads to the following theorem:
Theorem 1 The core of $Game(c, A, \max)$ is nonempty if and only if $LP(c, A, \max)$ has an integer optimal solution. In that case, a vector $z : N \to R_+$ is in the core if and only if it is an optimal solution to $DLP(c, A, \max)$.

Proof: Let $z : N \to R_+$ be a vector in the core. In Lemma 2, the first condition states that $z(N)$ is equal to the optimal value of $ILP(c, A, \max)$. The second condition of Lemma 2 states that $z$ is a feasible solution to $DLP(c, A, \max)$, the dual of $LP(c, A, \max)$. Now if the two conditions of Lemma 2 hold, then
$$z(N) = v(N) = opt(ILP(c, A, \max)) \le opt(LP(c, A, \max)) = opt(DLP(c, A, \max)) \le z(N)$$
(by the duality theory of linear programming), and we have $opt(ILP(c, A, \max)) = opt(LP(c, A, \max))$, where $opt(P)$ denotes the optimum value of problem $P$. On the other hand, if $opt(ILP(c, A, \max)) = opt(LP(c, A, \max))$, then let $z : N \to R_+$ be an optimal solution of $DLP(c, A, \max)$. Then $z(N) = opt(DLP(c, A, \max)) = opt(LP(c, A, \max)) = opt(ILP(c, A, \max)) = v(N)$ implies $z(N) = v(N)$ (i.e., condition 1 of Lemma 2). Condition 2 also holds since $z$ is feasible to $DLP(c, A, \max)$. Then, $z$ is in the core by Lemma 2. The second statement of the theorem follows from the above argument. □
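The following sketch (ours) illustrates Theorem 1 computationally: it compares the optimum of the LP relaxation $LP(c, A, \max)$ with the brute-force optimum of the 0-1 program, and, when they coincide, returns an optimal solution of $DLP(c, A, \max)$ as a core member. It assumes SciPy's linprog with the HiGHS backend; the small matrix at the end is a made-up example.

```python
# A sketch of the Theorem 1 test (ours): the core of Game(c, A, max) is nonempty
# iff LP(c, A, max) has an integer optimal solution, i.e. iff its optimum equals
# the optimum of the 0-1 program; a core member is then an optimum of DLP(c, A, max).
from itertools import product
import numpy as np
from scipy.optimize import linprog

def core_of_packing_game(A, c):
    A, c = np.asarray(A, float), np.asarray(c, float)
    m, n = A.shape
    # LP(c, A, max):  max c^t y  s.t.  y^t A <= 1_n^t, y >= 0  (solved as a minimization)
    lp = linprog(-c, A_ub=A.T, b_ub=np.ones(n), bounds=(0, None), method="highs")
    # v(N): brute-force optimum of the 0-1 packing program (small m only)
    v_N = 0.0
    for bits in product([0, 1], repeat=m):
        y = np.array(bits, float)
        if np.all(y @ A <= 1 + 1e-9):
            v_N = max(v_N, float(c @ y))
    if abs(-lp.fun - v_N) > 1e-8:
        return None            # LP optimum is not attained by an integer point: empty core
    # DLP(c, A, max):  min 1_n^t x  s.t.  A x >= c, x >= 0; an optimum is a core member
    dual = linprog(np.ones(n), A_ub=-A, b_ub=-c, bounds=(0, None), method="highs")
    return dual.x

A = [[1, 1, 0],                 # a made-up 2 x 3 constraint matrix
     [0, 1, 1]]
print(core_of_packing_game(A, c=[1, 1]))    # an allocation such as [0., 1., 0.]
```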
Similarly, we have the following lemma and theorem for the minimization game.

Lemma 3 A vector $w : M \to R_+$ is in the core of $Game(d, A, \min)$ if and only if
1. $w(M) = v(M)$ (i.e., $w$ is an imputation), and
2. $w(T_j) \le d_j$ for all $j \in N$, where $T_j = \{i \in M \mid A_{ij} = 1\}$ (i.e., $w$ is feasible to the dual $DLP(d, A, \min)$ of $LP(d, A, \min)$).

Proof: For the only-if part, condition 1 is immediate since $w$ is an imputation. Now let $v(T_j)$ be the optimal value of the integer program
$$\min\ d^t x \quad \mathrm{s.t.}\ \ A_{T_j,N}\, x \ge 1_{|T_j|},\ \ x \in \{0,1\}^n.$$
Set $x_j := 1$ and $x_k := 0$ for $k \in \{1, 2, \ldots, n\} - \{j\}$. This vector $x$ is an integer feasible solution with objective value $d_j$. Thus, $v(T_j) \le d_j$. On the other hand, for $w$ to be in the core, we need $w(T_j) \le v(T_j)$, and condition 2 follows. The if part is proved as follows. Condition 1 gives us $w(M) = v(M)$. To show that $w(T) \le v(T)$ for all $T \subseteq M$, let $x^*$ be an optimal solution to
$$\min\ d^t x \quad \mathrm{s.t.}\ \ A_{T,N}\, x \ge 1_{|T|},\ \ x \in \{0,1\}^n.$$
Let $J = \{j \in N \mid x^*_j = 1\}$. Then $v(T) = \sum_{j\in J} d_j$. Furthermore, $T \subseteq \bigcup_{j\in J} T_j$ holds by the condition $A_{T,N}\, x^* \ge 1_{|T|}$, and hence $w(T) \le \sum_{j\in J} w(T_j)$, which is no more than $\sum_{j\in J} d_j$ since $w(T_j) \le d_j$ by condition 2. Therefore, we have $w(T) \le v(T)$. □
Theorem 2 The core of $Game(d, A, \min)$ is nonempty if and only if $LP(d, A, \min)$ has an integer optimal solution. In that case, a vector $w : M \to R_+$ is in the core if and only if it is an optimal solution to $DLP(d, A, \min)$.

Proof: Follows from Lemma 3, in the same way as Theorem 1 follows from Lemma 2. □
We notice that Theorems 1 and 2 can be applied to deal with the first two questions listed in Section 2.2, while Lemmas 2 and 3 are useful for resolving the last three questions, regarding algorithmic and complexity issues of cores.
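As a covering-side illustration of Theorem 2 (a sketch of ours, again assuming SciPy's HiGHS-based linprog), the code below shows one natural way to cast a minimum vertex cover game with edge players, in the spirit of the games of Section 4, as $Game(1, A, \min)$ with $A$ the edge-vertex incidence matrix: the core is nonempty exactly when the fractional and integer minimum vertex covers coincide, and a core member is then read off from the dual (a maximum fractional matching).

```python
# A covering-side sketch of Theorem 2 (ours): a minimum vertex cover game with
# edge players, cast as Game(1, A, min) with A the edge-vertex incidence matrix.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def vertex_cover_core(edges):
    vertices = sorted({u for e in edges for u in e})
    A = np.array([[1 if v in e else 0 for v in vertices] for e in edges], float)
    m, n = A.shape
    # LP(1, A, min): min 1^t x  s.t.  A x >= 1_m, x >= 0   (fractional vertex cover)
    lp = linprog(np.ones(n), A_ub=-A, b_ub=-np.ones(m), bounds=(0, None), method="highs")
    # v(M): minimum integer vertex cover, by brute force (small graphs only)
    v_M = min(sum(x) for x in product([0, 1], repeat=n) if np.all(A @ np.array(x) >= 1))
    if abs(lp.fun - v_M) > 1e-8:
        return None                     # fractional optimum < integer optimum: empty core
    # DLP(1, A, min): max y^t 1  s.t.  y^t A <= 1_n^t, y >= 0 (fractional matching);
    # by Theorem 2 its optimal solutions are exactly the core members (edge cost shares)
    dual = linprog(-np.ones(m), A_ub=A.T, b_ub=np.ones(n), bounds=(0, None), method="highs")
    return dual.x

print(vertex_cover_core([(0, 2), (0, 3), (1, 2), (1, 3)]))  # bipartite 4-cycle: a core member
print(vertex_cover_core([(0, 1), (1, 2), (0, 2)]))          # triangle: None, the core is empty
```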
3.2 Totally Balanced Games
For a game with player set $N$ and game value $v : 2^N \to R_+$, the game on a subset $S$ of players with $\emptyset \ne S \subseteq N$ and game value $v_S(S') = v(S')$ for all $S' \subseteq S$ is called the subgame induced by $S$. A game is called totally balanced if every one of its subgames has a nonempty core [25]. In this section, we discuss the relationship of total balancedness between $Game(1_m, A, \max)$ and $Game(1_n, A, \min)$. Among the games discussed in this paper, we see that the maximum flow game (and its variants in Corollary 2) and the maximum r-arborescence game are totally balanced. Given a packing game $Game(c, A, \max)$, a subset $S \subseteq N$ of players induces the following subgame:
1. The player set is $S$.
2. For each subset $S' \subseteq S$, the game value $v_S(S')$ is defined as the value of the following integer program:
$$\max\ y^t c \quad \mathrm{s.t.}\ \ y^t A_{M,S'} \le 1_{|S'|}^t,\ \ y^t A_{M,N-S'} \le 0_{n-|S'|}^t,\ \ y \in \{0,1\}^m.$$
By noting that the constraint $y^t A_{M,N-S} \le 0_{n-|S|}^t$ is always implied for any $S' \subseteq S$, we see that the subgame is equivalent to the packing game $Game(c_U, A_{U,S}, \max)$, where $U = \{i \in M \mid A_{ij} = 0 \mbox{ for all } j \in N - S\}$. Similarly, for a covering game $Game(d, A, \min)$, a subset $T \subseteq M$ of players induces the following subgame:
1. The player set is $T$.
2. For each subset $T' \subseteq T$, $v_T(T')$ is defined as the value of the following integer program:
$$\min\ d^t x \quad \mathrm{s.t.}\ \ A_{T',N}\, x \ge 1_{|T'|},\ \ x \in \{0,1\}^n.$$
Clearly, this subgame is equivalent to the covering game $Game(d, A_{T,N}, \min)$. In this case, if $Game(d, A, \min)$ is totally balanced, so is $Game(d, A_{T,N}, \min)$.
Theorem 3 If $Game(1_m, A, \max)$ is totally balanced, then the core of $Game(1_n, A, \min)$ is nonempty.

Proof: Since $Game(1_m, A, \max)$ is totally balanced, it has a nonempty core. Therefore, by Theorem 1, the following linear program has an integer optimal solution $y^*$:
$$LP(1_m, A, \max):\quad \max\ y^t 1_m \quad \mathrm{s.t.}\ \ y^t A \le 1_n^t,\ \ y \ge 0_m.$$
Without loss of generality, assume $y^*_1 = y^*_2 = \cdots = y^*_r = 1$ and $y^*_{r+1} = y^*_{r+2} = \cdots = y^*_m = 0$. We can rearrange the columns of $A$ so that $A_{11} = A_{12} = \cdots = A_{1p} = 1$ and $A_{1(p+1)} = A_{1(p+2)} = \cdots = A_{1n} = 0$. Then, by the feasibility of $y^*$, all entries in the submatrix $A_{\{2,\ldots,r\},\{1,\ldots,p\}}$ are zeros. Consider the following linear programs for $1 \le k \le p$:
$$LP_k:\quad \max\ y^t 1_m \quad \mathrm{s.t.}\ \ y^t A \le 1_n^t,\ \ y^t A^k \le 0,\ \ y \ge 0_m,$$
where $A^k$ denotes the $k$-th column of $A$. These correspond to the subgames $Game(1_m, A_{M,N-\{k\}}, \max)$; let $y^k$ be their optimal solutions. By the total balancedness, the values $1_m^t y^k$ are integers which are at least $1_m^t y^* - 1$ for all $k$ (since the vector $y \in \{0,1\}^m$ with $y_1 = 0$ and $y_i = y^*_i$ for all $i \ne 1$ is a feasible solution to $LP_k$).
We now claim that $1_m^t y^k = 1_m^t y^* - 1$ holds for at least one $k$. Assume not; then $1_m^t y^k = 1_m^t y^*$ for $k = 1, 2, \ldots, p$. Let $y' = \frac{1}{p}[(1, 0, \ldots, 0)^t + \sum_{k=1}^p y^k]$. Then, for $p+1 \le j \le n$,
$$(y')^t A^j = \frac{1}{p}\Big((1, 0, \ldots, 0) A^j + \sum_{k=1}^p (y^k)^t A^j\Big) \le 1.$$
For $1 \le j \le p$, we have $(y^j)^t A^j \le 0$ and $(y^k)^t A^j \le 1$ for all $k \in \{1, 2, \ldots, p\} - \{j\}$. Since $(1, 0, \ldots, 0) A^j = 1$, this implies $(y')^t A^j \le 1$ for all $j = 1, 2, \ldots, n$. Therefore, $y'$ is a feasible solution to $LP(1_m, A, \max)$, but $1_m^t y' = 1_m^t y^* + \frac{1}{p}$, a contradiction to the optimality of $y^*$.
Based on this claim, we prove by induction on the number $n$ of columns of $A$ that $LP(1_n, A, \min)$ also has an integer optimal solution (which proves by Theorem 2 that $Game(1_n, A, \min)$ has a nonempty core). For the base case $n = 1$, the matrix $A$ must be a vector of all ones, since otherwise $LP(1_m, A, \max)$ is unbounded. Then $x_1 = 1$ is the optimal solution for $LP(1_1, A, \min)$, which is an integer solution. For general $n$, let $x^*$ be an optimal solution of $LP(1_n, A, \min)$. By the above claim, we may assume without loss of generality that $1_m^t y^1 = 1_m^t y^* - 1$. Let $S = N - \{1\}$ and $T = \{i \in M \mid A_{i1} = 0\}$. It is easy to see that the linear program $LP_1$ and its dual $DLP_1$ can be written as follows:
$$LP_1:\quad \max\ y_T^t 1_{|T|} \quad \mathrm{s.t.}\ \ y_T^t A_{T,S} \le 1_{n-1}^t,\ \ y_T \ge 0_{|T|},$$
$$DLP_1:\quad \min\ 1_{|S|}^t x_S \quad \mathrm{s.t.}\ \ A_{T,S}\, x_S \ge 1_{|T|},\ \ x_S \ge 0_{|S|}.$$
Since $Game(1_m, A, \max)$ is totally balanced, $Game(1_{|T|}, A_{T,S}, \max)$ is totally balanced and has a nonempty core. Thus, by Theorem 1, we have an integer optimal solution $y_T'$ to $LP_1$. By the induction
hypothesis, we also have an integer optimal solution $x_S'$ to $DLP_1$. Define $w \in \{0,1\}^n$ by $w_1 = 1$ and $w_j = (x_S')_j$ for $j \in S$. Then $A_{T,N}\, w = A_{T,S}\, x_S' \ge 1_{|T|}$ and $A_{M-T,N}\, w = 1_{|M-T|} + A_{M-T,S}\, x_S' \ge 1_{|M-T|}$. Therefore $w$ is a feasible integer solution to $LP(1_n, A, \min)$, and furthermore,
$$1_n^t w = 1 + 1_{|S|}^t x_S' = 1 + 1_{|T|}^t y_T' = 1 + 1_m^t y^1 = 1_m^t y^* \ (= \mbox{the optimum value of } LP(1_m, A, \max))$$
implies that it is an optimal solution to $LP(1_n, A, \min)$ by the duality theory of linear programming. By Theorem 2, therefore, the core of $Game(1_n, A, \min)$ is nonempty. □

We remark in passing that a weaker condition, such as only the nonemptiness of the core of $Game(1_m, A, \max)$, would not give the same result: for the following matrix $A$, $Game(1_6, A, \max)$ has a nonempty core but the core of $Game(1_4, A, \min)$ is empty.
$$A = \begin{pmatrix} 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \end{pmatrix}$$
To see this, first note that $y = (1, 0, 0, 0, 0, 1)^t$ and $x = (1/2, 1/2, 1/2, 1/2)^t$ are optimal solutions to $LP(1_6, A, \max)$ and $LP(1_4, A, \min)$, respectively, because they satisfy the complementary slackness conditions of linear programming. Since $y$ is integer, $Game(1_6, A, \max)$ has a nonempty core by Theorem 1. However, $LP(1_4, A, \min)$ cannot have an integer optimal solution $x$ whose entries consist of two ones and two zeros, because, for any choice of two entries, the corresponding entries in some row of $A$ are both zeros. Then, by Theorem 2, the core of $Game(1_4, A, \min)$ is empty. In addition, a stronger result, that $Game(1_n, A, \min)$ is totally balanced, would not hold either: the following matrix $A$ gives a totally balanced game $Game(1_6, A, \max)$, but $Game(1_3, A, \min)$ is not totally balanced.
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$$
This can be seen as follows. First note that $A$ contains the $3 \times 3$ identity matrix. For any choice of a subset $S \subseteq N = \{1, 2, 3\}$, it is easy to see that the vector $y$ with $y_i = 1$ for $i \in S$ and $y_i = 0$ for the other $i$ is an integer optimal solution to the linear programming relaxation of the subgame of $Game(1_6, A, \max)$ induced by $S$. Hence, by Theorem 1, $Game(1_6, A, \max)$ is totally balanced. However, for $T = \{4, 5, 6\} \subseteq M = \{1, \ldots, 6\}$, $LP(1_3, A_{T,N}, \min)$ has optimal value 3/2, and hence no integer optimal solution. Surprisingly, for the opposite direction, a stronger result holds, as will be stated in Theorem 4. First we prove the next lemma, which is similar to Theorem 3 but requires a somewhat different proof.
Lemma 4 If $Game(1_n, A, \min)$ is totally balanced, then the core of $Game(1_m, A, \max)$ is nonempty.
Proof: To prove that the core of $Game(1_m, A, \max)$ is nonempty, we only have to show, by Theorem 1, that $LP(1_m, A, \max)$ has an integer optimal solution. We prove this by induction on the number $m$ of rows of $A$. The base case $m = 1$ is trivial, since there is only one variable $y_1$ in $LP(1_m, A, \max)$ and $y_1 = 1$ is an integer optimal solution to the problem. We then prove the general case of $m > 1$ on the premise that the statement is true for $m - 1$. Let $x^*$ be any integer optimal solution of $LP(1_n, A, \min)$. Denote by $A^{\bar i}$ the submatrix of $A$ obtained by excluding the $i$-th row $A_i$, and let $x^i$ be an optimal solution of $LP(1_n, A^{\bar i}, \min)$, where $m \ge 2$. Clearly, for each $i = 1, 2, \ldots, m$, $1_n^t x^* - 1 \le 1_n^t x^i \le 1_n^t x^*$ and $A_i x^* \ge 1$ hold. We now show that there is an integer optimal solution to $LP(1_m, A, \max)$, which completes the proof by Theorem 1, by considering the following three cases.

Case 1: $1_n^t x^i = 1_n^t x^*$ for some $i \in \{1, 2, \ldots, m\}$. This implies that $LP(1_n, A^{\bar i}, \min)$ and $LP(1_n, A, \min)$ have the same optimum value. Since $Game(1_n, A, \min)$ is totally balanced, $Game(1_n, A^{\bar i}, \min)$ is also totally balanced. By the inductive hypothesis, $LP(1_{m-1}, A^{\bar i}, \max)$ has an integer optimal solution $\hat y : (M - \{i\}) \to \{0, 1\}$, which can be extended to a feasible solution $\hat y : M \to \{0, 1\}$ of $LP(1_m, A, \max)$ by assigning $\hat y_i = 0$ for this $i$. Since $LP(1_n, A^{\bar i}, \min)$ has the same objective value as $LP(1_n, A, \min)$, this $\hat y \in \{0, 1\}^m$ is an integer optimal solution of $LP(1_m, A, \max)$.

Case 2: $A_i x^* > 1$ holds for some $i \in \{1, 2, \ldots, m\}$. We show that $x^*$ is also an integer optimal solution to $LP(1_n, A^{\bar i}, \min)$ for such $i$. Let $y^*$ be an optimal solution to $LP(1_m, A, \max)$, and let $y^{\bar i}$ be obtained from $y^*$ by removing its $i$-th component $y^*_i$. Clearly, $x^*$ (resp., $y^{\bar i}$) is feasible to $LP(1_n, A^{\bar i}, \min)$ (resp., its dual, $LP(1_{m-1}, A^{\bar i}, \max)$). Since $A_i x^* > 1$ implies $y^*_i = 0$ by the complementary slackness of linear programming, it follows that $1_{m-1}^t y^{\bar i} = 1_m^t y^* = 1_n^t x^*$. Thus, $x^*$ and $y^{\bar i}$ are optimal solutions to $LP(1_n, A^{\bar i}, \min)$ and its dual, respectively, and $LP(1_n, A^{\bar i}, \min)$ and $LP(1_n, A, \min)$ have the same optimum value. Then, we can apply the same argument as in Case 1 to show that $LP(1_m, A, \max)$ has an integer optimal solution.

Case 3: For every integer optimal solution $x^*$ to $LP(1_n, A, \min)$, $1_n^t x^i = 1_n^t x^* - 1$ and $A_i x^* = 1$ hold for all $i = 1, 2, \ldots, m$. Now let $x^*$ be an integer optimal solution to $LP(1_n, A, \min)$. Renaming the indices if necessary, we may assume that $x^*_1 = x^*_2 = \cdots = x^*_p = 1$ and $x^*_{p+1} = \cdots = x^*_n = 0$. We will show below that $p = m$ and the submatrix $A_{M,\{1,\ldots,m\}}$ is the identity matrix. Let $I(j) = \{i \mid A_{ij} = 1\}$ for $j = 1, 2, \ldots, n$. Then $I(j) \ne \emptyset$ for all $j = 1, \ldots, p$, since $x^*$ is an optimal solution of $LP(1_n, A, \min)$. Without loss of generality, we can permute the rows of $A$ so that $A_{11} = 1$. We claim the following properties.
1. $\{I(1), I(2), \ldots, I(p)\}$ is a partition of the set $M$.
2. For all $j$ with $2 \le j \le p$, we have $A_{1j} = 0$.
3. For all $i$ with $2 \le i \le m$, we have $A_{i1} = 0$.
From the assumption of Case 3, $A x^* = 1$ holds, which means that $\{I(1), I(2), \ldots, I(p)\}$ is a partition of the set $M$, i.e., property 1. Clearly, property 1 and $A_{11} = 1$ imply property 2. To show property 3, we extend the optimal solution $x^1$ of $LP(1_n, A^{\bar 1}, \min)$ to an integer solution $x^{1o}$ of $LP(1_n, A, \min)$ by assigning $x^{1o}_1 = 1$ and $x^{1o}_j = x^1_j$, $2 \le j \le n$. Clearly, $x^{1o}$ is feasible to $LP(1_n, A, \min)$, and $1_n^t x^{1o} \le 1_n^t x^1 + 1 = 1_n^t x^*$ holds by the assumption $1_n^t x^1 = 1_n^t x^* - 1$ of Case 3. Then, by the optimality of $x^*$, we see that $1_n^t x^{1o} = 1_n^t x^1 + 1 = 1_n^t x^*$ and $x^1_1 = 0$ must hold.
Therefore, $x^{1o}$ is an integer optimal solution of $LP(1_n, A, \min)$, and we can assume that $A x^{1o} = 1$ holds (otherwise, Case 2 can be applied to this $x^{1o}$), from which $\{I(j) \mid x^{1o}_j = 1\}$ is a partition of $M$. By the feasibility of $x^1$, we have $\bigcup_{x^1_j = 1} I(j) = M - \{1\}$, and $\bigcup_{x^1_j = 1} I(j)$ is also a partition of $M - \{1\}$ (since $\{I(j) \mid x^{1o}_j = 1\}$ is a partition of $M$). Therefore, $x^1_1 = 0$ implies that $I(1) - \{1\} = \emptyset$ (otherwise, for an $i \in I(1) - \{1\}$, $A_i x^1 = 0$ would result). This proves property 3.
By applying this argument to the other indices $j$ with $2 \le j \le p$, we see that the $j$-th column $A^j$ contains exactly one nonzero entry for each $j = 1, \ldots, p$. From this and property 1, we have $p = m$, and hence $A_{M,\{1,\ldots,m\}}$ is the identity matrix. This property and the fact that $x^*$ is an optimal solution of $LP(1_n, A, \min)$ imply that every column $A^j$ with $m < j \le n$ also contains at most one nonzero entry. Therefore, the vector $y = 1_m$ is feasible, and hence optimal, for $LP(1_m, A, \max)$. □

The condition that $Game(1_n, A, \min)$ is totally balanced cannot be relaxed to the nonemptiness of the core of $Game(1_n, A, \min)$, as shown by the following example.
$$A = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \end{pmatrix}$$
To see this, first note that $y = (1/2, 1/2, 1/2, 1/2)^t$ and $x = (1, 0, 0, 0, 0, 1)^t$ are optimal solutions to $LP(1_4, A, \max)$ and $LP(1_6, A, \min)$, respectively, because they satisfy the complementary slackness conditions of linear programming. Since $x$ is integer, $Game(1_6, A, \min)$ has a nonempty core by Theorem 2. However, $LP(1_4, A, \max)$ cannot have an integer optimal solution $y$ whose entries consist of two ones and two zeros, because, for any choice of two entries, there is a column of $A$ in which the corresponding two entries are both ones. Then, by Theorem 1, the core of $Game(1_4, A, \max)$ is empty. However, we can make the conclusion of Lemma 4 stronger.

Theorem 4 If $Game(1_n, A, \min)$ is totally balanced, then $Game(1_m, A, \max)$ is also totally balanced.
Proof: To show that $Game(1_m, A, \max)$ is totally balanced, it is enough to show that the following linear programs have integer optimal solutions for all $S \subseteq N$:
$$\max\ y^t 1_m \quad \mathrm{s.t.}\ \ y^t A_{M,S} \le 1_{|S|}^t,\ \ y^t A_{M,\bar S} \le 0_{n-|S|}^t,\ \ y \ge 0_m,$$
where $\bar S = N - S$. Let $T = \{i \in M \mid A_{ij} = 0 \mbox{ for all } j \in \bar S\}$ and $\bar T = M - T = \{i \in M \mid A_{ij} = 1 \mbox{ for some } j \in \bar S\}$. For this, consider the following linear program:
$$LP(1_{|T|}, A_{T,S}, \max):\quad \max\ y_T^t 1_{|T|} \quad \mathrm{s.t.}\ \ y_T^t A_{T,S} \le 1_{|S|}^t,\ \ y_T \ge 0_{|T|}.$$
Given any optimal solution $y_T \in \{0,1\}^{|T|}$ to this linear program, the vector $y$ defined by $y_i = 0$ for all $i \in \bar T$ and $y_i = (y_T)_i$ for $i \in T$ is an integer optimal solution to the linear program displayed above. Therefore, it is sufficient to show that $y_T$ can be chosen to be an integer optimal solution. For this, consider its dual $DLP(1_{|T|}, A_{T,S}, \max)$, which is described as follows:
$$LP(1_{|S|}, A_{T,S}, \min):\quad \min\ 1_{|S|}^t x_S \quad \mathrm{s.t.}\ \ A_{T,S}\, x_S \ge 1_{|T|},\ \ x_S \ge 0_{|S|}.$$
This can be rewritten as follows, since $A_{T,\bar S}$ has all zero entries:
$$LP(1_n, A_{T,N}, \min):\quad \min\ 1_n^t x \quad \mathrm{s.t.}\ \ A_{T,S}\, x_S + A_{T,\bar S}\, x_{\bar S} \ge 1_{|T|},\ \ x \ge 0_n.$$
Since $Game(1_n, A, \min)$ is totally balanced, so is the subgame $Game(1_n, A_{T,N}, \min)$. By Lemma 4 and Theorem 1, this implies that the following linear program has an integer optimal solution:
$$LP(1_{|T|}, A_{T,N}, \max):\quad \max\ y_T^t 1_{|T|} \quad \mathrm{s.t.}\ \ y_T^t A_{T,N} \le 1_n^t,\ \ y_T \ge 0_{|T|}.$$
Obviously, any feasible solution $y_T$ to $LP(1_{|T|}, A_{T,S}, \max)$ is feasible to $LP(1_{|T|}, A_{T,N}, \max)$, since $y_T^t A_{T,\bar S} \le 1_{|\bar S|}^t$ is automatically satisfied by the fact that $A_{T,\bar S}$ has all zero entries. This proves that $LP(1_{|T|}, A_{T,S}, \max)$ has an integer optimal solution. □
3.3 Some Integrality Conditions
In the history of combinatorial optimization, characterizing the matrices $A$ which guarantee that the associated linear programs have integer optimal solutions has been one of the major theoretical goals. We briefly review some of these concepts and discuss how they are related to our characterization of combinatorial optimization games. Given an $m \times n$ matrix $A$ and an $n$-vector $b$ of reals, define
$$P(A, b) = \{y \in R^m \mid y^t A \le b,\ y \ge 0\},\qquad P_I(A, b) = \mathrm{conv}\{y \in P(A, b) \mid y \in Z^m\}.$$
In other words, $P(A, b)$ is the polyhedron defined by the linear inequalities $y^t A \le b$ and $y \ge 0$, and $P_I(A, b)$ is the convex hull of all integer solutions in the polyhedron, which is called the integer polyhedron of $P(A, b)$. $P(A, b)$ is said to be integral if $P(A, b) = P_I(A, b)$ holds; i.e., all extreme points of $P(A, b)$ are integral. The following three definitions for a matrix $A$ are relevant to our discussion.
(i) $A$ is totally unimodular if $P(A, b)$ is integral for all $b \in Z^n$.
(ii) $A$ is balanced if $P(A', 1')$ is integral for every submatrix $A'$ of $A$, where $1'$ is the vector of ones of the conformable dimension.
(iii) $A$ is perfect if $P(A, 1_n)$ is integral.
From these definitions, we see that a totally unimodular matrix $A$ is balanced, and a balanced $A$ is perfect. There are various interesting characterizations of these matrices, as described in textbooks [39, 35] and the relevant references. Comparing the above definitions with that of $LP(c, A, \max)$ of our games and Theorem 1, it is now evident that $Game(c, A, \max)$ has a nonempty core if $A$ is perfect, and that it is totally balanced if $A$ is balanced. However, we note that these are only sufficient conditions, since Theorem 1 requires the integrality only for a particular $c$, which is $c = 1_m$ in most of our examples of games, whereas perfectness (or balancedness) guarantees the integrality for all $c$. It is not difficult to construct a matrix $A$ which is not perfect but satisfies the integrality condition for a particular $LP(c, A, \max)$. For example, the matrix $A$ of the maximum flow game, discussed in Section 4.1, is not in general totally unimodular, but it is shown therein that $LP(1_{|\mathcal{P}|}, A, \max)$ satisfies the integrality condition, where $\mathcal{P}$ denotes the set of all s-t paths.
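As a small companion to definition (i) (a sketch of ours), the following brute-force check verifies that the $4 \times 6$ matrix used as a counterexample in Section 3.2 is not totally unimodular, even though, as noted there, $LP(1_6, A, \min)$ still has an integer optimal solution. The enumeration of all square submatrices is of course only feasible for very small matrices.

```python
# Sketch (ours): brute-force total unimodularity check for a small 0-1 matrix,
# directly from the definition (every square submatrix has determinant 0, +1 or -1).
from itertools import combinations
import numpy as np

def is_totally_unimodular(A):
    A = np.asarray(A, float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if round(np.linalg.det(A[np.ix_(rows, cols)])) not in (-1, 0, 1):
                    return False
    return True

# The 4 x 6 counterexample matrix of Section 3.2: not totally unimodular,
# yet LP(1_6, A, min) has the integer optimal solution (1,0,0,0,0,1).
A = [[0, 0, 0, 1, 1, 1],
     [0, 1, 1, 0, 0, 1],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0]]
print(is_totally_unimodular(A))   # False
```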
4 A Selection of Examples

There are many interesting optimization games on graphs which can be formulated as the packing and covering games of Section 2. We will focus on the following games.
1. The maximum flow game on unit networks, the s-t edge connectivity game on undirected graphs, the s-t vertex connectivity game on undirected graphs, and the maximum matching game on bipartite graphs.
2. The maximum r-arborescence game.
3. The maximum matching game and the minimum vertex cover game.
4. The maximum independent set game and the minimum edge cover game.
5. The minimum coloring game.
Kalai and Zemel [25] gave a polynomial time algorithm for the core of the maximum flow game on directed graphs. We can immediately extend their results to the s-t edge and vertex connectivity games on undirected graphs and to the maximum matching game on bipartite graphs, by using our more powerful formulation. The game in the second category falls into the same group as the first category because of the integrality of the corresponding linear programs. For the pairs of games in the third and fourth categories, there are polynomial time algorithms for the integer programs associated with one game of each pair but not the other. However, we will see that the condition for the nonemptiness of the core is polynomially checkable for both games in each pair. Finally, we show that the problems associated with the core of the minimum coloring game in the fifth category are intractable. The following table summarizes the results for these games.

Table 1. Summary of the results for optimization games on graphs.
Game                      | Core         | Convex           | Testing      | Checking if an | Finding an
                          | nonemptiness | characterization | nonemptiness | imputation is  | imputation
                          |              | of the core      | of the core  | in the core    | in the core
--------------------------+--------------+------------------+--------------+----------------+------------
Max flow (G, D)           | yes          | yes              | --           | P              | P
s-t connectivity (G, D)   | yes          | yes              | --           | P              | P
r-arborescence (D)        | yes          | yes              | --           | P              | P
Max matching (G)          | no           | no               | P            | P              | P
Min vertex cover (G)      | no           | yes              | P            | P              | P
Min edge cover (G)        | no           | no               | P            | P              | P
Max indep. set (G)        | no           | yes              | P            | P              | P
Min coloring (G)          | no           | no               | NPC          | NPC            | NPH
Edge-connectivity (G)     | yes          | no               | --           | coNPC          | P

D: digraphs, G: undirected graphs. P: polynomial time, NPC: NP-complete, coNPC: coNP-complete, NPH: NP-hard, --: trivial.
4.1 The Maximum Flow Game and Its Variants
Consider the maximum flow game on a unit directed network $D = (V, E)$ with source $s \in V$ and sink $t \in V$, which is also denoted simply by $D = (V, E, s, t)$. A unit network (i.e., one with arc capacity one) is also a simple directed graph, or digraph. Without loss of generality, we assume in this section that there is at least one path from $s$ to $t$ (called an s-t path, for short) in $D$. The value of a maximum flow in the case of a digraph $D$ is the number of arc-disjoint s-t paths in $D$. For this reason, it is also called the s-t arc-connectivity of $D$. A partition of $V$, $C = (U, V - U)$, is called a cut in $D$, and represents the set of arcs $\{(i, j) \in E \mid i \in U,\ j \in V - U\}$. A cut is an s-t cut if $s \in U$ and $t \in V - U$. A minimum s-t cut is an s-t cut with the minimum cardinality as an arc set. The following property is well known as Menger's theorem (which is a special case of a more general result, the max-flow min-cut theorem for capacitated networks) [17].

Lemma 5 Given a digraph $D = (V, E, s, t)$, the s-t arc-connectivity of $D$ (i.e., the value of a maximum flow) is equal to the size of a minimum s-t cut in $D$. □
In the s-t arc-connectivity game on a digraph $D = (V, E, s, t)$, each player controls an arc, and the value $v(S)$ of a subset $S \subseteq E$ is defined to be the maximum flow from $s$ to $t$ (i.e., the s-t arc-connectivity) in the subnetwork $D[S] = (V, S)$. To represent this game as a packing game $Game(c, A, \max)$ of Section 2, let $\mathcal{P}$ be the set of s-t paths in $D$, and let $A = A_{\mathcal{P},E}$ be the path-arc incidence matrix, where the rows of $A$ correspond to the s-t paths and the columns correspond to the arcs; $A_{ij} = 1$ if and only if arc $j$ is in the s-t path $i$. Then $(E, v)$ is given by $Game(1_{|\mathcal{P}|}, A, \max)$. In this case, the linear program $LP(1_{|\mathcal{P}|}, A, \max)$ has an integer optimal solution due to Lemma 5 (even though the matrix $A$ is not in general totally unimodular). By Lemma 2, an imputation $z$ is in the core if and only if $z$ is an optimal solution of the dual linear program. However, this does not immediately provide us with polynomial time algorithms, because there are an exponential number (in the graph size) of constraints in the dual linear program. From this point of view, it is important to know that, if testing feasibility of an imputation or finding a violated constraint can be done in polynomial time, then a solution in the core can be found in polynomial time by the ellipsoid method [21], even if there are exponentially many constraints. In the case of the maximum flow game, an imputation $z$ is feasible if and only if there is no path on which the sum of $z$ is less than one; in other words, if and only if the sum of $z$ on a shortest path (with respect to the arc lengths $z$) is not less than one. Since such a shortest path can obviously be obtained in polynomial time, there are polynomial time algorithms to check and to generate an imputation in the core. However, we point out here that the same result also follows from Menger's theorem above. Take a minimum s-t cut $C$ in $D$, and let $I_C$ be its characteristic vector: $I_C(e) := 1$ if $e \in C$ and 0 otherwise. Let $z := I_C$. Then $z$ is an imputation, since $z(E) = |C| = v(E)$ by Lemma 5. Furthermore, for any $S \subseteq E$, we have $z(S) = |C \cap S| \ge v(S)$ by Lemma 5 and the fact that $C \cap S$ is an s-t cut in $D[S]$. Therefore, this $I_C$ is indeed in the core, and the core of the maximum flow game is nonempty. This establishes the next lemma, which was first observed in [25].

Lemma 6 The core of the maximum flow game is always nonempty. □
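The construction behind Lemma 6 is easy to carry out with standard max-flow code. The sketch below (ours, assuming NetworkX is available) computes a minimum s-t cut of a unit digraph and returns its characteristic vector $I_C$ as a core imputation; the digraph at the end is a made-up toy instance.

```python
# Sketch (ours): a core imputation for the maximum flow game on a unit digraph,
# obtained as the characteristic vector of a minimum s-t cut (Lemma 6).
import networkx as nx

def core_member_max_flow_game(arcs, s, t):
    D = nx.DiGraph()
    D.add_edges_from(arcs, capacity=1)               # unit network
    cut_value, (U, W) = nx.minimum_cut(D, s, t)      # min s-t cut (U contains s)
    cut_arcs = {(u, w) for u, w in arcs if u in U and w in W}
    # z = I_C: each arc of the cut gets 1, every other arc gets 0
    return {e: (1 if e in cut_arcs else 0) for e in arcs}, cut_value

arcs = [('s', 'a'), ('s', 'b'), ('a', 't'), ('b', 't'), ('a', 'b')]
z, v_N = core_member_max_flow_game(arcs, 's', 't')
print(v_N, z)    # v(N) = 2; e.g. a cut such as {('s','a'), ('s','b')} carries all the value
```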
At this point, to make the following argument more general, we introduce the concept of dummy players. A player $j$ in a game is a dummy if its imputation value is constrained to be $z(j) = 0$. We say that an imputation $z$ is in the core of a game with a set $\hat N \subseteq N$ of dummy players if it is in the core of the original game and $z(j) = 0$ holds for all $j \in \hat N$. For the maximum flow game (and some other games later), we introduce a set $\hat E \subseteq E$ of dummy players, and call $\hat E$ valid if $E - \hat E$ contains at least one minimum s-t cut $C$. For a given $\hat E$, we can test its validity by finding a minimum s-t cut in the auxiliary graph obtained by changing the capacities of all arcs in $\hat E$ to $\infty$. In other words, $\hat E$ is valid if and only if the maximum flow in the auxiliary graph is the same as that of the original graph.
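This validity test is straightforward to implement; the sketch below (ours, assuming NetworkX) replaces the infinite capacities by a finite value that no flow in a unit network can exceed.

```python
# Sketch (ours): testing whether a dummy arc set E_hat is valid, by comparing the
# maximum flow of the original unit network with that of the auxiliary network in
# which every arc of E_hat gets an effectively infinite capacity (len(arcs) + 1).
import networkx as nx

def is_valid_dummy_set(arcs, E_hat, s, t):
    big = len(arcs) + 1
    D, aux = nx.DiGraph(), nx.DiGraph()
    for u, w in arcs:
        D.add_edge(u, w, capacity=1)
        aux.add_edge(u, w, capacity=big if (u, w) in E_hat else 1)
    return nx.maximum_flow_value(D, s, t) == nx.maximum_flow_value(aux, s, t)

arcs = [('s', 'a'), ('s', 'b'), ('a', 't'), ('b', 't')]
print(is_valid_dummy_set(arcs, {('s', 'a')}, 's', 't'))              # True
print(is_valid_dummy_set(arcs, {('s', 'a'), ('a', 't')}, 's', 't'))  # False
```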
We now present the following lemma, which is a slight generalization of Lemma 6 in the sense that a valid set $\hat E$ of dummy players is introduced.

Lemma 7 For a digraph $D = (V, E, s, t)$ and a set $\hat E$ of dummy players, the maximum flow game has a nonempty core if and only if $\hat E$ is valid.

Proof: By slightly modifying the argument preceding Lemma 6, in order to take the valid arc set $\hat E$ into account, it is easy to see that the characteristic vectors of minimum cuts $C$ contained in $E - \hat E$ are in fact in the core. This proves the if-part. To show the only-if-part, we first prove the next claim.

Claim 1 Let $z : E \to R_+$ with $z(e) = 0$, $e \in \hat E$, be in the core. Then each arc $e$ with $z(e) > 0$ belongs to a minimum s-t cut $C \subseteq E - \hat E$ of $D$ for which $z(a) > 0$ holds for all $a \in C$. □

Let $E_z^+ = \{e \in E \mid z(e) > 0\}$. From this claim, we see that $E_z^+\ (\ne \emptyset)$ contains at least one minimum cut $C \subseteq E_z^+ \subseteq E - \hat E$, implying that $\hat E$ is valid. Let us now prove the claim. We duplicate all the arcs $a \in E$ with $z(a) = 0$, denote the new digraph by $D' = (V, E \cup E')$, where $E'$ denotes the set of duplicated arcs, and extend $z$ to $D'$ by assigning zeros to $E'$. We first show that this $z$ is in the core of the game of $D'$ with dummy players $\hat E$. If this is not true, then, by Lemma 2, there is an s-t path $P' \subseteq E \cup E'$ such that $z(P') < 1$. However, this contradicts the fact that $z$ is in the core of the game of $D$, since $z(P') = z(P) \ge 1$ must hold for the path $P \subseteq E$ obtained from $P'$ by replacing the duplicated arcs in $P'$ with their original arcs. We then show that each arc $e$ with $z(e) > 0$ must be in some minimum s-t cut $C$ of $D'$. Otherwise, we could remove such an $e$ without reducing the value of a minimum s-t cut of $D'$, from which we have $v(E \cup E') = v(E \cup E' - \{e\}) \le z(E \cup E' - \{e\}) < z(E \cup E')$, contradicting the fact that $z$ is an imputation. Furthermore, note that, since the game values (i.e., the values of a minimum s-t cut) of $D'$ and $D$ are the same by the first claim, the above $C$ is also a minimum s-t cut of $D$. Also, $z(a) > 0$ holds for all $a \in C$, because otherwise $C$ would contain at least one duplicated arc, which implies that $C$ is not a minimum s-t cut in $D'$. Therefore, each arc $e$ with $z(e) > 0$ belongs to a minimum s-t cut $C$ of $D$ for which $z(a) > 0$ holds for all $a \in C$. This proves Claim 1. □

As will be discussed at the end of this section, the generalization to dummy players is useful for transferring the results on the maximum flow game to other games, such as the s-t edge-connectivity and s-t vertex-connectivity games on undirected graphs and the maximum matching game on bipartite graphs. This is difficult with the approach of Kalai and Zemel [25], since their approach does not distinguish between an arc associated with a player and an arc not associated with a player. Now, a vector $z \in R^{|E|}$ is a convex combination of a family of cuts $C$ if $z = \sum_C \lambda_C I_C$ holds for some $\lambda_C$ such that $\sum_C \lambda_C = 1$ and $\lambda_C \ge 0$ for all $C$. If the family of cuts is finite, the set of such $z$ forms a convex polyhedron whose extreme points are precisely the characteristic vectors $I_C$. Kalai and Zemel [25] went one step further to show that an imputation $z$ is in the core if and only if it is a convex combination of the vectors $I_C$ of minimum s-t cuts $C$. This can be generalized to the next theorem.

Theorem 5 Let $z : E \to R_+$ be an imputation of the maximum flow game on a digraph $D = (V, E, s, t)$ with a valid set $\hat E$ of dummy players. Then $z$ is in the core with respect to $\hat E$ (i.e., $z(e) = 0$, $e \in \hat E$) if and only if it is a convex combination of the characteristic vectors of the minimum s-t cuts $C$ contained in $E - \hat E$.
Proof: As proved in Lemma 7, the characteristic vectors of minimum cuts $C \subseteq E - \hat E$ are in the core. This proves the if-part, since it is easy to see from the definition that the core (with $z(e) = 0$, $e \in \hat E$) is convex.
To prove the only-if-part, let $z$ be an imputation in the core with $z(e) = 0$ for all $e \in \hat E$, and let $E_z^+ = \{e \in E \mid z(e) > 0\}$. We assume that $z$ is not a convex combination as stated, and choose such a $z$ minimizing $|E_z^+|$. By Claim 1, for any arc $e$ with $z(e) > 0$, there is a minimum s-t cut $C = (U, V - U)$ such that $z(a) > 0$ holds for all arcs $a \in C$. Now let us choose such a minimum s-t cut $C = (U, V - U)$ with maximal cardinality $|U|$. We show that $z(e) = 0$ holds for all arcs $e = (u, u') \in E$ with $\{u, u'\} \subseteq V - U$. If there is such an arc $e = (u, u')$ with $z(e) > 0$, then by Claim 1, $D$ has a minimum cut $C' = (U', V - U')$ such that $(u, u') \in C'$ and $z(a) > 0$ for all $a \in C'$. Then $s \in U \cap U'$, $u \in U' - U$, $u' \in (V - U) \cap (V - U')$ and $U - U' \ne \emptyset$ (by the maximality of $|U|$) hold (i.e., the two cuts $C$ and $C'$ are crossing), and we see, from the submodularity of the cut function, that $C'' = (U \cup U', V - (U \cup U'))$ is also a minimum s-t cut, contradicting the maximality of $|U|$.
Let $\delta = \min\{z(a) \mid a \in C\}\ (> 0)$, let $I_C : E \to \{0,1\}$ be the characteristic vector of $C$, and define $y = z - \delta I_C$ (i.e., $y(e) = z(e) - \delta I_C(e)$ for all $e \in E$) and $z' = \frac{z(E)}{z(E) - \delta|C|}\, y$. Clearly, $z'(E) = z(E) = |C|$ (since $z$ is an imputation), and hence $z' = \frac{1}{1-\delta}\, y$ is an imputation. We then claim that this imputation $z'$ is also in the core. If not, by Lemma 2, there is a path $S \subseteq E$ such that $z'(S) < 1$. If $|S \cap C| > 1$, consider a directed path from $s$ to a vertex in $V - U$ consisting only of arcs in $S$. Let $e^*$ be the first arc of $C = (U, V - U)$ on this path, and let $h(e^*)$ be the head vertex of $e^*$, which is in $V - U$. There must exist an $h(e^*)$-$t$ path $P$ in the subgraph $D[V - U]$, since $C$ is a minimum cut (otherwise we would obtain a smaller s-t cut). We obtain a new arc set $T = (S - C) \cup (\{e^*\} \cup P)$. On the other hand, if $|S \cap C| = 1$, then let $T = S$. In either case we have $z'(T) \le z'(S)$, since $z'(a) = 0$ for all arcs $a$ in $D[V - U]$, as proved above. By $|T \cap C| = 1$ and $z(T) \ge 1$ (since $z$ is in the core), we have $y(T) = z(T) - \delta \ge 1 - \delta$. Therefore, $z'(T) = \frac{1}{1-\delta}\, y(T) \ge 1$. But this contradicts the fact that $z'(T) \le z'(S) < 1$, and so $z'$ must be in the core.
By the construction of $z'$, we have $E_{z'}^+ \subsetneq E_z^+$ (an arc $a \in C$ attaining the minimum $\delta$ has $z'(a) = 0$). Furthermore, by our choice of $z$, $z'$ can be expressed as a convex combination of the characteristic vectors of minimum s-t cuts. On the other hand, $z = \delta I_C + (1 - \delta) z'$ is then also a convex combination of minimum s-t cuts. This is a contradiction to the assumption on $z$. □

The following algorithmic results are immediate from the discussion so far.

Corollary 1 For a valid set $\hat E$ of dummy players, testing nonemptiness, checking membership and finding a core member of the maximum flow game can all be done in polynomial time. □

We emphasize at this point that the above results can be extended to other optimization games on graphs which are reducible to the maximum flow game on a directed network. As found in the literature, such as [12, 17], these problems include:
P1: the s-t edge-connectivity game on an undirected graph $G = (V, E, s, t)$, where the players are the edges and $v(S)$, $S \subseteq E$, is defined to be the value of a maximum flow from $s$ to $t$ in the induced network $G[S]$;
P2: the s-t vertex-connectivity game on a digraph $D = (V, E, s, t)$ (resp., an undirected graph $G = (V, E, s, t)$), where the players are the vertices other than $s$ and $t$, and $v(S)$, $S \subseteq V - \{s, t\}$, is defined to be the maximum number of arc- (resp., edge-) disjoint paths from $s$ to $t$ in the induced digraph $D[S]$ (resp., graph $G[S]$);
P3: the maximum matching game with vertex players on a bipartite graph $G = (V_1, V_2, E)$, where $v(S)$, $S \subseteq V_1 \cup V_2$, is defined to be the size of a maximum matching in the induced graph $G[S]$.
In applying the standard reduction techniques for network flow problems, however, we have to be careful about which arcs of the resulting digraph the players of the original game are assigned to. The arcs to which no player is assigned are treated as arcs with dummy players. For a digraph $D = (V, E, s, t)$ (or an undirected graph $G = (V, E, s, t)$), a subset $W \subseteq V - \{s, t\}$ is called an s-t vertex-cut if the graph induced by $V - W$ has no path from $s$ to $t$.
Corollary 2 For every game in the above classes P1, P2 and P3, the core is always nonempty, and an imputation z is in the core if and only if it is a convex combination of the characteristic vectors of minimum s-t cuts (for P1), minimum s-t vertex-cuts (for P2) and minimum vertex covers (for P3), respectively. Furthermore, testing nonemptiness, checking membership and finding a core member can all be done in polynomial time.
Proof: P1: We reduce the s-t edge-connectivity game on an undirected graph G = (V, E, s, t) to the maximum flow game on a digraph D = (V′, E′, s, t) with dummy players. We replace each edge e = (u, v) by five arcs (u, w_e), (v, w_e), (w_e, w′_e), (w′_e, u), (w′_e, v), introducing two new vertices w_e and w′_e. In the resulting digraph D = (V′, E′, s, t), we write e_D = (w_e, w′_e) ∈ E′ for an edge e ∈ E, and S_D = {e_D | e ∈ S} ⊆ E′ for a subset S ⊆ E. Let Ê = E′ − E_D. Then it is easy to see that the value v(S) of the s-t edge-connectivity game on G for a subset S ⊆ E is equal to the value v′(S_D ∪ Ê) of the maximum flow game for the subset S_D ∪ Ê ⊆ E′. In particular, v(E) = v′(E′). It is also easy to see that Ê = E′ − E_D is valid in the resulting digraph D. Therefore, an imputation z : E → R+ in the s-t edge-connectivity game on G is in the core if and only if z′ : E′ → R+ with z′(e_D) = z(e) for e ∈ E and z′(a) = 0 for a ∈ Ê is in the core of the maximum flow game on D with the set Ê of dummy players. Since Ê is valid, we can apply Lemma 7, Theorem 5 and Corollary 1 to this maximum flow game on D to obtain the statements for P1.
P2: Consider the s-t vertex-connectivity game on a digraph D = (V, E, s, t) (or an undirected graph G = (V, E, s, t)). In the case of an undirected graph G, we first replace each edge e by two oppositely oriented arcs between the end vertices of e; the value v(S) for any S ⊆ V − {s, t} remains unchanged. Hence we only need to consider the s-t vertex-connectivity game on a digraph D = (V, E, s, t). We replace each vertex u ∈ V − {s, t} with a new arc a_u = (u′, u″), splitting u into two new vertices u′ and u″, where all arcs (w, u) ∈ E now enter u′ while all arcs (u, w) ∈ E leave from u″. The resulting digraph is denoted D′ = (V′, E′, s, t). We write A_S = {a_u | u ∈ S} ⊆ E′ for a subset S ⊆ V − {s, t}. Let Ê = E′ − A_{V−{s,t}}. Then it is easy to see that the value v(S) of a subset S ⊆ V − {s, t} in the s-t vertex-connectivity game on D is equal to the value v′(A_S ∪ Ê) of the maximum flow game on D′ for the subset A_S ∪ Ê ⊆ E′. In particular, v(V − {s, t}) = v′(E′), and it is easy to see that Ê = E′ − A_{V−{s,t}} is valid in the resulting digraph D′. Therefore, an imputation z : V − {s, t} → R+ in the s-t vertex-connectivity game on D is in the core if and only if z′ : E′ → R+ with z′(a_u) = z(u) for u ∈ V − {s, t} and z′(a) = 0 for a ∈ Ê is in the core of the maximum flow game on D′ with the set Ê of dummy players. Since Ê is valid, we can apply Lemma 7, Theorem 5 and Corollary 1 to this maximum flow game on D′ to obtain the statements for P2.
P3: For an undirected bipartite graph G = (V1, V2, E), let D = (V = V1 ∪ V2 ∪ {s, t}, E′, s, t) be the digraph obtained from G by regarding each edge (u, w) ∈ E as an arc (u, w) oriented from u to w, and by adding two new vertices s and t and new arcs (s, u), u ∈ V1, and (w, t), w ∈ V2. Then the value v(S) of the maximum matching game on G for a subset S ⊆ V1 ∪ V2 is equal to the value v′(S) of the s-t vertex-connectivity game on the resulting digraph D for the same subset S ⊆ V1 ∪ V2 = V − {s, t}. Therefore, the results for P2 can be applied directly to this game. □
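To make the reduction in the proof above concrete, here is a minimal sketch (our own helper names, not from the paper) of the vertex-splitting construction for P2 together with the membership test of Lemma 2, assuming the networkx library is available: the game value v(V − {s, t}) is a maximum flow value in the split network, and an imputation lies in the core exactly when its total equals that value and every s-t path carries imputation weight at least one, which reduces to a weighted shortest-path computation.

import networkx as nx

def vertex_connectivity_game_network(D, s, t):
    """Split each vertex u != s,t of digraph D into an arc (u_in, u_out) of capacity 1.
    Player u of the original game sits on this arc; all other arcs are dummy arcs."""
    H = nx.DiGraph()
    def tail(u): return u if u in (s, t) else (u, 'out')
    def head(u): return u if u in (s, t) else (u, 'in')
    for u in D.nodes:
        if u not in (s, t):
            H.add_edge((u, 'in'), (u, 'out'), capacity=1.0)   # the player arc a_u
    for (u, w) in D.edges:
        H.add_edge(tail(u), head(w), capacity=float('inf'))    # dummy arcs
    return H

def in_core_vertex_connectivity(D, s, t, z):
    """Test whether the imputation z (a dict on V - {s,t}) is in the core,
    following Lemma 2: z must sum to v(E) and every s-t path must have z-weight >= 1."""
    H = vertex_connectivity_game_network(D, s, t)
    v_all = nx.maximum_flow_value(H, s, t)
    if abs(sum(z.values()) - v_all) > 1e-9:
        return False
    # place the imputation on the player arcs, 0 on dummy arcs, and check the
    # lightest s-t path; all arc weights are nonnegative, so Dijkstra applies
    for (a, b) in H.edges:
        H[a][b]['w'] = z[a[0]] if (isinstance(a, tuple) and a[1] == 'in') else 0.0
    return nx.shortest_path_length(H, s, t, weight='w') >= 1 - 1e-9

# tiny example: two internally vertex-disjoint s-t paths through vertices 1 and 2
D = nx.DiGraph([('s', 1), (1, 't'), ('s', 2), (2, 't')])
print(in_core_vertex_connectivity(D, 's', 't', {1: 1.0, 2: 1.0}))  # True
print(in_core_vertex_connectivity(D, 's', 't', {1: 2.0, 2: 0.0}))  # False

The constructions for P1 and P3 are analogous; only the way the player arcs are introduced (the five-arc edge gadget for P1, the s/t-augmented bipartite digraph for P3) changes.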
Remark 1 Corresponding to the maximum flow game Game(1_{|P|}, A, max) on a digraph D = (V, E, s, t), one may define the covering game Game(1_{|E|}, A, min) on the same D, where A = A_{P,E} is the arc-path incidence matrix described after Lemma 5. This game can be interpreted as the minimum cut game, since its value v(P) is the size of a minimum cut separating s and t in D. This game has players on all s-t paths in D, which may be exponentially many in |V| and |E| (suggesting that such a game may not be interesting from the computational complexity viewpoint). Recall that, for any subset S ⊆ E, the linear program

  max  y^T 1_{|P|}
  s.t. y^T A_{P,S} ≤ 1^T_{|S|},  y^T A_{P,E−S} ≤ 0^T_{|E−S|},  y ≥ 0_{|P|}

naturally represents the maximum flow problem in the induced digraph D[S] = (V, S, s, t), and hence it has an integer optimal solution. However, this is not the case for the minimum cut game. Although LP(1_{|E|}, A, min) has an integer optimal solution by Lemma 5, the corresponding linear programs

  min  1^T_{|E|} x
  s.t. A_{T,E} x ≥ 1_{|T|},  x ≥ 0_{|E|}

for subsets T ⊆ P may not enjoy such an integrality property. For example, consider a digraph D containing three s-t paths P_i, i = 1, 2, 3, such that P_i ∩ P_j = {e_ij}, 1 ≤ i < j ≤ 3, for some arcs {e_12, e_23, e_13} ⊆ E. For T = {P_1, P_2, P_3} ⊆ P, LP(1_{|E|}, A_{T,E}, min) has an optimal solution x : E → R+ such that x(e_12) = x(e_23) = x(e_13) = 0.5 and x(e) = 0 for the other arcs (i.e., v(T) = 1.5), and hence there is no integer optimal solution. For these reasons, we do not investigate further details of the minimum cut game in this paper.
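The fractional optimum in this example is easy to verify numerically. The following short sketch (assuming the scipy library; the arc and path names are ours) sets up the covering LP for T = {P1, P2, P3} and recovers the optimal value 1.5, whereas every integral solution must pick at least two arcs.

from scipy.optimize import linprog

arcs = ['e12', 'e13', 'e23', 'p1', 'p2', 'p3']       # p_i are arcs private to path P_i
paths = [['e12', 'e13', 'p1'],                        # P1
         ['e12', 'e23', 'p2'],                        # P2
         ['e13', 'e23', 'p3']]                        # P3
# linprog minimizes c^T x subject to A_ub x <= b_ub, so encode x(P_i) >= 1 as -x(P_i) <= -1
c = [1.0] * len(arcs)
A_ub = [[-1.0 if a in P else 0.0 for a in arcs] for P in paths]
b_ub = [-1.0] * len(paths)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(arcs))
print(res.fun)   # 1.5  (x(e12) = x(e13) = x(e23) = 0.5 is optimal)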
4.2 The Arborescence Game
The maximum r-arborescence game and the minimum r-cut game are played on a digraph D = (V, E) with a root r ∈ V. Recall that an r-arborescence in D is a spanning directed tree in which every vertex of D is reachable from r. For each subset S ⊆ E of arcs (i.e., players), the game value v(S) is defined to be the maximum number of arc-disjoint r-arborescences in the subdigraph D[S] = (V, S). We also call the maximum r-arborescence game the single-source connectivity game (on digraphs). This game can be formulated as a packing game Game(1_{|M|}, A, max) with a matrix A whose rows correspond to all r-arborescences and whose columns correspond to all arcs; A_ij = 1 if and only if arc j is in the i-th r-arborescence. The model of the r-arborescence game arises when we want to maintain paths from a distinguished source r to all the vertices in the network. For each arc in D, there is one player in control of this arc. By Lemma 2, an imputation is in the core of this game if and only if there is no r-arborescence such that the sum of the imputation over the r-arborescence is less than one. The questions about the core of this game can be answered in polynomial time, similarly to the maximum flow game, because the integrality of the corresponding linear programs for both the r-arborescences and the r-cuts follows from the next well-known result of Edmonds [11]. A cut C = (U, V − U) in a digraph D is called an r-cut if r ∈ U holds.
Lemma 8 [11] In a digraph D = (V, E) with root r ∈ V, the maximum number of pairwise arc-disjoint r-arborescences is equal to the minimum cardinality of an r-cut.
□
Let the maximum number of disjoint r-arborescences be k. Since these k pairwise disjoint r-arborescences form a feasible solution to the primal linear program LP(1_{|M|}, A, max) of this game, and a minimum cardinality r-cut of size k is a feasible solution to its dual linear program, they are optimal solutions to the primal and the dual, respectively, by the duality theory of linear programming. Therefore the primal linear program has an integer optimal solution. A set Ê of arcs (for dummy players) is called valid if E − Ê contains at least one minimum r-cut C ⊆ E of D. Analogously to Lemma 7 and Theorem 5, we have the following results (the proofs of the next two theorems are obtained from those of Lemma 7 and Theorem 5 simply by replacing "Lemma 5", "s-t path", "s-t cut" and "h(e*)-t path" with "Lemma 8", "r-arborescence", "r-cut" and "h(e*)-arborescence", respectively).
Theorem 6 For a digraph D = (V, E) with root r ∈ V and a set Ê of dummy players, the maximum r-arborescence game with v(E) > 0 has a nonempty core if and only if Ê is valid. □
Theorem 7 Let z : E → R+ be an imputation of the maximum r-arborescence game on a digraph D = (V, E) with root r ∈ V and a valid set Ê of dummy players. Let v(E) > 0. Then z is in the core with respect to Ê (i.e., z(e) = 0, e ∈ Ê) if and only if it is a convex combination of the characteristic vectors of minimum r-cuts C contained in E − Ê. □
Corollary 3 Given a set Ê of dummy players, testing nonemptiness, checking membership and finding a core member of the maximum r-arborescence game can all be done in polynomial time. □
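As an illustration of Corollary 3, a core member can be obtained from a minimum r-cut (Theorem 7): with unit arc capacities, a minimum r-cut is the smallest of the minimum r-t cuts taken over all t ≠ r. The following minimal sketch (our own function names, networkx assumed, and no dummy players) returns the characteristic vector of such a cut.

import networkx as nx

def minimum_r_cut(D, r):
    """Return the arc set of a minimum r-cut of digraph D (unit arc capacities)."""
    best = None
    for t in D.nodes:
        if t == r:
            continue
        value, (U, W) = nx.minimum_cut(D, r, t, capacity='cap')
        cut_arcs = {(u, w) for (u, w) in D.edges if u in U and w in W}
        if best is None or len(cut_arcs) < len(best):
            best = cut_arcs
    return best

def core_member(D, r):
    cut = minimum_r_cut(D, r)
    return {e: (1.0 if e in cut else 0.0) for e in D.edges}

D = nx.DiGraph([('r', 'a'), ('r', 'b'), ('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')])
nx.set_edge_attributes(D, 1, 'cap')
print(core_member(D, 'r'))   # weight 1 on the two arcs of a minimum r-cut, 0 elsewhere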
4.3 Matching and Vertex Cover
Given a graph G = (V, E), the maximum matching game has players on vertices, and the game value v(S) for S ⊆ V is defined as the size of a maximum matching in the induced subgraph G[S]. Similarly, the minimum vertex cover game has players on edges, and the game value v(S) for S ⊆ E is defined as the size of a minimum vertex cover in the subgraph G[S] = (V, S). These games are formulated as the packing game Game(1_{|E|}, A, max) and the covering game Game(1_{|V|}, A, min), respectively, where the constraint matrix A is the incidence matrix of G in which the rows correspond to the edges E and the columns correspond to the vertices V; A_ij = 1 if and only if edge i and vertex j are incident. As discussed at the end of Section 4.1, the maximum matching game on bipartite graphs can be formulated as a special case of the maximum flow game having a subset of edges as players. Similar results also hold for the minimum vertex cover game on bipartite graphs (as will be discussed in Section 4.3.2). However, these nice properties break down for general graphs, and the core is nonempty only for some special classes of graphs.
4.3.1 Matching
By Lemma 2, an imputation z is in the core of the matching game if and only if z(u) + z(u′) ≥ 1 holds for all edges (u, u′) ∈ E. Based on this observation, we can easily find two classes of graphs for which the core is always nonempty: the class of graphs in which the size of a minimum vertex cover equals the size of a maximum matching, and the class of graphs with a perfect matching. For a graph G = (V, E) in the first class, we assign z(v) = 1 if v is in a minimum vertex cover and z(v) = 0 otherwise. It is easy to see that this z is indeed in the core. For a graph G = (V, E) in the second class, we assign z(v) = 0.5 to every vertex v ∈ V. Then z(V) = |V|/2 = v(V), since G has a perfect matching. Furthermore, since the size of a maximum matching in any subgraph G[S] induced by S ⊆ V is at most |S|/2, this z is indeed in the core.
Furthermore, one can easily construct other graphs with nonempty cores which are not in the above two classes. For example, take two graphs, one from each of the above classes, and connect them by edges between the vertices in the minimum vertex cover of the first graph and the vertices of the second graph. However, the next theorem says that these are essentially all graphs with a nonempty core for the maximum matching game.
Theorem 8 An undirected graph G = (V, E) has a nonempty core for the maximum matching game if and only if there exists a subset V1 ⊆ V such that
1. the subgraph G1 = G[V1] induced by V1 has a minimum vertex cover W with the same size as its maximum matching,
2. the subgraph G2 = G[V − V1] induced by V − V1 has a perfect matching,
3. all the remaining edges (u, u′) ∈ E between G1 and G2 satisfy u ∈ W for the vertex cover W in 1.
Proof: The if-part is immediate: assign z(u) = 1 for each vertex u ∈ W, z(u) = 0 for u ∈ V1 − W, and z(u) = 0.5 for all vertices u ∈ V − V1. Clearly this z is in the core.
For the only-if part, consider a graph G = (V, E) with a nonempty core for the maximum matching game, and let z be in the core. Without loss of generality, we assume that G is connected. Find a maximum matching M of G. If M is a perfect matching, then we are done with V1 = ∅. Otherwise, let T0 be the (nonempty) set of vertices which are not incident to any matching edge in M. Since z is in the core, for each edge e = (u, u′) of G we have z(u) + z(u′) ≥ 1 by Lemma 2. Furthermore, |M| = v(V) = z(V) implies that z(u) + z(u′) = 1 holds for all matching edges (u, u′) of M and z(u) = 0 for all u ∈ T0 (but an edge (u, u′) ∉ M with u, u′ ∉ T0 may satisfy z(u) + z(u′) > 1). Let S0 = ∅, and for i ≥ 1 let S_i be the set of vertices adjacent to a vertex in T_{i−1} but not in ∪_{k=0}^{i−1} S_k, and let T_i be the set of vertices each of which is matched to a vertex in S_i by an edge of M. By induction starting with T0, one can show for every i that (i) all vertices u ∈ T_i are assigned z(u) = 0, and the set ∪_{k=0}^{i} T_k is independent, and (ii) all vertices u ∈ S_i are assigned z(u) = 1, and the set ∪_{k=1}^{i} S_k is a vertex cover of the subgraph induced by (∪_{k=1}^{i} S_k) ∪ (∪_{k=0}^{i} T_k). Since the graph G is finite, this induction process ends after a finite number of steps, leaving vertices which are adjacent to the sets S_i only via non-matching edges. Let T = ∪_k T_k and S = ∪_k S_k, and let G1 = G[T ∪ S] and G2 = G[V − (T ∪ S)] be the subgraphs induced by these vertex sets. As noted in (ii), S is a vertex cover of G1. This S is minimum because, by construction, its size equals the size of a matching in G1. Thus G1 satisfies condition 1 of the theorem. Furthermore, by the definitions of T_i and S_i, any edge e = (u, u′) of G included in neither G1 nor G2 satisfies u ∈ S, u′ ∈ V − (T ∪ S) and e ∉ M. Since S is a minimum vertex cover of G1, this proves condition 3 of the theorem. Also, G2 has a perfect matching (hence condition 2 holds), since any vertex u of G2 is incident to a matching edge of M which is also contained in G2. □
Notice that the above constructive proof also gives a polynomial time algorithm for deciding whether a graph has a nonempty core for the maximum matching game, since a maximum matching can be found in polynomial time. It also gives an imputation z in the core when the core is indeed nonempty. In addition, to check whether an imputation is in the core or not, it suffices to check that the sum of the values of the two end vertices of each edge is at least one.
Corollary 4 For the core of the maximum matching game, testing nonemptiness, checking membership and finding a core member can all be done in polynomial time.
□
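The membership test just described is straightforward to implement. The following is a minimal sketch (our own function names, networkx assumed): an imputation is in the core exactly when its total equals the maximum matching size and every edge has end-vertex values summing to at least one.

import networkx as nx

def in_matching_game_core(G, z, eps=1e-9):
    matching = nx.max_weight_matching(G, maxcardinality=True)
    if abs(sum(z.values()) - len(matching)) > eps:   # z(V) must equal v(V)
        return False
    return all(z[u] + z[w] >= 1 - eps for (u, w) in G.edges)

# a 4-cycle has a perfect matching, so z = 1/2 everywhere is a core member
C4 = nx.cycle_graph(4)
print(in_matching_game_core(C4, {v: 0.5 for v in C4.nodes}))        # True
print(in_matching_game_core(C4, {0: 1.0, 1: 1.0, 2: 0.0, 3: 0.0}))  # False (edge (2,3) uncovered)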
Recall that an integer solution of the dual LP problem DLP(1_{|E|}, A, max) = LP(1_{|V|}, A, min) of the maximum matching game corresponds to a minimum vertex cover. In some graphs with a nonempty core, the size of a maximum matching is not equal to the size of a minimum vertex cover (e.g., K4 has a perfect matching of size 2, but its minimum vertex cover has size 3). In such a case, the core of the matching game cannot be represented by convex combinations of integer solutions of the dual problem LP(1_{|V|}, A, min), i.e., of minimum vertex covers (because their total values are different); hence a convex characterization of the core of the maximum matching game is not possible.
4.3.2 Vertex Cover
In the minimum vertex cover game Game(1_{|V|}, A, min), the players are on edges, and the value of a subset S ⊆ E is the size of a minimum vertex cover in the subgraph G[S] = (V, S). The matrix A of this game is the same as that of the maximum matching game. By Lemma 3, an imputation is in the core if and only if there is no vertex u such that the sum of the imputation over the edges incident to u is more than one. We now characterize the class of graphs that have a nonempty core for the minimum vertex cover game.
Theorem 9 The core of the minimum vertex cover game on a graph G = (V, E) is nonempty if and only if the size of a maximum matching is equal to the size of a minimum vertex cover.
Proof: The if-part is easy, since a maximum matching M (having the size of a minimum vertex cover) provides a core member w by w(e) = 1 if e ∈ M and w(e) = 0 otherwise. To show the only-if part, assume that the core is nonempty, and let w : E → R+ be in the core. Let E+ = {e ∈ E | w(e) > 0}, and let E+(u) denote the set of edges in E+ which are incident to a vertex u ∈ V. Let S ⊆ V be a minimum vertex cover of G. We first prove:
1. The edge set E+ induces a bipartite graph G′ = (S, S̄, E+), where S̄ = V − S.
2. w(E+(u)) (= Σ_{e ∈ E+(u)} w(e)) = 1 holds for all u ∈ S.
Since w(E+(u)) ≤ 1 for all u ∈ V, w(E+) = |S| and every edge is incident to a vertex in S, we have |S| = w(E+) ≤ Σ_{u ∈ S} w(E+(u)) ≤ |S|; hence w(E+(u)) = 1 for all u ∈ S, and w(e) = 0 for all edges e = (u, u′) with u, u′ ∈ S (otherwise Σ_{u ∈ S} w(E+(u)) would exceed w(E+) = |S|). This proves properties 1 and 2.
We now claim that there is a matching of size |S| in G′, which concludes the theorem. By Hall's Theorem [22], it suffices to prove that the number of vertices adjacent to T in G′ is at least |T| (i.e., |T| ≤ |N_{G′}(T)|) for any T ⊆ S, where N_{G′}(T) denotes the set of vertices adjacent to some vertex in T. From properties 1 and 2, we have

  |T| = w(E+(T, N_{G′}(T))),

where E+(X, Y) denotes the set of edges e = (u, u′) ∈ E+ with u ∈ X and u′ ∈ Y. On the other hand, by Lemma 3, we have w(E+(u)) ≤ 1 for all u ∈ N_{G′}(T). Therefore w(E+(T, N_{G′}(T))) ≤ |N_{G′}(T)|, and |T| ≤ |N_{G′}(T)| follows. □
We note here that the condition in the above theorem holds whenever G is a bipartite graph, by König's theorem. From Theorem 9, we have the following results.
Theorem 10 For the core of the minimum vertex cover game, testing nonemptiness, checking membership and finding a core member can all be done in polynomial time.
Proof: We show that testing whether a given graph G = (V, E) has a maximum matching whose size is equal to the size of a minimum vertex cover can be done in polynomial time (by Theorem 9, this proves that testing nonemptiness, checking membership and finding a core member can be done in polynomial time). Let M ⊆ E be a maximum matching of G, which can be found in polynomial time. Now consider a minimum vertex cover S ⊆ V. Clearly, |M| = |S| holds if and only if S is a vertex cover such that S contains exactly one of the end vertices of each edge in M and also contains all vertices adjacent to an unmatched vertex. Testing whether such an S exists in G can be formulated as an instance P_G of 2SAT as follows. Let us denote M = {(x_1, x̄_1), ..., (x_{|M|}, x̄_{|M|})}, X = {x_1, ..., x_{|M|}} and X̄ = {x̄_1, ..., x̄_{|M|}}. We regard the vertices x_i and x̄_i as the positive and negative literals of the i-th variable. For each edge e_i = (x, x′) ∈ E with x, x′ ∈ X ∪ X̄, we create a clause C_i = (x ∨ x′). To guarantee that all vertices x ∈ X ∪ X̄ adjacent to vertices v ∈ V − (X ∪ X̄) are contained in S, we prepare a pair consisting of a dummy variable x_0 and its negation x̄_0, and create clauses C′_i = (x ∨ x_0) and C″_i = (x ∨ x̄_0) for each edge e_i = (v, x) with v ∈ V − (X ∪ X̄) and x ∈ X ∪ X̄. We see that the resulting 2SAT instance P_G is satisfiable if and only if G has a vertex cover S of size |M|, because any satisfying truth assignment to X ∪ X̄ provides such a cover S (the literals set to true) and vice versa. Since 2SAT is polynomially solvable [18], so is the problem of checking whether the size of a maximum matching is equal to the size of a minimum vertex cover. □
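To illustrate the reduction in this proof, the following sketch (our own encoding, networkx assumed) builds the 2SAT clauses from a maximum matching; for brevity it decides satisfiability by brute force over assignments, whereas the theorem relies on a polynomial-time 2SAT algorithm [18].

from itertools import product
import networkx as nx

def matching_equals_vertex_cover(G):
    """True iff G has a vertex cover of size equal to its maximum matching
    (equivalently, by Theorem 9, iff the vertex cover game core is nonempty)."""
    M = list(nx.max_weight_matching(G, maxcardinality=True))
    dummy = len(M) + 1                     # id of the dummy variable x_0
    lit = {}                               # vertex -> literal +/-(i+1): "this endpoint is in S"
    for i, (x, xbar) in enumerate(M):
        lit[x], lit[xbar] = i + 1, -(i + 1)
    clauses = []
    for (u, w) in G.edges:
        if u in lit and w in lit:
            clauses.append((lit[u], lit[w]))          # the edge (u, w) must be covered
        elif u in lit or w in lit:
            x = lit[u] if u in lit else lit[w]        # the matched endpoint must be in S
            clauses.append((x, dummy)); clauses.append((x, -dummy))
        # an edge between two unmatched vertices cannot exist (M is maximum)
    variables = sorted({abs(l) for cl in clauses for l in cl})
    for values in product([False, True], repeat=len(variables)):
        val = dict(zip(variables, values))
        truth = lambda l: val[abs(l)] if l > 0 else not val[abs(l)]
        if all(truth(a) or truth(b) for (a, b) in clauses):
            return True
    return False

print(matching_equals_vertex_cover(nx.cycle_graph(4)))      # True  (bipartite, König)
print(matching_equals_vertex_cover(nx.complete_graph(4)))   # False (matching 2, cover 3)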
Theorem 11 Assume that the minimum vertex cover game on an undirected graph G = (V, E) (E ≠ ∅) has a nonempty core. Then an imputation is in the core if and only if it is a convex combination of the characteristic vectors of maximum matchings in G.
Proof: The if-part is easy, since the characteristic vector I_M of a maximum matching M is in the core (by Theorem 9) and the core is known to be convex.
For the only-if part, take an imputation w : E → R+ in the core which is not a convex combination of the characteristic vectors of maximum matchings. We may choose, among all such w, one such that the size of E+ = {e ∈ E | w(e) > 0} is first minimized and then the number of vertices u with w(E+(u)) = 1 is maximized, where E+(u) denotes the set of edges in E+ which are incident to a vertex u ∈ V. Recall that w(E+(u)) ≤ 1 holds for all u ∈ V by Lemma 3. Let E_0 = E − E+ = {e ∈ E | w(e) = 0}, and consider the graph G′ = (V, E+) induced from G by E+. From E ≠ ∅, v(E) = w(E) = w(E+) > 0 and E+ ≠ ∅ follow.
We first show that G′ still has a minimum vertex cover (resp., a maximum matching) of the same size as a minimum vertex cover (resp., a maximum matching) in G. If G′ had a minimum vertex cover S_{G′} of size λ|S_G| with λ < 1, where S_G is a minimum vertex cover in G, then the scaled vector λw would be in the core of the minimum vertex cover game on G′ by Lemma 3, since λw(E+(u)) < 1 holds for all u ∈ V. However, such λw and S_{G′} ≠ ∅ (by E+ ≠ ∅) cannot satisfy property 2 of the proof of Theorem 9, a contradiction. This proves that G′ has a minimum vertex cover of the same size as a minimum vertex cover in G. It is then easy to see that w|_{E+} (the restriction of w to E+) is in the core of the minimum vertex cover game on G′. Therefore, by Theorem 9, the size of a maximum matching in G′ is equal to that of a minimum vertex cover in G′ (hence to the size of a maximum matching in G).
It therefore remains to show that w|_{E+} is a convex combination of the characteristic vectors of maximum matchings in G′. Let S be a minimum vertex cover in G′. Note that property 1 in the proof of Theorem 9 says that G′ is a bipartite graph G′ = (S, S̄, E+), where S̄ = V − S. In what follows, for a given matching M in G′, S[M] (resp., S̄[M]) denotes the set of vertices in S (resp., in S̄) which are incident to the edges in M. Since S is a minimum vertex cover in G′, it follows that |S| ≤ |S̄|. Let M be a maximum matching in G′. Then |S| = |M| = v(E+) = w(E+) holds (|S| = |M| follows from Theorem 9 and the fact that w|_{E+} is in the core).
If |S| = |S̄|, then let ε = min{w(e) | e ∈ M}, and consider the imputation

  w′ = [v(E+)/(v(E+) − ε|M|)] (w − ε I_M) = (1/(1 − ε)) (w − ε I_M),

where I_M is the characteristic vector of M. Clearly w′(E+) = w(E+), and w′(E+(u)) = (1/(1 − ε)) (w(E+(u)) − ε) ≤ 1 for each vertex u ∈ V, since M is a perfect matching in G′ by |M| = |S| = |S̄|. Therefore w′ is also in the core, and it has at least one more edge e with w′(e) = 0, a contradiction to the choice of w.
Now assume |S̄| > |S|, and let X = {u ∈ S̄ | w(E+(u)) = 1}. We first show that there is a maximum matching M in G′ (of size |S|) such that X ⊆ S̄[M]. Let us choose a maximum matching M with |X ∩ S̄[M]| maximized. If there is a vertex u_1 ∈ X − S̄[M], then consider the set S̄_M(u_1) of vertices in S̄[M] which are reachable from u_1 by an alternating path with respect to M. Then S̄_M(u_1) ⊆ X (otherwise, for a u_2 ∈ S̄_M(u_1) − X, there would exist another maximum matching M′ with S̄[M′] = (S̄[M] − {u_2}) ∪ {u_1}, contradicting the maximality of |X ∩ S̄[M]|). Hence S̄_M(u_1) ∪ {u_1} ⊆ X. Now let S_M(u_1) be the set of vertices in S[M] (= S) which are reachable from u_1 by an alternating path with respect to M. Then

  ∪_{u ∈ S̄_M(u_1) ∪ {u_1}} E+(u) ⊆ ∪_{u ∈ S_M(u_1)} E+(u).

Note that S_M(u_1) is also the set of vertices in S which are matched to vertices in S̄_M(u_1) by M; hence |S_M(u_1)| = |S̄_M(u_1)|. However, from the above we have

  |S̄_M(u_1) ∪ {u_1}| = Σ_{u ∈ S̄_M(u_1) ∪ {u_1}} w(E+(u)) ≤ Σ_{u ∈ S_M(u_1)} w(E+(u)) ≤ |S_M(u_1)|,

which contradicts |S̄_M(u_1)| = |S_M(u_1)|. Therefore, there is a maximum matching M with X ⊆ S̄[M]. Now let

  δ = min[ min{w(e) | e ∈ M}, min{1 − w(E+(u)) | u ∈ S̄ − S̄[M]} ],

which is positive by definition. Then consider the imputation w′ = (1/(1 − δ)) (w − δ I_M). It is easy to see that w′ is in the core and that w′(E+(u)) = 1 holds whenever w(E+(u)) = 1 holds for a u ∈ V. Moreover, G′ has an edge e with w′(e) = 0 or a vertex u ∈ S̄ − S̄[M] with w′(E+(u)) = 1, either of which contradicts the choice of w. This proves that the assumed imputation w in the core does not exist, and hence the theorem. □
4.4 Edge Cover and Independent Set
For an undirected graph G = (V, E), we can define a mutually dual pair consisting of the minimum edge cover game and the maximum independent set game, namely Game(1_{|E|}, A′, min) and Game(1_{|V|}, A′, max), respectively, where the constraint matrix A′ is the incidence matrix of G in which the rows correspond to vertices and the columns correspond to edges (i.e., the transpose of the matrix A used for the pair of the maximum matching game and the minimum vertex cover game). Thus, for the minimum edge cover game, the players are on vertices, and the game value v(S) for S ⊆ V is the minimum number of edges that cover all vertices in S, i.e.,

  v(S) = min{|F| : F ⊆ E, F ∩ E(u) ≠ ∅ for all u ∈ S},

where E(u) denotes the set of edges in E which are incident to u. Note that v(S) is not necessarily the size of a minimum edge cover in the subgraph G[S] induced by the vertex set S. For the minimum edge cover game, we assume that G has no isolated vertex, to prevent the game from becoming infeasible. Similarly, the players of the maximum independent set game are on edges, and the game value v′(T) for T ⊆ E is the size of a maximum independent set in the subgraph G[V⟨T⟩] induced by V⟨T⟩, where V⟨T⟩ is defined by

  V⟨T⟩ = {i ∈ V | i is incident only to edges in T}.

(Note that v′(T) is not the size of a maximum independent set in the subgraph G[T] = (V, T).)
4.4.1 Edge Cover
We first observe that the minimum edge cover game is equivalent to the maximum matching game in the following sense.
Lemma 9 For an undirected graph G = (V, E) without isolated vertices, let v̄ and v be the game values of the minimum edge cover game and the maximum matching game on G, respectively. Then v̄(S) + v(S) = |S| holds for all S ⊆ V.
Proof: For any S ⊆ V, let M_S ⊆ E be a maximum matching in G[S], and let V(M_S) denote the set of end vertices of the edges in M_S. Clearly, there is no edge between vertices in S − V(M_S), but for any u ∈ S − V(M_S) there is an edge e ∈ E incident to it (e may not be an edge of G[S]), since G is assumed to have no isolated vertex. Hence v̄(S) ≤ |M_S| + |S − V(M_S)| = |S| − |M_S| = |S| − v(S). To see that the converse inequality also holds, let F_S be a minimum edge cover of S. Clearly, no edge in F_S − E[S] is adjacent to an edge in F_S ∩ E[S], where E[S] = {(u, v) ∈ E | u, v ∈ S}. Therefore, F_S ∩ E[S] forms a collection of vertex-disjoint stars in G[S]. Hence we have |S| = |F_S| + (the number of these stars). Since we obtain a matching in G[S] by choosing one edge from each of these stars, it holds that v(S) ≥ (the number of these stars) = |S| − v̄(S). □
Theorem 12 Let G = (V, E) be an undirected graph with no isolated vertex. Then w : V → R+ is in the core of the minimum edge cover game on G if and only if w̄ = 1_{|V|} − w is in the core of the maximum matching game on G.
Proof: We show the only-if part. If w(S) ≤ v̄(S) holds for S ⊆ V, then w̄(S) = |S| − w(S) ≥ |S| − v̄(S) = v(S) by Lemma 9. In particular, equality holds for S = V by w(V) = v̄(V). This shows the only-if part. The if-part is analogous and is omitted.
□
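Theorem 12 turns any core member of the matching game into one of the edge cover game by complementation. The following minimal sketch (our own function names, networkx assumed) does this for the simplest sufficient case, a graph with a perfect matching, where 1/2 on every vertex is a matching-game core member.

import networkx as nx

def edge_cover_core_member(G):
    """Return a core member of the minimum edge cover game on G (no isolated
    vertices) in the case where G has a perfect matching; otherwise None.
    The general nonemptiness test is the characterization of Theorem 8."""
    matching = nx.max_weight_matching(G, maxcardinality=True)
    if 2 * len(matching) == G.number_of_nodes():        # perfect matching
        wbar = {v: 0.5 for v in G.nodes}                # matching-game core member
        return {v: 1.0 - wbar[v] for v in G.nodes}      # edge-cover-game core member
    return None

print(edge_cover_core_member(nx.cycle_graph(6)))   # every vertex gets 0.5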
Combining this theorem with Theorem 8, we can characterize the graphs with nonempty cores for the minimum edge cover game. Note that we take W′ = V1 − W in condition 1 of Theorem 8 and make use of the fact that a vertex cover W in G is minimum if and only if V − W is a maximum independent set in G.
Corollary 5 An undirected graph G = (V, E) has a nonempty core for the minimum edge cover game if and only if there exists a subset V1 ⊆ V such that
1′. the subgraph G1 = G[V1] induced by V1 has a maximum independent set W′ with the same size as its minimum edge cover,
2′. the subgraph G2 = G[V − V1] induced by V − V1 has a perfect matching (i.e., an edge cover with |V − V1|/2 edges),
3′. all the remaining edges (u, u′) ∈ E between G1 and G2 satisfy u ∈ V1 − W′ for the maximum independent set W′ in 1′.
□
Since the core of the edge cover game may be nonempty even if the size of a minimum edge cover is not equal to the size of a maximum independent set (e.g., K4), this game does not have a convex characterization of the core by the set of maximum independent sets.
4.4.2 Independent Set
We first prove that the counterpart of Theorem 9 also holds for the maximum independent set game.
Theorem 13 Let G = (V, E) be an undirected graph with no isolated vertex. Then the core of the maximum independent set game on G is nonempty if and only if the size of a maximum independent set is equal to the size of a minimum edge cover in G.
Proof: The if-part is easy, since a minimum edge cover (having the size of a maximum independent set) provides an imputation in the core of the maximum independent set game. To show the only-if part, assume that the core of the maximum independent set game on G is nonempty, and let z : E → R+ be in the core. Let E+ = {e ∈ E | z(e) > 0}, and let E+(u) denote the set of edges in E+ which are incident to a vertex u ∈ V. Let S ⊆ V be a maximum independent set of G. We first prove:
1. The edge set E+ induces a bipartite graph G′ = (S, S̄, E+), where S̄ = V − S.
2. z(E+(u)) = 1 holds for all u ∈ S.
Since z(E+(u)) ≥ 1 for all u ∈ V, z(E+) = |S| and there is no edge between two vertices of S, we have z(E+(u)) = 1 for all u ∈ S and z(e) = 0 for any edge e = (u, u′) with u, u′ ∈ V − S (otherwise Σ_{u ∈ S} z(E+(u)) > |S| would result). This proves properties 1 and 2.
We now prove that there is a matching of size |V − S| in G′. According to Hall's Theorem [22], it suffices to show that |T| ≤ |N_{G′}(T)| holds for all T ⊆ V − S, where N_{G′}(T) is the set of vertices adjacent in G′ to some vertex in T. From property 2,

  z(E+(N_{G′}(T), T)) ≤ |N_{G′}(T)|,

where E+(X, Y) denotes the set of edges e = (u, u′) ∈ E+ with u ∈ X and u′ ∈ Y. On the other hand, since z(E+(u)) ≥ 1 holds for all u ∈ V − S,

  |T| ≤ z(E+(N_{G′}(T), T)),

and |T| ≤ |N_{G′}(T)| follows. Now, by choosing one edge incident to each vertex in S which is not matched by the resulting matching M, and adding the chosen edges to M, we obtain an edge cover of size |S| (recall that G contains no isolated vertex). This proves the theorem. □
Theorem 14 For the core of the maximum independent set game, testing nonemptiness, checking membership and finding a core member can all be done in polynomial time.
Proof: We only have to show that it can be tested in polynomial time whether a given graph G = (V, E) has a minimum edge cover whose size is equal to the size of a maximum independent set. Let M ⊆ E be a maximum matching of G, and let U ⊆ V be the set of vertices incident to no edge in M. Then it is easy to see that |M| + |U| is the size of a minimum edge cover. Let S ⊆ V be a maximum independent set. Clearly, |M| + |U| = |S| holds if and only if S is an independent set such that S contains exactly one of the two end vertices of each edge in M and also contains all vertices in U (hence it contains no vertex adjacent to a vertex in U). In other words, S̄ = (V − U) − S is a vertex cover such that S̄ contains exactly one of the end vertices of each edge in M and also contains all vertices adjacent to a vertex in U. Therefore, the size |M| + |U| of a minimum edge cover is equal to the size |S| of a maximum independent set if and only if the size |M| of a maximum matching is equal to the size |S̄| of a minimum vertex cover. Based on this, the proof of Theorem 10 applies to this theorem as well. □
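The quantity |M| + |U| used in this proof comes with an explicit minimum edge cover: a maximum matching plus one arbitrary incident edge for each unmatched vertex. A minimal sketch (our own function name, networkx assumed):

import networkx as nx

def minimum_edge_cover(G):
    """G must have no isolated vertex."""
    M = nx.max_weight_matching(G, maxcardinality=True)
    covered = {v for e in M for v in e}
    cover = set(M)
    for u in G.nodes:
        if u not in covered:
            cover.add((u, next(iter(G[u]))))   # any edge incident to the unmatched vertex u
    return cover

# a path on 5 vertices: matching size 2, one unmatched vertex, edge cover size 3
print(minimum_edge_cover(nx.path_graph(5)))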
Theorem 15 Given an undirected graph G = (V, E) with no isolated vertex, an imputation is in the core of the maximum independent set game if and only if it is a convex combination of the characteristic vectors of minimum edge covers.
Proof: The if-part is straightforward.
For the only-if part, take an imputation z : E → R+ in the core which is not a convex combination of the characteristic vectors of minimum edge covers. We may choose, among all such z, one such that |E+| is first minimized and then the number of vertices u with z(E+(u)) = 1 is maximized, where E+ = {e ∈ E | z(e) > 0} and E+(u) = {e ∈ E+ | e = (u, u′)}. Recall that z(E+(u)) ≥ 1 holds for all u ∈ V by Lemma 2. Let E_0 = E − E+ = {e ∈ E | z(e) = 0}, and consider the graph G′ = (V, E+) induced from G by E+. We first show that G′ still has a maximum independent set (resp., a minimum edge cover) of the same size as a maximum independent set (resp., a minimum edge cover) in G. If G′ had a maximum independent set S_{G′} of size λ|S_G| with λ > 1, where S_G is a maximum independent set in G, then the scaled vector λz would be in the core of the maximum independent set game on G′ by Lemma 2, since λz(E+(u)) > 1 holds for all u ∈ V. However, such λz and S_{G′} cannot satisfy property 2 in the proof of Theorem 13, a contradiction. Hence G′ has a maximum independent set of the same size as a maximum independent set in G. It is then easy to see that z|_{E+} (the restriction of z to E+) is in the core of the maximum independent set game on G′. Then, by Theorem 13, the size of a minimum edge cover in G′ is equal to that of a maximum independent set in G′ (hence to the size of a minimum edge cover in G).
It therefore remains to show that z|_{E+} is a convex combination of the characteristic vectors of minimum edge covers in G′. Let S be a maximum independent set in G′, and let S̄ = V − S. Then G′ satisfies properties 1 and 2 in the proof of Theorem 13. Note that |S| ≥ |S̄| always holds, by |S| = z(E+) = Σ_{u ∈ S̄} z(E+(u)) ≥ |S̄|. In what follows, we denote by S[M] (resp., S̄[M]) the set of vertices in S (resp., in S̄) which are incident to an edge of a matching M. Since G′ has a minimum edge cover Ē (⊆ E+) of size |S|, such an Ē contains exactly one edge incident to each vertex in S. Hence it contains a matching M_Ē of size |S̄|, and Ē − M_Ē covers all vertices in S − S[M_Ē].
If |S| = |S̄|, then Ē = M_Ē, and z′ = (z − ε I_{M_Ē})/(1 − ε), where ε = min{z(e) | e ∈ M_Ē}, is also in the core and has an edge e with z′(e) = 0, a contradiction to the choice of z. Therefore assume |S| > |S̄|, and let X = {u ∈ S̄ | z(E+(u)) > 1}. We first show that there is a maximum matching M (of size |S̄|) such that every vertex in S − S[M] is adjacent to a vertex in X. Let us choose such an M that minimizes

  |{u ∈ S − S[M] | u is not adjacent to any vertex in X}|.   (1)

If there is such a vertex u_1 ∈ S − S[M], then we consider the set S̄_M(u_1) of vertices in S̄[M] which are reachable from u_1 by an alternating path with respect to M. Then S̄_M(u_1) ∩ X = ∅ (otherwise, for a u_2 ∈ S̄_M(u_1) ∩ X, we could obtain a matching M′ with S[M′] = (S[M] − {u′_2}) ∪ {u_1}, where u′_2 is defined by (u_2, u′_2) ∈ M, contradicting the minimality of (1)). Let S_M(u_1) be the set of vertices in S which are matched to a vertex of S̄_M(u_1) by M (then |S_M(u_1)| = |S̄_M(u_1)| holds). However,

  |S_M(u_1) ∪ {u_1}| ≤ Σ_{u ∈ S_M(u_1) ∪ {u_1}} z(E+(u)) ≤ Σ_{u ∈ S̄_M(u_1)} z(E+(u)) = |S̄_M(u_1)|  (since S̄_M(u_1) ∩ X = ∅)

contradicts the condition |S_M(u_1)| = |S̄_M(u_1)|. Therefore, G′ has a maximum matching M such that every vertex in S − S[M] is adjacent to a vertex in X. Now, for each vertex u ∈ S − S[M], we choose an edge (u, u′) ∈ E+ with u′ ∈ X, and denote the set of these edges by F. Then clearly M ∪ F is a minimum edge cover. Let F(u′) denote the set of edges in F which are incident to a vertex u′ ∈ S̄, and choose

  δ = min[ min{z(e) | e ∈ M},  min{ (z(E+(u′)) − 1)/|F(u′)| : u′ ∈ S̄, |F(u′)| ≥ 1 } ].

This is positive, because any u′ in the second minimum satisfies u′ ∈ X by the condition |F(u′)| ≥ 1. Then consider the imputation z′ = (z − δ I_{M∪F})/(1 − δ). It is easy to see that z′ is also in the core and that z′(E+(u)) = 1 holds whenever z(E+(u)) = 1 holds for a u ∈ V. Furthermore, G′ has an edge e with z′(e) = 0 or a vertex u ∈ X with z′(E+(u)) = 1, either of which contradicts the choice of z. This proves the theorem. □
One may define the maximum clique problem in an undirected graph G = (V, E) as the maximum independent set problem on its complement graph Ḡ = (V, Ē). Obviously, it is described as a packing game Game(1_{|Ē|}, A″, max) which has players on the edges of Ḡ, where A″ is the vertex-edge incidence matrix of the complement graph Ḡ. Therefore, all the results in this subsection carry over to the maximum clique game.
4.5 Chromatic Number
Let χ(G′) denote the chromatic number of an undirected graph G′ (i.e., the minimum number of maximal independent sets which together cover all vertices of G′). For the minimum coloring game on a graph G = (V, E), we define the game value v(S), S ⊆ V, as χ(G[S]), i.e., the chromatic number of the induced subgraph G[S]. This game can be represented by a covering game Game(1_{|I|}, A, min), where the rows of the matrix A correspond to the vertices of G and the columns correspond to maximal independent sets, and I denotes the set of all maximal independent sets in G. The minimum coloring game arises frequently in applications where the smallest number of conflict-free groups is sought in a system in which vertices represent members and edges represent conflicts between members. Such conflict graphs can be found, for example, in many resource sharing problems.
By Lemma 3, a vector w : V → R+ is in the core of the minimum coloring game if and only if
1. w(V) = χ(G),
2. w(S) ≤ 1 for any independent set S ⊆ V.
Let ω(G) denote the size of a maximum clique in G, which satisfies ω(G) ≤ χ(G), as is well known in the coloring problem. We can easily observe that the characteristic vector I_C of a maximum clique C ⊆ V is in the core of the coloring game if ω(G) = χ(G) holds. Therefore the minimum coloring game on such a graph has a nonempty core. However, the converse is not true; that is, there is a graph G = (V, E) such that ω(G) < χ(G) but the core of the coloring game is nonempty. For example, for a graph G with ω(G) < χ(G) and α(G)χ(G) = |V|, the vector z defined by z(u) := χ(G)/|V| for all u ∈ V is in the core, where α(G) is the stable number of G, i.e., the size of a maximum independent set. Therefore, in general, the coloring game has no convex characterization by the set of maximum cliques. Also, from such a graph G, construct the graph G′ = G + K_{χ(G)} by annexing a complete graph K_{χ(G)} via a single common vertex. Then G′ satisfies ω(G′) = χ(G′) (hence its core is nonempty), but its core contains the above core member of G, extended by zeros, which is not a convex combination of the characteristic vectors of maximum cliques. That is, in general, the coincidence of the optimum values of ILP(1_m, A, max) and ILP(1_n, A, min) does not imply a convex characterization of the core of a covering game Game(1_n, A, min).
Let us start with the case of bipartite graphs G = (V, E). We define for each edge e = (i, j) ∈ E its characteristic vector I_e : V → {0, 1} by I_e(k) := 1 if k ∈ {i, j} and 0 otherwise.
Theorem 16 If a graph G = (V, E) is bipartite and E ≠ ∅, then an imputation w : V → R+ is in the core of the minimum coloring game if and only if it is a convex combination of the characteristic vectors of edges in E; i.e., w = Σ_{e ∈ E} λ_e I_e with Σ_{e ∈ E} λ_e = 1 and λ_e ≥ 0 for all e ∈ E. This can be tested in polynomial time.
Proof: The if-part is easy: w(S) ≤ 2 holds for any S ⊆ V, and w(S) ≤ 1 holds if G[S] does not contain any edge (since each edge of E then has at most one end vertex in S), whereas χ(G[S]) = 2 if G[S] contains an edge and χ(G[S]) ≤ 1 otherwise.
Now we focus on the only-if part. Since G is bipartite, let V = L ∪ R be a partition of V into two independent subsets (i.e., every e = (u, v) ∈ E satisfies u ∈ L and v ∈ R). Take any imputation w in the core. Because w is in the core and χ(G[L]) = χ(G[R]) = 1, we have w(L) ≤ 1 and w(R) ≤ 1. Therefore, w(V) = 2 implies w(L) = w(R) = 1. Let us build an auxiliary directed network H from G. We first regard G as a directed graph with the arcs in E directed from L to R. Let H be the network obtained from G by introducing a source s and a sink t, connecting s to every vertex u ∈ L by an arc (s, u) of capacity w(u), connecting every vertex v ∈ R to t by an arc (v, t) of capacity w(v), and finally assigning capacity +∞ to all arcs in E. The value of a maximum flow in H is at most Σ_{u ∈ L} w(u) = 1.
Now we claim that the maximum flow value is indeed 1. Assume otherwise. Then there is a minimum cut (X, X̄) with s ∈ X and t ∈ X̄ whose capacity is smaller than 1. This cut contains no arc in E from X ∩ L to X̄ ∩ R, since those arcs have capacity +∞. Its capacity is therefore

  w(X̄ ∩ L) + w(X ∩ R) < 1,

which implies w(X ∩ L) + w(X̄ ∩ R) > 1. However, since there is no arc from X ∩ L to X̄ ∩ R, the set (X ∩ L) ∪ (X̄ ∩ R) is independent, i.e., χ(G[(X ∩ L) ∪ (X̄ ∩ R)]) ≤ 1. This contradicts that w is in the core.
On the other hand, if the maximum flow value in H is 1, let f(e) be the flow value on arc e ∈ E. Then the above w can be represented as w = Σ_{e ∈ E} λ_e I_e with λ_e = f(e). This satisfies Σ_{e ∈ E} λ_e = Σ_{e ∈ E} f(e) = 1 and λ_e ≥ 0 for all e ∈ E. Finally, the above condition can be tested in polynomial time in |V| and |E| by means of a maximum flow algorithm. □
For general graphs, as one may expect, the problem is NP-complete.
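Before turning to general graphs, here is a minimal sketch of the flow-based membership test from the proof of Theorem 16 (our own function names, networkx assumed): an imputation is in the core if and only if both sides have total weight one and the auxiliary network H carries a flow of value one.

import networkx as nx

def in_bipartite_coloring_core(L, R, edges, w, eps=1e-9):
    if abs(sum(w[u] for u in L) - 1) > eps or abs(sum(w[v] for v in R) - 1) > eps:
        return False
    H = nx.DiGraph()
    for u in L:
        H.add_edge('s', u, capacity=w[u])
    for v in R:
        H.add_edge(v, 't', capacity=w[v])
    for (u, v) in edges:                       # edges are given as (L-vertex, R-vertex)
        H.add_edge(u, v)                       # a missing capacity means +infinity in networkx
    return nx.maximum_flow_value(H, 's', 't') >= 1 - eps

edges = [('a', 'b'), ('c', 'b'), ('c', 'd')]   # the path a - b - c - d, with L = {a, c}, R = {b, d}
print(in_bipartite_coloring_core(['a', 'c'], ['b', 'd'], edges, {'a': 1, 'b': 1, 'c': 0, 'd': 0}))  # True  (the edge (a, b))
print(in_bipartite_coloring_core(['a', 'c'], ['b', 'd'], edges, {'a': 1, 'b': 0, 'c': 0, 'd': 1}))  # False ({a, d} is independent with weight 2)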
Theorem 17 For the minimum coloring game, it is NP-complete to decide whether the core is empty or not. It is also NP-complete to decide whether a given imputation is in the core or not.
Proof: We prove the NP-completeness of deciding whether the core is empty; the other statement follows immediately from this proof. We give a reduction from the not-all-equal SAT problem, which is known to be NP-complete [18]: given a set {x_1, x_2, ..., x_n} of n boolean variables and a set of clauses {C_1, C_2, ..., C_m} (m ≥ 1), each of which contains three literals over these variables, find an assignment to the variables such that each clause contains at least one true literal and at least one false literal. For the reduction, we construct a graph G from a given instance of not-all-equal SAT as follows.
1. For each variable x_i, we construct a complete bipartite graph K_{m,m} on 2m vertices. We denote the two independent parts of this K_{m,m} by X_i = {x_i^1, x_i^2, ..., x_i^m} and X̄_i = {x̄_i^1, x̄_i^2, ..., x̄_i^m}. We call the vertices of all the n copies of K_{m,m} the variable vertices and denote their set by V_var. The vertices x_i^j and x̄_i^j correspond to the j-th clause C_j.
2. For each clause C_j, we construct a triangle K_3, with one new vertex for each literal in C_j. We call the vertices of these m disjoint triangles the clause vertices and denote their set by V_cl.
3. For each clause vertex, we create one edge connecting it with the corresponding variable vertex, so that each variable vertex is adjacent to at most one clause vertex in the resulting graph. For example, if there is a clause C_j = {x_i, x_{i′}, x_{i″}} (whose clause vertices are also named x_i, x_{i′}, x_{i″}), we construct the three edges (x_i^j, x_i), (x_{i′}^j, x_{i′}), (x_{i″}^j, x_{i″}).
4. Finally, we create a super vertex s which has 2mn edges, one to each variable vertex.
Let G = (V, E) denote the resulting graph, where V = V_var ∪ V_cl ∪ {s}. Clearly χ(G) ≥ 3, since G contains triangles. Note that χ(G[V_var]) = 2, and in any 2-coloring of G[V_var] all vertices in X_i must receive one color and all vertices in X̄_i must receive the other color (because they induce K_{m,m}). Without loss of generality, we denote these two colors by 0 and 1; if X_i (resp., X̄_i) receives color 1, then we associate this with the assignment x_i := 1 (resp., x̄_i := 1) in the not-all-equal SAT instance. For any 2-coloring of the subgraph G[V_var], we need at most two new colors to color all the triangles and the vertex s in the entire graph G. In particular, we need only one new color to color all the triangles if every triangle is adjacent to two distinct colors in the 2-coloring of G[V_var]. Hence 3 ≤ χ(G) ≤ 4, and χ(G) = 3 holds if the instance of not-all-equal SAT has a solution, which is specified by such a coloring. Conversely, any 3-coloring of G provides a truth assignment which is feasible for the not-all-equal SAT instance, since in any 3-coloring of G, G[V_var] must be 2-colored (because of the vertex s). Therefore, χ(G) = 3 holds if and only if the instance of not-all-equal SAT has a solution.
Now we show that the core of the minimum coloring game is nonempty if and only if χ(G) = 3 (which completes the reduction). Obviously, if the graph is 3-colorable, then the imputation w = I_{K_3} is in the core, where I_{K_3} is the characteristic vector of an arbitrarily chosen triangle K_3 in G. To prove the converse, let w be an imputation in the core. Since w is an imputation (i.e., w(V) = χ(G)), we have 3 ≤ w(V) ≤ 4. To show w(V) = 3, we assume w(V) = 4 and derive a contradiction. Clearly, V_cl can be partitioned into three independent sets of G, say S_1, S_2 and S_3, by picking one vertex from each triangle for each independent set S_i of m vertices. Then w(S_i) ≤ 1 (i = 1, 2, 3) holds because S_i is independent and w is in the core. On the other hand, it is easy to see that χ(G[V − S_i]) ≤ 3 holds for i = 1, 2, 3, and hence w(V − S_i) ≤ 3. By the assumption w(V) = 4, this implies w(S_i) = 1 for i = 1, 2, 3. Therefore, w(V_cl) = w(S_1) + w(S_2) + w(S_3) = 3 holds. Clearly χ(G[V_cl ∪ {u}]) = 3 holds for any vertex u ∈ V_var ∪ {s} (since u is adjacent to at most one vertex in V_cl), and hence we have w(u) = 0 for all u ∈ V_var ∪ {s}. These imply w(V) = 3, contradicting the assumption w(V) = 4. □
A graph G is called perfect if ω(G[S]) = χ(G[S]) holds for all S ⊆ V. In closing this section, we note that the nice results for bipartite graphs (Theorem 16) can be extended to the class of perfect graphs by applying Lovász's algorithm for finding a maximum weighted independent set in a perfect graph [21].
Theorem 18 Let G = (V, E) be a perfect graph. Then the core of the minimum coloring game is always nonempty. Furthermore, it can be tested in polynomial time whether an imputation w is in the core or not.
Proof: A perfect graph G satisfies ω(G) = χ(G), and hence the minimum coloring game has a nonempty core, as already observed before Theorem 16. To prove the second statement, recall from the second paragraph of this subsection that an imputation w is in the core if and only if w(V) = χ(G) and there is no (maximal) independent set S with w(S) > 1. Applying Lovász's polynomial time algorithm for finding a maximum weighted independent set in a perfect graph [21], we can therefore test whether an imputation w is in the core or not. □
Recall that a game is totally balanced if all of its subgames have nonempty cores.
Theorem 19 The minimum coloring game on a graph G = (V, E) is totally balanced if and only if G is perfect.
Proof: The if-part is immediate, since a perfect graph G satisfies ω(G[S]) = χ(G[S]) for all S ⊆ V. To prove the only-if part, assume that there is a graph G = (V, E) such that the minimum coloring game on G is totally balanced but G is not perfect. Let G = (V, E) be such a counterexample with the smallest |V|. Clearly χ(G) ≥ 2. From the minimality of |V| we see that

  ω(G) < χ(G)  but  ω(G[S]) = χ(G[S]) for all S with ∅ ≠ S ⊊ V.

(If ω(G[S]) < χ(G[S]) for some S ⊊ V, then G[S] would be a smaller counterexample, since the minimum coloring game on G[S] is also totally balanced.) Let w be an imputation in the core of this coloring game. Then w(u) > 0 for all u ∈ V, because if w(u) = 0 for some u ∈ V, then χ(G[V − {u}]) ≥ w(V − {u}) = w(V) = χ(G) (i.e., χ(G[V − {u}]) = χ(G)) and ω(G[V − {u}]) ≤ ω(G) < χ(G) would contradict ω(G[V − {u}]) = χ(G[V − {u}]). Now consider a minimum coloring of G in which the number of vertices receiving the first color is minimized, and let V_1 denote the set of vertices with the first color. If |V_1| ≥ 2, then χ(G[V − {u}]) = χ(G) (> ω(G) ≥ ω(G[V − {u}])) holds for u ∈ V_1, which again contradicts ω(G[V − {u}]) = χ(G[V − {u}]). Hence V_1 contains only one vertex, say u_1, for which w(u_1) = 1 holds, because w(V − {u_1}) ≤ χ(G[V − {u_1}]) = χ(G) − 1 and w(V) = χ(G). For this u_1, G[V − {u_1}] has a clique X of size ω(G[V − {u_1}]) = χ(G[V − {u_1}]) = χ(G) − 1. We see that X ∪ {u_1} is also a clique in G, since if u_1 were not adjacent to some x ∈ X, then w({u_1, x}) = w(u_1) + w(x) = 1 + w(x) > 1 would contradict that w is in the core. Therefore, χ(G) = |X ∪ {u_1}| ≤ ω(G) contradicts the assumption ω(G) < χ(G). □
By applying Lemma 4 to the minimum coloring game, we can obtain an alternative proof of Theorem 19. Since the if-part is trivial, we show only the only-if part, assuming that the coloring game Game(1_n, A, min) is totally balanced. For any subset S ⊆ M (= V), Game(1_n, A_{S,N}, min) has a nonempty core, and LP(1_n, A_{S,N}, min) has an integer optimal solution x_S ∈ {0, 1}^n by Theorem 2. Also, since Game(1_n, A_{S,N}, min) is totally balanced, its dual game Game(1_{|S|}, A_{S,N}, max) has a nonempty core by Lemma 4, and LP(1_{|S|}, A_{S,N}, max) has an integer optimal solution y_S ∈ {0, 1}^{|S|} by Theorem 1. Note that x_S and y_S have the same optimal value, and they are the characteristic vectors of a minimum coloring and of a maximum clique in G[S], respectively. This implies ω(G[S]) = χ(G[S]) for every subset S ⊆ V, i.e., G is perfect.
4.6 Edge-connectivity in Graphs
As a final example, we consider the edge-connectivity game on an undirected graph G = (V, E). In this game, players are on edges, and v(S) for S ⊆ E is defined as λ(G[S]), where the notation G[S] = (V, S) is used and λ(H) denotes the edge-connectivity of a graph H. We are interested in this problem because of its close relationship with s-t edge-connectivity. Unfortunately, this game does not fit the standard formulation of Section 2. We first observe that this game always has a nonempty core, because the characteristic vector I_C of a minimum cut C ⊆ E of G is in the core: for each S ⊆ E, z = I_C satisfies z(S) = |S ∩ C| ≥ v(S) (since S ∩ C is a cut of G[S]). Since all the O(|V|^2) minimum cuts of G can be enumerated in polynomial time [8], all the imputations in the core corresponding to minimum cuts can be obtained in polynomial time.
Although the edge-connectivity game is very similar to the s-t edge-connectivity game studied in Section 4.1, the convex characterization of the core does not hold in general. Let k = λ(G). The convex characterization holds when k = 1, but there are counterexamples for both k = 2 and k = 3, as shown below. For the case k = 2, consider two complete graphs K_3 joined by identifying one vertex of each. Assign 1/2 to all edges of one K_3 and 1/2 to exactly one edge of the other K_3. It is easy to see that this is a counterexample. For k = 3, we may consider two Petersen graphs sharing one vertex. Assign 2/11 to all edges of one Petersen graph and 3/11 to exactly one edge of the other. It is again easy to see that this is a counterexample. These counterexamples show that some of the nice mathematical results of the previous sections break down for the edge-connectivity game. Similarly, polynomial time solvability no longer holds in general.
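The positive part of this discussion, finding one core member from a global minimum cut, is easy to sketch (our own function names; networkx assumed, whose Stoer-Wagner routine returns one minimum cut of an undirected graph, with unweighted edges counted as weight 1):

import networkx as nx

def edge_connectivity_core_member(G):
    """Return the characteristic vector of a global minimum cut, a core member
    of the edge-connectivity game on the connected graph G."""
    cut_value, (U, W) = nx.stoer_wagner(G)
    cut_edges = {frozenset(e) for e in G.edges if (e[0] in U) != (e[1] in U)}
    return {frozenset(e): (1.0 if frozenset(e) in cut_edges else 0.0) for e in G.edges}

# two triangles sharing a vertex: lambda(G) = 2, so the cut has two edges
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
print(edge_connectivity_core_member(G))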
Theorem 20 Let k denote the edge-connectivity of an undirected graph G = (V, E). Testing whether an imputation z is in the core of the edge-connectivity game can be done in polynomial time for k ≤ 2, but is co-NP-complete for k = 3.
Proof: For the case k = 1, we find a minimum spanning tree of G with respect to the weights z : E → R+. Obviously z satisfies z(E) = 1, since it is an imputation. It is easy to see that z is not in the core if and only if the minimum spanning tree has weight less than 1. For k = 2, in addition to the requirement that the weights in a minimum spanning tree sum to at least 1, there is the further requirement that every edge e with z(e) > 0 must lie in a cut of size 2. Both can be checked in polynomial time. For k = 3, we reduce the Hamiltonian cycle problem for cubic 3-vertex-connected graphs (note that such graphs are 3-edge-connected), which is known to be NP-complete, to the problem of testing whether an imputation z is in the core. Given a cubic 3-vertex-connected graph G on n (≥ 4) vertices (which has 3n/2 edges), we join it with a K_4 by identifying one vertex of G with one vertex of K_4. Let z(e) = 2/(n+1) for all edges e of G, and assign 3/(n+1) to one edge of K_4 and zero to the other edges of K_4. This z is clearly an imputation. It is then not difficult to see that z is in the core if and only if there is no Hamiltonian cycle in G. □
The convex characterization and the test for membership in the core are open for k ≥ 4. One way to find a counterexample to the convex characterization of the core would be to construct a k-regular k-edge-connected graph which contains no ℓ-regular ℓ-edge-connected spanning subgraph for any ℓ < k; then a construction similar to the above would work. In addition, we conjecture that the problem of testing whether an imputation z is in the core is co-NP-complete for all k ≥ 4.
5 Conclusion
Many revenue distribution and cost allocation problems fall into the category of combinatorial optimization games introduced in this paper. General mathematical models are of great help in developing mathematical methods and algorithms to solve them. Starting from Owen's linear production game model and Granot's generalization, it is natural to introduce integrality constraints in order to deal with more general problems. We are able to identify a large subclass of these problems for which a general theory of the core can be developed. This turns out to be of enormous help in designing algorithms for the related questions, and it is then extended to study properties of totally balanced games. As an application of these theoretical results, we extensively study the properties of the cores of several fundamental combinatorial games on graphs. In addition, for several combinatorial optimization problems we characterize exactly the subclass of instances for which the linear programming relaxation has an integer optimal solution. This may be of independent interest to the field of combinatorial optimization.
Many open problems result from our approach. Would our model help in the study of other solution concepts for cooperative games? Can these nice results about cores be extended to larger classes of combinatorial optimization games? The mathematical formulation of many combinatorial optimization games will be on hypergraphs instead of graphs; can our methodology still work for hypergraphs? One may observe that totally balanced games and balanced matrices are somewhat related. Can we completely understand the relationship between totally balanced games and balanced matrices in our model?
Acknowledgements
A preliminary version of this paper was presented at the ACM/SIAM SODA 1997 [7]. This research was partially supported by an invitation fellowship from the Japan Society for the Promotion of Science (by which the first author visited Kyoto University), a Scientific Grant-in-Aid from the Ministry of Education, Science and Culture of Japan, a subsidy from the Inamori Foundation, and a research grant from the Natural Sciences and Engineering Research Council of Canada. We would like to thank Daniel Granot and Daozi Zeng for their valuable comments and suggestions, as well as for pointing us to several classical works in this field, which were very helpful for improving the early draft. We would also like to thank S. Fekete and W. Kern for pointing us to their (and their co-authors') very interesting and related papers.
References
[1] C.G. Bird, "Cost-allocation for a spanning tree", Networks 6, pp. 335-350, 1976.
[2] A. Claus and D. Granot, "Game Theory Application to Cost Allocation for a Spanning Tree," Working Paper No. 402, Faculty of Commerce and Business Administration, University of British Columbia, June 1976.
[3] A. Claus and D.J. Kleitman, "Cost Allocation for a Spanning Tree," Networks 3, pp. 289-304, 1973.
[4] I.J. Curiel, Cooperative Game Theory and Applications, Ph.D. dissertation, University of Nijmegen, the Netherlands, 1988.
[5] G. Debreu and H. Scarf, "A Limit Theorem on the Core of an Economy", International Economic Review 4, pp. 235-246, 1963.
[6] X. Deng and C. Papadimitriou, "On the Complexity of Cooperative Game Solution Concepts," Mathematics of Operations Research 19, 2, pp. 257-266, 1994.
[7] X. Deng, T. Ibaraki and H. Nagamochi, "Combinatorial Optimization Games," Proceedings 8th Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, pp. 720-729, 1997.
[8] E.A. Dinits, A.V. Karzanov and M.V. Lomonosov, "On the Structure of a Family of Minimal Weighted Cuts in a Graph", Studies in Discrete Optimization (in Russian), A.A. Fridman (Ed.), Nauka, Moscow, pp. 290-306, 1976.
[9] P. Dubey and L.S. Shapley, "Totally Balanced Games Arising from Controlled Programming Problems," Mathematical Programming 29, pp. 245-267, 1984.
[10] J. Edmonds, "Optimum Branchings," National Bureau of Standards Journal of Research 69B, pp. 125-130, 1967.
[11] J. Edmonds, "Edge-Disjoint Branchings," Combinatorial Algorithms, edited by R. Rustin, Academic Press, New York, 1973.
[12] J. Edmonds and R.M. Karp, "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems," JACM 19, pp. 248-264, 1972.
[13] U. Faigle, S. Fekete, W. Hochstättler and W. Kern, "On Approximately Fair Cost Allocation in Euclidean TSP Games," Technical Report, Department of Applied Mathematics, University of Twente, The Netherlands, 1994.
[14] U. Faigle, S. Fekete, W. Hochstättler and W. Kern, "On the Complexity of Testing Membership in the Core of Min-cost Spanning Tree Games," Technical Report #94.166, Universität zu Köln, Germany, 1994.
[15] U. Faigle, S. Fekete, W. Hochstättler and W. Kern, "The Nukleon of Cooperative Games and an Algorithm for Matching Games," Technical Report #94.178, Universität zu Köln, Germany, 1994.
[16] U. Faigle and W. Kern, "Partition Games and the Core of Hierarchically Convex Cost Games," Universiteit Twente, Faculteit der Toegepaste Wiskunde, Memorandum No. 1269, June 1995.
[17] L.R. Ford and D.R. Fulkerson, "Flows in Networks," Princeton University Press, Princeton, 1962.
[18] M.R. Garey and D.S. Johnson, "Computers and Intractability: A Guide to the Theory of NP-completeness", W.H. Freeman & Company, San Francisco, 1979.
[19] D. Granot, "A Generalized Linear Production Model: A Unified Model," Mathematical Programming 34, pp. 212-222, 1986.
[20] D. Granot and G. Huberman, "On the Core and Nucleolus of Minimum Cost Spanning Tree Games," Mathematical Programming 29, pp. 323-347, 1984.
[21] M. Grötschel, L. Lovász and A. Schrijver, "Geometric Algorithms and Combinatorial Optimization," Springer-Verlag, Tokyo, 1988.
[22] P. Hall, "On Representatives of Subsets," J. London Math. Soc. 10, pp. 26-30, 1935.
[23] E. Kalai, "Games, Computers, and O.R.," ACM/SIAM Symposium on Discrete Algorithms, pp. 468-473, 1995.
[24] E. Kalai and W. Stanford, "Finite Rationality and Interpersonal Complexity in Repeated Games," Econometrica 56, pp. 397-410, 1988.
[25] E. Kalai and E. Zemel, "Totally Balanced Games and Games of Flow," Mathematics of Operations Research 7, pp. 476-478, 1982.
[26] E. Kalai and E. Zemel, "Generalized Network Problems Yielding Totally Balanced Games," Operations Research 30, pp. 998-1008, 1982.
[27] V. Knoblauch, "Computable Strategies for Repeated Prisoner's Dilemma," to appear in Games and Economic Behavior.
[28] D. Koller, N. Megiddo and B. von Stengel, "Fast Algorithms for Finding Randomized Strategies in Game Trees," Proceedings of the 26th ACM Symposium on the Theory of Computing, pp. 750-759, 1994.
[29] N. Megiddo, "Computational Complexity and the Game Theory Approach to Cost Allocation for a Tree," Mathematics of Operations Research 3, pp. 189-196, 1978.
[30] N. Megiddo, "Cost Allocation for Steiner Trees," Networks 8, pp. 1-6, 1978.
[31] N. Megiddo, "On Computable Beliefs of Rational Machines," Games and Economic Behavior 1, pp. 144-169, 1989.
[32] H. Nagamochi, D. Zeng, N. Kabutoya and T. Ibaraki, "Complexity of the Minimum Base Games on Matroids," to appear in Mathematics of Operations Research.
[33] A. Neyman, "Bounded Complexity Justifies Cooperation in the Finitely Repeated Prisoner's Dilemma," Economics Letters 19, pp. 227-229, 1985.
[34] G. Owen, "On the Core of Linear Production Games," Mathematical Programming 9, pp. 358-370, 1975.
[35] M. Padberg, "Linear Optimization and Extensions," Springer, Berlin, 1995.
[36] C.H. Papadimitriou, "Games Against Nature," J. of Computer and System Sciences 31, pp. 288-301, 1985.
[37] C.H. Papadimitriou, "On Games Played by Automata with a Bounded Number of States," J. of Game Theory and Economic Behavior 4, pp. 122-131, 1992.
[38] C.H. Papadimitriou and M. Yannakakis, "On Complexity as Bounded Rationality," Proceedings of the 26th ACM Symposium on the Theory of Computing, pp. 726-733, 1994.
[39] A. Schrijver, "Theory of Linear and Integer Programming", John Wiley, Chichester, 1986.
[40] L.S. Shapley, "On Balanced Sets and Cores", Naval Research Logistics Quarterly 14, pp. 453-460, 1967.
[41] L.S. Shapley and M. Shubik, "On Market Games," J. Econ. Theory 1, pp. 9-25, 1969.
[42] L.S. Shapley and M. Shubik, "The Assignment Game," International Journal of Game Theory 1, pp. 111-130, 1972.
[43] M. Shubik, "Game Theory Models and Methods in Political Economy," Handbook of Mathematical Economics, Vol. I, edited by Arrow and Intriligator, North-Holland, New York, 1981.
[44] H. Simon, "Theories of Bounded Rationality," in Decision and Organization, R. Radner (ed.), North-Holland, 1972.
[45] J. Szep and F. Forgo, "Introduction to the Theory of Games," D. Reidel Publishing Company, Boston, 1985.
[46] A. Tamir, "On the Core of a Traveling Salesman Cost Allocation Game", Operations Research Letters 8, pp. 31-34, 1989.
[47] A. Tamir, "On the Core of Network Synthesis Games", Mathematical Programming 50, pp. 123-135, 1991.