Algorithmic Game Theory and Graphs

Oren Ben-Zwi

A THESIS SUBMITTED FOR THE DEGREE “DOCTOR OF PHILOSOPHY”

University of Haifa Faculty of Social Sciences Department of Computer Science

October, 2011

Algorithmic Game Theory and Graphs

Oren Ben-Zwi

Supervised by: Prof. Ilan Newman

A THESIS SUBMITTED FOR THE DEGREE “DOCTOR OF PHILOSOPHY”

University of Haifa Faculty of Social Sciences Department of Computer Science

October, 2011

Recommended by:

Date: (Advisor)

Approved by:

Date: (Chairman of Committee)


This work is dedicated to Amir Rozenberg (Amiro). April 29th, 1972 – June 25th, 1992.

Acknowledgments

First and foremost I wish to thank Ilan Newman. I was most privileged to work with Ilan, who is the best advisor one could hope for. Not only does Ilan have supreme knowledge, he also has a phenomenal ability to share this knowledge with others. Furthermore, Ilan has such a passion for doing mathematics that he will always put everything aside and listen to any question, solution, idea, or even an ε-idea I threw at him. Once I entered his room and asked: "are you busy?" (in those days he served as the head of the department). As a reply he gloomily pointed at a pile of bureaucracy papers on his desk. "Never mind," I said, "I'll come back some other time," and turned away to leave. "Wait," Ilan called, "is it mathematics?" "Yep," I replied, and so started another of the sessions of great ideas I loved so much. On top of everything, Ilan has the ability to transmit his joy and love of math, so working with him is above all fun. During the course I have gone through I was assisted by many great people. I wish to express my gratitude mostly to Guy Wolfovitz, Eyal Ackerman, Ron Lavi and Danny Hermelin for many long and helpful discussions, for sharing their insights with me, and for being such good friends. I wish to thank the Caesarea Rothschild Institute for its kind hospitality during the last years. Lastly, I wish to thank my one and only, Sharon: also for everything she does, but mostly for who she is. THANK YOU.


Contents

Abstract

Introduction
   1 Algorithmic Game Theory
   2 Hats, Auctions and Derandomization
   3 Social Networks
   4 Local and Global Properties

1 Hats, Auctions and Derandomization
   1 Competitive Analysis on Digital Good Auctions
   2 A General Derandomization
   3 Bi-valued Auctions and a Hat Game
   4 Auction Discussion

2 Social Networks
   1 Bounded Tree Width
   2 Algorithm for TSS on Small Treewidth Networks
   3 A Lower Bound
   4 A Non-Monotone Model
   5 Combinatorial Model and Bounds for TSS

3 Local vs. Global
   1 Local Price of Anarchy
   2 A basic bound on the price of anarchy
   3 Refinements of the basic bound
   4 Monotonicity
   5 Structural Properties

Conclusion

Bibliography


Algorithmic Game Theory and Graphs

Oren Ben-Zwi

Abstract

Algorithmic Game Theory is an exciting new research field bridging game theory and computer science. This thesis deals with some fundamental issues in this subject. We investigate derandomizations of randomized digital good auctions. We propose a general derandomization method which can be used to show that for every randomized auction there exists a deterministic auction with asymptotically the same revenue. In addition, we construct an explicit optimal deterministic auction for bi-valued auctions. The analysis of this construction adopts the finer lens of general competitiveness, which considers additive losses on top of multiplicative losses. We give a solution to an open problem on hat games, which is strongly related to digital good auctions and serves as a building block in deriving a solution for the bi-valued scenario. The new explicit auction we introduce implies that general competitiveness is the right notion to use in this setting, as this optimal auction is uncompetitive with respect to competitive measures which do not consider additive losses.

Target set selection is a graphical economic problem. We introduce this problem and prove that the tree width of the underlying network is crucial for its solution. Specifically, we prove that the problem has a polynomial time algorithm on graphs with bounded tree width, and has no polynomial time solution on graphs with unbounded tree width. We also investigate a combinatorial version of the (algorithmic) target set selection problem. We prove some new lower and upper bounds, which improve upon the best known bounds for that problem, and show that the bounds are tight.

We investigate the price of anarchy in graphical games. We suggest using the combinatorics of the graph in which the game is embedded in order to derive bounds on the price of anarchy. We introduce a notion of a local price of anarchy, and use graph covers to derive these bounds. We show that the bounds are tight.


Introduction

1 Algorithmic Game Theory

Algorithmic Game Theory is an emerging research field bridging game theory and computer science. It provides an abundance of highly important and interesting research problems that have attracted the attention of many researchers in recent years. A comprehensive introduction to this fascinating field can be found in [88]. This dissertation focuses on some fundamental issues in this novel field. Specifically, we study the power of randomness in mechanism design, the influence of an individual in a social network, and the effects of structure on non cooperative environments.

Game Theory

Game theory investigates the interaction between strategic, self interested parties. Such parties can be nations, communities, people, corporations, and even biological species. Therefore, game theory has applications in many fields, ranging from economics and political science through psychology and sociology to biology. The central notion of game theory is a game, in which each party has a utility it tries to maximize, derived from its own behavior (strategy) and the behavior of the other parties. Game theory is concerned with questions such as how to analyze a game, what rational behavior is, etc. Its main focus is on various equilibrium concepts such as Nash, Bayesian, evolutionarily stable strategies, and correlated equilibria. An introduction to game theory can be found in many textbooks (e.g. [90]). The development of a theory that will facilitate the understanding of real life strategic interactions is a huge challenge. Indeed, game theory as it is today is not free of drawbacks. In particular, in many situations the predictive power of game theory is still very limited (see e.g. [11]). There are several reasons for this. First, humans and organizations do not fully adhere to the underlying behavioral assumptions of game theory. Second, games are very complex objects; thus, even if one wishes to act as game theory instructs, it is often computationally impossible, or very expensive, to figure out what this action is. Third, game theoretic modeling requires a lot of knowledge about the game, and in many practical situations such knowledge is not fully available. Concepts and techniques which were developed in computer science have a good chance of addressing some of these difficulties.


Mechanism Design

The theory of mechanism design in economics and game theory focuses on the design of protocols for non cooperative environments. An introduction to mechanism design can be found in textbooks such as [54, 71, 78]. It is concerned with a center which wishes to maximize an objective function that depends on a vector of information variables. The value of each variable is privately known only to a self interested agent, who is not controlled by the center. In order to obtain its objective, the center constructs a game in which the agents reveal their private information. The goal is to design the game such that the actions which are desired from the center's perspective also maximize the agents' own utility. Note that if the game is not carefully constructed, the agents will manipulate the mechanism and prevent the center from achieving its goal.

Randomization in Algorithmic Mechanism Design

The notion of Algorithmic Mechanism Design was introduced by Nisan and Ronen in their seminal paper [87]. A survey can be found in [72]. Already in that pioneering work it was shown that randomized mechanisms are more powerful than deterministic ones: a specific scheduling task can be solved by a universally truthful randomized mechanism strictly better than by any deterministic one. This result suggests that the new field is more related to online algorithmic analysis than to classical computation, where such a separation is not known (the P vs. BPP problem). Since then, the distinction between randomized and deterministic mechanisms was further established by presenting more randomized mechanisms (for example [73]) and almost no deterministic ones. In a recent work, Dobzinski and Dughmi [39] provided a separation between truthful in expectation polynomial-time mechanisms and polynomial-time universally truthful mechanisms, where a mechanism is truthful in expectation if a bidder always maximizes its expected profit by bidding truthfully, and a universally truthful mechanism is a probability distribution over deterministic truthful mechanisms.

Theory of Computer Science

In a nutshell, computer science, and theoretical computer science in particular, is devoted to the understanding of the notion of computation. Over the years, computer science has gained a deep understanding of algorithms and complexity, and developed a unique set of conceptual and mathematical tools. These tools are very relevant to the understanding of games. In addition, basic settings of game theory and economics are similar to those of computer science. For example, a game or an economic system can be viewed as a computational device, as it transforms input to output; economic mechanisms can be viewed as distributed algorithms; etc. An intriguing phenomenon of recent years is the emergence of major computational non cooperative environments and applications. These include the Internet, numerous economic and electronic commerce applications, peer to peer systems, wide area networks, multi agent systems, grid computing, and more. In contrast to traditional environments, the main entity in a non cooperative environment is a self interested agent or a software agent representing it. Such environments give rise to many computational game theory problems [91], which are often elegant and challenging. It is therefore no wonder that research in this field flourishes. Yet, many fundamental challenges are still open. We start by describing in a nutshell the work presented in this thesis.

2 Hats, Auctions and Derandomization

Marketing a digital good may suffer from low revenue due to the incomplete knowledge of the marketer. Consider, for example, a major sport event with some 10^8 potential (electronic) viewers. Assume further that every potential viewer is willing to pay $10 or more to watch the event, and that no more than 10^6 are willing to pay $100 for it. If the concessionaire charges $1 or $100 as a fixed pay per view price for the event, the overall collected revenue will be $10^8 at the most. This is worse than the $10^9 that could be collected had the valuations been known beforehand. This lack of knowledge motivates the study of unlimited supply, unit demand, single item auctions. Goldberg et al. [58] studied these auctions; in order to obtain a prior free, worst case analysis framework, they suggested comparing the revenue of these auctions to the revenue of optimal fixed price auctions. They adopted the online algorithms terminology [99] and named the revenue of the fixed price auction the offline revenue, and the revenue of a multi price truthful auction, i.e., an auction in which every bidder has an incentive to bid its own value, the online revenue. The competitive ratio of an auction for a bid vector b is defined to be the ratio between the best offline revenue for b and the revenue of that auction on b. The competitive ratio of an auction is simply the worst competitive ratio of that auction over all possible bid vectors. For randomized auctions, a similar notion is defined by taking the expected revenue. If an auction has a constant competitive ratio it is said to be competitive. If an auction has a constant competitive ratio, possibly with some small additive loss, it is said to be general competitive (see Chapter 1, Section 1 for definitions). We remark that the use of online auctions in the usual context of online algorithms appears in the literature [74], even in the unlimited supply setting [70], but here we will stick to Goldberg et al.'s [58] notion. This attempt to carefully select the right benchmark in order to obtain a prior free, worst case analysis was a posteriori justified when Hartline and Roughgarden defined a general benchmark for the analysis of single parameter mechanism design problems [60]. This general benchmark, which bridges Bayesian analysis [85] from economics and worst case analysis from the theory of computer science, coincides, for our setting, with the optimal fixed price benchmark [60]. A further justification for taking the offline fixed price auction as a benchmark is the following. Although an online auction seems less restricted than the offline fixed price auction, as it can assign different players different prices, it was shown [56] that the online truthful revenue is no more than the offline revenue (this holds only for a restricted benchmark, as long as additive losses are not allowed; see Chapter 1 for a discussion). In fact, there even exists a lower bound of 2.42 on the competitive ratio of any truthful online auction [57]. Note, however, that the optimal offline revenue (or more precisely the optimal fixed price) is unknown to the concessionaire in advance. It is well known (see for example [82]) that in order to achieve truthfulness one can use only bid independent auctions, i.e., auctions in which the price offered to a bidder is independent of the bidder's own bid value. Hence, an intuitive auction that often comes to one's mind is the Deterministic Optimal Price (DOP) auction [3, 56, 98]. In this auction the mechanism computes and offers each bidder the price of an optimal offline auction for all other bids. This auction performs well on most bid vectors. In fact, it was even proved by Segal [98] that if the input is chosen uniformly at random, then this auction is asymptotically optimal. For a worst case analysis, however, it performs very poorly. Consider, for example, an auction in which there are n bidders and only two possible bid values: 1 and h, where h ≫ 1. We refer to this setting as bi-valued auctions. Let nh be the number of bidders who bid h. Applying DOP on a bid vector for which nh = n/h will result in a revenue of nh instead of the revenue n of an offline auction. This is because every "h-bidder" is offered 1 (since n − 1 > h · (nh − 1)) and every "1-bidder" is offered h (since n − 1 < h · nh). Here an "h-bidder" refers to a bidder that bids h and a "1-bidder" refers to a bidder that bids 1. Therefore, the competitive ratio of DOP is unbounded. Similar examples regarding the performance of DOP in the bi-valued auction setting appeared already in Goldberg et al. [56] and in Aggarwal et al. [3].
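To make the failure mode concrete, here is a minimal Python sketch (ours, not from the thesis; the function names are hypothetical) of the optimal fixed price benchmark and of DOP restricted to the bi-valued setting. On a bid vector with nh = n/h high bids it reproduces the behavior described above: every h-bidder is offered 1 and every 1-bidder is offered h, so DOP collects about nh while the offline benchmark collects n.

    def optimal_fixed_price(bids):
        # Best single price: try every distinct bid value p and take p * |{i : b_i >= p}|.
        return max(p * sum(1 for b in bids if b >= p) for p in set(bids))

    def dop_revenue(bids):
        # Deterministic Optimal Price: offer bidder i the optimal fixed price of b_{-i};
        # bidder i pays that price only if it does not exceed its own bid.
        revenue = 0
        for i, b in enumerate(bids):
            rest = bids[:i] + bids[i + 1:]
            price = max(set(rest), key=lambda p: p * sum(1 for x in rest if x >= p))
            if price <= b:
                revenue += price
        return revenue

    n, h = 100, 10
    bids = [h] * (n // h) + [1] * (n - n // h)        # n_h = n/h high bids
    print(optimal_fixed_price(bids), dop_revenue(bids))  # 100 versus 10: n versus n_h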

In order to establish this result the authors used guessing auctions, in which a bidder gets the good only if the price equals its bid exactly (rather than being lower or equal, as in the standard setting). The guessing auctions are then "solved" using a hat guessing game which they introduce. Therefore, for every randomized auction a deterministic "dual" auction can be constructed, though not in polynomial time, where the deterministic one guarantees a revenue which is close to the expected revenue of the randomized auction. A more formal statement of their result is the following: given a randomized auction A which accepts bid-vectors in [1, h]^n, there exists a deterministic, asymmetric auction AD satisfying PAD(b) ≥ PA(b)/4 − O(h) for every b ∈ [1, h]^n; here PAD(b) is the revenue of AD given a bid-vector b and PA(b) is the expected revenue of A given a bid-vector b. The same result also holds in the more restrictive case where A accepts only discrete bid-vectors in [h]^n. In addition, Aggarwal et al. showed that if the bid-vectors are restricted to be vectors of powers of 2, then the multiplicative factor of 4 above can be improved to 2.

A New Auction Derandomization

We show how to eliminate the multiplicative factor of 4. We use Lovász's Local Lemma [45] to show that for every randomized auction there exists a deterministic auction that guarantees the expected revenue of the randomized one, on any bid vector. More formally, for a randomized auction A and bid vector b let PA(b) be the expected revenue of A on b. We show that given a randomized auction A, there exists a deterministic auction which, given a bid-vector b ∈ [h]^n, guarantees a revenue of PA(b) − O(h√(n ln hn)). As is the case with the construction of Aggarwal et al. [2], our construction is also not polynomial time computable.

Bi-Valued Auctions

For bi-valued auctions, with bid values {1, h} and n bidders, we show a polynomial time deterministic auction for which we guarantee a revenue of max{n, h · nh} − O(√(n · h)), where nh is simply the number of bidders that bid h. We then show that this bound is unconditionally optimal, by showing that no auction (including a randomized superpolynomial one) can guarantee more than max{n, h · nh} − Ω(√(n · h)). That is, there exists an auction with no multiplicative loss and with only O(√(n · h)) additive loss, and every auction has at least these losses. Let us note here that if we restrict ourselves to anonymous (symmetric) auctions, then we have a multiplicative loss of Ω(h) and an additive loss of Ω(n/h) over the max{n, h · nh} revenue of the best offline auction [3, 56].

A Hat Trick

In order to find a polynomial time deterministic auction for bi-valued auctions, we solve a certain hat guessing puzzle. Hat guessing games were broadly brought to the attention of researchers by Peter Winkler [106], and have since been studied in many works such as [4, 30, 40, 43, 46]. The beautiful work of Aggarwal et al. [3] established a connection between hat guessing games and unlimited supply, unit demand, single item auctions. In fact, the authors introduced three different hat games and used them to establish their derandomizations. We show how a different hat game can help in forming an answer to the bi-valued auctions setting. This hat game was previously studied by Doerr and by Feige [40, 46]. Our deterministic hat strategy improves Doerr's result and answers an open question of Feige. We state next the majority hat game and the result. Consider the following game. There are n players, each wearing a hat colored red or blue. Each player does not see the color of its own hat but does see the colors of all other hats. Simultaneously, each player has to guess the color of its own hat, without communicating with the other players. The players are allowed to meet beforehand, hats-off, in order to coordinate a strategy. We give a polynomial time deterministic strategy which guarantees that the number of correct guesses is at least max{nr, nb} − O(n^{1/2}), where nr is the number of players with a red hat and nb = n − nr is the number of players with a blue hat. We elaborate on these notions in Chapter 1, which is based on works that were published as [16–18, 20].

3 Social Networks

Social networks, modeled by graphs with individuals or organizations as vertices and relationships or interactions as edges, have long been a major scientific object in many fields of science, including most social sciences [53, 80, 81], the life sciences [38, 105] and medicine [38, 80, 92]. "(Social networks) ... play a critical role in determining the way problems are solved, organizations are run, and the degree to which individuals succeed in achieving their goals" [Wikipedia (October 2011) [101]]. The adoption of everyday decisions in public affairs, fashion, movie-going, and consumer behavior is now widely believed to migrate through a population along a network of influence. The same diffusion process, when imitated intentionally, is called viral marketing. This has roots in [63], with the serendipitous discovery that messages from the media may be further mediated by informal 'opinion leaders' who intercept, interpret, and diffuse what they see and hear to the personal networks in which they are embedded. Viral marketing has recently become a widespread technique for promoting novel ideas, marketing new products, or spreading innovation [41, 65, 66]. In this method, one wishes to find a good set of individuals in a network, persuade them to adopt the idea, product or innovation, and wait for the 'word-of-mouth' process to take care of 'spreading the rumor'. In the threshold model [59] we are given a graph and a threshold function on its vertices. We then run an iterative process: first a subset of the vertices is activated, and then in each iteration every vertex whose active neighbors fulfill its threshold becomes active. Two optimization problems have been proposed as the Target Set Selection problem. These two problems coincide in the same search version, which we define and tackle in Chapter 2. The first problem was suggested by Kempe, Kleinberg and Tardos [67], who asked for the maximum number ℓ of active nodes, given a graph G, a threshold function T, and an integer k as the size of the selected set. They showed a strong inapproximability result even in highly restricted settings. They also gave a constant-factor approximation algorithm for a special case of that model in which the thresholds are taken uniformly at random [67, 84]. The second optimization problem, a minimization version, was suggested by Chen [34]. It asks for the minimum size k of a selected set, given a graph G, a threshold function T, and an integer ℓ as the required number of active vertices at the end of the process. Chen explored the problem when ℓ is at least a constant fraction of all the vertices, and showed a poly-logarithmic inapproximability factor even when the input graph is bipartite with maximum degree bounded by a constant and the threshold function has maximum value 2. On the positive side, an exact algorithm was suggested for the case where the input graph is a tree.

Bounded Tree-Width Graphs

In the presence of such a high inapproximability factor, an exact algorithm for special graph classes is desirable. Indeed, we generalize Chen's result on trees to graphs of bounded tree-width [95]. This family plays an important role in algorithm design, and both exact algorithms and approximations have been proposed over the years for several NP-hard problems on it [8, 10, 27]. We suggest a polynomial algorithm for graphs of bounded tree-width. We also show that this algorithm is essentially best possible, by showing, under some reasonable complexity assumptions, that there cannot be a polynomial time algorithm for graphs with unbounded tree-width. In Chapter 2 we present the results on this subject. In particular, we present a polynomial algorithm for bounded tree-width graphs and a conditional lower bound. These results are based on papers which were published as [15, 21].

Combinatorial Model and Bounds for Target Set Selection

A Perfect Target Set is a set of vertices whose activation will eventually activate the entire graph, and the Perfect Target Set Selection problem (PTSS) asks for the minimum such initial set. It is known [34] that PTSS is hard to approximate, even for some special cases such as bounded-degree graphs or majority thresholds. We propose a combinatorial model for this dynamic activation process, and use it to represent PTSS and its variants by linear integer programs. This allows one to use standard integer programming solvers for solving small PTSS instances. We also show combinatorial lower and upper bounds on the size of the minimum Perfect Target Set. Our upper bound implies that there are always Perfect Target Sets of size at most |V|/2 and 2|V|/3 under majority and strict majority thresholds, respectively, both in directed and undirected graphs. This improves the bounds of 0.727|V| and 0.7732|V| found recently by Chang and Lyuu [32] for majority and strict majority thresholds in directed graphs, and matches their bound under majority thresholds in undirected graphs. Furthermore, our proof is much simpler, and we observe that some of these bounds are tight. One interesting and perhaps surprising implication of our lower bound for undirected graphs is that it is easy to get a constant factor approximation for PTSS for "relatively balanced" graphs (e.g., bounded-degree graphs, nearly regular graphs) with a "more than majority" threshold (that is, t(v) = ϑ · deg(v) for every v ∈ V and some constant ϑ > 1/2), whereas no poly-logarithmic approximation exists for "more than majority" graphs. These results are based on a paper which was published as [1], and are elaborated on in Chapter 2, Section 5.
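As an illustration of the activation dynamics underlying Target Set Selection, the following short Python sketch (ours, not part of the cited papers) runs the threshold process: given a graph, a threshold for each vertex, and an initial target set, it repeatedly activates every vertex whose active neighbors meet its threshold.

    def activate(neighbors, threshold, target_set):
        # neighbors: dict vertex -> set of neighbors; threshold: dict vertex -> int.
        active = set(target_set)
        changed = True
        while changed:
            changed = False
            for v in neighbors:
                if v not in active and len(neighbors[v] & active) >= threshold[v]:
                    active.add(v)          # v's active neighbors meet its threshold
                    changed = True
        return active

    # A path on 4 vertices with thresholds 1; activating vertex 1 eventually activates everything.
    nbrs = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    thr = {0: 1, 1: 1, 2: 1, 3: 1}
    print(sorted(activate(nbrs, thr, {1})))   # [0, 1, 2, 3]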


4 Local and Global Properties

Measuring a Game

One of the most basic questions which can be asked about a game is how good the game is, given a specific goal on its possible outcomes. This question is non trivial, as we do not know how the participants in the game are going to behave. The price of anarchy [69] is one such measure, developed in the spirit of computer science. Given a game and an objective function on its possible outcomes, the price of anarchy of the game is defined as the ratio between the value of a worst Nash equilibrium of the game and the value of a global optimum. The global optimum value is the one obtained when a 'super power' entity can force a strategy on the parties, so as to optimize the objective of the game. The philosophy is that the game will indeed end up in one of its Nash equilibria, but we do not know which one. A game with a price of anarchy close to one is thus good. This natural definition reflects a relation between game theory, approximation algorithms and online algorithms: it measures both the selfishness of the parties and the lack of coordination amongst them, in the same manner that the approximation ratio measures the lack of computational power and the competitive ratio measures the lack of knowledge. Indeed, many beautiful and strong results have been obtained over the years [12, 96, 103]. Another measure developed in the same spirit is the price of stability [7]. The price of stability of a game is defined as the ratio between the value of a best Nash equilibrium of the game and the global optimum value. The philosophy here is that the 'super power' entity can suggest an equilibrium strategy to the parties, from which they have no incentive to deviate.

Succinct Representation

As mentioned, a major drawback of game theory is its high complexity. In fact, the basic representation of a game is exponential in the number of participants. This creates acute problems. First, it is very difficult to compute or reason about properties such as Nash equilibria. Second, and even more important, the participants themselves cannot behave as game theory predicts them to; as a result, the strategic considerations of the participants vary significantly. Currently, game theory does not have a good answer to these difficulties. In reality, even when the number of participants in a game is seemingly small, the actual number of players is usually much larger. Think, for example, of a strategic interaction between two nations: the actual number of self interested parties may involve political parties, influential figures in both countries, etc. This creates a severe scaling problem for game theory. The notion of a graphical game was introduced in [64] in order to address some of the difficulties mentioned above. A graphical game is a graph in which each player is represented by a vertex. The utility of a player depends only on its own strategy and the strategies of its direct neighbors. This representation is without loss of generality, as every game can be represented by a complete graph. The original motivation behind this definition was computational: for graphical games which are embedded in graphs of bounded degree d, the representation complexity descends from nA^{n−1} to nA^d, where n is the number of players and A is the maximum number of actions of a player. Furthermore, it is even possible to compute Nash equilibria of some classes of graphical games in polynomial time (see e.g. [44]). We believe that many such representations have far reaching properties, and that one can exploit the structure of the underlying graph, for instance, to reason about the game. These representations, above all, give rise to questions of great mathematical beauty. We study relationships between local and global properties of games. Our methods strongly rely on decomposing a graph into subgraphs, in the same manner that Linial et al. [75] used decompositions to investigate graph properties. Our game measure is the price of anarchy. We present this subject in Chapter 3. This chapter relies on works that appear in [19, 22].
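For a concrete toy illustration of the price of anarchy, the following Python sketch (ours; the particular cost game is an assumption chosen only for illustration) enumerates the pure Nash equilibria of a small two-player cost game and compares the worst equilibrium with the social optimum.

    from itertools import product

    actions = ['C', 'D']
    # cost[(a0, a1)] = (cost of player 0, cost of player 1); lower is better.
    cost = {('C', 'C'): (1, 1), ('C', 'D'): (3, 0), ('D', 'C'): (0, 3), ('D', 'D'): (2, 2)}

    def is_nash(profile):
        # No player can lower its own cost by a unilateral deviation.
        for i in range(2):
            for dev in actions:
                alt = list(profile); alt[i] = dev
                if cost[tuple(alt)][i] < cost[profile][i]:
                    return False
        return True

    social = {p: sum(cost[p]) for p in product(actions, repeat=2)}
    nash = [p for p in social if is_nash(p)]
    poa = max(social[p] for p in nash) / min(social.values())
    print(nash, poa)   # the unique equilibrium ('D', 'D') costs twice the optimum, so PoA = 2.0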


Chapter 1

Hats, Auctions and Derandomization

1 Competitive Analysis on Digital Good Auctions

This chapter introduces research related to digital good auctions. We focus on a notion of general competitive analysis which also considers additive losses on top of the usual multiplicative ones. Already in the work that suggested using competitive analysis, namely Goldberg et al. [58], a major obstacle arose: it was observed that no auction can be competitive against bid vectors with a single high value (see Goldberg et al. [56] for details). The first solution that was suggested to this problem was taking a different benchmark as the offline auction. This different benchmark was again the maximum single price auction, only now the number of winning bidders is required to be at least two. The term competitive was then used to indicate an auction that has a constant ratio on every bid vector against any single price auction that sells at least two items. A few randomized competitive auctions were indeed suggested using this definition over the years but, as noted before, no deterministic (asymmetric) auction was ever found. In fact, this was proved not to be a coincidence when Aggarwal et al. [2] showed that no deterministic auction can be competitive even under this weaker benchmark. Given this lower bound a new solution had to be considered, and indeed Aggarwal et al. [2] suggested one. The new definition generalizes the competitive notion to include also additive losses on top of the multiplicative ones considered before. We argue that our results, and in particular the auction and the tight lower bound for the bi-valued setting, indicate that this second approach of also considering the additive loss is more accurate, as it shows how analyzing with a finer granularity turns an uncompetitive auction into an optimal one. We elaborate on this agenda in the Discussion (Section 4).

Preliminaries

For a natural number k, let [k] denote the set {1, 2, ..., k}. A bid-vector b ∈ [h]^n is a vector of n bids, each taking a value in [h]. For b ∈ [h]^n and i ∈ [n] we denote by b−i the vector which is the result of replacing the ith bid in b with a question mark; that is, b−i is the vector (b1, b2, . . . , bi−1, ?, bi+1, . . . , bn). For every i ∈ [n], we let [h]^n_{−i} = {b−i | b ∈ [h]^n}. We think of h, the highest bid value, as satisfying h ≪ n; all our results also hold if h = O(n).

Definition 1 (Unlimited supply, unit demand, single item auction). An unlimited supply, unit demand, single item auction is a mechanism in which there is one item, of unlimited supply, to be sold by an auctioneer to n bidders. The bidders place bids for the item according to their valuation of the item. The auctioneer then sets a price for every bidder. If the price for a bidder is lower than or equal to its bid, then the bidder is considered a winner and gets to buy the item for its price. A bidder with a price higher than its bid does not pay and does not get the item. The auctioneer's revenue is the sum of the winners' prices.

A truthful auction is an auction in which every bidder maximizes its utility by bidding its true valuation for the item. Truthfulness can be established through bid-independent auctions (see for example [82]). A bid-independent auction is an auction in which the auctioneer computes the price for bidder i using only the vector b−i (that is, without the ith bid). Two models have been proposed for describing randomized truthful auctions. The first, truthfulness in expectation, refers to auctions in which a bidder maximizes its expected utility by bidding truthfully. The second model, universal truthfulness, is merely a probability distribution over deterministic truthful auctions. Our results use this second definition; however, it is known that the two models coincide in this setting [79].

Definition 2 (Fixed price, offline auction). The fixed price, offline auction is the auction that on each bid vector b ∈ [h]^n fixes a single price α = α(b) for all bidders, so as to maximize the revenue given that price. Namely, α is chosen such that Σ_{bi ≥ α} α is maximized.

Definition 3 (General competitive auction). Let OPT(b) be the best fixed-price (offline) revenue for an n-bid vector b with maximum bid h. An auction A is a general competitive auction if its revenue (expected revenue) on every bid vector b is PA(b) = α · OPT(b) − o(nh), where α is a constant not depending on n or h.
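A minimal Python sketch (ours) of the fixed-price offline benchmark of Definition 2: it scans the distinct bid values and returns a price α maximizing the resulting single-price revenue.

    def opt_fixed_price(bids):
        # Return (best_price, best_revenue) of the offline single-price auction.
        best_price, best_revenue = None, 0
        for alpha in sorted(set(bids)):
            revenue = alpha * sum(1 for b in bids if b >= alpha)
            if revenue > best_revenue:
                best_price, best_revenue = alpha, revenue
        return best_price, best_revenue

    print(opt_fixed_price([1, 4, 4, 4]))   # (4, 12): three winners at price 4 beat four winners at price 1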

1.1 A Structural Lemma

Let A be a randomized truthful auction that accepts bid-vectors from [h]^n. We think of A as a distribution over deterministic auctions. Hence, we may view A's execution in the following manner. The auction maintains a set of nm functions {gi,j : i ∈ [n], j ∈ [m]}, where gi,j is a function from [h]^n_{−i} to [h]. This corresponds to a collection of m deterministic auctions, where the jth is defined by the set of functions {gi,j | i ∈ [n]}. On a bid-vector b ∈ [h]^n, the auction tosses some coins and accordingly chooses an integer j ∈ [m]. The auction then offers bidder i the price gi,j(b−i). Let accepti,j(b) be 1 if gi,j(b−i) ≤ bi and 0 otherwise. Let pj be the probability that j ∈ [m] was chosen. The expected revenue of the auction on input b is then:

PA(b) = Σ_j pj Σ_i accepti,j(b) · gi,j(b−i)

Note that for every j ∈ [m], the set of functions {gi,j | i ∈ [n]} is just a deterministic strategy, denoted Aj, and A is, as explained before, a distribution over deterministic strategies. Note also that given A, namely the set of m deterministic auctions, and a distribution D = (p1, . . . , pm) on [m], A induces another randomized auction A′ as follows: for a given b, it chooses for each i ∈ [n], independently, a ji ∈ [m] according to D (namely, for every i, Pr(ji = j) = pj), and acts according to the set of functions thus chosen, namely {gi,ji, i = 1, . . . , n}. By definition, the expected revenue of A′ on input b is given by:

PA′(b) = Σ_i Σ_j pj · accepti,j(b) · gi,j(b−i)

We call A′ the bidder-self-randomness-dual of A (as the functions for different bidders are "not coordinated"). Comparing the revenue of A with that of A′ immediately implies:

Lemma 1.1. Let A be a randomized auction and A′ be its bidder-self-randomness-dual auction. Then A and A′ have the same expected revenue on every bid-vector.

This corresponds with the minimax theorem [104] and with Yao's lemma [107]. We note that A′ is concentrated on possibly many more deterministic auctions than A. Not only may g1 be chosen from the jth copy while g2 is chosen from the ℓth copy, namely g1 = g1,j while g2 = g2,ℓ with j ≠ ℓ; it could also be that for different bid vectors b, b′, the function g1 for b is g1,j while g1 for b′ is g1,ℓ. This works since no consistency requirement between different b's, and/or different i's, appears in the expression for the expectation above.
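The following Python sketch (ours; the toy auction and its parameters are assumptions made only for illustration) spells out the two views for a tiny instance: the expected revenue is computed once by averaging over whole deterministic auctions, and once by letting every bidder draw its pricing function independently. The two numbers agree, as Lemma 1.1 states, by linearity of expectation.

    import itertools

    n = 2
    bids = (2, 3)
    # Two deterministic bid-independent pricing rules: each maps the other bidders' bids to a price.
    g = [lambda rest: 1,               # auction 0: always offer price 1
         lambda rest: max(rest)]       # auction 1: offer the highest other bid
    p = [0.5, 0.5]                     # probabilities of the two deterministic auctions

    def revenue(prices):
        return sum(price for price, bid in zip(prices, bids) if price <= bid)

    # View 1: pick one deterministic auction for all bidders.
    exp_A = sum(p[j] * revenue([g[j](bids[:i] + bids[i+1:]) for i in range(n)]) for j in range(2))

    # View 2 (bidder-self-randomness dual): every bidder draws its own index independently.
    exp_dual = 0.0
    for choice in itertools.product(range(2), repeat=n):
        prob = 1.0
        for j in choice:
            prob *= p[j]
        exp_dual += prob * revenue([g[choice[i]](bids[:i] + bids[i+1:]) for i in range(n)])

    print(exp_A, exp_dual)   # equal expected revenues, as Lemma 1.1 states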

1.2 Probabilistic Tools

The following two well known lemmata are used in the proofs. We state both explicitly for completeness. The first is the famous Lovász Local Lemma [45]. We will need the following version of it, which can be found for example in [5].

Lemma 1.2 (The local lemma; symmetric case). Let Badi, 1 ≤ i ≤ N, be events in an arbitrary probability space. Suppose that each event Badi is mutually independent of the set of all the other events Badj but at most d of them, and that Pr[Badi] ≤ p for all 1 ≤ i ≤ N. If ep(d + 1) ≤ 1, where e is the base of the natural logarithm, then Pr[∧_{i=1}^{N} ¬Badi] > 0.

The second lemma is a tail bound inequality proved by Hoeffding [61].

Lemma 1.3 (Hoeffding). Let X be the average of n independent random variables Xi, where Xi ∈ [ai, bi] for all i. Then:

Pr[X < E[X] − t] ≤ 2 exp( −2n^2 t^2 / Σ_{i=1}^{n} (bi − ai)^2 )
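As a quick numerical sanity check (ours), the snippet below compares the empirical tail probability of an average of bounded random variables with the bound of Lemma 1.3; the chosen parameters are arbitrary.

    import random, math

    n, t, trials = 200, 0.1, 20000
    # X is the average of n independent uniform [0, 1] variables, so E[X] = 0.5 and each (b_i - a_i) = 1.
    hits = sum(sum(random.random() for _ in range(n)) / n < 0.5 - t for _ in range(trials))
    empirical = hits / trials
    bound = 2 * math.exp(-2 * n ** 2 * t ** 2 / (n * 1 ** 2))   # 2 exp(-2 n^2 t^2 / sum (b_i - a_i)^2)
    print(empirical, bound)   # the empirical frequency stays below the Hoeffding bound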

2 A General Derandomization

This section is devoted to the proof of the following theorem.


Theorem 2.1. Let A be a randomized auction which accepts bid-vectors in [h]^n. Assume that A has expected revenue PA(b) for every bid-vector b ∈ [h]^n. Then there exists a deterministic auction AD that guarantees a revenue of PAD(b) ≥ PA(b) − O(h√(n ln hn)) for every bid-vector b ∈ [h]^n.

The proof of Theorem 2.1 can be outlined as follows. Given a randomized auction A we first move to the bidder-self-randomness-dual auction A′, which has the same revenue as A. Let AD be a deterministic auction that is chosen according to the distribution that A′ induces on deterministic auctions. We show that the event Badb, defined by PAD(b) < PA′(b) − t, depends on relatively few other events Badb′. Moreover, for every b, the probability of Badb is sufficiently small. We then apply the Lovász Local Lemma to show that there exists a single deterministic auction AD, namely a choice of a collection of functions {gi,ji, i ∈ [n]}, for which none of the events Badb occur. This will conclude the proof of the theorem. We stress the fact that the result of Aggarwal et al. [2] is more general, in the sense that it deals with bid-vectors in [1, h]^n, while Theorem 2.1 only deals with discrete bid-vectors. However, discrete bid-vectors make sense in real life auctions, where bids are monetary and are made with a discrete valued currency. We note that the construction used in the proof of Theorem 2.1 is not known to be polynomial time computable, and that this is also the case for the construction of Aggarwal et al. [2]. We now formally present the proof.

Proof of Theorem 2.1. Let A be a randomized auction which accepts bid-vectors in [h]^n, using a distribution over m deterministic auctions. Let {gi,j : i ∈ [n], j ∈ [m]} be the set of functions that A maintains. Let (p1, . . . , pm) be the distribution over [m] that is used by A, and let A′ be the bidder-self-randomness-dual of A; namely, the auction in which, for each b, gi is chosen independently for each i among all gi,j, j ∈ [m], with the corresponding probabilities {pj, j ∈ [m]}. For every vector b let PA(b) be the revenue expected by A on b. By Lemma 1.1, the revenue of A′ on a bid vector b is

PA(b) = Σ_i Σ_j pj · accepti,j(b) · gi,j(b−i)

In the following, all events are with respect to the distribution defined by the runs of the random auction A′. Namely, the probability space contains, for each bid vector b, an n-tuple of independently chosen values (gi,j)_{i∈[n]} as defined above by A′. Let t := h√(n ln 2hn). For a random run of A′, namely, for a deterministic auction A′_D that is chosen at random according to the distribution induced by A′ on deterministic auctions, let Badb be the event that PA′_D(b) < PA(b) − t. We need the following two claims.

Claim 2.2. For all b ∈ [h]^n, Pr[Badb] < 1/(h^2 n^2).

Proof. Fix b ∈ [h]^n and let Xi be the revenue extracted from bidder i in a run of A′, namely, for the gi chosen for b. Then, Xi = accepti,j(b) · gi(b−i). Note that Xi ∈ [1, h] for all i and that the Xi's are independent random variables. Let X be the sum of the Xi's. Namely, X is the revenue on b for that specific run. We have already argued that E[X] = PA(b); thus, Pr[Badb] = Pr[X < E[X] − t], which by Lemma 1.3 is at most 2 exp(−2t^2 / (h^2 n)). The claim now follows since t = h√(n ln 2hn).

Claim 2.3. For all b ∈ [h]^n, Badb depends on at most hn other events Badb′.

Proof. Let b ∈ [h]^n be fixed. The tuple (gi(b−i)), i = 1, . . . , n, determines the revenue on b for a strategy that chooses this tuple. Thus, for a vector b′ ≠ b, the events Badb, Badb′ may be dependent only if for some i ∈ [n], b−i = b′−i. This is so since otherwise, once the tuple for b is chosen, complete freedom remains in choosing the tuple for b′. For a fixed b and a fixed i there are h possible vectors b′ for which b−i = b′−i. Hence the claim follows.

Combining the two claims above with the Lovász Local Lemma, we get that with positive probability Badb does not occur for any b ∈ [h]^n. Hence, there is a set of tuples (gi(b−i))_{i∈[n]}, b ∈ [h]^n, for which Badb does not occur for any b ∈ [h]^n. This set of tuples is the deterministic strategy AD for which, for every bid-vector b ∈ [h]^n, PAD(b) ≥ PA(b) − t. This completes the proof of the theorem.
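The quantitative side of the argument is easy to verify: with failure probability p < 1/(h^2 n^2) per bid vector (Claim 2.2) and dependence degree d ≤ hn (Claim 2.3), the symmetric Local Lemma condition ep(d + 1) ≤ 1 of Lemma 1.2 holds for every reasonable n and h. A tiny check (ours):

    import math

    def lll_condition_holds(n, h):
        p = 1.0 / (h * h * n * n)   # bound on Pr[Bad_b] from Claim 2.2
        d = h * n                   # each Bad_b depends on at most hn other events (Claim 2.3)
        return math.e * p * (d + 1) <= 1

    print(all(lll_condition_holds(n, h) for n in (10, 100, 1000) for h in (2, 10, 100)))  # True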

3 Bi-valued Auctions and a Hat Game

We establish a connection between bi-valued auctions and a specific hat guessing game known as the majority hat game. This game was studied by Doerr [40] and later by Feige [46]. We derive new results regarding this game, which enable us to solve the bi-valued auction problem optimally.

3.1 A Hat Game

A group of n players is gathered, nr of which wear a red hat and nb = n − nr of which wear a blue hat. Every player in the group can see the colors of the hats of the other players, but cannot see and does not know the color of its own hat, a color which has been picked by an adversary. No form of communication is allowed between the players. At the mark of an unseen force, each player simultaneously guesses the color of its hat. The objective of the players as a group is to make the total number of correct guesses as large as possible. In order to achieve this goal, the players are allowed to meet beforehand, hats-off, and agree upon some strategy.

Theorem 3.1. There exists a polynomial time deterministic strategy which guarantees at least max{nr, nb} − O(n^{1/2}) correct guesses.

Let us give a few remarks. First, this result is optimal, in the sense that any strategy can guarantee only max{nr, nb} − Ω(n^{1/2}) correct guesses in the worst case; this was proved by Feige [46] and Doerr [40]. Second, this result improves a result of Doerr [40], who gave a polynomial time deterministic strategy which guarantees at least max{nr, nb} − O(n^{2/3}) correct guesses, and a result of Feige [46], who gave a non-polynomial time deterministic strategy which guarantees at least max{nr, nb} − O(n^{1/2}) correct guesses. Feige further asked whether there exists a polynomial time deterministic strategy which guarantees this last bound, and our result answers this question affirmatively. Lastly, it should be noted that Winkler [106], who brought the problem to light, gave a simple polynomial time deterministic strategy which guarantees ⌊n/2⌋ correct guesses.

The proof of Theorem 3.1 has two parts. First, we design a polynomial time randomized strategy for the players, a strategy which guarantees that under any hat assignment, the expected number of correct guesses is max{nr, nb} − O(n^{1/2}). We then derandomize this strategy by giving a polynomial time deterministic strategy that always achieves, up to another O(n^{1/2}) additive loss, the expected number of correct guesses of the randomized strategy.

Randomized strategy

Let the players agree in advance on some ordering, so that the ith player is well defined and known to all. Under a given hat assignment, let χr(i) be the number of red hats that the ith player sees. Analogously, let χb(i) be the number of blue hats that the ith player sees. Say that a player is red (respectively blue) if she wears a red (respectively blue) hat. Our strategy is a collection of randomized strategies, one for each player. We describe the strategy of the ith player, Paula. First Paula computes two positive integers a(i) and b(i), and sets p(i) = a(i)/b(i). If |χr(i) − χb(i)| ≤ 1, then Paula takes a(i) = 1 and b(i) = 2, so that p(i) = 1/2. Otherwise, |χr(i) − χb(i)| ≥ 2 and so we have either χr(i) = n/2 + c for some c > 0 or χb(i) = n/2 + c for some c > 0 (but not both). In the former case Paula takes a(i) = min{⌊n^{1/2}⌋, ⌈c⌉} and b(i) = ⌊n^{1/2}⌋, so that p(i) = min{1, ⌈c⌉/⌊n^{1/2}⌋}, and in the latter case she takes a(i) = ⌊n^{1/2}⌋ − min{⌊n^{1/2}⌋, ⌈c⌉} and b(i) = ⌊n^{1/2}⌋, so that p(i) = 1 − min{1, ⌈c⌉/⌊n^{1/2}⌋}. Note that a(i), b(i) and p(i) can be computed in polynomial time. Having p(i) at hand, Paula draws a uniformly random real p in the unit interval, guesses red if p ≤ p(i), and blue otherwise.

Lemma 3.2. If each player follows the above strategy then the expected number of correct guesses is at least max{nr, nb} − O(n^{1/2}).

Proof. We shall assume throughout the proof that nr ≥ nb; the argument for the other case is symmetric. We consider the following cases.

• nr = nb. In that case, every player guesses correctly with probability 1/2. Thus the expected number of correct guesses is max{nr, nb}.

• nr ∈ {nb + 1, nb + 2}. In that case, every red player guesses red with probability 1/2 and every blue player guesses blue with probability 1 − O(n^{−1/2}). Thus, the expected number of correct guesses is nr/2 + nb(1 − O(n^{−1/2})), which is clearly at least max{nr, nb} − O(n^{1/2}).

• nr ≥ nb + 3. Let x > 1 satisfy nr = n/2 + x, so that nb = n/2 − x. First assume that ⌈x⌉ ≤ ⌊n^{1/2}⌋. In that case, every red player guesses red with probability ⌈x − 1⌉/⌊n^{1/2}⌋ = (⌈x⌉ − 1)/⌊n^{1/2}⌋, and every blue player guesses blue with probability 1 − ⌈x⌉/⌊n^{1/2}⌋. Therefore, the expected number of correct guesses is

(n/2 + x)(⌈x⌉ − 1)/⌊n^{1/2}⌋ + (n/2 − x)(1 − ⌈x⌉/⌊n^{1/2}⌋)
= (n/2 + x)⌈x⌉/⌊n^{1/2}⌋ − (n/2 + x)/⌊n^{1/2}⌋ + (n/2 − x)(1 − ⌈x⌉/⌊n^{1/2}⌋)
≥ (n/2 − x)⌈x⌉/⌊n^{1/2}⌋ − (n/2 + x)/⌊n^{1/2}⌋ + (n/2 − x)(1 − ⌈x⌉/⌊n^{1/2}⌋)
≥ (n/2 − x) − (n/2 + x)/⌊n^{1/2}⌋
≥ n/2 − 4n^{1/2},

which is at least max{nr, nb} − O(n^{1/2}), since max{nr, nb} ≤ n/2 + O(n^{1/2}). Next assume that ⌈x⌉ > ⌊n^{1/2}⌋. In that case, every red player guesses its hat correctly with probability 1, and so the expected number of correct guesses is at least nr ≥ max{nr, nb} − O(n^{1/2}).

Derandomization

The randomized strategy we gave above has two phases. In the first phase the ith player computes, in deterministic polynomial time, some number p(i) in the unit interval. Moreover, the strategy is symmetric; namely, for some pr and pb that depend only on the number of red hats and the number of blue hats, we have p(i) = pr if the ith player is red and p(i) = pb if the ith player is blue. Given the first phase, the second phase guarantees that the expected number of correct guesses is pr·nr + (1 − pb)·nb, which was shown by Lemma 3.2 to be at least max{nr, nb} − O(n^{1/2}). We show in the following that, once every player 1 ≤ i ≤ n has determined p(i), we can replace the second phase of the randomized strategy by a non-symmetric, polynomial time, deterministic strategy which guarantees that at least pr·nr − O(n^{1/2}) red players make a correct guess and at least (1 − pb)·nb − O(n^{1/2}) blue players make a correct guess. By Lemma 3.2 this will imply Theorem 3.1. Suppose that for all 1 ≤ i ≤ n, the ith player has determined a(i), b(i) and p(i). The following is the strategy that the ith player follows in order to determine its guess.

1. Let X(i) = Σ j, where the sum ranges over all j ≠ i such that the jth player is red.

2. Let Y(i) = Σ 1, where the sum ranges over all j < i such that the jth player is red.

3. Let Z(i) = i + X(i) + (b(i) − 1)Y(i) (mod b(i)).

4. Guess red if Z(i) < a(i), blue otherwise.

Note that the above deterministic strategy can be implemented so that its running time is polynomial in n. This fact, together with the next lemma, proves Theorem 3.1.

Lemma 3.3. Suppose that for all 1 ≤ i ≤ n, the ith player has computed a(i), b(i) and p(i). If each player follows the above strategy, then the number of red players that make a correct guess is at least pr·nr − O(n^{1/2}) and the number of blue players that make a correct guess is at least (1 − pb)·nb − O(n^{1/2}).

Proof. In what follows we make use of the following facts, which follow from the definition of a(i) and b(i) in the previous section. If the ith player and the jth player both have a hat of the same color, then a(i) = a(j) and b(i) = b(j). Furthermore, for all 1 ≤ i ≤ n, 1 ≤ b(i) ≤ 2n^{1/2}.

Let us first consider the red players. Let 1 ≤ i < j ≤ n be two indices of players such that the ith player's hat and the jth player's hat are both red and, furthermore, for all i < k < j the kth player's hat is blue. Let a(i) = a(j) = a and b(i) = b(j) = b, so that pr = a/b. We have i + X(i) = j + X(j) and Y(j) − Y(i) = 1. Thus Z(j) − Z(i) = b − 1 (mod b). This implies that out of every b consecutive red players, a of them guess red. Thus, since b ≤ 2n^{1/2}, at least pr·nr − O(n^{1/2}) red players guess red.

Next consider the blue players. Let 1 ≤ i < j ≤ n be two indices of players such that the ith player's hat and the jth player's hat are both blue and, furthermore, for all i < k < j the kth player's hat is red. Let a(i) = a(j) = a and b(i) = b(j) = b, so that pb = a/b. We have X(i) = X(j) and Y(j) − Y(i) = j − i − 1. Thus Z(j) − Z(i) = j − i + (b − 1)(j − i − 1) (mod b) = b(j − i) − b + 1 (mod b) ≡ 1 (mod b). This implies that out of every b consecutive blue players, b − a of them guess blue. Thus, since b ≤ 2n^{1/2}, at least (1 − pb)·nb − O(n^{1/2}) blue players guess blue.
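A compact Python sketch (ours) of the strategy described above: the first phase computes a(i) and b(i) from the hats a player sees, and the second phase produces the deterministic guesses of steps 1-4. The specific hat assignment at the end is an arbitrary example; the count of correct guesses stays within O(√n) of max{nr, nb}, as Theorem 3.1 guarantees.

    import math

    def hat_guesses(hats):
        # hats: list of 'R'/'B'; players are 1-indexed as in the text. Assumes n is even.
        n = len(hats)
        root = math.isqrt(n)                      # floor(sqrt(n))
        guesses = []
        for i in range(1, n + 1):
            reds = sum(1 for j in range(1, n + 1) if j != i and hats[j - 1] == 'R')
            blues = (n - 1) - reds
            if abs(reds - blues) <= 1:            # first phase: compute a(i), b(i)
                a, b = 1, 2
            elif reds > n / 2:
                a, b = min(root, math.ceil(reds - n / 2)), root
            else:
                a, b = root - min(root, math.ceil(blues - n / 2)), root
            X = sum(j for j in range(1, n + 1) if j != i and hats[j - 1] == 'R')
            Y = sum(1 for j in range(1, i) if hats[j - 1] == 'R')
            Z = (i + X + (b - 1) * Y) % b         # second phase: deterministic guess
            guesses.append('R' if Z < a else 'B')
        return guesses

    hats = ['R'] * 208 + ['B'] * 192              # n = 400, n_r = 208, n_b = 192
    correct = sum(g == c for g, c in zip(hat_guesses(hats), hats))
    print(correct, max(208, 192))                 # correct guesses are within O(sqrt(n)) of the maximum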

3.2 A Bi-valued Auction

Consider bi-valued auctions, in which there are n bidders, each of which can select a value from {1, h}. The auction's revenue equals the number of bidders it offers 1, plus h times the number of bidders it offers h whose value is indeed h. Let nh(b) denote the number of bidders who bid h in a bid vector b. Recall that the best offline revenue on a vector b is max{n, h · nh(b)}. In this section we prove the following.

Theorem 3.4. For bi-valued auctions with n bidders and values from {1, h}:

1. There exists a polynomial time deterministic auction that for every bid vector b has revenue max{n, h · nh(b)} − O(√(n · h)).

2. There is no auction that for every bid vector b has revenue max{n, h · nh(b)} − o(√(n · h)).

Note that the lower bound result is unconditional and applies also to randomized superpolynomial auctions. We proceed with a proof of the upper bound in the next section and a proof of the lower bound in the one after it.


An Auction

We present a solution to the bi-valued auction problem; namely, we show an optimal polynomial time deterministic auction. We start again by describing a randomized auction. A derandomization will be built later using the same methods we presented in the former section for the hat guessing problem.

A Random Bi-valued Auction

For a fixed input b, let nh be the number of h-bids in b and nh(i) be the number of h-bids in b−i. Let p′(i) = (h · nh(i) − n) / (h · √(nh(i))). If p′(i) ≤ 0 set p(i) = 0, and if p′(i) ≥ 1 set p(i) = 1. Otherwise (0 < p′(i) < 1), set p(i) = p′(i). The auction offers value h to bidder i with probability p(i) and 1 otherwise.

Lemma 3.5. The expected revenue of the auction described above is max{n, h · nh} − O(√(n · h)).

Proof. If there exists i with p(i) ≠ p′(i), then either h · nh(i) ≤ n, so the auction offers 1 to every 1-bidder and the revenue is at least n = max{n, h · nh} − h = max{n, h · nh} − O(√(n · h)) since h ≪ n; or h · nh(i) − n ≥ h · √(nh(i)), so every h-bidder is offered h with probability 1 − O(1/√nh), and the expected revenue is thus at least h·nh · (1 − 1/√nh) = max{n, h · nh} − O(√(n · h)). In either case our auction's revenue is max{n, h · nh} − O(√(n · h)). Assume now that p(i) = p′(i) for all i, and note that in this case |n − h · nh| = O(h√nh). The expected revenue for any bid vector with nh bids of value h is then

h·nh · (h(nh − 1) − n)/(h√(nh − 1)) + nh · (1 − (h(nh − 1) − n)/(h√(nh − 1))) + (n − nh) · (1 − (h·nh − n)/(h√nh)),

since an h-bidder is offered h with probability (h(nh − 1) − n)/(h√(nh − 1)) and pays 1 otherwise, while a 1-bidder pays 1 unless it is offered h, which happens with probability (h·nh − n)/(h√nh). Rewriting h(nh − 1) − n as (h·nh − n) − h and collecting terms, this expression is at least

h·nh · (h·nh − n)/(h√(nh − 1)) + n · (1 − (h·nh − n)/(h√(nh − 1))) − h·nh/√(nh − 1).

Observe that the sum of the first two terms in the last expression above is max{n, h · nh} − O(√(n · h)). This is because |n − h · nh| = O(√(n · h)). The third term, however, can also be absorbed into the O(√(n · h)), which completes the proof of the lemma.

Hence our auction's expected revenue is within an additive loss of O(√(n · h)) from the revenue of the best offline auction, as promised. As noted before, a derandomization for this auction can be built using the same ideas that appeared in the hat guessing game solution. This derandomization produces an auction which, in the worst case, has only another additive loss of O(√(n · h)) over the expected revenue of the random auction. Hence, in total, an additive loss of O(√(n · h)) over the best offline revenue is achieved. We sketch this derandomization here for completeness.

Derandomization

Let a(i) = h · nh(i) − n and b(i) = h · √(nh(i)). The auction computes the value offered to bidder i according to the following.

1. Let X(i) = Σ j, where the sum ranges over all j ≠ i such that the jth bidder bids h.

2. Let Y(i) = Σ 1, where the sum ranges over all j < i such that the jth bidder bids h.

3. Let Z(i) = i + X(i) + (b(i) − 1)Y(i) (mod b(i)).

4. Offer h to bidder i if Z(i) < a(i); otherwise offer 1 to bidder i.

Note that for the random auction, whenever p(i) = p′(i) (or, as stated here, a(i)/b(i) ∈ [0, 1]), we can define the probability that a 1-bidder is offered 1, p1,1 = 1 − (h·nh − n)/(h√nh), the probability that an h-bidder is offered 1, ph,1 = 1 − (h(nh − 1) − n)/(h√(nh − 1)), and the probability that an h-bidder is offered h, ph,h = (h(nh − 1) − n)/(h√(nh − 1)). The proof of the following lemma resembles the proof of Lemma 3.3, noting that we should also consider the case of "wrong" offers for h-bidders.

Lemma 3.6. An auction that follows the above formulation gains revenue of nh · (h·ph,h + ph,1) − O(√(n · h)) from the h-bidders. From the 1-bidders, the auction collects (n − nh)·p1,1 − O(√(n · h)).

Proof. Let a(1) be the (identical) value a(i) computed for all 1-bidders. In the same manner let b(1), a(h), b(h) be the (identical) values computed for the 1-bidders and h-bidders, respectively. The lemma follows from Lemma 3.5 and the following claim.

Claim 3.7.

• For every b(1) consecutive 1-bidders, the auction offers h to a(1) of them and 1 to b(1) − a(1) of them.

• For every b(h) consecutive h-bidders, the auction offers h to a(h) of them and 1 to b(h) − a(h) of them.

Proof. Consider the h-bidders first. Let 1 ≤ i < j ≤ n be the indices of two consecutive h-bidders. We have i + X(i) = j + X(j) and Y(j) − Y(i) = 1. Thus Z(j) − Z(i) = b(h) − 1 (mod b(h)). This implies that out of every b(h) consecutive h-bidders, a(h) are offered h and b(h) − a(h) are offered 1. Next consider the 1-bidders. Let 1 ≤ i < j ≤ n be the indices of two consecutive 1-bidders. We have X(i) = X(j) and Y(j) − Y(i) = j − i − 1. Thus Z(j) − Z(i) = j − i + (b(1) − 1)(j − i − 1) = b(1)(j − i) − b(1) + 1 ≡ 1 (mod b(1)). This implies that out of every b(1) consecutive 1-bidders, b(1) − a(1) are offered 1.

19

Since this auction can be implemented in polynomial time, the upper bound of Theorem 3.4 follows.

Informal Remark. A natural critic that should arise at first glance of our “complicated” suggested auction is its being “unintuitive”. How can one explain/excuse suboptimal actions whenever nh 6= n/h? Why not deploy DOP in these settings? Note, however, that the proposed auction does exactly the same. On most inputs it acts as the DOP and only on inputs where nh ≈ n/h it deploys the “sophisticated” auction. In particular, the auction sacrifices the accup racy of results whenever for the bid vector b we have that n ≤ hnh (b) ≤ n + h nh (b). This “sophisticated sacrifice”, however, results in turning an unbounded competitive auction into an optimal one. A Lower Bound We prove optimality of the suggested auction in the previous section. For this we prove a lower bound on the additive loss of any bi-valued auction. The lower bound is unconditional and holds also for the expected revenue of random auctions. Furthermore, the bound does not depend on the computation time needed for the auction. Lemma 3.8. Let A be an auction for the bi-valued {1, h} setting and let PA (b) be A’s revenue √ on bid vector b. Then PA (b) equals max {n, h · nh } − Ω( h · n), where b is of size n and nh is the number of bids of value h in b. Proof. To prove a lower bound on the difference between the offline revenue max {n, h · nh }, and any auction we define a distribution D on the possible two-values bid vectors {1, h}n . We then show that for any deterministic auction, the expected revenue for a random bid vector b (expectation now is with respect to D), is at most P . On the other hand, we show that the expected revenue of the offline single price (over the distribution D) is at least P + ∆, for some ∆. This implies (by standard averaging argument, see for example [107]), that for any auction, (including randomized ones), there must be some vector b for which the auction’s revenue is ∆ less than the fixed-price offline optimal auction. The distribution D in our case is quite simple: for every bidder i ∈ [n] independently, set bi = h with probability 1/h and bi = 1 with probability 1 − 1/h. Now, for every deterministic truthful auction, knowing D, the price for every element should better be in [h], otherwise there is another auction that assign prices in [h] and achieves at least the same revenue for every bid vector (the one that assigns 1 for every value less than 1 and h for every value higher than h). Further, for such auction, the revenue is the sum of revenues obtained from the n bidders. Thus the expectation is the sum of expectations of the revenue obtained from the single bidders. Since for bidder i the expectation is exactly 1 (for any fixed bi the auction must set a constant 20

price α ∈ [h] independent of bi . Hence for α > 1, the expected revenue from bidder i is at-least 1 h ·h

= 1, and for α = 1 the expected value is clearly 1). We conclude that for every deterministic

truthful auction as above, the expected revenue (with respect to D), is exactly n. We now want to prove that the expected revenue of the fixed-price offline auction, that knows √ b, is n + Ω( hn). We know, however, the exact revenue of such auction for every bid vector b. It is just M (b) = max {n, h · nh (b)}, where nh (b) is the number of h-bids in b. Thus the expected revenue is     X X n n i n−i ED [M (b)] := n· · (1/h) · (1 − 1/h) + h·i· · (1/h)i · (1 − 1/h)n−i (1.1) i i in/h   n + n· (1/h)n/h (1 − 1/h)n−n/h n/h To estimate this sum, it is instructive to examine the following deterministic auction which we note before as DOP . On each vector b, DOP assigns value h for every bidder i for which the number of h-bids in b−i , is at least n/h (we assume n/h is an integer), and 1 otherwise. On one side, as argued before, the expected revenue of DOP with respect to D is E[PDOP ] = n

(1.2)

On the other hand, the same expression, is by definition,     X X n n i n−i E[PDOP ] = n· · (1/h) · (1 − 1/h) + h·i· · (1/h)i · (1 − 1/h)n−i (1.3) i i in/h   n + (n/h) · (1/h)n/h (1 − 1/h)n−n/h n/h Comparing the expression in Equation (1.1) and Equation (1.3), and using Equation (1.2), we get: 

 n ED [M (b)] = n + (n − n/h) · (1/h)n/h (1 − 1/h)n−n/h n/h Hence we conclude that the difference in expectation between offline revenue ED [M (b)] and the expected revenue on any deterministic auction, which is n, is,   n ED [M (b)] − n = n(1 − 1/h) · (1/h)n/h (1 − 1/h)n−n/h n/h By Stirling’s approximation we know that 

 n =Θ n/h

p h/n p (1 − 1/h)(1/h)n/h (1 − 1/h)n−n/h

√ Therefore, the additive loss is at least Ω( h · n) as claimed.

21

!

4

Auction Discussion

We have presented in this chapter an existential general derandomization for unlimited supply, unit demand, single item auctions. This derandomization produces an auction with the same asymptotic revenue guaranty as the expected revenue of the randomized. Furthermore, this derandomization is direct (in the sense that no intermediate like “guessing auction” is involved). This answers an open question posed by Aggarwal et al. [2] about the existence of direct derandomizations. Another interesting open question posed in the same work on the existence of a general polynomial time derandomization, remains open and challenging. Bi-valued auctions appeared as examples in several works, such as [3, 56, 89]. We present here a connection between these auctions and a certain hat guessing game [40, 46]. Solving this puzzle optimally results in an optimal deterministic auction for bi-valued auctions. Surprisingly, the establishment of the tight lower bound for these auctions, involves with analyzing the DOP, the deterministic optimal auctions for i.i.d. inputs.

e √n) over the expected revOur general derandomization suffers from an additive loss of O(h

enue of a random auction. Aggarwal et al. [3] proved that every deterministic auction will suffer from an additive loss over the best offline auction, hence did not rule out exact derandomizations. We showed, by the lower bound on bi-valued auctions, that every auction (including a random √ one) suffers from an additive loss of Ω( nh) over the best offline. Clearly, our understanding of the additive loss is not complete yet and needs some further investigation. Further research should ask whether there exists more cases of exact derandomization? Is there a general exact derandomization? And of-course, try to deploy these derandomization techniques to other mechanism design settings. Another interesting future direction, noticing that the connection between truthful auctions and hat guessing games was not a coincidence, is to reinforce these connection, perhaps with different kind of auctions.

22

Chapter 2

Social Networks In this chapter we focus on solutions to problems resulting from the Target Set Selection formulation. The high inapproximability results for the optimization versions of the Target Set Selection problem mentioned above are a striking blow from the algorithm designer point of view. In light of these results, we must turn our consideration towards special cases of the problem, or otherwise resort to heuristic approaches. When considering special cases, it is desirable to obtain a robust algorithm that behaves relatively well also on more general cases. Furthermore, one must overcome the fact that the problem is already known to be hard for many restrictive cases; in particular, for notoriously easy classes of graphs such as bounded degree graphs and bipartite graphs.

1

Bounded Tree Width

In this section we tackle these difficulties by considering the treewidth parameter of graphs. This parameter plays an important role in the design of many exact and approximation algorithms for many NP-hard problems. The notion was introduced by Robertson and Seymour [95] in their celebrated proof of the Graph Minor Theorem. Roughly, it measures the degree in which the given graph is similar to a tree in a very deep structural sense. For instance, trees have treewidth 1. We will show that the treewidth parameter governs the complexity of the target Set Selection problem in a very strict sense. The first clue for this was given by Chen [34] who showed that the problem is polynomial-time solvable in trees. We generalize this result substantially. Letting n and w respectively denote the number of vertices and treewidth of our input graph, we prove the following theorem: Theorem 1.1. Target Set Selection can be solved in nO(w) time. The proof of this theorem involves an elaborate dynamic-programming algorithm which utilizes various combinational properties of the Target Set Selection problem; drifting somewhat from standard dynamic-programming algorithms for small treewidth graphs. It is worth 23

pointing out that the time complexity of this algorithm can be rewritten as T O(w) · n, where T is the maximum threshold of any vertex in the network. Also, the algorithm can be adopted to much more general settings, including the case of directed graphs, weighted edges, and weighted vertices. On the other hand, we will show that we cannot do much better than what is asserted by Theorem 1.1 above. We prove that, under a well-established complexity-theoretic assumption, the above algorithm is optimal up to a quadratic factor in the exponent dependency of w. This shows that the treewidth of the given network indeed determines to a large extent whether one can efficiently compute an optimal target set in the network. This, of course, does not rule out the possibility of other parameters with better bounds, but nevertheless gives an important insight to the true complexity of the problem. The second main result of this chapter is given in the following theorem. Theorem 1.2. Target Set Selection cannot be solved in no(



w)

time unless all problems in

SNP can be solved in sub-exponential time. Most of this section is devoted to elaborating on both Theorem 1.1 and Theorem 1.2.

1.1

Preliminaries and Model Definitions

All graphs in this section are simple and undirected, unless stated otherwise. For any graph G, we use V (G) and E(G) to respectively denote the vertices and edges of G. We will mostly use G to denote our input graph, or social network, and we use n to denote the number of vertices in G and w − 1 its treewidth (see definition below). We also assume we have at hand a threshold function t : V (G) → N for the vertices of G. For a subset of vertices X ⊆ V (G), we let G[X] denote the subgraph of G induced by X. That is, the subgraph G0 with V (G0 ) = X and E(G0 ) = {{u, v} ∈ E(G) : u, v ∈ X}. For two graphs G0 and G00 , we let G0 ∪ G denote the graph G with V (G) := V (G0 ) ∪ V (G00 ) and E(G) := E(G0 ) ∪ E(G00 ). Model Definition Let S be any subset of vertices in G. An activation process in G starting at S is a chain of vertex subsets Active[0] ⊆ Active[1] ⊆ . . . ⊆ V (G), with Active[0] = S, and Active[i] including all vertices u such that either u ∈ Active[i − 1], or t(u) ≤ |{v ∈ Active[i − 1] : {u, v} ∈ E(G)}|, for all i > 0. We say that v is activated at iteration i if v ∈ Active[i] \ Active[i − 1]. We assume that the activation process terminates at iteration z, where z is the smallest index for which Active[z] = Active[z + 1]. Clearly, z < n. We say that S activates Active[z] in G, and denote it also by Active[S]. We now give a formal definition of the key social networking problem we will be working on in this chapter: Target Set Selection:

24

Instance: Two integers k, ` ∈ N, and a graph G with thresholds t : V (G) → N. Goal: Find a subset S ⊆ V (G) of size at most k that activates at least ` vertices in G. There are many natural generalizations of the above formulation. First, one can consider directed graphs instead of undirected, where now the activation of a vertex is determined only by its incoming neighbors. Another natural generalization is obtained by adding weights to the vertices of the network, and asking for a target set of total weight not exceeding k. Finally, one can model the situation where different vertices have different influences on each other, by adding influence values to the edges of the network. In this case, a vertex gets activated in an activation process, if the sum of influence from all of its active neighbors exceeds its threshold. We also may allow influence values to be negative. Treewidth We next briefly discuss the treewidth parameter of graphs which plays a central role in this section. There are many ways for defining the treewidth of a graph. We will use a slightly different definition from the original version by Robertson and Seymour [95] which uses an extremely handy form of graph decompositions, namely tree-decompositions: Definition 4 (Tree decomposition, treewidth [95]). A tree decomposition of a graph G is a pair (T , X ), where X is a family of subsets of V (G), and T is a tree over X , satisfying the following conditions: 1.

S

X∈X

G[X] = G, and

2. ∀v ∈ V (G) : {X ∈ X | v ∈ X} is connected in T . The width of T is maxX∈X |X| − 1. The treewidth of G is the minimum width over all tree decompositions of G. Arnborg et al. [9] showed how to compute a tree-decomposition of width w for an n-vertex graph with treewidth bounded by w in nw+O(1) time. This algorithm was later improved by Bodlaender [28] to a linear-time algorithm for constant values of w. See also [6, 29, 68] for various approximation algorithms. Given a tree decomposition (T , X ) of G, we will assume that T is rooted at some arbitrary R ∈ X . With this in place, there is an important one-to-one correspondence between subgraphs of G and nodes X in T . For a node X ∈ X , let TX denote the subtree of T rooted at X, and let XX denote the collection of nodes in this tree, including X itself. The subgraph GX associated S with X in TX is defined by GX = Y ∈XX G[Y ]. The vertices of X are called the boundary of GX .

25

2

Algorithm for TSS on Small Treewidth Networks

In this section we we provide a proof for Theorem 1.1 by presenting an nO(w) algorithm for Target Set Selection in graphs with treewidth bounded by w. To simplify matters, we will first assume that we are required to compute what we call a perfect target set for G, which is a set S that activates all vertices of the graph. That is, we assume we are given an instance of Target Set Selection with ` = n. This simplifies many details necessary for our algorithm; however, the essence of the problem remains the same. Later in the section, we will explain how to extend our algorithm for general values of `. Algorithm blueprint Our algorithm first constructs a tree-decomposition (T , X ) for G. Then it traverses the tree T in this decomposition in bottom-up fashion, constructing solutions for the subgraph GX corresponding to the current node X ∈ X it is visiting by combining solutions for subgraphs GY corresponding to the children Y of X in T . We will actually be working with a more convenient type of compositions called nice tree decompositions, initially introduced in slightly different form by Bodlaender [27]. Definition 5 (Nice tree decomposition). A tree decomposition (T , X ) of a graph G is nice if T is rooted, binary, each node in X has exactly w vertices, and is of one of the following three types: • Leaf nodes are leaves in T , and consist of w non-adjacent vertices of G. • Replace nodes X ∈ X have one child Y in T , with X \ Y = {u} and Y \ X = {v} for some pair of distinct vertices u 6= v ∈ V (G). • Join nodes X ∈ X have two children Y and Z in T with X = Y = Z. Given a tree decomposition of width w − 1 for G, one can obtain in linear time a nice tree decomposition for G with the same width and with O(wn) nodes (see for instance [27] and [42]). We will assume in the following that we have a nice tree decomposition (T , X ) at hand, of width w − 1. Let us begin the description of our algorithm by discussing the difficulties in applying the generic solution-combining treewidth paradigm mentioned above to Target Set Selection. Consider the subgraph GX corresponding to some join node X ∈ X of our nice-tree decomposition, and let Y and Z be the two children of X in T with X = Y = Z. Suppose S ⊆ V (GX ) is a perfect target set for GX . When restricting the activation process of S in GX only to the part of GY , a boundary vertex v may have less than t(v) GY -neighbors active, before it gets activated. We know only that the total number of active GY and GZ -neighbors of v in GX is t(v) or more. For this reason, we need to consider perfect target sets for GY that activate the boundary vertices according to many different threshold values. As it turns out, we only need 26

to consider different threshold assignments to the boundary vertices; we can keep the original thresholds of all remaining vertices in the graph. Definition 6 (Threshold vector ). Let GX be a subgraph of G corresponding to a node X of T , and let [n] denote the interval of non-negative integers {0, 1, . . . , n}. A threshold vector, T ∈ [n]w , is a vector with a coordinate for each boundary vertex in X. Letting T (v) denote the coordinate in T corresponding to the boundary vertex v ∈ X, and t denote the original threshold function of G, the subgraph GX (T ) is defined as the graph GX with thresholds: • T (v) for any boundary vertex v ∈ X, and • t(u) for all other vertices u ∈ / X. Another difficulty is that when combining perfect target sets SY and SZ of GY (TY ) and GZ (TZ ), we need to make sure that their combination actually constitutes a perfect target set in GX (T ). There are several problems with this: First, we need to add up the threshold vectors at the boundary correctly, since there can be intersections in the GY and GZ -neighborhoods of boundary vertices. More importantly, there can be dependencies in the activation processes, causing a deadlock in the combined process: For instance, a boundary vertex u might require another boundary vertex v to be activated in GY (TY ) before u itself can be activated, while the situation could be reversed in GZ (TZ ). To overcome these difficulties, we introduce the notion of activation orders, and activation processes constrained by activation orders. Definition 7 (Activation order ). Let GX be some subgraph of G corresponding to a node X of T , and recall that [w − 1] denotes the interval of non-negative integers {0, 1, . . . , w − 1}. An activation order is a function A : X → [w − 1], where for any v ∈ X, A(v) represents the relative iteration in the boundary at which v is activated. We now change the definition of the activation process on GX (T ) given in Section 1.1 so that it is constrained by an activation order on the boundary of GX (T ). Given a subset S ⊆ V (GX ) and an activation order A, the A-constrained activation process of S in GX (T ) is defined similarly to the normal activation process of S in GX (T ), except that a boundary vertex v becomes active at some iteration i only if all boundary vertices u with A(u) < A(v) are active at iteration i − 1, and only if all other boundary vertices w with A(w) = A(v) will also become activate at this iteration. This includes all boundary vertices selected in the target set. Note that S may activate in a constraint activation process only a subset of the vertices it activates in the normal activation process. Nevertheless, it is clear that all vertices that are activated by S in a normal activation process get activated in an A-constrained process for some activation order A. A set of vertices which activates all vertices of GX (T ) in an A-constrained activation process is said to be a perfect target set conforming with A. We can now describe the information that our algorithm computes for each subgraph GX corresponding to node X of T . This information is stored in a table, which we denote by OP TGX , that is indexed by two types of objects: 27

• A threshold vector T ∈ [n]w corresponding to the thresholds of the boundary vertices of GX . • An activation order A which constrains the order of activation on the boundary vertices. The entry OP TGX [T, A] will store the smallest possible perfect target set in GX (T ) conforming with the activation order A. Lemma 2.1. The number of different entries in OP TGX is bounded by nO(w) . Proof. We can bound the number of different threshold vectors and activation orders by (n + 1)w and ww respectively. Thus, the number of different entries is bounded by (n + 1)w · ww = nO(w) . Recall that GX = G when X is the root of T . Therefore, if we compute the OP TGX table for the root X, we can determine the optimal perfect target set for G. Our algorithm will compute the OP TGX tables in bottom-up fashion, where the computation at the leaves will be done by brute-force. According to Lemma 2.1 above, and since T has O(wn) nodes, to obtain our promised time bound of Theorem 1.1 it suffices to OP TGX for any X ∈ X in nO(w) time. Since the graphs at the leaves only have w vertices, this can be done in nO(w) time for a leaf node X. The next section gives details on how to compute OP TGX in case X is an internal node of T . Implementation To complete the description of our algorithm, we need to show how to compute the OP TGX table corresponding to the current node X ∈ X we are visiting in T , from the table(s) correspond to its child(ren) in T . We recall that the computation of OP TGX is done by brute-force at a leaf X ∈ X . Replace Nodes: Suppose X is a replace node with child Y in T . That is, GX is obtained by adding a new boundary vertex u to GY , and removing another boundary vertex v from the boundary (but not from GX ). By the second condition of Definition 4, u can only be adjacent to other boundary vertices of GX . Let d denote the number of these neighbors of u in GX , and assume that they are ordered. Also, let GXi , for i = 0, . . . , d, denote the subgraph of GX obtained by adding the edges between u and and all of its neighbors in X, up-to and including the ith neighbor. To compute OP TGX , we will actually compute OP TG i in increasing values of X

i, letting OP TGX := OP TG d . X

When i = 0, u is isolated, and thus it must be included in any perfect target set when it has threshold greater than 0. For any threshold vector T for X, let T uv denote the threshold vector for Y obtained by setting: T uv (w) := T (w) for all w 6= v, and T uv (v) := T (u). For an order A for X, let Auv denote the set of all orderings A0 for Y with A0 (w) := A(w) for all boundary

28

vertices w 6= u, v. Observe that we allow A0 (v) 6= A(u). According to the above, when X is a replace node we get for i = 0: ( OP TG 0 [T, A] = min 0 uv X

A ∈A

OP TGY [T uv , A0 ]

if T (u) = 0,

OP TGY [T uv , A0 ] ∪ {u} if T (u) 6= 0.

(2.1)

i−1 Now if i > 0, then GXi is obtained from GX by connecting u to some boundary vertex

w ∈ X. For any threshold vector T , let T u− denote the threshold vector obtained by setting T u− (u) := max {T (u) − 1, 0}, and all remaining thresholds the same. Define T w− similarly. Since the {u, w} edge can only influence v if A(w) < A(v), and vice-versa, we have:   OP TGi−1 [T, A], if A(w) = A(u),   X OP TGi−1 [T u− , A], if A(w) < A(u), OP TG i [T, A] = X X    OP T i−1 [T w− , A], if A(u) < A(w). G

(2.2)

X

Join Nodes: Let X be a join node with children Y and Z in T . Due to the second condition of Definition 4, GY and GZ are two subgraphs who share the same boundary vertices Y = Z, GX is obtained by taking the union of these two subgraphs. Observe that this means that there are no edges between V (GY ) \ Y and V (GZ ) \ Z in GX . For a boundary vertex v ∈ X, let NG[X] (v) denote the set of boundary vertices that are connected to v in GX . For v ∈ X, and an activation order A, let A≤v be the set of all boundary vertices u such that A(u) < A(v). Given an order A, and a pair of threshold threshold TY and TZ , define the threshold vector TY ⊕A TZ as the vector T where a coordinate T (v) for v ∈ X is defined by T (v) := TY (v) + TZ (v) − |NG[X] (v) ∩ A≤v |. Observe that for a given activation order A, if SY ⊆ V (GY ) activates in an A-constrained activation process TY (v) neighbors of v in GY , and SZ ⊆ V (GZ ) activates in an A-constrained activation process TZ (v) neighbors of v in GZ , then T (v) is exactly the number of neighbors of v activated by SY ∪ SZ in an A-constrained activation process in GX . This is because only boundary vertices w with A(w) < A(v) will be active prior to v, and there are no edges between V (GY ) \ Y and V (GZ ) \ Z in GX . We thus can compute OP TGX [T, A] using the following equation: OP TGX [T, A] =

min

TY ⊕A TZ =T

OP TGY [TY , A] ∪ OP TGZ [TZ , A]

(2.3)

Correctness of the above equation is clear. Indeed, any perfect target set S for GX (T ) which conforms with A can be decomposed into two subsets SY = S ∩ V (GY ) and SZ = S ∩ V (GZ ) which activate in an A-constrained activation process all vertices in GY (TY ) and GZ (TZ ), for some pair of threshold vectors TY , TZ for which TY ⊕A TZ = T . The converse is also true; any pair of perfect target sets for GY (TY ) and GZ (TZ ) conforming with A can be united into a perfect target set for GX (TY ⊕A TZ ), also conforming with A.

29

Summary and Generalizations It is easy to see that using the equations given in Section 2 above, we can correctly compute the OP TGX table corresponding to a node X in T , in time polynomial with respect to the total sizes of the tables of its children. According to Lemma 2.1, and since |X | = O(wn), this gives us a total running-time of nO(w) , as promised by Theorem 1.1. Note that while our algorithm solves the Target Set Selection problem in case the given social network is represented by undirected and unweighted graph, it is easy to see that the algorithm can also straightforwardly be extended to natural generalizations such as directed graphs or weighted vertices. Adding influence values to edges of the network is another generalization our algorithm supports, by slightly altering the computation on the replace and join nodes of the tree decomposition. Observe that these three generalizations give an easy way to alter the algorithm from computing a perfect target set to any general target set. Given an input directed graph G which we are required to activate at least ` vertices in, we construct a directed graph G0 by adding a new universal vertex v with weight ∞ and threshold ` that has an influence value of t(u) on every vertex u in G, and every vertex u in G has influence value of 1 on v. Now clearly a subset of vertices S ⊆ V (G) that activates at least ` vertices in G is a perfect target set in G0 , and vice-versa, every perfect target set in G0 with total weight less than ∞ activates at least ` vertices in G. Note also that the treewidth of G0 differs by at most one from the treewidth of G.

3

A Lower Bound

In this section we present our lower-bounds for Target Set Selection in small treewidth graphs, and in particular, we provide a proof of Theorem 1.2. At the core of this proof is a theorem of Chen et al. [33] which shows a similar lower-bound for the Clique problem. Recall that Clique is the problem of finding a pairwise adjacent subset of k vertices in a graph with n vertices. Chen et al. proved the following lower-bound for Clique: Theorem 3.1 ([33]). Clique cannot be solved in no(k) time unless all problems in SNP can be solved in sub-exponential time. We will show a reduction from Clique to Target Set Selection where the treewidth of the graph in the reduced instance is relatively close to the size of the clique to be searched for in the graph of the source instance. For this, we will actually use an intermediate problem, called the Multi-Colored Clique problem, where we are given a graph with vertices that are each colored by one of k different colors, and the goal is to find a clique of size k where all vertices have different colors. Lemma 3.2. Multi-Colored Clique cannot be solved in no(k) time unless all problems in SNP can be solved in sub-exponential time.

30

Proof. We reduce from Clique. Given an instance (G, k) for Clique, we construct a graph G0 by taking k copies v1 , . . . , vk of each vertex v of G, and then coloring each vertex vi with color i ∈ [k]. We then add an edge in G0 between two vertices ui and vj , i 6= j, iff u and v are connected in G. It is straightforward to verify that G has a clique of size k iff G0 has a multicolored clique. Therefore if Multi-Colored Clique can be solved in no(k) time, then Clique can be solved in (k ·n)o(k) = no(k) time, implying by Theorem 3.1 that all SNP problems are solvable in sub-exponential time. The approach for using Multi-Colored Clique in reductions is described in [48], and has been proved to be very useful in showing hardness results in the parameterized complexity setting. Before giving details of our construction, we will need to introduce some new terminology. We use G to denote a graph colored with k colors given in an instance of Multi-Colored Clique, and G0 to denote the graph in the reduced instance of Target Set Selection. For a color c ∈ [k], we let Vc denote the subset of vertices in G colored with color c, and for a pair of distinct colors c1 , c2 ∈ [k], we let E{c1 ,c2 } denote the subset of edges in G with endpoints colored c1 and c2 . In general, we use u and v for denoting arbitrary vertices in G, and x to denote an arbitrary vertex in G0 . We construct G0 using two types of gadgets. Our goal is to guarantee that any perfect target set of G0 with a specific size encodes a multi-colored clique in G. These gadgets are the selection  and validation gadgets. The selection gadgets encode the selection of k vertices and k2 edges that together encode a vertex and edge set of some multi-colored clique in G. The selection gadgets also ensure that in fact k distinct vertices are chosen from k distinct color classes, and   that k2 distinct edges are chosen from k2 distinct edge color classes. The validation gadgets validate the selection done in the selection gadgets in the sense that they make sure that the edges chosen are in fact incident to the selected vertices. In the following we sketch the construction of these gadgets: • Selection: For each color-class c ∈ [k], and each pair of distinct colors c1 , c2 ∈ [k], we construct a c-selection gadget and a {c1 , c2 }-selection gadget which respectively encode the selection of a vertex colored c and an edge colored {c1 , c2 } in G. The c-selection gadget consists of a vertex xv for every vertex v ∈ Vc , and likewise, the {c1 , c2 }-selection gadget consists of a vertex x{u,v} for every edge {u, v} ∈ E{c1 ,c2 } . There are no edges between the vertices of the selection gadgets, i.e. the union of all vertices in these gadgets is an independent set in G0 . We next add a guard vertex at each (vertex and edge) selection gadget that is connected to all vertices in the gadget. In this way, a selection gadget is no more than a star centered at a guard vertex.

• Validation: We assign to every vertex v in G two unique identification numbers, low(v) and high(v), with low(v) ∈ [n] and high(v) = 2n − low(v). For every pair of distinct colors c1 , c2 ∈ [k], we construct validation gadgets between the {c1 , c2 }-selection gadget and the 31

c1 -and c2 -selection gadget. Let c1 and c2 be any pair of distinct colors. We describe the validation gadget between the c1 -and {c1 , c2 }-selection gadgets. It consists of two vertices, the validation-pair of this gadget. The first vertex of this pair is connected to each vertex xv , v ∈ Vc1 , by low(v) parallel edges, and to each edge-selection vertex x{u,v} , {u, v} ∈ E{c1 ,c2 } and v ∈ Vc1 , by high(v) parallel edges. The other vertex is connected to each xv , v ∈ Vc1 , by high(v) parallel edges, and to each x{u,v} , {u, v} ∈ E{c1 ,c2 } and v ∈ Vc1 , by low(v) parallel edges. We next subdivide the edges between the selection and validation gadgets to obtain a simple graph, where all new vertices introduced by the subdivision are referred to as the connection vertices. To complete the construction, we specify the thresholds of the vertices in G0 . First, all guard vertices have threshold 1. All selection vertices have thresholds equaling their degree in G0 . Second, the connection vertices all have thresholds 1. Finally, the vertices in the validation pairs all have thresholds 2n. Figure 2.1 depicts a schematic description of selection and validation gadgets.

vertex selection

edge selection high(u)

low(u) validation pair

xu

low(u)

x{u,v}

high(u)

Figure 2.1: A graphical depiction of the validation gadget. In the example, n = 5 and low(u) = 3. The main idea behind the validation gadgets is as follows: We bound the size of the required perfect target set, so that any solution must select at most one vertex from each selection gadget. When selecting from vertex and edge selection gadgets connected by a validation gadget, both 32

vertices in the validation pair get active only if the vertex incident to that edge has been selected: This is because for any u 6= v either high(u) + low(v) < 2n or low(u) + high(v) < 2n. This allows us to state the following lemma: Lemma 3.3. G has a k-multicolored clique iff G0 has a perfect target set of size k +

k 2



.

Proof. Suppose that K is a multi-colored clique in G of size k. We argue that the subset S of  k + k2 vertices, defined by S = {xv : v ∈ K} ∪ {x{u,v} : u, v ∈ K}, is a perfect target set for G0 . Indeed, at the first iteration of the activation process of S in G, all guard vertices will be activated, since all of these have threshold 1, and each one has a neighbor in S. Furthermore, all connection vertices adjacent to vertices in S will also be activated. In the second iteration of the activation process all validation-pair vertices are activated, since each one has exactly 2n neighbors which are active. Finally, in the third iteration, all other connection vertices are activated, since all validation-pairs are active, which causes all remaining selection vertices to be activated in the fourth iteration. For the converse direction, assume S is a perfect target set of size k +

k 2



in G0 . First observe

that we can assume, without loss of generality, that S does not include any guard vertex. This is because we can replace each guard vertex by an appropriate selection vertex, and still activate G0 . Furthermore, as guard vertices are connected only to selection vertices, there has to be at least one active vertex in each selection gadget, before all guards can be active. Since selection vertices not chosen in the target set of G0 need their guards to be active before they can be activated, it follows that exactly one vertex from each selection gadget must be in any perfect target set S of  size k + k2 in G0 . Finally, as discussed above, the only way to activate a validation pair between a vertex and edge selection gadget, is to select a pair of vertices corresponding to an incident vertex and edge pair in G. Thus all edges of G selected in the edge-selection gadgets of G0 , are incident to all vertices of G selected in the vertex selection gadgets of G0 , and thus S corresponds to a k-multicolored clique in G. Lemma 3.4. G0 has treewidth O(k 2 ). Proof. Removing all validation pairs in G0 leaves a forest which has treewidth 1. Therefore, we can add all O(k 2 ) vertices belonging to validation pairs to each node X ∈ X in a width 1 tree-decomposition of this forest, giving us a tree-decomposition of width O(k 2 ) for G0 . According to the two lemmata above, we have shown a polynomial-time reduction that maps every instance (G, k) of Clique to an instance (G0 , k 0 ) of Target Set Selection,  k 0 = k + k2 , such that G has a multi-colored clique of size k ⇐⇒ G0 has a perfect target set of size k 0 , and G0 has treewidth O(k 2 ). Combining this with Lemma 3.2 completes the proof of Theorem 1.2. Indeed, if Target Set Selection has an no(



w)

algorithm, where w is the

treewidth of the input graph, then we could use the above reduction to map an instance (G, k) 33

of Multi-Colored Clique with |G| = n, to an instance (G0 , k 0 ) of Target Set Selection with |G| = O(nc ), for a constant c ∈ N, and w = O(k 2 ), use this algorithm to determine whether G0 has a perfect target set of size k 0 , and according to this determine whether G has a multicolored clique of size k. The running time of the entire procedure will be the running-time of the reduction which is polynomial in n and independent of k, plus the running-time of the presumed algorithm for Target Set Selection which is (nc )o(



w)

= no(k) . All together this gives us an

no(k) algorithm for Multi-Colored Clique, which by Lemma 3.2 implies that all problems in SNP can be solved in sub-exponential time.

4

A Non-Monotone Model

In this section we discuss the non-monotone variant of Target Set Selection. In NonMonotone Target Set Selection, a vertex may become non-active in any iteration of the activation process once the total number of its active neighbors is smaller than its threshold. Thus, for example, the target set selected at the beginning of the process may get deactivated as the process continues. Formally, an activation process given a target set S is defined by a sequence of vertex subsets Active[0], Active[1], . . . which are no-longer necessarily a chain, where Active[0] = S, and Active[i] for i > 0 is the set of all vertices u with t(u) ≤ |{v ∈ Active[i − 1] : {u, v} ∈ E(G)}|. A subset of vertices T is said to be activated by this process if T ⊆ Active[i] for some i. The goal is thus to determine whether there exists a subset of k vertices that activates a subset of ` vertices in G. Non-monotone settings were also studied, see for example [23,83,93,94]. In what follows, the network G we consider is directed. In the following we show that Non-Monotone Target Set Selection with edge influence values is #P-hard. Before this, let us first observe that the problem is in PSPACE. Consider the configuration graph CG corresponding to G, which is a directed graph whose vertex-set is 2V (G) , and an edge (S, S 0 ) connects two subsets S, S 0 ⊆ V (G) if in an activation process Active[i] = S for some i, then Active[i + 1] = S 0 . Explicitly storing this graph requires exponential space, but we can maintain an adjacency oracle (i.e. an algorithm outputting “yes” on input S and S 0 iff (S, S 0 ) ∈ E(CG )) in polynomial-time and space. Now a non-deterministic algorithm can solve Non-Monotone Target Set Selection by guessing two vertex subsets S, T ⊆ V (G), with |S| = k and |T | = `, and then mimicking the PSPACE algorithm for S-T Connectivity on implicit graphs. Thus, Non-Monotone Target Set Selection is in NPSPACE, which is the same class as PSPACE due to Savitch’s Theorem [97]. Theorem 4.1. Non-Monotone Target Set Selection with edge influence values is #Phard. Proof (sketch):

The proof follows by a reduction from the #P-complete problem #2-SAT,

which asks to determine whether a 2-CNF formula ϕ has r satisfying assignments, for some r ∈ N. 34

We say that a circuit C is balanced if the distance between any pair of input-output gates is the same. Before we explicitly describe our construction, we first show that given a balanced circuit C, we can construct a graph G and emulate the computation of C by an activation process on G. The graph G will be the graph isomorphic to the underlying graph of C, with vertex thresholds and edge influence values set as follows: • If v corresponds to an input gate then we set its threshold to 1. • If v corresponds a ¬-gate connected to a gate u, then we set t(v) := −1, and we let the influence value of the directed edge (u, v) be −2. • If v corresponds to a ∨-gate connected to gates u1 and u2 , we set t(v) := 1 and let the influences of (u1 , v) and (u2 , v) be 1. • If v corresponds to a ∧-gate connected to gates u1 and u2 , we set t(v) := 2 and let the influences of (u1 , v) and (u2 , v) be 1. Let {x1 , . . . , xn } denote the input gates of C.

It is clear that a truth assignment α :

{x1 , . . . , xn } → {0, 1} satisfies C iff the vertex corresponding to the output gate in G gets activated when the vertices corresponding to input gates xi with α(xi ) = 1 are selected in the target set. Thus, we can simulate the computation of any balanced circuit by an activation process in a graph G. In particular, we simulate a balanced circuit which computes the binary expansion of f (x) := x + 1 given the binary expansion of x ∈ N as input, and the balanced circuit which computes the binary expansion of x + y given the binary expansion of x and y. Our construction works as follows (see Figure 2.2): We connect the outputs of a balanced circuit Cf computing f (x) := x + 1 back to its inputs, and also to the inputs of a balanced circuit Cϕ computing ϕ. We connect the output of Cϕ to the input of a circuit Cg computing g(x, y) := x + y. The output of Cϕ is connected to the input corresponding to x in g(x, y), and the outputs of Cg are connected to the inputs of Cg that correspond to y. In this way, Cf enumerates all assignments to Cϕ , and Cg counts the number of these assignments that satisfy Cϕ . Note that there might by some synchronization issues when simulating Cf , Cϕ , and Cg together. For instance, if Cf has depth (i.e. input-output distance) i, then we need to consider its output only at iterations i apart in the activation process. In this case, we can simply add a directed cycle of length i, with all vertex-thresholds and edge-influences set to 1, and connect one vertex of this cycle to the outputs of Cf by an ∧-gate. We add similar synchronization gadgets for Cϕ and Cg . Finally, to complete the construction, we add a gate u which has edges incoming from the outputs of Cg , whose influences are set in such a way so that u gets activated iff the output of Cg correspond to the binary expansion of r. We then connect u to another gate v that gets activated as soon as u gets activated, and has outgoing edges to all other vertices with influences set in such a way so that they all get activated as soon as v is activated.

35

g(x,y) := x + y

φ(x)

f(x) := x + 1

Figure 2.2: A schematic depiction of the way the circuits Cf , Cϕ , and Cg are connected together.

Let G denote the graph resulting from our construction. It is clear from our construction that ϕ has r satisfying assignments iff G has a target set of size 0 that activates all vertices in 2

G. The theorem thus follows.

4.1

Tree Width Conclusions

In this section we studied the Target Set Selection problem, a problem arising in viral marketing and other social and economic applications. We presented an algorithm running in nO(w) time for networks of size n and treewidth w, which also applies for various variants and generalizations of the problem. We also showed that this problem cannot be solved (under a √

natural complexity assumption) in time no(

w) .

Therefore, the time complexity needed to solve

Target Set Selection is, in a sense, determined by the treewidth of the network. There are several open issues stemming from these two results. The following are three natural examples: • Are there other parameters that govern the complexity of Target Set Selection? • Can our lower bound extend to the pathwidth parameter of graphs? • Can our upper and lower bounds be tightened?

36

For Non-Monotone Target Set Selection we showed that the most general case, where we have a directed network with edge influence values (which could be negative), is #P-hard and is thus much harder than the monotone problem. Note that our algorithm fails to solve even the most restrictive non-monotone variant where the graph is undirected and unweighted. We propose the following three questions: • Is Non-Monotone Target Set Selection PSPACE-complete, is it in #P? • What is the complexity of the unweighted undirected variant of this problem? • Is there a polynomial algorithm when the network is a tree?

37

5

Combinatorial Model and Bounds for TSS

There are several interesting computational and combinatorial problems related to this activation process. Here we present formal definitions to Chen’s Minimum Target Set and to Kempe Kleinberg and Tardos’ Maximum Active Set. Minimum Target Set: Input: An integer l and a digraph G = (V, E) with thresholds t : V → N. Problem: Find the smallest set S ⊆ V , such that |Active[S]| ≥ l.

Maximum Active Set: Input: An integer k and a digraph G = (V, E) with thresholds t : V → N. Problem: Find a set S ⊆ V of size k, such that any other set S 0 ⊆ V of size k satisfies |Active[S 0 ]| ≤ |Active[S]|. Our first contribution on this section, is a combinatorial model for Target Set Selection, that is, a model in which no iterative process is involved. We then use this model to represent the optimization problems as binary integer linear programs (IP). Integer programs for NP-hard problems are useful because one can use standard and powerful IP solvers (e.g., CPLEX, MINTO, lp solve) in order to solve small-size problems. Moreover, linear programming relaxations for IP are a common tool for obtaining approximation algorithms for NP-hard problems [102]. Recall that a target set is called perfect if it activates the entire graph. The term irreversible dynamic monopoly (dynamo) usually refers to a perfect target set under majority or strict majority thresholds. Where, In a majority threshold for every v we have t(v) = ddegin (v)/2e, while in a strict majority threshold we have t(v) = d(degin (v) + 1)/2e. Optimal or almost optimal bounds on the size of a minimum dynamo were obtained over the years for some special graph classes such as butterfly, cube-connected cycles, hypercube, and rings to name a few (see [51, 52, 77] and the references within). These classes are usually stem from networks topologies and were considered since the activation process described above also models the propagation of faults in a fault-tolerant majority-based distributed system. Chang and Lyuu have recently studied the size of a minimum dynamo in directed and undirected graphs. In [31] they gave an upper bound of 23|V |/27 under strict majority thresholds in directed graphs. Later, in [32], they improved this bound to 0.7732|V | and b|V |/2c in directed and undirected graphs, respectively. For (simple) majority thresholds, they proved a 0.727|V | bound for directed graphs, and a b|V |/2c bound for undirected graphs. Using our new combinatorial formulation, and a straightforward randomized argument we derive some bounds on the size of the minimum perfect target set. We give a much simpler proof that the size of the minimum perfect target set is at most 2|V |/3 under strict majority 38

thresholds. This proof applies for both directed and undirected graphs, thus, it improves the bound of Chang and Lyuu in the case of directed graphs under strict majority thresholds. The same proof gives an upper bound of |V |/2 on the size of the minimum perfect target set under majority thresholds, both for directed and undirected graphs. This is an improvement over the 0.727|V | bound of Chang and Lyuu [32] for directed graphs, and basically matches their bound for undirected graphs. We show some more bounds on the size of the minimum perfect target set for undirected graphs, using a potential function argument. Some of these bounds seem counter-intuitive in light of the hardness of approximation results of Chen [34]. For example, when t(v) = d3/4 · deg(v)e, for every v ∈ V , it can be shown that Chen’s inapproximability result holds. However, our combinatorial bounds imply that a trivial constant factor approximation exists if ∆(G)/δ(G) is bounded (∆(G) and δ(G) are the maximum and minimum degrees in G, respectively).

We

use degin (v) to denote the in-degree of a vertex v in G, and deg(v) to denote the degree of v in an undirected graph G.

5.1

A Combinatorial Model for TSS

Recall the Target Set Selection Problem. Target Set Selection (TSS): Input: Two integers k, l and a digraph G = (V, E) with thresholds t : V → N. Problem: Find a set S ⊆ V , such that |S| ≤ k and |Active[S]| ≥ l. For a set U ⊆ V , G[U ] denotes the subgraph of G induced by U . Following is an equivalent formulation of TSS. Combinatorial Target Set Selection: Input: Two integers k, l and a digraph G = (V, E) with thresholds t : V → N. Problem: Find a set S ⊆ V , such that |S| ≤ k and there is a set A ⊆ V such that S ⊆ A, |A| ≥ l, and one can remove edges such that G[A] is acyclic and degin (v) ≥ t(v) for every vertex v ∈ A \ S. Lemma 5.1. S ⊆ V is a solution of Target Set Selection if and only if it is a solution of Combinatorial Target Set Selection. Proof. let S be a solution of Target Set Selection. Set A = Active[S] and remove every edge (u, v) for which there is no i such that u ∈ Active[i] and v ∈ / Active[i]. Clearly, G[A] contains no cycles. Consider a vertex v ∈ A \ S. When v became active at least t(v) of its in-neighbors were already active. Thus, by construction v has at least t(v) incoming edges in G[A]. 39

Let S be a solution of Combinatorial Target Set Selection, and consider the corresponding A and G[A]. Since G[A] is acyclic, the vertices of A can be topologically sorted. Denote them by a0 , a1 , . . . , ar according to this order. We prove by induction on i that ai ∈ Active[i]. For every vertex v ∈ A we have t(v) > 0, therefore degin (v) = 0 if and only if v ∈ S = Active[0]. Thus, a0 ∈ Active[0]. Assume that the claim holds for every aj , 0 ≤ j < i, and consider ai , i > 0. By the induction hypothesis all of the at least t(ai ) in-neighbors of ai are in Active[i − 1], therefore ai ∈ Active[i]. We will now use the new formulation of TSS to derive 0-1 integer linear programs for Minimum Target Set and Maximum Active Set. Let G = (V, E) be a digraph, and let E 0 be the set of non-edges, i.e., the set {(u, v) | (u, v) ∈ / E}. For every vertex v ∈ V the variable sv encodes whether v is selected to the target set. The threshold of a vertex v is tv = t(v). We would like to have a subset of E ∪ E 0 that yields a tournament (an acyclic digraph whose underlying undirected graph is complete). For every (non-)edge (u, v) ∈ E ∪ E 0 the variable euv encodes whether (u, v) belongs to this subset. The integer linear program for Minimum Target Set is then: min s.t.

P

Pv∈V

sv ≥ tv · (1 − sv ) ∀v ∈ V

(u,v)∈E euv

euv + evu = 1

for every distinct u, v ∈ V

euv ∈ {0, 1}

∀(u, v) ∈ E ∪ E 0

sv ∈ {0, 1}

∀v ∈ V

euv + evw + ewu ≤ 2

for every distinct u, v, w ∈ V

(Min Target Set)

The last constraint ensures that the graph induced by the edges and non-edges we pick is acyclic. Indeed, any maximal acyclic subgraph of G can be extended to a tournament using the non-edges (this is basically a linear extension of a partial order of the vertices). Otherwise, if the edges we picked from E already induce a directed cycle, then there must be a directed cycle on three vertices no matter which of the non-edges were picked. This follows from the fact that a chord in a directed cycle creates a shorter cycle, no matter what is its orientation. For Maximum Active Set we introduce another variable for every vertex v ∈ V , av , that encodes whether v is in the set A. max s.t.

P

Pv∈V

Pv∈V

av sv ≤ k

(u,v)∈E euv

≥ tv · (av − sv ) ∀v ∈ V

euv + evu = 1

for every distinct u, v ∈ V

euv ∈ {0, 1}

∀(u, v) ∈ E ∪ E 0

av , sv ∈ {0, 1}

∀v ∈ V

euv + evw + ewu ≤ 2

for every distinct u, v, w ∈ V

av ≥ sv

∀v ∈ V 40

(Max Active Set)

Note that the second constraint guarantees that every vertex in A is in S or has enough incoming edges, while the last constraint ensures that the vertices of S are also counted as vertices in A. In both programs the number of variables is Θ(n2 ) and the number of constraints is Θ(n3 ).

5.2

Combinatorial Bounds for Perfect TSS

In this section we derive some combinatorial bounds on the size of the minimum perfect target set in terms of the vertices’ degrees and thresholds. Consider the definition of Combinatorial Target Set Selection and assume that a set A is known, but S is not known. We can find a set S that activates A as follows: start by taking a random permutation π of the vertices in A, then remove the edges in G[A] that violate this order of the vertices, that is, edges (u, v) such that π(u) > π(v). Now for a vertex v ∈ A, it should be in S or have at least t(v) incoming edges in G[A]. Let S denote the set of vertices in A that do not satisfy the latter, then clearly A ⊆ Active[S]. The expected number of vertices in S is E[|S|] =

X v∈A

t(v) degin (v) + 1

(2.4)

since there are t(v) ‘bad’ spots for v out of the degin (v)+1 possible spots it has in the permutation of v and its in-neighbors. Therefore (2.4) gives an upper bound on the size of S in terms of t(·) and degin (·). However, in general we do not know the set A, and thus, cannot compute such a set S, that activates it. The Minimum Perfect Target Set Problem asks for a minimum target set that activates the entire graph, i.e., it is a special case of Minimum Target Set with l = n or, equivalently, A = V . Applying (2.4) we get an upper bound on S for this case as well. Moreover, since A is known in this case, we can compute a target set S whose size is at most the guaranteed bound. Since the conditional expectations can easily be computed, we can also do that deterministically, by the method of conditional expectation (see [100] for an introduction of the method). Recall that under strict majority thresholds t(v) = d degin2(v)+1 e for every v ∈ V , and observe that in this case the ratio (d degin2(v)+1 e)/(degin (v) + 1) in (2.4) gets its worst value, 2/3, when (v) degin (v) = 2. Similarly, with majority thresholds we have (d degin e)/(degin (v) + 1) ≥ 1/2. 2

Corollary 5.2. Let G be a (directed) graph with strict majority thresholds, such that every vertex has a positive (in-)degree. Then there is an algorithm which finds in polynomial time a target set of size at most 2n/3. Corollary 5.3. Let G be a (directed) graph with majority thresholds, such that every vertex has a positive (in-)degree. Then there is an algorithm which finds in polynomial time a target set of size at most n/2.

41

Remark:

The upper bound described in (2.4) is tight, as can be seen by the following con-

struction: Take an undirected graph with n/k non-adjacent k-cliques and thresholds k − 1. A perfect target set S contains at least k − 1 vertices from every clique. Thus, |S| ≥

n(k−1) , k

which

is the upper bound from (2.4) in this case. In particular, for strict (resp., simple) majority thresholds, the bound in Corollary 5.2 (resp., 5.3) is tight as is demonstrated by a set of disjoint triangles (resp., edges). More-than-majority thresholds Chen [34] studied Minimum Target Set for various threshold functions. If the threshold of a vertex is equal to its degree, for all the vertices, then, as mentioned in the Introduction, the problem is equivalent to the Vertex Cover problem, and hence has a good approximation factor [14]. On the other side of the scale, in the case where all thresholds are equal to 1, it is trivial to see that one vertex will activate its connected component, thus, the problem can be solved in linear time. When all the threshold are 2, the problem becomes hard to approximate within a polylogarithmic factor. The same lower bound applies for majority thresholds, i.e., when t(v) = ddeg(v)/2e, for all v ∈ V . However, here we show that for undirected graphs, when the thresholds are only slightly bigger, namely when t(v) ≥ deg(v)/2 + 1 for every v ∈ V , then the size of the minimum perfect target set is at least n/T , where T = maxv∈V t(v). This implies that the algorithm from the previous section that in this case finds a perfect target set of size at most 2n/3 is a 2T /3-approximation. Moreover, it follows that for every graph with polylogarithmic average degree, Chen’s hardness result does not apply. Theorem 5.4. Let G = (V, E) be an undirected graph and t : V → N a threshold function on its vertices, such that t(v) ≥ deg(v)/2+1 for every v ∈ V . If S ⊆ V is a set such that Active[S] = V , then |S| ≥ n/T , where T = maxv∈V t(v). Proof. Let z be the smallest integer such that Active[z] = V . For every i, 0 ≤ i ≤ z, define a potential function Φ(i) =

X

t(v) + |E [Active [i]] |,

v ∈Active[i] /

where E[U ] denotes the set of edges induced by a vertex set U ⊆ V . Note that Φ(i + 1) ≥ Φ(i) and that Φ(z) = |E|, therefore, Φ(0) ≤ |E|. On the other hand, P P clearly Φ(i) ≥ v∈Active[i] t(v) so together we have for the initial set v∈S / / t(v) ≤ |E|. Applying the assumption on t(v), together with the last inequality we get: |E| + |V | ≤

X v∈V

Thus, |V | ≤

P

v∈S

(deg(v)/2 + 1) ≤

X

t(v) ≤

v∈V

X v ∈S /

t(v) +

X v∈S

t(v) ≤ |E| +

X

t(v)

v∈S

t(v).

Note: A similar function to this one appears already in Berger's work [23].

The inequality Σ_{v∉S} t(v) ≤ |E| can be extended to more general settings. For example, when t(v) = ⌈ϑ·deg(v)⌉ for a constant ϑ ∈ (1/2, 1], we can obtain a lower bound on |S| of the form
\[ \frac{|V|\,\delta(G)\,(2\vartheta-1)}{\Delta(G)+\delta(G)(2\vartheta-1)}, \]
where δ(G) and ∆(G) are the minimum and maximum degrees in G, respectively. This gives an approximation ratio of
\[ \frac{\vartheta\big(\Delta(G)+\delta(G)(2\vartheta-1)\big)}{\delta(G)(2\vartheta-1)} = O\Big(\frac{\Delta(G)}{\delta(G)}\Big). \]
When G is d-regular, the lower bound on |S| becomes |V|(2ϑ−1)/(2ϑ), which gives an approximation ratio of 2ϑ²/(2ϑ−1). When the ratio between ∆(G) and δ(G) can be linear, we get the same inapproximability result as Chen's.

Theorem 5.5. Assume that Minimum Perfect Target Set cannot be approximated within a factor of f(n). Then for any constant ϑ ∈ [1/2, 1), Minimum Perfect Target Set cannot be approximated within a factor of f(√n) when the threshold function is of the form t(v) = ⌈ϑ·deg(v)⌉ for every v ∈ V.

Proof. Assume that there is such a lower bound f(n). Let G = (V, E) be a graph and let t : V → N be an arbitrary threshold function. We construct a new graph G′ with a new threshold function t′ as follows. Consider a vertex v ∈ V. If t(v) > ⌈ϑ·deg(v)⌉ then add t(v)/ϑ − deg(v) new dummy vertices, all with threshold 1, and define t′(v) = t(v). Otherwise, if t(v) < ⌈ϑ·deg(v)⌉ then add (ϑ·deg(v) − t(v))/(1−ϑ) new dummy vertices, all with threshold ϑ/(1−ϑ), and define t′(v) = t(v) + (ϑ·deg(v) − t(v))/(1−ϑ). We also add ϑ/(1−ϑ) new vertices, connect each of them to all the dummy vertices that were added in the last phase, and set their threshold to ϑ times their degree. Note that every vertex in G′ has threshold t′(v) = ⌈ϑ·deg′(v)⌉, where deg′(v) is the degree of v in G′. The new graph G′ has at most n² vertices; therefore, an f(√n)-approximation for Minimum Perfect Target Set on G′ with the threshold function t′ would imply an f(n)-approximation on G.
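As a quick sanity check of the first case of this padding (the numbers below are illustrative, not taken from the thesis):
\[ \vartheta=\tfrac{2}{3},\ \ \deg(v)=6,\ \ t(v)=6>\lceil\vartheta\deg(v)\rceil=4:\quad \text{add } \frac{t(v)}{\vartheta}-\deg(v)=9-6=3 \text{ dummies, so } \deg'(v)=9 \text{ and } \lceil\vartheta\deg'(v)\rceil=6=t'(v). \]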

Remark. Note that the requirement that ϑ be a constant is just for the simplicity of the presentation. We can actually obtain the same results as long as ϑ/(1−ϑ) does not exceed d − 1.

Chapter 3

Local vs. Global

1 Local Price of Anarchy

In real life almost every game is embedded in a larger game, and players can only reason about their close vicinity. In this chapter we are interested in the question of whether good local properties of a game imply good global properties. Specifically, we start by introducing the notion of the local price of anarchy of graphical games, a concept which quantifies how well subsets of agents respond to their environments. We then show several methods of bounding the global price of anarchy in terms of the local one. One possible interpretation of our results is as follows: if a decentralized system is comprised of smaller, well-behaved units, with small overlap between them, then the whole system behaves well. This holds independently of the size of the small units, and even when the small units only behave well on average. This phenomenon may have implications, for example, for organizational theory. From a computational perspective, the price of anarchy of large games is likely to be extremely hard to compute. However, computing the local price of anarchy of small units is relatively easy, since they correspond to much smaller games. Once these are computed, our methods can be invoked to bound the price of anarchy of the overall game.

Related work

The model of graphical games was introduced in [64]. The original motivation for the model was computational, as it permits a succinct representation of many games of interest. Moreover, for certain graph families, there are properties that can be computed efficiently. For example, although computing a Nash equilibrium is usually a hard task [35, 36], it can be computed efficiently for graphical games on graphs with maximum degree 2 [44]. Rather surprisingly, the proofs of the hardness of computing Nash equilibria of normal form games are conducted via reductions to graphical games [36]. Several works have studied the connections between combinatorial structure and game theoretic properties. For example, Galeotti et al. [55] investigate the structure of equilibria of graphical games under some symmetry assumptions on the utility of the agents. They show that in these games there always exists a pure strategy equilibrium. For such games of incomplete information, [55] shows that there is a monotone relationship between the degree of a vertex and its payoff, and investigates further the connections between the level of information the game possesses and the monotonicity of the players' degrees in equilibria. In addition, a few works coauthored by Michael Kearns also explore economic and game theoretic properties which are related to structure (e.g. [62]). The questions addressed in these works are somewhat different from the ones we address here. The price of anarchy [69] is a natural measure of games. After the discovery of fundamental results regarding the price of anarchy of congestion games [12, 96], the price of anarchy and the price of stability [7] have become almost standard methods for evaluating games. We use the price of anarchy as the sole criterion throughout this work. Another work that presents bounds on the price of anarchy is [37], where a special graphical game is built by imposing the same two-player game on each edge of a graph, and letting the utility of a player be its aggregate utility over all its neighbors. For a game taken from a class called coordination games, upper and lower bounds were given on the price of anarchy of the graphical game in terms of the original two-player game. Bilò et al. [24] analyze the impact of social knowledge among the players on congestion games with linear latency functions. For games where the payoff of each player is affected only by the strategies of the neighbors in a social knowledge graph, they give a characterization of the games which have a pure Nash equilibrium. They also give bounds on the price of anarchy and the price of stability in terms of the global maximum degree of the graph. In [25, 26] Bilò et al. considered the price of anarchy and the price of stability of graphical multicast cost sharing games, and proved that if a central authority can enforce a certain graph it can lower the price of anarchy to a large extent. Throughout the work we derive global bounds by only testing local properties. In the same sense, Linial et al. [75] investigate deductions that can be made on global properties of graphs after examining only local neighborhoods. They show that for any graph G, where |V[G]| = n, and a function f : V → …, we get:

\[ \frac{\alpha}{\beta} \;=\; \frac{1+\dots}{d+\gamma} \;>\; \frac{1+\dots}{\gamma+\gamma} \;=\; \frac{1}{\gamma} \;=\; \mathrm{GPoA}(G) \]

3 Refinements of the basic bound

3.1 Averaging the parameters

In the biased consensus game (Example 2.13), all the induced sub-games of the cover have the same local price of anarchy. Most games do not possess this property, and the basic theorem is thus often wasteful (as LPoA_S(G) is the minimum local PoA of the sets in the cover). Similarly, β is the maximum width. For this purpose we generalize the definitions of the local price of anarchy and the width to be averages instead of the minimum and the maximum, respectively. We introduce improved bounds on the global price of anarchy using the new definitions.

Definition 18 (Average local price of anarchy). Let G be a graphical game and let S = {S1, S2, . . . , Sl} be a cover of V[G] such that S^(−) is also a cover. Let α_i be the local PoA of S_i. The average local price of anarchy of G w.r.t. S, LPoA_S(G), is the average of the α_i weighted by the maximum utilities of the S_i^(−), that is
\[ \mathrm{LPoA}_S(G) \;=\; \frac{\sum_{i=1}^{l} \alpha_i\, U_{MAX}(S_i^{(-)})}{\sum_{i=1}^{l} U_{MAX}(S_i^{(-)})}. \]

Theorem 3.1. Let G be a graphical game and let S = {S1, S2, . . . , Sl} be a cover of V[G] such that S^(−) is also a cover and S is of width β. Let α = LPoA_S(G). Then GPoA(G) ≥ α/β.

Proof (sketch): The proof resembles the one of Theorem 2.1, and we thus only sketch it. Let S_i ∈ S. If we follow the steps of the proof of Claim 2.2 in the proof of Theorem 2.1, with the new definition of α_i, we will get U_WN(S_i) ≥ α_i U_MAX(S_i^(−)). Now:
\[ \sum_{i=1}^{l} U_{WN}(S_i) \;\ge\; \sum_{i=1}^{l} \alpha_i\, U_{MAX}(S_i^{(-)}) \;=\; \alpha \sum_{i=1}^{l} U_{MAX}(S_i^{(-)}), \]
where the equality is due to the definition of LPoA_S(G), and the inequality is just a summation of the former bound. Like in Claim 2.4, since S is of width β we have that β Σ_{i=1}^{n} U_WN(i) ≥ Σ_{i=1}^{l} U_WN(S_i). Since S^(−) is a cover we have that Σ_{i=1}^{l} U_MAX(S_i^(−)) ≥ |U_OPT| (similarly to Claim 2.3 in the proof of Theorem 2.1). Putting it all together we conclude that
\[ \sum_{i=1}^{n} U_{WN}(i) \;\ge\; \frac{1}{\beta}\sum_{i=1}^{l} U_{WN}(S_i) \;\ge\; \frac{1}{\beta}\sum_{i=1}^{l} \alpha_i\, U_{MAX}(S_i^{(-)}) \;\ge\; \frac{\alpha}{\beta}\sum_{i=1}^{l} U_{MAX}(S_i^{(-)}) \;\ge\; \frac{\alpha}{\beta}\,|U_{OPT}|. \qquad \Box \]
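A small numeric illustration of why averaging helps (all numbers below are hypothetical, chosen only to illustrate Definition 18 and Theorem 3.1; they do not come from the thesis):

    # Hypothetical per-set data for a cover S1, S2, S3 of width beta = 2.
    alphas = [0.9, 0.8, 0.1]          # local PoA of each S_i
    u_max  = [100.0, 100.0, 1.0]      # U_MAX(S_i^(-)) for each S_i
    beta   = 2

    avg_alpha = sum(a * u for a, u in zip(alphas, u_max)) / sum(u_max)
    min_alpha = min(alphas)

    print("basic bound   :", min_alpha / beta)   # uses the minimum local PoA
    print("averaged bound:", avg_alpha / beta)   # Theorem 3.1

Here the one badly-behaved set carries almost no optimal utility, so the averaged bound is close to 0.42 while the minimum-based bound is only 0.05.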

The above refinement is also interesting for the algorithmic task of finding a good cover. This is because one can look for sub-games with a high average PoA instead of a cover with a high minimum PoA. Next, we consider a weighted version of the width parameter.

Theorem 3.2. Let G be a graphical game and let S = {S1, S2, . . . , Sl} be a cover of V[G] such that S^(−) is a cover, and the width of node i ∈ V[G] in S is β_i. Define β as the average of the β_i weighted by the agents' utilities in a predefined global worst Nash equilibrium, that is
\[ \beta \;=\; \frac{\sum_{i=1}^{n} \beta_i\, U_{WN}(i)}{\sum_{i=1}^{n} U_{WN}(i)}. \]
Let α = LPoA_S(G). Then GPoA(G) ≥ α/β.

Proof (sketch): We proceed according to the proof of Theorem 3.1 and the definitions:
\[ \beta \sum_{i=1}^{n} U_{WN}(i) \;=\; \sum_{i=1}^{n} \beta_i\, U_{WN}(i) \;=\; \sum_{i=1}^{l} U_{WN}(S_i). \qquad \Box \]

Going back to the star-of-cliques (Example 2.8), one can see now that in this case the weighted width is β = 1 + ε for a small ε = ε(k, l), whereas β = k + 1 is the non-weighted width. In the proposed cover, the center w is of width k + 1, the k vertices of type v are of width 2, and all the k(l − 1) vertices of type x are of width 1, and the weights are roughly the same. Thus, Theorem 3.2 yields a bound of GPoA(G) ≥ 1/(2(1+ε)), instead of the much weaker bound of 1/(2(k+1)) of the basic theorem. As we noted before, it can be shown that the actual global price of anarchy is slightly greater than 1/2, so the above bound is tight. Note that in the last theorem we took the average according to the utilities of the agents in the global equilibrium. Computationally, a global equilibrium might not be easy to find; therefore, averaging the β parameter may sometimes be less constructive. We address this issue in Section 3.3.
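Returning to the star-of-cliques numbers above: treating all the agents' equilibrium utilities as equal (which, as noted, holds approximately in this example), the weighted width of the proposed cover works out as follows; the algebra is only an illustration of the definition in Theorem 3.2:
\[ \beta \;\approx\; \frac{(k+1)\cdot 1 + 2\cdot k + 1\cdot k(l-1)}{1 + k + k(l-1)} \;=\; \frac{kl + 2k + 1}{kl + 1} \;=\; 1 + \frac{2k}{kl+1}, \]
which indeed tends to 1 + ε for any fixed k once l is large.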

3.2 Nash expansion

The above methods are not always applicable. For example, the width parameter may be computationally intractable. We now introduce a different local parameter that can help in analyzing games which are not well addressed by the previous theorems. This parameter resembles graph expansion parameters, but it refers directly to the equilibrium welfare, so it cannot be deduced solely from the graph. Later we will define a combinatorial expansion parameter that can be deduced from the graph.

Definition 19 (A set Nash expansion). Let G be a graphical game and S ⊆ V[G]. We say that the Nash expansion of S is ξ if, for all sets of strategies for the neighbors of S and for all Nash equilibria of S,
\[ \xi \;\le\; \frac{\sum_{j\in S^{(-)}} u_j}{\sum_{j\in S} u_j}. \]
In other words, in every such Nash equilibrium the ratio between the welfare of S^(−) and the welfare of its boundary is at least ξ/(1−ξ) (since Σ_{j∈S} u_j = Σ_{j∈S^(−)} u_j + Σ_{j∈S∖S^(−)} u_j).

Definition 20 (A cover Nash expansion). Let G be a graphical game and S a cover. We say that the Nash expansion of S is ξ = ξ_G(S) if ξ is the minimum Nash expansion of a set S_i ∈ S.

Observation 3.3. Let G be a graphical game and S a cover. If the Nash expansion of S is at least ξ = ξ_G(S) then:
\[ \frac{\sum_{S_i} U_{WN}(S_i^{(-)})}{\sum_{S_i} U_{WN}(S_i)} \;\ge\; \xi. \]


It is possible to show that if a cover S, where S^(−) is a partition, has a Nash expansion of ξ, then its weighted width β is bounded by 1/ξ as well. Hence the following can be derived as a corollary of Theorem 3.2:

Theorem 3.4. Let G be a graphical game. Let S = {S1, S2, . . . , Sl} be a cover with α = LPoA_S(G) and a Nash expansion ξ, such that S^(−) is a partition. Then GPoA(G) ≥ αξ. (It is of course possible to define ξ_i for every set S_i and obtain a similar theorem.)

In the next section (3.3), we discuss the properties of the expansion parameter further. Specifically, we show that if we can bound the maximum ratio between pairs of players’ utilities in a global worst Nash equilibrium, then we can replace the Nash expansion parameter by a simple combinatorial parameter. This is appealing, for instance, from a computational point of view.

3.3 Balanced games and expansion

In many games it is natural that the utilities of the players will be relatively balanced. We now show that when this is the case, the Nash expansion parameter can be replaced by a simple combinatorial parameter. This can greatly assist in the analysis of many games of interest; for example, good bounds can be obtained without even finding any Nash equilibrium.

Definition 21 (Inequality parameter). We say that the inequality parameter of a game is at least ρ ≤ 1 if there exists a global worst Nash equilibrium such that for every two players i, j, U_WN(i) ≥ ρ U_WN(j). (Note that it suffices that this condition holds for the set of all utilities; then it naturally holds for the global worst Nash utilities, and we avoid the need to know a global worst Nash. Like in previous cases we can also 'average' this parameter, for example by defining ρ_S for every set S, and obtain similar results; we avoid doing so for the sake of simplicity.)

Definition 22 (Combinatorial expansion). Let S be a cover for a graph G. The combinatorial expansion ξ_comb(S) of S equals
\[ \xi_{comb}(S) \;=\; \frac{\sum_{S_i} |S_i^{(-)}|}{\sum_{S_i} |S_i|}. \]
In other words, ξ_comb is the ratio between the sum of the numbers of elements in the sets without their boundaries and the same sum over the whole sets. Note that this local parameter is purely combinatorial and does not refer to the utilities of the players. If Ξ is the graph-theoretic vertex expansion of a graph, then ξ_comb = 1/(1+Ξ).
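The combinatorial expansion of a cover is easy to compute directly from the graph. The following minimal sketch assumes (as an interpretation of the notation, not a definition taken from the thesis) that S^(−) is the interior of S, i.e., the vertices of S all of whose neighbors also lie in S:

    def interior(adj, s):
        """Vertices of s whose entire neighborhood is contained in s (assumed meaning of S^(-))."""
        s = set(s)
        return {v for v in s if adj[v] <= s}

    def combinatorial_expansion(adj, cover):
        """xi_comb(S) = sum |S_i^(-)| / sum |S_i| over the sets of the cover."""
        inner = sum(len(interior(adj, s)) for s in cover)
        total = sum(len(s) for s in cover)
        return inner / total

    # Toy example: a path 1-2-3-4-5-6 covered by its two halves.
    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
    print(combinatorial_expansion(adj, [{1, 2, 3}, {4, 5, 6}]))   # 4/6

On the toy path the interiors are {1, 2} and {5, 6}, so the printed value is 4/6.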

Proposition 3.5. Let G be a graphical game with an inequality parameter ρ. Define S = {S1, S2, . . . , Sl} to be a cover such that:
1. S^(−) is a partition;
2. α = LPoA_S(G);
3. ξ_comb = ξ_comb(S).
Then GPoA(G) ≥ ρ α ξ_comb.

Proof (sketch): Since α is the local price of anarchy and S^(−) is a cover, we know that
\[ \sum_{S_i\in S} U_{WN}(S_i) \;\ge\; \alpha \sum_{S_i\in S} U_{MAX}(S_i^{(-)}) \;\ge\; \alpha\,|U_{OPT}|. \]

Claim 3.6. ρ ξ_comb Σ_{S_i∈S} U_WN(S_i) ≤ Σ_{S_i∈S} U_WN(S_i^(−)).

Proof. By the definition of ξ_comb,
\[ \rho\,\xi_{comb} \sum_{S_i\in S} U_{WN}(S_i) \;=\; \rho\,\frac{\sum_{S_i} |S_i^{(-)}|}{\sum_{S_i} |S_i|} \sum_{S_i\in S} U_{WN}(S_i) \;=\; \rho\,\frac{\sum_{S_i} |S_i^{(-)}|}{\sum_{S_i} |S_i|} \sum_{S_i\in S}\sum_{j\in S_i} U_{WN}(j). \]
Let U_WN(max) and U_WN(min) denote the highest and lowest players' utilities in the predefined global worst Nash equilibrium, respectively. By the definition of ρ, ρ U_WN(max) ≤ U_WN(min). Thus,
\[ \rho\,\frac{\sum_{S_i} |S_i^{(-)}|}{\sum_{S_i} |S_i|} \sum_{S_i\in S}\sum_{j\in S_i} U_{WN}(j) \;\le\; \rho\, U_{WN}(max)\,\frac{\sum_{S_i} |S_i^{(-)}|}{\sum_{S_i} |S_i|} \sum_{S_i} |S_i| \;\le\; U_{WN}(min) \sum_{S_i} |S_i^{(-)}| \;\le\; \sum_{S_i\in S} U_{WN}(S_i^{(-)}). \]

By the fact that S^(−) is a partition,
\[ \sum_{S_i\in S} U_{WN}(S_i^{(-)}) \;\le\; \sum_{i\in V} U_{WN}(i). \]
We therefore conclude that
\[ \sum_{i\in V} U_{WN}(i) \;\ge\; \sum_{S_i\in S} U_{WN}(S_i^{(-)}) \;\ge\; \rho\,\xi_{comb} \sum_{S_i\in S} U_{WN}(S_i) \;\ge\; \rho\,\xi_{comb}\,\alpha\,|U_{OPT}|. \qquad \Box \]

Since in the biased consensus game ρ = 1, if we take a cover S where S^(−) is a partition and α = LPoA_S(G), we will have, by Proposition 3.5, GPoA(G) ≥ α ξ_comb. We can also show that this proposition is tight. Formally:

Proposition 3.7 (Tightness). For every ε > 0, there exists a graphical game G and a cover S such that:

1. S^(−) is a partition;
2. α = LPoA_S(G);
3. ξ_comb = ξ_comb(S);
and α ξ_comb ≤ GPoA ≤ (1 + ε) α ξ_comb.

Proof (sketch): Consider the biased consensus game (Example 2.13) played on a torus graph, and consider a cover by k × k grids (Example 2.6). Proposition 3.5 implies that α ξ_comb ≤ GPoA. For the other direction, as noted before, ξ_comb = k²/(k²+4k). By Observation 2.15, α = (4/γ)/((4/γ)+4), and by Observation 2.14, GPoA(G) = 1/γ. Choosing γ = 4/k we get that α ξ_comb = (k²+4k)/(γk²+4k), and a simple calculation then shows that GPoA = 1/γ ≤ (1 + ε) α ξ_comb. □

4 Monotonicity

One potential drawback of the local price of anarchy is that it is not monotone, and therefore it is hard to work with. In this section we first demonstrate the lack of monotonicity for the local price of anarchy, and then continue by describing a different parameter which is monotone. Unfortunately, in many cases, this parameter may yield only very weak bounds. Recall that the local price of anarchy of a subset S ⊆ V [G] is denoted by αS .

4.1 Non-monotonicity of local price of anarchy

Consider the following family of strict majority games.

Example 4.1 (Strict majority game). In a majority game each player has {0, 1} as its set of actions. The utility of player i is u_i = a if it plays the same as the strict majority of its neighbors, and u_i = b (b < a) otherwise.

Proposition 4.2 (Non-monotonicity). For each of the following monotone properties, there exists a game G and a cover S = (S1, . . . , Sl) that contradicts it:
1. ∃i s.t. GPoA(G) ≤ α_{S_i};
2. ∀i, GPoA(G) ≥ α_{S_i};
3. GPoA(G) ≥ min_i {α_{S_i}}, even if S is disjoint;
4. GPoA(G) ≤ max_i {α_{S_i}}, even if S is disjoint.

Proof (sketch): We use the majority game from Example 4.1 to give counterexamples for the above properties. We let C_n denote a cycle graph with n nodes.


1. Consider a majority game on C5. Let S_i = {i, (i + 1) mod 5} be a set of two adjacent vertices. Suppose that the two neighbors of S_i play the same, say without loss of generality 0. If the two nodes in S_i play 1, we have a local Nash equilibrium that yields a welfare of 2b for S_i. If both members play 0, the total utility is 2a. Therefore α_{S_i} ≤ b/a. On the other hand, in every vector of pure strategies there is always a vertex both of whose neighbors play the same action (since C5 is an odd cycle); in a Nash equilibrium such a vertex must play that action and earn a. This means that GPoA(G) > b/a. Thus, ∀i, GPoA(G) > α_{S_i}.

2. Consider a majority game G on C4. It is not difficult to verify that GPoA(G) = b/a, but if S = {v1} then α_S = 1 and GPoA(G) < α_S.

3. Consider any game G where GPoA(G) < 1. Consider the partition S = {S1, S2, . . . , Sn}, where S_i = {v_i} are singletons. Since, in equilibrium, players always respond optimally to their environments, ∀i, α_{S_i} = 1. Thus, GPoA(G) < min_i {α_{S_i}}.

4. Consider again a majority game G on C5. Let S1 = {v5, v1} and S2 = {v2, v3, v4}. We already know that GPoA(G) > b/a and LPoA_G(S1) = b/a. We will show that LPoA_G(S2) ≤ b/a. Consider a local Nash equilibrium on S2 where its neighbors play 0, v2 plays 0, and v3, v4 play 1. It is a Nash equilibrium since no player can play like a strict majority of its neighbors. The welfare of S2 in this equilibrium is 3b. If all the members of S2 had played 0, the welfare would have been 3a. Therefore LPoA_G(S2) ≤ b/a. Thus, we get that GPoA(G) > b/a = max_i {α_{S_i}}. □

In other words, the local price of anarchy of the subsets (S1, . . . , Sl) says little about the price of anarchy of the whole set S = ∪_i S_i. It is possible to construct examples in which the ratio between the α_{S_i}'s and α_S is arbitrarily high. Thus, from an algorithmic perspective, it may be difficult to find good covers for general games.
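The small cycle examples above are easy to check exhaustively. The sketch below (a straightforward brute force, with illustrative payoff values a = 2, b = 1) enumerates all pure profiles of the majority game on C_n, finds the pure Nash equilibria, and reports the global price of anarchy, which lets one confirm the claims made for C4 and C5:

    from itertools import product

    def utilities(profile, a=2, b=1):
        """Majority game on a cycle: a player earns a iff both neighbors agree and it matches them."""
        n = len(profile)
        out = []
        for i in range(n):
            left, right = profile[(i - 1) % n], profile[(i + 1) % n]
            out.append(a if left == right == profile[i] else b)
        return out

    def is_nash(profile):
        for i in range(len(profile)):
            flipped = list(profile)
            flipped[i] = 1 - flipped[i]
            if utilities(tuple(flipped))[i] > utilities(profile)[i]:
                return False
        return True

    def gpoa(n):
        profiles = list(product((0, 1), repeat=n))
        opt = max(sum(utilities(p)) for p in profiles)
        worst_nash = min(sum(utilities(p)) for p in profiles if is_nash(p))
        return worst_nash / opt

    print(gpoa(4), gpoa(5))   # expect b/a = 0.5 for C4 and a value strictly above 0.5 for C5

Running this confirms that the worst equilibrium on C4 achieves exactly b/a of the optimum, while on C5 every equilibrium does strictly better.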

4.2 A monotone local parameter

We now introduce another local parameter which is monotone.

Definition 23. For a game G and S ⊆ V[G], define δ_S to be the ratio between the welfare of the worst Nash equilibrium on S over every possible action of its neighbors, denoted by U′_WN(S), and the best utility that S can get, that is,
\[ \delta_S \;=\; \frac{U'_{WN}(S)}{U_{MAX}(S)}. \]
In other words, δ_S measures the ratio between the worst possible welfare of S and the best welfare that S can hope for. Note that in general U′_WN(S) ≤ U_WN(S), since the former does not restrict the neighbors' actions to those arising in a Nash equilibrium. Of course, δ_S is typically very wasteful.


Proposition 4.3. Let S = {S1, S2, . . . , Sl} be a cover for a graphical game G. Then GPoA(G) ≥ min_{S_i} {δ_{S_i}}.

We next show that this bound is also tight. The proposition again uses the biased consensus game, but now for all covers.

Proposition 4.4. For every graph G and every cover S, there is a graphical game for which GPoA ≤ min_{S_i} {δ_{S_i}} < 1.

Corollary 4.5. By the last proposition and by Proposition 4.3, there exists a game G for which, for every cover S = {S_i}_i, GPoA(G) = min_{S_i} {δ_{S_i}}.

Unfortunately, it is not difficult to construct examples in which these δ values yield only very weak bounds on the global price of anarchy.

5 Structural Properties Conclusion

We view the investigation of the relations between local and global properties of games as a basic issue in the understanding of large games. This chapter demonstrates that, at least from the perspective of the price of anarchy, good local behavior of a game implies good global behavior. The converse is not necessarily true, and there are many non-trivial questions related to bounding the price of anarchy of graphical games. Of course, it is natural to investigate questions similar to the ones studied here in the context of other properties of games. In general, we believe that models like graphical games provide an excellent opportunity to introduce many structural properties into games.


Bibliography

[1]

E. Ackerman, O. Ben-Zwi, and G. Wolfovitz, Combinatorial model and bounds for target set selection, Theor. Comput. Sci. 411 (2010), no. 44-46, 4017–4022.

[2]

G. Aggarwal, A. Fiat, A. V. Goldberg, J. D. Hartline, N. Immorlica, and M. Sudan, Derandomization of auctions, Proceedings of the 37th annual acm symposium on theory of computing (stoc), 2005, pp. 619–625.

[3]

G. Aggarwal, A. Fiat, A. V. Goldberg, J. D. Hartline, N. Immorlica, and M. Sudan, Derandomization of auctions, Games and Economic Behavior 72 (2011), 1–11.

[4]

N. Alon, Problems and results in extremal combinatorics - ii., Discrete Mathematics 308 (2008), no. 19, 4460–4472.

[5]

N. Alon and J. Spencer, The probabilistic method (3rd edition), Wiley, New-York, 2008.

[6]

E. Amir, Efficient approximation for triangulation of minimum treewidth, Proceedings of the seventeenth conference on uncertainty in artificial intelligence (uai), 2001, pp. 7–15.

[7]

E. Anshelevich, A. Dasgupta, J. M. Kleinberg, É. Tardos, T. Wexler, and T. Roughgarden, The price of stability for network design with fair cost allocation, Proceedings of the 45th symposium on foundations of computer science (focs), 2004, pp. 295–304.

[8]

S. Arnborg, Efficient algorithms for combinatorial problems on graphs with bounded decomposability. A survey, BIT Numerical Mathematics 25 (1985), no. 1, 2–23.

[9]

S. Arnborg, D. G. Corneil, and A. Proskurowski, Complexity of finding embeddings in a k-tree, SIAM Journal on Algebraic and Discrete Methods 8 (1987), no. 2, 277–284.

[10]

S. Arnborg and A. Proskurowski, Linear time algorithms for NP-hard problems restricted to partial k-trees, Discrete Applied Mathematics 23 (1989), 11–24.

[11]

R. J. Aumann and J. H. Dreze, When all is said and done, how should you play and what should you expect?, Center for Rationality and Interactive Decision Theory, Hebrew University, Jerusalem, 2005.

[12]

B. Awerbuch, Y. Azar, and A. Epstein, The price of routing unsplittable flow, Proceedings of the 37th annual acm symposium on theory of computing (stoc), 2005, pp. 57–66.

[13]

B. Awerbuch and D. Peleg, Sparse partitions (extended abstract), proceedings of the 31st annual symposium on foundations of computer science (focs), 1990, pp. 503–513.

[14]

R. Bar-Yehuda and S. Even, A linear time approximation algorithm for the weighted vertex cover problem, Journal of Algorithms 2 (1981), 198–203.

[15]

O. Ben-Zwi, D. Hermelin, D. Lokshtanov, and I. Newman, An exact almost optimal algorithm for target set selection in social networks, Proceedings 10th acm conference on electronic commerce (ec), 2009, pp. 355– 362.

[16]

O. Ben-Zwi and I. Newman, Optimal bi-valued auctions, CoRR abs/1106.4677 (2011).

[17]

O. Ben-Zwi, I. Newman, and G. Wolfovitz, A new derandomization of auctions, Proceedings of algorithmic game theory, second international symposium (sagt), 2009, pp. 233–237.


[18]

O. Ben-Zwi, I. Newman, and G. Wolfovitz, Hats, auctions and derandomization – Manuscript, 2011.

[19]

O. Ben-Zwi and A. Ronen, The local and global price of anarchy of graphical games, proceedings of algorithmic game theory, first international symposium (sagt), 2008, pp. 255–266.

[20]

O. Ben-Zwi and G. Wolfovitz, A hat trick, Fun, 2010, pp. 37–40.

[21]

O. Ben-Zwi, D. Hermelin, D. Lokshtanov, and I. Newman, Treewidth governs the complexity of target set selection, Discrete Optimization 8 (2011), no. 1, 87–96.

[22]

O. Ben-Zwi and A. Ronen, Local and global price of anarchy of graphical games, Theor. Comput. Sci. 412 (2011), no. 12-14, 1196–1207.

[23]

E. Berger, Dynamic monopolies of constant size, J. Comb. Theory, Ser. B 83 (2001), no. 2, 191–200.

[24]

V. Bilò, A. Fanelli, M. Flammini, and L. Moscardelli, Graphical congestion games, Wine, 2008, pp. 70–81.

[25]

V. Bilò, A. Fanelli, M. Flammini, and L. Moscardelli, When ignorance helps: Graphical multicast cost sharing games, Mfcs, 2008, pp. 108–119.

[26]

V. Bilò, A. Fanelli, M. Flammini, and L. Moscardelli, When ignorance helps: Graphical multicast cost sharing games, Theor. Comput. Sci. 411 (2010), no. 3, 660–671.

[27]

H. L. Bodlaender, A tourist guide through treewidth, Acta Cybernetica 11 (1993), 1–23.

[28]

H. L. Bodlaender, A linear time algorithm for finding tree-decompositions of small treewidth, SIAM Journal on Computing 25 (1996), 1305–1317.

[29]

V. Bouchitté, D. Kratsch, H. Müller, and I. Todinca, On treewidth approximations, Discrete Applied Mathematics 136 (2004), no. 2-3.

[30]

S. Butler, M. T. Hajiaghayi, R. D. Kleinberg, and T. Leighton, Hat guessing games, SIAM J. Discrete Math. 22 (2008), no. 2, 592–605.

[31]

CL. Chang and YD. Lyuu, Spreading messages, Theor. Comput. Sci. 410 (2009), no. 27-29, 2714–2724.

[32]

CL. Chang and YD. Lyuu, Bounding the number of tolerable faults in majority-based systems, Algorithms and complexity, 7th international conference, ciac 2010, rome, italy, may 26-28, 2010. proceedings, 2010, pp. 109–119.

[33]

J. Chen, B. Chor, M. Fellows, X. Huang, D. W. Juedes, I. Kanj, and G. Xia, Tight lower bounds for certain parameterized NP-hard problems, Proceedings of the 19th annual ieee conference on computational complexity (ccc), 2004, pp. 150–160.

[34]

N. Chen, On the approximability of influence in social networks, Proceedings of the 19th annual acm-siam symposium on discrete algorithms (soda), 2008, pp. 1029–1037.

[35]

X. Chen and X. Deng, Settling the complexity of two-player nash equilibrium, proceedings of the 47th annual ieee symposium on foundations of computer science (focs), 2006, pp. 261–272.

[36]

C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou, The complexity of computing a nash equilibrium, Proceedings of the 38th annual acm symposium on theory of computing (stoc), 2006, pp. 71–78.

[37]

J. R. Davis, Z. Goldman, J. Hilty, E. Koch, D. Liben-Nowell, A. Sharp, T. Wexler, and E. Zhou, Equilibria and efficiency loss in games on networks, Cse (4), 2009, pp. 82–89.

[38]

Z. Dezső and A. L. Barabási, Halting viruses in scale-free networks, Phys. Rev. E 65 (2002), no. 5, 055103.

[39]

S. Dobzinski and S. Dughmi, On the power of randomization in algorithmic mechanism design, proceedings of the 50th annual ieee symposium on foundations of computer science (focs), 2009.

[40]

B. Doerr, Integral approximation, Habilitationsschrift, Christian-Albrechts-Universität zu Kiel (2005).

[41]

P. Domingos and M. Richardson, Mining the network value of customers, Proceedings of the 7th acm sigkdd international conference on knowledge discovery and data mining (kdd), 2001, pp. 57–66.


[42]

R. Downey and M. Fellows, Parameterized complexity, Springer-Verlag, 1999.

[43]

T. Ebert, Applications of recursive operators to randomness and complexity, Ph.D. Thesis, University of California at Santa Barbara (1998).

[44]

E. Elkind, L. A. Goldberg, and P. W. Goldberg, Nash equilibria in graphical games on trees revisited, Proceedings 7th acm conference on electronic commerce (ec), 2006, pp. 100–109.

[45]

P. Erdős and L. Lovász, Problems and results on 3-chromatic hypergraphs and some related questions, Infinite and Finite Sets (A. Hajnal et al., eds.) (1975), 609–628.

[46]

U. Feige, You can leave your hat on (if you guess its color), Technical Report MCS04-03, Computer Science and Applied Mathematics, The Weizmann Institute of Science (2004).

[47]

U. Feige, A. Flaxman, J. D. Hartline, and R. D. Kleinberg, On the competitive ratio of the random sampling auction, Internet and network economics, first international workshop, (wine), 2005, pp. 878–886.

[48]

M. R. Fellows, D. Hermelin, F. A. Rosamond, and S. Vialette, On the parameterized complexity of multiple interval problems, Theoretical Computer Science 410 (2009), no. 1, 53–61.

[49]

A. Fiat, A. V. Goldberg, J. D. Hartline, and A. R. Karlin, Competitive generalized auctions, The 34th acm symposium on theory of computing, (stoc), 2002, pp. 72–81.

[50]

E. Fischer, The art of uninformed decisions, Bulletin of the EATCS 75 (2001), 97.

[51]

P. Flocchini, F. Geurts, and N. Santoro, Optimal irreversible dynamos in chordal rings, Discrete Applied Mathematics 113 (2001), no. 1, 23–42.

[52]

P. Flocchini, R. Královič, P. Ružička, A. Roncato, and N. Santoro, On time versus size for monotone dynamic monopolies in regular topologies, J. of Discrete Algorithms 1 (2003), no. 2.

[53]

L. C. Freeman, The development of social network analysis: A study in the sociology of science, Vancouver, BC, Canada: Empirical Press, 2004.

[54]

D. Fudenberg and J. Tirole, Game theory, MIT Press, 1991.

[55]

A. Galeotti, S. Goyal, M. O. Jackson, F. Vega-Redondo, and L. Yariv, Network games, Review of Economic Studies 77 (2010), no. 1, 218–244.

[56]

A. V. Goldberg, J. D. Hartline, A. Karlin, M. Saks, and A. Wright, Competitive auctions, Games and Economic Behavior 55 (2006), no. 2, 242–269.

[57]

A. V. Goldberg, J. D. Hartline, A. R. Karlin, and M. E. Saks, A lower bound on the competitive ratio of truthful auctions, Stacs, 21st annual symposium on theoretical aspects of computer science, 2004, pp. 644– 655.

[58]

A. V. Goldberg, J. D. Hartline, and A. Wright, Competitive auctions and digital goods, Proceedings of the 12th annual acm-siam symposium on discrete algorithms (soda), 2001, pp. 735–744.

[59]

M. S. Granovetter, The strength of weak ties, American Journal of Sociology, 78 (1973), 1360–1380.

[60]

J. D. Hartline and T. Roughgarden, Optimal mechanism design and money burning, Proceedings of the 40th annual acm symposium on theory of computing, (stoc), 2008, pp. 75–84.

[61]

W. Hoeffding, Probability inequalities for sums of bounded random variables, Journal of the American Statistical Association 58 (1963), no. 301, 13–30.

[62]

S. M. Kakade, M. Kearns, L. E. Ortiz, R. Pemantle, and S. Suri, Economic properties of social networks, proceedings of the 17th conference on advances in neural information processing systems (nips), 2004.

[63]

E. Katz and P. F. Lazarsfeld, Images of the mass communications process. in personal influence: The part played by people in the flow of mass communications, Glencoe, IL:Free Press, 1955.

[64]

M. Kearns, M. L. Littman, and S. P. Singh, Graphical models for game theory, Proceedings of the 17th conference in uncertainty in artificial intelligence (uai), 2001, pp. 253–260.


[65]

M. Kearns and L. Ortiz, Algorithms for interdependent security games, Proceedings of the 17th annual conference on advances in neural information processing systems (nips), 2003, pp. 288–297.

[66]

D. Kempe, J. Kleinberg, and É. Tardos, Maximizing the spread of influence through a social network, Proceedings of the 9th acm sigkdd international conference on knowledge discovery and data mining (kdd), 2003, pp. 137–146.

[67]

D. Kempe, J. Kleinberg, and É. Tardos, Influential nodes in a diffusion model for social networks, Proceedings of the 32nd international colloquium on automata, languages and programming (icalp), 2005, pp. 1127–1138.

[68]

T. Kloks and H. Bodlaender, Approximating treewidth and pathwidth of some classes of perfect graphs, Proceedings of the 3rd international symposium on algorithms and computation (isaac), 1992, pp. 116–125.

[69]

E. Koutsoupias and C. H. Papadimitriou, Worst-case equilibria, proceedings of the 16th annual symposium on theoretical aspects of computer science (stacs), 1999, pp. 404–413.

[70]

E. Koutsoupias and G. Pierrakos, On the competitive ratio of online sampling auctions, Internet and network economics - 6th international workshop, (wine), 2010, pp. 327–338.

[71]

D. Kreps, A course in microeconomic theory, Princeton University Press, 1990.

[72]

R. Lavi, Algorithmic mechanism design, Encyclopedia of algorithms, 2008.

[73]

R. Lavi and C. Swamy, Truthful and near-optimal mechanism design via linear programming, Focs, 2005, pp. 595–604.

[74]

R. Lavi and N. Nisan, Competitive analysis of incentive compatible on-line auctions, Acm conference on electronic commerce, 2000, pp. 233–241.

[75]

N. Linial, D. Peleg, Y. Rabinovich, and M. E. Saks, Sphere packing and local majorities in graphs, proceedings of the 2nd israel symposium on theory and computing systems (istcs), 1993, pp. 141–149.

[76]

N. Linial and M. E. Saks, Low diameter graph decompositions, Combinatorica 13 (1993), no. 4, 441–454.

[77]

F. Luccio, L. Pagli, and H. Sanossian, Irreversible dynamos in butterflies, Sirocco, 1999, pp. 204–218.

[78]

A. Mas-Collel, W. Whinston, and J. Green, Microeconomic theory, Oxford university press, 1995.

[79]

A. Mehta and V. V. Vazirani, Randomized truthful auctions of digital goods are randomizations over truthful auctions, Acm conference on electronic commerce, 2004, pp. 120–124.

[80]

R. T. Mikolajczyk and M. Kretzschmar, Collecting social contact data in the context of disease transmission: Prospective and retrospective study designs, Social Networks 30 (2008), no. 2, 127–135.

[81]

S. Milgram, The small world problem, Psychology Today 2 (1967), 60–67.

[82]

J. A. Mirrlees, An exploration in the theory of optimum income taxation, The Review of Economic Studies 38 (1971), no. 2, 175–208.

[83]

S. Morris, Contagion, The Review of Economic Studies 67 (2000), no. 1, 57–78.

[84]

E. Mossel and S. Roch, On the submodularity of influence in social networks, Proceedings of the 39th annual acm symposium on theory of computing (stoc), 2007, pp. 128–134.

[85]

R. B. Myerson, Optimal auction design, Mathematics of Operations Research 6 (1981), no. 1, 58–73.

[86]

J. F. Nash, Non-cooperative games, Annals of Mathematics 54 (1951), 286–295.

[87]

N. Nisan and A. Ronen, Algorithmic mechanism design (extended abstract), Stoc, 1999, pp. 129–140.

[88]

N. Nisan, T. Roughgarden, É. Tardos, and V. V. Vazirani, Algorithmic game theory, Cambridge University Press, 2007.

[89]

K. Nissim, R. Smorodinsky, and M. Tennenholtz, Approximately optimal mechanism design via differential privacy, CoRR (to appear on The 3rd Innovations in Theoretical Computer Science (ITCS 2012)) abs/1004.2888 (2010).


[90]

M. J. Osborne and A. Rubinstein, A course in game theory, MIT press, 1994.

[91]

C. H. Papadimitriou, Algorithms, games, and the internet, Proceedings of the 33rd annual acm symposium on theory of computing (stoc), 2001, pp. 749–753.

[92]

R. Pastor-Satorras and A. Vespignani, Epidemic spreading in scale-free networks, Phys. Rev. Lett. 86 (2001), no. 14, 3200–3203.

[93]

D. Peleg, Size bounds for dynamic monopolies, Discrete Applied Mathematics 86 (1998), no. 2-3, 263–273.

[94]

D. Peleg, Local majorities, coalitions and monopolies in graphs: a review, Theor. Comput. Sci. 282 (2002), no. 2, 231–257.

[95]

N. Robertson and P. D. Seymour, Graph minors. II. Algorithmic aspects of tree-width, SIAM Journal of Algorithms 7 (1986), 309–322.

[96]

T. Roughgarden and É. Tardos, How bad is selfish routing?, proceedings of the 41st symposium on foundations of computer science (focs), 2000, pp. 93–102.

[97]

W. J. Savitch, Relationships between nondeterministic and deterministic tape complexities, Journal of Computer and System Sciences 4 (1970), no. 2, 177–192.

[98]

I. Segal, Optimal pricing mechanisms with unknown demand, American Economic Review 93 (2003).

[99]

D. D. Sleator and R. E. Tarjan, Amortized efficiency of list update and paging rules, Commun. ACM 28 (1985), no. 2, 202–208.

[100] J. Spencer, Ten lectures on the probabilistic method, SIAM, 1987.

[101] Unknown (public), Social network, Wikipedia, http://en.wikipedia.org/wiki/Social_network, 2011.

[102] V. V. Vazirani, Approximation algorithms, Springer-Verlag, Berlin, 2001.

[103] A. Vetta, Nash equilibria in competitive societies, with applications to facility location, traffic routing and auctions, proceedings of the 43rd symposium on foundations of computer science (focs), 2002, pp. 416–425.

[104] J. von Neumann, Zur Theorie der Gesellschaftsspiele, Mathematische Annalen 100 (1928), 295–320.

[105] D. S. Wilson, Levels of selection: An alternative to individualism in biology and the human sciences, Social Networks 11 (1989), no. 3, 257–272.

[106] P. Winkler, Games people don't play, in D. Wolfe and T. Rogers, editors, Puzzlers' Tribute: A Feast for the Mind, A. K. Peters (2001).

[107] A. Yao, Probabilistic computations: Toward a unified measure of complexity (extended abstract), 18th annual symposium on foundations of computer science (focs), 1977, pp. 222–227.
