A Dynamic Model of Generalized Social Trust
T.K. Ahn and Justin Esarey*
Department of Political Science, Florida State University
* The authors would like to thank the discussants and participants on our panel at the 2004 American Political Science Association Annual Meeting, as well as the Florida State University Political Science Colloquium series, for their many helpful comments and suggestions.
Abstract
How does generalized social trust—a Trustor’s willingness to allow anonymous Trustees to make decisions affecting the Trustor’s own welfare without an enforceable contract or guarantee, despite the Trustees’ incentives to exploit or defraud—come to predominate in a community? This paper develops a theoretical argument about the dynamics of generalized trust, deduced from a formal model built on assumptions that are common in the existing literature on trust. We find three results. First, generalized trust and trustworthiness can thrive if reliable credentials allow people to distinguish between trustworthy and untrustworthy partners. Second, under a system of credential-dependent trust, the proportion of trustworthy persons in the population tends to cycle between high and low levels in the long run. Hence, the model may explain the currently observed decline in generalized trust in the United States as part of a long-term cycle. Finally, trustworthy types can coordinate to dampen the trustworthiness cycle and (under some conditions) maintain trustworthy types as the majority in a society.
KEYWORDS: trust, social capital, evolutionary game theory, signaling
1. Introduction: The Dynamics of Generalized Trust

How does generalized social trust come to predominate in a community? The question is politically important, because societies with a high level of trust enjoy a number of advantages: better economic performance (Fukuyama, 1995; Knack and Keefer, 1997; North, 1990), increased government efficiency and effectiveness (Putnam, 1993), and healthier and more satisfied citizens (Kawachi et al., 1997). Given international disparities in generalized trust levels and falling trust levels in the United States, it would improve the lives of millions of people if trust could be fostered in places where distrust now prevails.

This paper is organized around a formal model that is built on common assumptions in the existing literature on trust. Its intent is to deduce the full, sometimes unexpected implications of these common assumptions. Some of these implications are already known to trust scholars. For example, the model implies that generalized trust cannot persist in a large, fluid, and anonymous society where people interact with those about whom they know nothing. According to the model, trust and trustworthiness can thrive if reliable credentials exist to allow people to distinguish between trustworthy and untrustworthy partners. While neither of these insights will surprise researchers, the model does show that they can be rigorously deduced from widely held assumptions about trust.

More surprisingly, the formal model also shows that credentialing mechanisms tend to be undermined by their own success. When trustworthiness in the population is low, credentialing mechanisms make trustworthiness profitable: trustworthy people are preferentially hired as employees and managers, sought out by customers as retailers, granted favorable terms as borrowers, and are therefore in a position to economically outperform their less-trustworthy fellows.
If profitable traits tend to spread in a population via learning or cultural evolution, the
population will become more trustworthy until few untrustworthy persons remain. But prevalent trustworthiness may cause people to stop conditioning trust on credentials, because the cost of using credentials is no longer justified. In this environment, the few remaining untrustworthy persons can profit by defrauding their trusting victims and do better than their honest counterparts. The population’s trustworthiness therefore wanes until credentialing is again deemed necessary, at which point the cycle repeats.

Our finding may help to explain the phenomenon of declining generalized trust in the United States (e.g. Paxton, 1999; Putnam, 2000). In a large, loosely-knit society where generalized trust is largely built on credentialing mechanisms, our model predicts that trust and trustworthiness will periodically rise and decline in cyclical fashion, though these cycles will be long and slow if trust and trustworthiness are characteristics that form during youth and persist throughout a person’s lifetime. Hence, the current decline may reverse itself over time through generational replacement of the less trusting and trustworthy. Of course, even temporarily declining trust imposes a variety of social costs, and hence shortening these cycles and stabilizing the level of trust at a higher level would be desirable. Our model implies a number of conditions under which these cycles can be shortened and trust levels increased to a higher and more stable level.

The remainder of this paper discusses this argument about generalized trust. Section 2 reviews other scholars’ approaches to trust and trustworthiness, and describes how this paper’s approach reflects the literature. Particular attention is paid to theories of how generalized trust can emerge from a starting point of widespread distrust. Section 3 formalizes the assumptions that underpin the arguments of Section 2 and derives predictions from them using an evolutionary game-theoretic model.
Section 4 lists the perfect Bayesian equilibria of this game,
and determines which equilibria can be supported by an evolutionarily stable level of trustworthiness under a variety of conditions. Section 5 describes an unexpected result: the conditions that allow for the emergence of generalized trust also cause a systematic cycle wherein trustworthiness repeatedly rises and falls over the long run. This section also discusses the conditions under which these cycles can be stabilized at a high level of overall trust and trustworthiness. Section 6 concludes with a discussion of empirical observations that support the plausibility of the model’s predictions and the substantive implications of the model for trust research.
2. Approaches to Trust and Trustworthiness

It is useful to begin with a rigorous definition of what is meant by the phrase generalized trust, as the literature reflects a variety of interpretations. Figure 1, a trust game featured in Kreps (1990) and common to this literature, provides a helpful illustration. This game shows two people in a sequential, one-shot, and anonymous interaction. The first mover (called Trustor, denoted by R in the figure) decides whether to trust the second mover (called Trustee, denoted by E) with an investment of x ∈ (0, 0.5). Trusting and not trusting are denoted in the figure by t and nt respectively. If R chooses not to trust, the game ends and both players receive x. If the Trustor chooses to trust, then the Trustee chooses whether to reciprocate (r) or exploit (nr) this trust. If Trustee chooses to reciprocate, both players get 1 − x, an amount greater than x. If Trustee does not reciprocate, R gets 0 and E gets 1. Hence, there is a potential for both persons to benefit from reciprocated trust, but also a temptation for Trustee to harm Trustor by exploiting his/her trust.

[Figure 1 about here]
In this essay, generalized trust is the willingness of Trustors to take trusting actions (that is, to play t) in a setting like Figure 1. That is, generalized trust is the willingness of a Trustor to voluntarily forsake a safe, low-paying outcome in favor of allowing a completely anonymous Trustee to make a decision in which the Trustee can either split a high-paying outcome with the Trustor or take all the proceeds for him/herself. If the Trustee does not exploit the Trustor, then both do better than the safe, low-paying outcome. This setting is meant to approximate the one-shot, anonymous trust interactions that often take place in a large, industrialized society. Note that trust here cannot be conditioned on some characteristic of, or history with, a Trustee. If a Trustor believes that the Trustee is rational and self-interested, then the best action to take is nt with a payoff of x. Trustor will expect Trustee to exploit him/her (because Trustee gets 1 for this action versus 1 − x for reciprocating) and will therefore expect to receive 0 by playing t. Consequently, the subgame perfect Nash equilibrium of this game is [Not Trust, Exploit], leaving both players with a payoff x that is Pareto inferior to the 1 − x they could have achieved with the outcome [Trust, Reciprocate]. Assuming that both players are rational, trust therefore hinges on Trustor’s belief that Trustee is not driven strictly by the payoffs of this one-shot interaction. Stated plainly, the Trustor must believe that the Trustee is trustworthy. It is for this reason that generalized trust in a society is inextricably linked to generalized trustworthiness in that society: in a gallery of rogues, to be trusting is to be gullible (Hardin, 2006: 1-16; Yamagishi, 2001; Yamagishi et al., 1999). Indeed, trust is neither individually rational nor socially beneficial when betrayal is likely: “social trust is a valuable community asset if–but only if–it is warranted” (Putnam, 2000: 135).
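The backward-induction reasoning above can be sketched in a few lines (a minimal illustration under the Figure 1 payoffs; the function names and encoding are ours, not part of the model's formal statement):

```python
# One-shot trust game of Figure 1, with investment x in (0, 0.5).
# Outcomes are returned as (Trustor payoff, Trustee payoff).
def payoffs(trust, reciprocate, x):
    if not trust:
        return (x, x)          # safe outcome: both keep x
    if reciprocate:
        return (1 - x, 1 - x)  # reciprocated trust: both get 1 - x > x
    return (0.0, 1.0)          # exploited trust: Trustor 0, Trustee 1

def backward_induction(x):
    # A self-interested Trustee compares 1 (exploit) with 1 - x (reciprocate).
    trustee_reciprocates = (1 - x) >= 1          # False for any x > 0
    # Anticipating exploitation, the Trustor compares x (nt) with 0 (t).
    trustor_trusts = payoffs(True, trustee_reciprocates, x)[0] > x
    return trustor_trusts, trustee_reciprocates

x = 0.25
print(backward_induction(x))     # (False, False): SPNE is [Not Trust, Exploit]
print(payoffs(False, None, x))   # equilibrium payoffs (0.25, 0.25)
print(payoffs(True, True, x))    # Pareto-superior (0.75, 0.75) goes unrealized
```

The last two lines make the Pareto inferiority of the equilibrium explicit: both players would prefer the unreachable [Trust, Reciprocate] outcome.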
When firms, consumers, and voters are constantly duped, they turn to well-known alternatives to trust: wasteful and complex contracts, extensive compliance monitoring, endless litigation, private enforcement agents (such as the
mafia), and so on. A community infused with generalized trust benefits because it does not need to resort to these costly alternatives: its citizens can simply make bargains and trust that agreements will be fulfilled. Therefore, any theory about the dynamics of generalized trust must also incorporate a theory about the dynamics of generalized trustworthiness.
2.1 Trustworthiness

What does it mean to be trustworthy? Within the rational choice tradition, there are at least two broad approaches to this question. One, the encapsulated interest view of trust articulated by Russell Hardin, can be summarized as follows:

I trust you because I think it is in your interest to take my interests in the relevant matter seriously in the following sense: You value the continuation of our relationship, and therefore you have your own interests in taking my interests into account. (Hardin, 2002: 1)

In other words, a trustworthy person will “judge [his/her] interests as dependent on doing what [he/she] is trusted to do” (Hardin, 2002: 28). The linchpin of the encapsulated interest perspective is that trustworthy actions are still self-interested; trustworthy people consider trust to be in their interest for social or economic reasons. This account owes much to the literature on repeated games, particularly repeated Prisoner’s Dilemma games. Although the precise mechanism varies according to the structure of the game, in all cases it is the threat of punishment in future interactions that makes cooperation sustainable in a game theoretic equilibrium. The Trustee can credibly commit to acting in the Trustor’s interest, because the Trustee has an incentive to do so. The Trustor, knowing this incentive, has trust in Trustees as a result. Hardin argues that there are numerous reasons why a person might value the continuation of a relationship, including an emotional attachment to the other person, the fiscal value of a trustworthy reputation, the shadow of future rewards (or
punishments) in dyadic dealings with that person (Hardin, 2006: 19), or extrinsic incentives (or punishments) provided by laws or informal social conventions (Hardin, 2002: 40-53). As Hardin acknowledges (2001: 6; 2002: 60-62), the encapsulated interest conception of trust is largely inapplicable to generalized trust of unknown persons in reasonably anonymous settings, such as in Figure 1, simply because the Trustor has no relationship with Trustees and will likely have none in the future. Generalized trustworthiness—the willingness to reciprocate the trust of an anonymous Trustor—therefore cannot be sustained by the “shadow of the future” (Axelrod, 1984: 124). Yet experimental research (e.g. Cochard et al., 2004; Cook and Cooper, 2003; Ostrom, 2003) and common sense tell us that people who are trustworthy in this generalized sense do exist, at least in some situations.

This essay therefore adopts another approach to trustworthiness, one that can account for the existence of generalized trust. In this approach, trustworthiness is a trait (or type) that inspires a Trustee to value the reciprocation of trust independent of the Trustee’s own self-interest (Ahn, 2002; Hardin, 2001; Messick and Kramer, 2001; Rothstein, 2000; Yamagishi, 2001). This conception of trustworthiness does not deny that self-interest factors into a Trustee’s decision-making, nor does it claim that trustworthy persons can never be persuaded to betray trust when the incentives to do so are overwhelming. Rather, it says that there are numerous reasons—biological, psychological, moral, and religious, to name a few—why a Trustee might believe that reciprocating trust is valuable even without a direct social or economic incentive to do so. Under this concept of trustworthiness, behavior is still fundamentally cognitive: people respond to incentives and make decisions by weighing the perceived value of different courses of action (see Hardin, 2001: 5-7).
Therefore, the approach is still amenable to game theoretic analysis. Indeed, the idea of trustworthy and untrustworthy types comports well with existing
theory on Bayesian games of incomplete information. In this setting, the game-theoretic notion of payoffs simply extends beyond objective benefit to oneself.
2.2 Dynamics of Generalized Trust and Trustworthiness

If generalized trustworthiness can cause people to make choices that are against their interests, one might ask how such a trait could persist in a competitive environment. It is hard to imagine such a trait surviving for long in a world where, through natural selection, cultural evolution (Boyd and Richerson, 1985; Boyd and Richerson, 1989), learning, and other processes, traits that improve a person’s objective well-being tend to proliferate and those that harm well-being tend to disappear. But if trustworthy individuals can be identified by Trustors, then reciprocated trust becomes possible and both Trustors and Trustees can enjoy rewards larger than those in the self-interested equilibrium where non-trustworthy persons are stuck (Bacharach and Gambetta, 2001; Frank, 1987; Frank, 1988; Dasgupta, 1988; Yamagishi, 2001). In this case, trustworthiness will proliferate because it confers an advantage. So, one key to a trustworthy type’s success is the existence of some mechanism to reveal trustworthiness to potential Trustors (Frank, 1987; Frank, 1988; Frank, 1999; Güth, Kliemt, and Peleg, 2000; Güth and Yaari, 1992). Without such a mechanism, con-men will be able to do better than their honest counterparts by defrauding an endless stream of victims.

There are several ways that trustworthiness might be revealed to potential Trustors. As Patterson (1999: 154-157) describes, this information might be revealed through a personal relationship with the Trustor (affective trust), through an intermediary person known and trusted by both Trustor and Trustee (intermediary trust), or through affiliation with a group or institution that people trust (delegated trust). Affective and intermediary mechanisms are inapplicable to
generalized trust, as generalized trust is extended to strangers. But Patterson’s concept of delegated trust applies well to generalized trust, because even a stranger can prove affiliation with a trustworthy group by presenting credentials. Aside from informal cues, such as personal appearance, there are often physical proofs of membership in the form of certificates, identification cards, and so on. This sort of trust is referred to as delegated because, by trusting all members of a certain group, the Trustor is effectively delegating the task of sorting out trustworthy people to this group. For example, a police officer flashes a badge and wears a distinctive uniform to communicate to citizens (who are unlikely to know the officer personally) that he/she is a member of a group screened for trustworthiness.

Delegated trust depends on reliable credentials. To be effective, a signal must be strongly correlated with trustworthiness, and therefore resistant to mimicry by untrustworthy types (Bacharach and Gambetta, 2001; Messick and Kramer, 2001). Binding oaths (Bolle and Braham, 2003), licenses, certifications (Rao, 1994), eBay reputation scores (Resnick et al., 2003), credit ratings, identification cards, diplomas, referrals, and sometimes even appearances (Bacharach and Gambetta, 2001: 173-74) have the potential to serve as effective signals that separate the trustworthy from the untrustworthy.

To this point, generalized trustworthiness has been predicated on non-generalized trust: Trustors in this account do not trust everyone, only those with the appropriate trustworthiness credentials. But if trustworthiness spreads through a population as a result of learning or evolutionary pressure, then truly generalized trust may become justified. When nearly everyone is trustworthy, then trusting an anonymous individual is a profitable bet. Indeed, the signals that originally propagated trustworthiness may become viewed as unnecessary, especially if they are
costly, because generalized trust is usually honored by the Trustee and therefore profitable (on average) for the Trustor.

This quasi-evolutionary argument for the emergence of generalized trust and trustworthiness using credentials has an intuitive appeal, particularly given the well-known power of costly signals to screen individuals (Spence, 1973) and the prevalence of evolutionary explanations for other kinds of cooperative behavior (Axelrod, 1984; Axelrod, 1997; Boyd and Richerson, 1985; Boyd and Richerson, 1989; Baron, 2000; Greif and Laitin, 2004; Orbell et al., 2004; Alford, Funk, and Hibbing, 2005). What remains is to formalize this argument and rigorously derive its consequences: to see whether the dynamics that make intuitive sense hold up, under what conditions these dynamics will exist, and whether additional unforeseen dynamics exist. Therefore, the next step is to propose and analyze a trust-signaling game, a variant of the standard trust game that incorporates the ideas of (1) multiple types, (2) costly signal-sending, and (3) a socio-cultural evolutionary process in which materially more successful traits spread over time as people learn from their experiences. The term evolutionary process means only that more successful types tend to proliferate while less successful types tend to disappear, an assumption that applies equally well to learning and cultural evolution as to literal population replacement via natural selection. The model applies as long as people tend to gravitate toward a type that does better than its alternatives. As described in greater detail in the next section, a perfect Bayesian equilibrium (PBE) of a game with incomplete information, however reasonable in the static non-evolutionary context, may dissolve in a dynamic process if it is not evolutionarily stable.
3. The Trust-Signaling Game and Evolutionary Assumptions

3.1. The Trust-Signaling Game

The trust-signaling game depicted in Figure 2 adds two features to the trust game of Figure 1: a trustworthiness trait and a mechanism for sending a costly signal of this trait. In this game, there are two players: a Trustor (labeled R in the figure) and a Trustee (labeled E). A Trustee has one of two possible types: θ ∈ {w, uw}. The trustworthy type (w) prefers to reciprocate when trusted, while the untrustworthy type (uw) prefers to exploit when trusted. The game proceeds as follows:

1. At the beginning of the game, Nature chooses player E’s type with Pr(θ = w) = pw and Pr(θ = uw) = puw = 1 − pw, where 0 ≤ pw ≤ 1.
2. Player E either sends the signal (s) or not (ns).
3. Player R, after observing the presence or absence of the signal, chooses whether to trust (t) or not (nt).
4. From this point, the game is the same as in Figure 1. If R chooses not to trust, the game ends and both players receive a payoff x ∈ (0, 0.5). If R chooses to trust, E chooses whether to reciprocate or not; the response of E after R trusts is directly determined by E’s type: the trustworthy type reciprocates, and the untrustworthy type exploits. Reciprocated trust yields both players 1 − x, while exploited trust yields R a payoff of 0 and E a payoff of 1. Figure 2 prunes this last node of the game (where Trustee determines whether to exploit or not) to save space, but all conclusions are robust to its re-addition; we omit the node because Trustee’s behavior is entirely determined by type at that node.
5. Sending the signal costs cw for type w and cuw for type uw, where 0 < cw < cuw < 1. The assumption cw < cuw reflects the idea that signals of trustworthiness are hard to mimic if one is not genuinely trustworthy. A person who is not creditworthy, for instance, has a difficult time maintaining a positive credit rating for a long period of time. [Figure 2 about here]
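The full payoff structure that these rules imply can be written out directly (a minimal sketch, assuming the payoffs and costs defined above; the encoding of types and actions is ours):

```python
# Payoffs in the trust-signaling game: a Trustee of type theta in {"w", "uw"}
# chooses signal in {True, False}; the Trustor chooses trust in {True, False}.
# Trustee behavior when trusted is fixed by type: w reciprocates, uw exploits.
def signaling_payoffs(theta, signal, trust, x, c_w, c_uw):
    assert 0 < x < 0.5 and 0 < c_w < c_uw < 1
    cost = (c_w if theta == "w" else c_uw) if signal else 0.0
    if not trust:
        return (x, x - cost)             # (Trustor, Trustee): safe outcome
    if theta == "w":
        return (1 - x, 1 - x - cost)     # trust reciprocated
    return (0.0, 1.0 - cost)             # trust exploited

# Example parameters (ours): x = 0.25, c_w = 0.25, c_uw = 0.875, which satisfy
# the separating-cost condition cw <= 1 - 2x and cuw > 1 - x from Section 4.
print(signaling_payoffs("w", True, True, 0.25, 0.25, 0.875))    # (0.75, 0.5)
print(signaling_payoffs("uw", True, True, 0.25, 0.25, 0.875))   # (0.0, 0.125)
print(signaling_payoffs("uw", False, False, 0.25, 0.25, 0.875)) # (0.25, 0.25)
```

With these numbers the cost thresholds have a direct reading: signaling to win trust pays for a w-type only if 1 − x − cw exceeds x (cw below 1 − 2x), and for a uw-type only if 1 − cuw exceeds x (cuw below 1 − x).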
3.2. The Evolutionary Setting

As explained in Section 2, the dynamics of generalized trust are intimately connected to the dynamics of trustworthiness in the trust-signaling game. How will the proportions of trustworthy and untrustworthy types change if the game is played according to a perfect Bayesian equilibrium? To answer this question, it is first necessary to specify and justify some assumptions about the evolutionary process.

1. Population size is normalized to 1, to ease calculation without loss of generality. This is equivalent to modeling proportions of the population rather than absolute numbers.
2. In each round of the game, each player is randomly matched with another player, mirroring the large and anonymous trust environment that the model is designed to explain.
3. Each player has an equal chance of playing the game as R and as E. This assumption is not strictly necessary for much of the analysis: with two populations, one of Trustors and one of Trustees, similar results would be obtained. However, the assumption gains theoretical bite later in the paper, where it is shown that uncertainty about whether one will be a Trustor or Trustee in the future can affect the evolutionary dynamics of the game.
4. Let the population state be p = (pw, puw), such that pw + puw = 1 and 0 ≤ pw ≤ 1. Let pw^t be the proportion of type w in the population at time t, and let πw^t be the average fitness payoff to type w at time t. Then pw^{t+1} > pw^t if and only if πw^t > πuw^t, and pw^{t+1} = pw^t if and only if πw^t = πuw^t. In words, only a type with a larger objective fitness payoff at time t becomes a larger proportion of the population at time t+1, and the population proportions of both types stay the same only if their payoffs are equal.

This assumption is a very general, nonparametric way of specifying the fundamental dynamic of any evolutionary process. The strength of these assumptions lies in their abstractness: there is no specific functional form to the process of evolution. As long as learning, cultural inheritance, or any other process of change follows the basic rule that better-performing types proliferate and worse-performing types decline, results derived from these assumptions will apply.
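Assumption 4 fixes only the direction of change, not a functional form. A discrete replicator-style step is one concrete (hypothetical) rule consistent with it; the rule itself is our illustration, not part of the model:

```python
# One update rule satisfying Assumption 4: pw rises exactly when the
# trustworthy type's average payoff exceeds the untrustworthy type's.
def update(p_w, pi_w, pi_uw, rate=0.1):
    assert 0.0 <= p_w <= 1.0
    # Discrete replicator step; any rule with sign(p' - p) = sign(pi_w - pi_uw)
    # on the interior of [0, 1] would satisfy the assumption equally well.
    p_next = p_w + rate * p_w * (1 - p_w) * (pi_w - pi_uw)
    return min(1.0, max(0.0, p_next))

p = 0.5
print(update(p, 0.7, 0.4) > p)   # True: w outperforms uw, so pw grows
print(update(p, 0.4, 0.7) < p)   # True: w underperforms, so pw shrinks
print(update(p, 0.5, 0.5) == p)  # True: equal payoffs leave pw unchanged
```

Because the formal results depend only on the sign condition, any conclusion verified with this rule holds for the whole class of dynamics the assumption admits.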
4. Equilibria of the Trust-Signaling Game

The first step to predicting behavior in the trust-signaling game is to solve it in a static environment without evolutionary type change. This is accomplished by finding the perfect Bayesian equilibrium (PBE) of the game (Fudenberg and Tirole, 1991). In cases where more than one PBE is possible, it is also appropriate to examine whether some can be ruled out. In some cases, PBEs rely on unreasonable beliefs that are technically sustainable (because they are off the equilibrium path of behavior) but unlikely to persist if people slightly deviate from equilibrium predictions. These potentially unstable equilibria can be identified and ruled out. Finally, the evolutionary stability of the remaining equilibria can be determined. If equilibrium payoffs to the trustworthy are disproportionately higher or lower than payoffs to the
untrustworthy, then the proportion of types in the population will change in favor of the more successful type. This change can undermine the conditions necessary to support the PBE. For example, Trustors may abandon generalized trust if the proportion of trustworthy people in the population becomes too small.
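The tipping point at which Trustors abandon generalized trust can be made concrete: trusting a random Trustee pays pw(1 − x) in expectation against the safe payoff x, so trust is a profitable bet only when pw is at least x/(1 − x). A quick numeric check of this algebra (a sketch; the function names are ours):

```python
# Trustor's expected payoff from trusting a random Trustee versus not trusting.
def trust_is_profitable(p_w, x):
    return p_w * (1 - x) >= x       # compare against the safe payoff x from nt

def threshold(x):
    return x / (1 - x)              # critical proportion pw of trustworthy types

x = 0.25
print(threshold(x))                 # roughly 1/3: the critical pw for x = 0.25
print(trust_is_profitable(0.4, x))  # True: above the threshold
print(trust_is_profitable(0.2, x))  # False: below the threshold
```

Note that the threshold rises with x: the more the Trustor has at stake relative to the safe outcome, the more trustworthiness the population must contain before generalized trust pays.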
4.1. Perfect Bayesian Equilibria

There are four types of pure-strategy perfect Bayesian equilibrium in the trust-signaling game. To simplify exposition, several terms are employed as shorthand. The term low signaling costs denotes that the cost to send a signal is below a critical threshold for both untrustworthy and trustworthy types: (cw ≤ 1 − 2x) ∧ (cuw ≤ 1 − x). Separating signaling costs indicates that the signaling cost for untrustworthy types is above a critical threshold, while the signaling cost for trustworthy types is below a critical threshold: (cw ≤ 1 − 2x) ∧ (cuw > 1 − x). Mixed signaling costs indicates that the signaling cost for trustworthy types is above a critical threshold, but the signaling cost for untrustworthy types (though still higher than the cost for trustworthy types) is below a critical threshold: (cw > 1 − 2x) ∧ (cuw ≤ 1 − x). High signaling costs indicates that the signaling cost for both types is above a critical threshold: (cw > 1 − 2x) ∧ (cuw > 1 − x). When the proportion of trustworthy types in the population is said to be high, it means that this proportion is above a critical threshold: pw ≥ x/(1 − x). Conversely, a low proportion of trustworthy types indicates that the proportion is below this threshold: pw < x/(1 − x).
… x(2x − 1)/(x − 1), which is strictly less than the cw required to sustain a Type IV delegated trust equilibrium, cw ≤ x/(1 − x).
7. This case study was suggested to us by an anonymous reviewer, whom we thank for the suggestion.
REFERENCES
Ahn, T.K. (2002) ‘Trust and Collective Action: Concepts and Causalities.’ Paper presented at the Annual Meeting of the American Political Science Association, August 28-September 1, 2002, Boston, MA.
Alford, John, Carolyn Funk, and John Hibbing (2005) ‘Are Political Orientations Genetically Transmitted?’ American Political Science Review 99(2): 153-167.
Axelrod, Robert (1984) The Evolution of Cooperation. New York: Basic Books.
Axelrod, Robert (1997) The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton, NJ: Princeton University Press.
Bacharach, Michael, and Diego Gambetta (2001) ‘Trust in Signs.’ In K. Cook (ed) Trust in Society: 148-184. New York: Russell Sage Foundation.
Baron, David (2000) ‘Legislative Organization with Informational Committees.’ American Journal of Political Science 44(3): 485-505.
Bolle, Friedel, and Matthew Braham (2003) ‘A Difficulty with Oaths: On Trust, Trustworthiness, and Signalling.’ German Working Papers in Law and Economics Vol. 2003, Article 3.
Boyd, Robert, and Peter Richerson (1985) Culture and the Evolutionary Process. Chicago: The University of Chicago Press.
Boyd, Robert, and Peter Richerson (1989) ‘The Evolution of Indirect Reciprocity.’ Social Networks 11: 213-236.
Cho, In-Koo, and David Kreps (1987) ‘Signaling Games and Stable Equilibria.’ Quarterly Journal of Economics 102(2): 179-221.
Cochard, F., P. Nguyen Van, and M. Willinger (2004) ‘Trusting Behavior in a Repeated Investment Game.’ Journal of Economic Behavior and Organization 55(1): 31-44.
Cook, Karen, and Robin Cooper (2003) ‘Experimental Studies of Cooperation, Trust, and Social Exchange.’ In E. Ostrom and J. Walker (eds) Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research: 209-244. New York: Russell Sage Foundation.
Curry, Timothy, and Lynn Shibut (2000) ‘The Cost of the Savings and Loan Crisis: Truth and Consequences.’ FDIC Banking Review 13(2): 26-35.
Dasgupta, Partha (1988) ‘Trust as a Commodity.’ In D. Gambetta (ed) Trust: Making and Breaking Cooperative Relations: 49-72. New York: Basil Blackwell.
Dietary Supplement Health and Education Act (DSHEA) of 1994. Available online at http://www.fda.gov/opacom/laws/dshea.html (accessed July 6, 2007).
Dietary Supplement and Nonprescription Drug Consumer Protection Act of 2006. Available online at GovTrack.us (database of federal legislation), http://www.govtrack.us/congress/bill.xpd?bill=s109-3546 (accessed July 9, 2007).
Farrell, Joseph, and Matthew Rabin (1996) ‘Cheap Talk.’ The Journal of Economic Perspectives 10(3): 103-118.
Frank, Robert H. (1987) ‘If Homo Economicus Could Choose His Own Utility Function, Would He Want One With a Conscience?’ American Economic Review 77(4): 593-604.
Frank, Robert H. (1988) Passions Within Reason: The Strategic Role of the Emotions. New York: W. W. Norton.
Frank, Robert H. (1999) ‘Cooperation through Emotional Commitment.’ In R. Nesse (ed) Evolution and the Capacity for Commitment: 55-76. New York: Russell Sage Foundation.
Friedman, Daniel, and Nirvikar Singh (2003) ‘Equilibrium Vengeance.’ Working Paper. Economics Department, UC Santa Cruz. Santa Cruz, CA.
Fudenberg, Drew, and Jean Tirole (1991) ‘Perfect Bayesian Equilibrium and Sequential Equilibrium.’ Journal of Economic Theory 53: 236-260.
Fukuyama, Francis (1995) Trust: The Social Virtues and the Creation of Prosperity. New York: The Free Press.
Greif, Avner, and David Laitin (2004) ‘A Theory of Endogenous Institutional Change.’ American Political Science Review 98(4): 633-652.
Güth, Werner (1995) ‘An Evolutionary Approach to Explaining Cooperative Behavior by Reciprocal Initiatives.’ International Journal of Game Theory 24: 323-344.
Güth, Werner, Hartmut Kliemt, and Bezalel Peleg (2000) ‘Co-evolution of Preferences and Information in Simple Games of Trust.’ German Economic Review 1(1): 83-110.
Güth, Werner, and Menahem Yaari (1992) ‘An Evolutionary Approach to Explaining Reciprocal Behavior in a Simple Strategic Game.’ In U. Witt (ed) Explaining Process and Change: Approaches to Evolutionary Economics: 23-34. Ann Arbor: University of Michigan Press.
Hardin, Russell (2001) ‘Conceptions and Explanations of Trust.’ In K. Cook (ed) Trust in Society: 3-39. New York: Russell Sage Foundation.
Hardin, Russell (2002) Trust and Trustworthiness. New York: Russell Sage Foundation.
Hardin, Russell (2006) Trust. Cambridge: Polity.
Hearing before the Subcommittee on Human Rights and Wellness of the Committee on Government Reform of the U.S. House of Representatives, 108th Congress. March 24,
2004. Serial No. 108-146. Available online at http://www.gpo.gov/congress/house (accessed July 9, 2007).
Kawachi, Ichiro, Bruce P. Kennedy, and Kimberly Lochner (1997) ‘Long Live Community: Social Capital as Public Health.’ The American Prospect 8(35): 56-59.
Knack, Stephen, and Philip Keefer (1997) ‘Does Social Capital Have an Economic Payoff? A Cross-Country Investigation.’ Quarterly Journal of Economics 112(4): 1251-88.
Kreps, David (1990) ‘Corporate Culture and Economic Theory.’ In J. Alt and K. Shepsle (eds) Perspectives on Positive Political Economy: 90-143. New York: Cambridge University Press.
Kreps, David, and Robert Wilson (1982) ‘Sequential Equilibria.’ Econometrica 50(4): 863-894.
Maynard Smith, John, and G. R. Price (1973) ‘The Logic of Animal Conflict.’ Nature 246: 15-18.
Messick, David, and Roderick Kramer (2001) ‘Trust as a Form of Shallow Morality.’ In K. Cook (ed) Trust in Society: 89-118. New York: Russell Sage Foundation.
Nestle, Marion (2002) Food Politics: How the Food Industry Influences Nutrition and Health. Berkeley: University of California Press.
North, Douglass (1990) Institutions, Institutional Change, and Economic Performance. Cambridge: Cambridge University Press.
Orbell, John, Tomonori Morikawa, Jason Hartwig, James Hanley, and Nicholas Allen (2004) ‘“Machiavellian” Intelligence as a Basis for the Evolution of Cooperative Dispositions.’ American Political Science Review 98(1): 1-17.
37
Ostrom, Elinor (2003) ‘Toward a Behavioral Theory Linking Trust, Reciprocity, and Reputation.’ In E. Ostrom and J. Walker (eds) Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research: 19-79. New York: Russell Sage Foundation. Patterson, Orlando (1999) ‘Liberty Against the Democratic State.’ In M. Warren (ed) Democracy and Trust: 151-207. Cambridge: Cambridge University Press. Paxton, Pamela (1999) ‘Is Social Capital Declining in the United States? A Multiple Indicator Assessment.’ American Journal of Sociology 105(1): 88-127. Paxton, Pamela (2005) ‘Trust in Decline?’ Contexts 4(1): 40-46. Putnam, Robert (with Robert Leonardi and Raffaella Nanetti) (1993) Making Democracy Work. Princeton, NJ: Princeton University Press. Putnam, Robert (2000) Bowling Alone: The Collapse and Revival of American Community. New York: Simon and Schuster. Raloff, Janet (2004) ‘Ephedra Finale.’ Science News Online, January 10. Available at http://www.sciencenews.org/articles/20040110/food.asp (accessed July 10, 2007). Rao, Hayagreeva (1994) ‘The Social Construction of Reputation: Certification Contests, Legitimation, and the Survival of Organizations in the American Automobile Industry: 1895-1912.’ Strategic Management Journal 15:29-44. Resnick, Paul, Richard Zeckhauser, John Swanson, and Kate Lockwood (2003) ‘The Value of Reputation on eBay: A Controlled Experiment.’ Working Paper. Ann Arbor: University of Michigan. Rothstein, Bo (2000) ‘Trust, Social Dilemmas and Collective Memories.’ Journal of Theoretical Politics 12(4): 477-501.
38
Selten, Reinhard (1975) ‘A Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games.’ International Journal of Game Theory 4: 25-55. Spence, Michael (1973) ‘Job Market Signaling.’ Quarterly Journal of Economics 87(3): 355374. U.S. Food and Drug Administration Center for Food Safety and Applied Nutrition (1981) ‘The Long Struggle For The 1906 Law.’ Available online at http://www.cfsan.fda.gov/~lrd/history2.html (accessed July 10, 2007). Yamagishi, Toshio (2001) ‘Trust as a Form of Social Intelligence.’ In K. Cook (ed) Trust in Society: 121-47. New York: Russell Sage Foundation. Yamagishi, Toshio, Masako Kikushi, and Motoko Kosugi (1999) ‘Trust, Gullibility, and Social Intelligence.’ Asian Journal of Social Psychology 2(1): 145-161.
Figure 1. Trust Game

[Figure: extensive-form game tree. The Trustor (R) chooses Not Trust (nt), yielding payoffs (x, x), or Trust (t); after Trust, the Trustee (E) chooses Exploit (nr), yielding (0, 1), or Reciprocate (r), yielding (1-x, 1-x). Payoffs are listed as (Trustor, Trustee). Note: 0 < x < 0.5.]
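The standard logic of the one-shot Trust Game in Figure 1 can be verified by backward induction; a minimal sketch, using only the payoffs shown in the figure (the function name is ours):

```python
def trust_game_outcome(x):
    """Backward-induct the one-shot Trust Game of Figure 1.

    Payoffs (Trustor, Trustee): Not Trust -> (x, x);
    Trust then Exploit -> (0, 1); Trust then Reciprocate -> (1-x, 1-x).
    Requires 0 < x < 0.5, as in the figure's note.
    """
    assert 0 < x < 0.5
    # The Trustee compares Exploit (payoff 1) with Reciprocate (payoff 1 - x):
    # since x > 0, Exploit strictly dominates.
    trustee_choice = "exploit" if 1 > 1 - x else "reciprocate"
    # Anticipating exploitation, the Trustor compares Trust (payoff 0)
    # with Not Trust (payoff x > 0), and declines to trust.
    trustor_payoff_if_trust = 0 if trustee_choice == "exploit" else 1 - x
    trustor_choice = "trust" if trustor_payoff_if_trust > x else "not trust"
    return trustor_choice, trustee_choice

# For any admissible x, the unique subgame-perfect outcome is no trust:
print(trust_game_outcome(0.3))  # ('not trust', 'exploit')
```

The inefficiency is visible in the payoffs: mutual reciprocation would give both players 1 - x > x, yet the subgame-perfect outcome leaves them at (x, x).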
Figure 2. Trust-Signaling Game

[Figure: extensive-form trust-signaling game. Note: 0 < x < 0.5; 0 < c_w < c_uw < 1.]
Figure 3. Perfect Bayesian Equilibria in the Trust-Signaling Game

[Figure: perfect Bayesian equilibria as a function of the proportion of trustworthy types. I: Generalized Trust equilibrium; II: Generalized Distrust equilibrium; III: Pervasive Mimicry equilibrium; IV: Delegated Trust equilibrium. When multiple equilibria exist, bold typeface indicates that an equilibrium passes the Intuitive Criterion and survives equal-probability signal trembling by both E types; an underline indicates that an evolutionarily stable proportion of trustworthy types in the population sustains the equilibrium.]
Figure 4. Evolutionary Cycles under the Separating Costs Regime

[Figure: cycle diagram over the proportion of trustworthy types (horizontal axis, marked at 0, x/(1-x), (x-c_w)/x, and 1), with numbered arrows (1-6) tracing transitions among the Type II Generalized Distrust PBE, the Type IV Delegated Trust PBE, and the Type I Generalized Trust PBE regions.]
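The boundary values marked on the horizontal axis of Figure 4 can be computed directly from the model parameters; a minimal sketch (the parameter values x = 0.3 and c_w = 0.1 are illustrative, not taken from the paper):

```python
def region_boundaries(x, c_w):
    """Boundary values on the proportion-of-trustworthy-types axis in
    Figure 4 (separating costs regime).

    Assumes 0 < c_w < x < 0.5, so that both values lie strictly in (0, 1)
    as the figure's axis suggests.
    """
    assert 0 < c_w < x < 0.5
    b1 = x / (1 - x)        # first boundary marked on the axis
    b2 = (x - c_w) / x      # second boundary marked on the axis
    return b1, b2

b1, b2 = region_boundaries(0.3, 0.1)
print(round(b1, 3), round(b2, 3))  # 0.429 0.667
```

Note that the first value rises with x (trusting is less risky when the fallback payoff x is high) while the second falls as the trustworthy types' signaling cost c_w rises.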