Performance-Based Reward Distribution Methods for Anonymous Decision-Making Groups

B. Gavish ([email protected]), Owen Graduate School of Management, Vanderbilt University
J. H. Gerdes, Jr. ([email protected]), Anderson Graduate School of Management, University of California, Riverside
J. Kalvenes ([email protected]), School of Management, University of Texas at Dallas

Abstract. Research has shown that both the support of anonymity and the use of appropriate incentives can lead to improved group performance. Anonymity enables a more open discussion, resulting in a more critical analysis of a problem. Rewards can motivate individuals to cooperate, giving them the incentive to share valuable information with the group. Unfortunately, these two mechanisms place conflicting demands on the ability to identify the contributor. Anonymity hides the identity of the contributor, while the support of individualized, performance-based rewards requires the rewarding agent to be able to determine the identity of the contributor. This contradiction has prevented the simultaneous use of anonymity and performance-based rewards in decision making. Using group decision support systems as a basis, this work identifies procedures to simultaneously support participant anonymity and performance-based rewards. Mechanisms based on public key encryption technologies are presented which make it possible to distribute individual rewards to anonymous contributors, guarantee that only the contributor can claim a reward for her contribution, verify that a reward has been distributed, and deliver the reward in such a way that the identity of the anonymous contributor is protected. This is accomplished without the rewarding agent ever knowing the identity of the recipient.

Keywords: anonymous, encryption, equity-based, GDSS, group decision support, performance-based, rewards
1. Introduction

Anonymity and rewards. These two mechanisms have long been used to improve the quality and intensity of communication within a group. The value of these two mechanisms has also been recognized in the group decision support system (GDSS) literature. In a GDSS environment, anonymity has been found to improve communication by reducing the fear of expressing or supporting an unpopular position, eliminating bias based on the source of a comment, encouraging the expression of wild ideas which may give rise to unconventional solutions,
and allowing individuals to modify their stance without appearing indecisive (Nunamaker et al., 1987; Nunamaker et al., 1988; Valacich et al., 1992; Valacich and Tansik, 1991). Anonymity can lead to a more critical analysis of issues and a freer interchange of opinions (Jessup et al., 1990).

Rewards are often used as an incentive mechanism to encourage participation, increase individual effort, and increase cooperation in group problem solving tasks. Two basic reward allocation models appear in the economic and psychology literature – performance-based and equality-based. The performance-based norm (sometimes referred to as the equity-based norm) rewards individuals in proportion to the value of their contribution. While tending to increase individual effort, it also increases intra-group competition, thereby increasing group conflict (see Figure 1). The performance norm is perceived as the most appropriate when the activity is designed to promote individual activity and contributions (Bierhoff et al., 1995; Clark, 1984; Watts and Messe, 1982). In contrast, the equality-based norm focuses solely on the group's overall contribution. An equality-based scheme rewards group members equally, independent of individual contribution, which tends to improve group cohesiveness. However, the equality norm provides less of an incentive for individuals to contribute. Since rewards are shared equally with other group members, the marginal benefit of contributing is decreased. This phenomenon can lead to free-riding and other productivity losses (Barua et al., 1995; Bierhoff et al., 1995).

Feature                            | Performance Norm                                   | Equality Norm
Basis of Rewards                   | Individual performance                             | Group performance
Individual Effort                  | Increased relative to equality-based rewards       | Decreased relative to performance-based rewards
Emphasize Individual Differences   | Yes (quality of contributions)                     | No (focus on final result)
Consequences                       | High individual efficiency and group competition   | Intra-group harmony and group solidarity
Common Applications                | Economic, scientific, and technical                | Mutual support, close relationships
Figure 1. Dimensions of the performance and equality norms of reward distribution.
Dennis et al. (1990) discuss the importance of a correctly designed incentive mechanism in motivating participants' behavior in GDSS research. Tying rewards to the quality of session results can motivate participants to work harder and to cooperate within the group. Because of the potential negative impact of a pure performance-based
or group-based strategy, many organizations have instituted a composite strategy which contains elements of both (Hogarty, 1994; Nelson, 1994). The performance-based component reduces free-riding and rewards distinguished individual effort. The equality component encourages group interaction and cooperation while fostering increased group cohesion.

Supporting a mixture of performance- and equality-based rewards is easy in a non-anonymous environment. Equality-based rewards can be distributed by sending each participant his/her share of the aggregate group reward. Distribution of individualized, performance-based rewards is also straightforward, since the contributions of each participant are readily identified (due to the non-anonymous environment). But what if the group wants to take advantage of the benefits offered by an anonymous environment? Distribution of equality-based rewards is still straightforward, since each member of the group is treated equally. Although performance-based rewards tend to improve individual effort, their implementation in an anonymous environment is problematic. How can a reward be properly allocated and distributed when the system expressly prevents the identification of the comment's source? It is counter-productive to maintain an anonymous GDSS environment if the participant or system must breach that anonymity to reward an individual for valued contributions.

Simple solutions to this problem are flawed. It would be possible to identify the author of each comment by attaching her name or an identifying tag to each comment. However, this directly eliminates anonymity. Attaching a tag and simply suppressing its display ultimately defeats the anonymity of the system: there is always a risk that this pseudo-anonymity will be breached at some later point. Thus, the challenge is to develop mechanisms that allow participants to prove their claims of authorship without identifying themselves.

Assume that a group wants to address a particular issue in an anonymous environment and has decided to reward individuals for their contributions. The value of the solution is assumed to be directly related to the quality of the contributions made, and the amount of the total reward is directly related to the value of the solution. Figure 2 illustrates such an environment involving 9 participants (represented by open circles) generating 5 contributions (represented by closed circles). The contributions were submitted by only 4 of the participants, represented by the horizontal arrows. These arrows do not connect the participant with a specific contribution, since it is an anonymous session. Two types of reward incentives are possible – group-based and message-specific (performance-based).
[Figure 2 schematic: nine participants (open circles) submit five contributions (closed circles) through an anonymity mechanism; an authenticating mechanism routes message-specific rewards back to the deserving contributors, while group-based rewards flow to all participants.]
Figure 2. Anonymous, performance-based distribution of rewards is challenging. At issue is how to ensure that deserving contributors are adequately rewarded without breaching anonymity.
Distributing a group-based reward is straightforward. The individualized, performance-based reward, however, is complicated by the system's anonymity. Distributing the message-specific rewards requires an additional authentication mechanism to ensure that the reward is routed to the deserving participant. This authentication mechanism is the crucial element in the support of anonymous rewards. Before we develop the mechanisms needed to support anonymous performance-based rewards, it is important to understand the whole reward distribution process.
Establish Reward Protocol → Generate Comments and Ideas → Assess Contribution Value & Allocate Rewards → Authenticate Claims of Authorship → Distribute Rewards
Figure 3. Sequence of steps in the distribution of rewards in a GDSS environment.
The support of anonymous rewards can be modeled as a five-phase process, as shown in Figure 3. The initial phase establishes the reward allocation protocol. If participants do not have a clear understanding of the reward mechanism, the reward will not be an effective incentive. For this reason it is important that the meeting's protocols be clearly defined prior to the meeting. It is also critical to design the reward mechanisms to promote the desired behavior. Reward structures may be radically different depending on the goals of the activity. When developing a reward protocol, many diverse issues influence how participants interact and their willingness to share valuable ideas (see Figure 4).
• Who is going to be rewarded for valuable contributions?
  – The individual submitting the valued contribution
  – The subgroup involved in the discussion spawning the valued contributions
  – The whole group, independent of individual contributions
  – Some combination of the above alternatives
• What is the nature of the reward – simple recognition, a token reward (e.g., a plaque), something of intrinsic value (a cash payment), or some combination?
• How is the overall amount of the reward to be determined, and who decides on it?
  – Fixed, pre-defined rewards
  – Variable, with the value of a reward linked to the value of the contribution
• Will participants have any input as to the value of the contributions, or will the value be determined by some authority outside the group?
  – Group
  – Individual
  – Outside the group
• Can the contribution's value be quantified, such as operational savings, or is the value of the contribution subjective?
• When is the attribution done?
• Are contributions made anonymously or not?
• Is the reward claim process anonymous or not?
Figure 4. Issues that must be addressed when developing a reward protocol.
After a reward protocol has been established, the process moves to the second phase – finding a solution to the problem at hand. This phase encompasses all of the conventional aspects of a meeting, including pre-meeting planning (establishing the session's format, agenda, and protocols); discussion of the issues; generating possible solutions and courses of action; debating the merits of these proposed solutions; and finally establishing some course of action (Nunamaker et al., 1991). The factors impacting the decision making process are beyond the scope of
this work. Nevertheless, this is a critical phase of the anonymous reward distribution process. It is during this phase that special digital signatures must be attached to contributions to permit authorship validation and reward distribution in the later phases. These signatures are not designed to identify the author, but rather to allow a participant to anonymously prove authorship during the reward distribution phase (see the discussion of the fourth phase below).

At the conclusion of the meeting, the reward process moves to the third phase – assessing the contributions' value and allocating rewards. The magnitude of the reward pool is dependent on the meeting's reward protocol (established in phase one) and the quality of the group's solution. Under a performance-based distribution rule, this reward pool is allocated among the anonymous contributions, which requires that the significance of each contribution be assessed. Conceptually, this involves assigning a value to each contribution, either individually or by segmenting the contributions into pre-specified reward classes, depending on the reward protocol. In conventional meetings, reward allocation is often done by an executive committee or a subset of the group. Although this is expedient, it is not necessarily the most effective way to allocate rewards. Participants may perceive such allocations as arbitrary, biased, and/or inconsistent with the performance-based distribution rule. The procedural justice literature indicates that individuals are more satisfied with an allocation of rewards if they are involved in the valuation process, even when the reward structure does not favor that individual (Greenberg and Folger, 1983). Within the GDSS context, the valuation of the contributions can be done through an anonymous voting process (Gavish and Gerdes, 1997; Gavish et al., 1994).

After rewards have been allocated, the process moves to the fourth phase – reward claim and authentication. Authenticating a claim of authorship is complicated by the need to maintain the anonymity of both the original contributor and the individual claiming authorship. In the next section, procedures incorporating public key technology are developed which maintain anonymity throughout the reward distribution process.

Once a claim has been authenticated, the reward process moves to the fifth and final phase – the physical distribution of rewards. To maintain anonymity, rewards must be distributed in a manner that leaves an auditable trail while not revealing the identity of the recipient. Solutions to this problem are presented in Section 3.
2. Performance-Based Rewards in a GDSS Environment

Reward allocation has been identified as an important issue in the group decision making process (Barua et al., 1995; Gavish et al., 1994). Fortunately, computer-based group decision support systems contain many tools that are well suited to the task of reward allocation. A GDSS typically includes brainstorming/conferencing tools to allow discussion of the issues, modeling and analysis tools for analyzing the alternatives, voting tools to aid the group in prioritizing and deciding among different options, and reporting tools to assist in documenting all aspects of the session (Nunamaker et al., 1991). These same tools can be used in the reward allocation process.

The support of anonymity can be valuable in the reward allocation phase. It is known from public justice research that rewards are allocated more equitably (i.e., better reflecting the contribution's value) when the allocation is done anonymously (Sagan et al., 1981; Shapiro, 1975; von Grumbkow et al., 1976). Unfortunately, conventional meeting support tools do not provide this anonymous format, and thus may bias the reward allocation process. Although a GDSS may provide for anonymous group interaction, carrying this anonymity through the reward distribution process is not necessarily straightforward. Special mechanisms are needed to accurately authenticate the author and then to route the rewards without revealing the author's identity to either the authenticating party or outside observers. In the following sections three reward protocols are considered. Each addresses a different operational environment, namely non-anonymous meetings, meetings with limited anonymity, and completely anonymous meetings.

2.1. Performance-based Rewards without Anonymity

Performance-based reward distribution is easiest in a non-anonymous environment. By definition, there is no concern over author anonymity, which greatly simplifies the reward protocol. The critical issue is still authentication of the author's identity (see Figure 5). Different meeting scenarios may fall under this 'non-anonymous' category. The group could be in a centralized meeting, with comments being made in open session. Alternatively, the meeting could be decentralized, or even held asynchronously with participation varying over time. In each case, given that the discussion is not anonymous, authenticating authorship is straightforward. Nevertheless, to prevent fraud and ensure the proper distribution of rewards it is still important to authenticate the author's identity.
The ability to prove authorship is important for three reasons. The first is that it guarantees that only the author can claim rewards allocated to his/her contributions. A false 'by-line' could be used in an attempt to mislead other participants into believing the message was submitted by some other individual. Such behavior has been observed in CM3 sessions1 (Gavish et al., 1995). In one particular case, a very derogatory and inflammatory remark was entered and 'signed' with a participant's name. Although the actual source could not be determined due to the anonymity provided by the system, it was known that the individual corresponding to the signature was not the author: although he was involved in the meeting, it was known (by sheer luck for him) that he was engaged in another activity at the time the comment was made.
[Figure 5 schematic: participants submit attributable comments as contributions; an authenticating agent verifies authorship and routes individualized rewards back to the contributing participants.]
Figure 5. Rewards distribution in a non-anonymous environment. Authorship must be authenticated before rewards can be distributed.
1. CM3 stands for Computer Mediated Meeting Management, a group decision support system described in (Gavish and Gerdes, 1997; Gavish et al., 1994; Gavish et al., 1995).

A second reason for authentication is that people forget. If the source of a comment is not clearly documented, its actual source may become increasingly difficult to establish over time. Often, the very best ideas
are those which seem intuitively obvious once expressed. As interest builds to excitement over the idea, individuals start to think of it as their own. Consequently, over time, several individuals may honestly feel they came up with the original idea. Even the original author may be confused as to who actually proposed the idea.

The third reason for authentication is that increasing participants' faith in the integrity of the system will increase the likelihood that individuals will more fully participate in the session. A system which guarantees that rewards are paid, and that they are paid to the proper individuals, should increase participants' comfort level, yielding more involvement and better decisions.

Verification of authorship (and non-authorship) can be accomplished through digital signatures (see Schneier (1996) for details concerning encryption and applied cryptographic techniques). The signature must be generated at the same time as the message, not after it has been released to the public. Participant X attaches a digital signature DS(M, vx) based on her private key, vx, and the message text, M. Thus, the message package is composed of two components, the message and the digital signature, and can be represented as <M, DS(M, vx)>.

A shortcoming of using the standard digital signature algorithm is that the author implicitly relinquishes the right to control authentication at the time the comment is generated. This may not be desirable. The contributor may not want to be unilaterally linked to the contribution even though he/she may want the option to claim future rewards allocated to that contribution. As will be seen in the next section, this can be addressed using a double key system, with the author retaining control over critical decryption keys.

2.2. Performance-based Rewards with Limited Anonymity

A system supporting limited anonymity is defined as one where an individual's identity is ostensibly hidden, but is either known or can be determined through the collaboration of one or more individuals without the aid of that individual. This definition is more stringent than that found in common usage. For example, there are often reports of 'anonymous donations' being made to a major charity. However, under this definition, the anonymity would be categorized as only limited, since there are individuals who are able to identify the actual donor (e.g., the fund raiser who dealt with the donor, the secretary processing the requisite paperwork, and the accountant who handled the financial details).
This distinction is important, for there are some issues where the costs are very high if anonymity is broken. Under limited anonymity it is possible for a group having authority over the authenticating agent (e.g., management or some legal body) to force the release of authorship information.2 An understanding of this possibility can reduce the openness of participants (Engstrom et al., 1988). The costs associated with supporting complete anonymity can be quite high, and in some situations the potential gains do not justify those costs (Gavish and Gerdes, 1998). In these situations a system supporting limited anonymity may be a viable alternative.

Message attribution with limited anonymity can be accomplished through intermediaries who can identify the source of the document and subsequently shield the author's identity from the group. The message's source can be identified either through the message's network header or directly using a digital signature. Even if the message is routed through multiple intermediaries (or message servers), they could collaborate and track a message from its source to its destination. To ensure that messages are attributed correctly, thereby not relying on the trustworthiness of the intermediary, there is still a need for a mechanism which provides undeniable proof of authorship.

Public key encryption techniques can again be used. Designate the authenticating agent as AA, and assume that participant X posts comment M.3 Participant X would doubly encrypt M – first with her private encryption key (vx) and then with the authenticating agent's public key (uaa), yielding the cyphertext C = uaa[M, vx[M]]. Encrypting with her private key provides a digital signature which will be used to authenticate authorship. Encrypting with the authenticating agent's public key (uaa) hides authorship from outside observers. This cyphertext is then sent to the authenticating agent, who can authenticate authorship using its private decryption key vaa and X's public encryption key ux:

vaa[C] = vaa[uaa[M, vx[M]]], which yields M and vx[M];
ux[vx[M]] = M~.
2. This happened in an incident involving Penet, a well-respected anonymity server operating on the Internet (Quittner, 1995).
3. Without loss of generality, the message M could represent a plain text message encrypted with the group's session key eg, thus M = eg[M^]. This would restrict access to the plain text message M^ to only the authorized group members.
If M = M~, the message authorship is authenticated. This process would require the authenticating agent to explicitly enumerate all known keys to find the appropriate ux. To facilitate the search, the cyphertext could include a pointer to the author (e.g., C = uaa[M, vx[M], UserID]), which the authenticating agent verifies using the appropriate public key. This assumes that the author's public key is known to the authenticating agent. This need not be the case: the author can withhold the public key, and only submit it when and if there is a reason to prove authorship (such as to claim an associated reward). If there is a need to prove group membership, this could be accomplished by using a group-based encryption key, which would still maintain authorship anonymity.

By modifying the protocol, this authentication function can be distributed among multiple parties. Under this approach no single entity can independently verify a signature; the cooperation of all authentication members is required. This approach was adopted in the SKIPJACK encryption algorithm developed by the National Security Agency (Denning, 1993; Herdman, 1994). Two 'escrow agents' each hold half of the key needed to provide access through the law enforcement trapdoor. Each key fragment is useless without the other half, thus preventing the escrow agents from acting independently and illegally violating the anonymity of transmissions.

Another approach to distributing the authentication function would be to multiply encrypt the digital signature, once with the author's private key and once with at least one independent entity's public encryption key. Consider the case with a single intermediate agent designated as the Anonymity Server (AS). Encrypting the message with the author's private key (vx) and the anonymity server's public key (uAS) yields the following signature:

uAS[vx[M]]

Under this approach, both the anonymity server and the author must cooperate to validate the author's signature. The author's control is not absolute, however: if the two independent entities collude, they can extract the digital signature without her participation. Barring such collusion, this protocol has the desirable characteristic that the author determines if and when a comment is authenticated; authorship cannot be determined unless the author initiates the process. Unfortunately, once authenticated, the identity of the author is known to the session server, thereby breaching anonymity. The protocol given in the next section addresses this shortcoming.
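To make the double-encryption envelope concrete, the following is a minimal sketch and not the paper's implementation: it assumes the pyca/cryptography Python package, uses RSA-PSS signing to play the role of 'encrypting with the private key' vx, and wraps the payload in a hybrid RSA-OAEP/Fernet envelope (standing in for uaa[...]) because raw RSA cannot carry a full message plus signature. The participant name, contribution text, and UserID field are illustrative.

```python
import base64, json
from cryptography.exceptions import InvalidSignature
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Key pairs: (ux, vx) for participant X and (uaa, vaa) for the authenticating agent AA.
v_x = rsa.generate_private_key(public_exponent=65537, key_size=2048)
v_aa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
u_x, u_aa = v_x.public_key(), v_aa.public_key()

M = b"Consolidate the two regional warehouses."     # hypothetical contribution
signature = v_x.sign(M, PSS, hashes.SHA256())        # plays the role of vx[M]

# Build C = uaa[M, vx[M], UserID]: the payload is Fernet-encrypted and the Fernet key
# is wrapped with uaa, so only the authenticating agent can open the envelope.
session_key = Fernet.generate_key()
payload = Fernet(session_key).encrypt(json.dumps({
    "M": M.decode(),
    "sig": base64.b64encode(signature).decode(),
    "UserID": "X",
}).encode())
C = (u_aa.encrypt(session_key, OAEP), payload)

# The authenticating agent opens C with vaa and checks the signature against ux.
wrapped_key, sealed = C
inner = json.loads(Fernet(v_aa.decrypt(wrapped_key, OAEP)).decrypt(sealed))
try:
    u_x.verify(base64.b64decode(inner["sig"]), inner["M"].encode(), PSS, hashes.SHA256())
    print("authorship authenticated for UserID", inner["UserID"])
except InvalidSignature:
    print("signature does not match; claim rejected")
```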
2.3. Performance-based Rewards with Full Anonymity

In particularly sensitive situations, it may be desirable to maintain a fully anonymous environment. Examples include the group discussion of illegal acts (e.g., drug use, fraud, and murder), issues which carry some social stigma (e.g., AIDS, teen pregnancy, alcoholism, and sexual orientation), and potentially sensitive topics (e.g., racism, sexual harassment, and management's performance). The mechanisms necessary to maintain fully anonymous communication in a GDSS environment have previously been identified (Gavish and Gerdes, 1998).

Providing anonymous, performance-based incentives means the rewarding agent must be able to prove a claim of authorship and be able to communicate with the author without ever knowing the author's identity. This is possible using Chaum's untraceable return address protocol (Chaum, 1981) (see Appendix B). When a comment is submitted, the author attaches a new public key. Since the author generates this key and holds the corresponding private key, she is the only one who can decrypt messages encrypted with it. This allows the rewarding agent to communicate directly and privately with the author without knowing the author's identity. Through this mechanism, the authenticating agent can send secure, encrypted instructions to the author detailing how to claim the reward.
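A minimal sketch of this message-specific key mechanism, assuming the pyca/cryptography Python package; the comment text, reward amount, and claim code are hypothetical, and the routing through the anonymity server is omitted:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# The author generates a throwaway key pair for this one contribution and posts only
# the public half alongside the anonymous comment.
v_M = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # kept by the author
u_M_pem = v_M.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
posting = {"comment": "Offer the night shift a four-day week.", "u_M": u_M_pem}

# The rewarding agent seals the claim instructions under the attached key and returns
# them through the anonymity server; it never learns who generated u_M.
u_M = serialization.load_pem_public_key(posting["u_M"])
instructions = u_M.encrypt(b"Reward: $500. Claim code 7Q2-118 at the distributing bank.", OAEP)

# Only the anonymous author, who holds v_M, can read the instructions.
print(v_M.decrypt(instructions, OAEP).decode())
```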
3. Non-traceable Reward Distribution

Once a claim has been authenticated, the corresponding reward must be distributed. After taking pains to maintain the author's anonymity in the authentication process, it is important that the distribution process not breach that anonymity. Administrators cannot simply issue a check to the author, for his or her identity is not known. The distribution process must be able to:

− transfer rewards anonymously
− prevent unauthorized access to these rewards
− maintain an audit record of the distribution

This audit trail is important not only for tax purposes, but also to maintain the group's trust that the system delivers the promised rewards. The reward distribution mechanism must address two issues, namely:
− P1: If a contributor or intermediary falsely claims that no reward was paid, how can the rewarding company prove that rewards were indeed distributed?
− P2: If the rewarding company falsely claims to have distributed a reward, how can a contributor or intermediary prove that the reward was not distributed without giving up the anonymity of the contributor?

The critical aspect of anonymous, performance-based reward distribution is the existence of a mechanism to prove authorship claims, thereby addressing both P1 (the company's ability to prove reward distribution) and P2 (the group's ability to prove that rewards were not distributed). This can be accomplished when contributors attach message-specific public keys to each contribution. When the contributor claims her reward, she must prove authorship by demonstrating access to the appropriate private key. This can be done by digitally signing a receipt which can be verified with the public key associated with the contribution. Subsequently, the rewarding agent's ability to produce this digitally signed receipt provides proof that the reward has been paid (satisfying P1), while the inability to produce the receipt indicates that the reward has not been paid (satisfying P2). This same message-specific public key can be used to establish a private dialog with the anonymous author (Chaum, 1981), through which the rewarding agent can privately transmit instructions on how to claim a reward.

A mechanism is still needed to deliver the reward while maintaining anonymity. There are at least three approaches to accomplish this. The first depends on an intermediary trusted by the contributor, who collects and transfers the reward to the contributor. The second allows the contributor to claim the reward directly, without depending on a trusted intermediary. The third merges two encryption technologies (digital cash and untraceable return addresses) to permit direct payment of rewards through electronic means. These approaches are discussed in the following sections.

3.1. Reward Distribution through a Trusted Intermediary

There are situations where the contributor can operate through a trusted intermediary to anonymously claim a reward. Assume that a contributor X has submitted a comment and has appended an encryption key ux, privately holding the decryption key vx. Further assume that symmetric keys are being used (this is not necessary, but it simplifies the following discussion). In such a situation a reward can be claimed through the process illustrated in Figure 6.
[Figure 6 schematic – entities: Contributor X, Anonymity Server, Rewarding Agent, Trusted Agent. Labeled steps: 1) post message / reward allocation; 2, 3) decryption key (vx) handed to the trusted agent; 4) prove authority and sign receipt; 6) distribute gross reward; 7) save receipt; 8a) fees deducted and income reported to the IRS; 8b) net reward paid to the contributor.]
Figure 6. Process by which a contributor can anonymously claim a reward through a trusted agent.
1. The contributor anonymously posts a message through an anonymity server, attaching a message-specific encryption key, ux. Contributions are subsequently evaluated and rewards allocated.
2. The contributor hires an agent (bonded, trusted, etc.) who handles the claim.
3. The contributor gives the trusted agent the relevant, privately held decryption key, vx.
4. The company tests the authority of the agent to receive the reward by giving him a message encrypted with ux.
5. The trusted agent validates the message by decrypting it with vx, thereby proving the claim.
6. The trusted agent receives a check or cash and signs, using vx, a receipt message which is subsequently validated (using ux) and held by the rewarding agent.
7. The ability to produce the signed receipt proves that the rewarding agent has paid the reward to the author's agent.
8. The trusted agent cashes the reward check, deducts its service charge and fees, files all required documents (e.g., tax information), and finally pays the balance to the contributor.

Since the contributor picks the trusted agent (e.g., a lawyer or banker), no one except the agent knows the identity of the contributor, maintaining anonymity. Issues P1 and P2 are resolved by the signature of the agent in step 6: if the rewarding agent can show a signed receipt, the reward was distributed; otherwise it was not.

3.2. Reward Distribution without a Trusted Intermediary

In the previous section, the role of the intermediary is to shield the author from having to deal directly with the outside world, thus maintaining her anonymity. However, anonymous reward distribution does not require the use of a trusted intermediary. Anonymity can also be maintained by giving the contributor partial control over the reward claim process (i.e., incorporating the contributor into the reward anonymity chain).

Assume that the rewards are distributed through a trusted third party, such as a bank. The reward funds are transferred to the bank along with the reward allocations and the necessary authentication information. The rewards can then be placed in individual accounts which can be accessed by the participants. To claim a reward, participants must know the reward location (i.e., bank name and account number) as well as any special access information (i.e., password or account PIN). Using the private message capability of the URA (see Appendix B), this information can be transmitted securely to the author. Although this information is extremely important, it is not sufficient to ensure that the author is the person claiming the reward. After all, someone within the bank must also have this information so they can process the claim. Again, the critical factor proving authority to claim the reward is the privately held, message-specific private key, vM.
Since the author is the only individual who has access to the private key corresponding to the posted public key, restricting payment to those who can produce the appropriate key prevents unauthorized access to rewards.

This message-specific information transforms the logical distribution chain into a physical distribution chain. In the above example, the logical distribution chain routes the reward from the rewarding company to a rewarding agent, which passes it through a bank account and finally to the author. The instructions sent to the author identify the specific rewarding agent and account number where the reward can be claimed. Limiting the claim site to a single location would allow easy monitoring and would increase the risk that the individual claiming the reward loses his/her anonymity. Fortunately, distributing the reward through a bank does not restrict the physical distribution channel to a single site. Most banks have multiple branches, and in some cases these branches may be distributed across multiple states or even countries. With sufficient proof of authority, the reward account could be accessed at any of these sites.

Another possibility is to allow the rewarded group to act as the rewarding agent. Under this scenario, the rewarding company is only responsible for funding the rewards, not their distribution. The rewarding agent would make the reward funds available to the rewarded group, say by depositing this amount into a bank of the rewarded group's choosing. The rewarding agent could even write a check to the group, which then cashes it and deposits the funds into a bank of its own choosing (thus keeping the location secret from the rewarding agent). In the case where participant identities are known, requiring all participants to jointly endorse the check greatly reduces the risk of a misappropriation of funds. All participants would then know the magnitude of the reward pool and can personally ensure that the funds are deposited into the reward account. This option is not available when the identities of participants are not known. In this case some intermediate agent (e.g., an independent lawyer) could handle the funds transfer to the distributing agent. After the session has ended, the rewarded group would go through the allocation process, assigning rewards to specific contributions. The allocations and the appropriate message decryption keys are then provided to the distributing agent, who uses them to distribute the rewards. This approach prevents the rewarding agent from applying pressure on the distributing agent to gain information concerning the identity of the reward recipients, since the identity of the distributing agent is hidden from the rewarding agent.
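The claim step that Sections 3.1 and 3.2 both rely on – prove possession of the privately held key, then leave behind a verifiable receipt that settles P1 and P2 – can be sketched as follows. This is an illustration only, assuming the pyca/cryptography Python package; the challenge, reward amount, and roles are hypothetical, and RSA-OAEP/PSS stand in for the paper's abstract key notation.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# (uM, vM): the message-specific pair posted with the contribution; vM never leaves the
# claimant (the author, or a trusted agent acting on the author's behalf).
v_M = rsa.generate_private_key(public_exponent=65537, key_size=2048)
u_M = v_M.public_key()

# Step 1 (distributing agent): issue a random challenge readable only with vM.
challenge = os.urandom(32)
sealed_challenge = u_M.encrypt(challenge, OAEP)

# Step 2 (claimant): answer the challenge, proving possession of vM without revealing it.
assert v_M.decrypt(sealed_challenge, OAEP) == challenge

# Step 3 (claimant): sign a receipt for the payout with vM.
receipt = b"Reward of $500 for contribution #3 received in full."
receipt_sig = v_M.sign(receipt, PSS, hashes.SHA256())

# Step 4 (rewarding agent): verify and file the receipt. Producing (receipt, receipt_sig)
# later proves the reward was paid (P1); inability to produce it supports a claim of
# non-payment (P2).
try:
    u_M.verify(receipt_sig, receipt, PSS, hashes.SHA256())
    print("receipt verified and archived")
except InvalidSignature:
    print("invalid receipt; payout withheld")
```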
3.3. Reward Distribution using Digital Money

The most straightforward way of providing performance-based rewards is to use digital money (O'Mahony, 1997) delivered using the untraceable return address (URA) scheme previously introduced. Conventional financial instruments (e.g., cash, money orders, checks, stocks, and bonds) depend on physical documents to indicate ownership of a resource. A change in ownership is not completed until this physical documentation is transferred to the new owner. In contrast, digital money (DM) does not depend on or require any physical instantiation; it is simply a string of characters, and as such can be easily exchanged electronically.

There are two general classes of digital money. The first is a closed system, which requires the recipient to verify with the DM provider that the digital money is valid before it can be accepted. This is the digital equivalent of the conventional check and debit card system. Digital providers which use this scheme include CyberCash, DigiCash and eCash. The second approach, utilized by Mondex (Ives and Earl, 1997), supports an open system which acts more like conventional money. Using public key technologies, it provides a secure, point-to-point transfer of funds, eliminating the need for on-line authentication of funds.

As in the previous sections, assume that the group is holding an anonymous meeting, but this time rewards are distributed through a Mondex-type digital money scheme. Participant X posts a valuable contribution M (including a URA) through an anonymity server (i.e., (M, URAM)). Recall that the URA includes a public key uM2 which can blind a response such that only participant X can read it. Once reward amounts are allocated, the rewarding agent converts the designated reward to digital tokens dtM through Mondex. The Mondex system provides the authentication of the funds and the secure transfer to the author. The untraceable return address scheme can be used to prevent monitoring of the message traffic, thereby maintaining contributor anonymity throughout the contribution and reward distribution processes. The author can subsequently convert the digital token into hard currency through any individual or organization which supports the Mondex system.

3.4. Legal Implications of Anonymous Reward Distribution

There are legal aspects that must be addressed with any anonymous reward distribution, principally the tax implications of anonymous disbursements. Taxing bodies are mandated to collect all such taxes as dictated by the legislature. In the United States, the 16th Amendment to the Constitution gives Congress the authority "to lay and collect
taxes on income, from whatever source derived." The Internal Revenue Service (IRS) is charged with the collection side of this mandate. To ensure compliance with the tax law, the IRS has set up very specific regulations and reporting requirements to provide the ability to track monetary transactions and cross-check reporting compliance. In the United States, rewards fall under the definition of gross income (USTR, 1995; USTR, 1995) and as such must be reported to the government, normally by both the payer and recipient. Reporting guidelines require the identification of the recipient (through a social security number) and the amount distributed. Any attempt to distribute anonymous rewards (i.e., where the recipients are not identified with their social security numbers) would likely be viewed as an attempt to avoid taxation and be met with severe fines and penalties.4

A second legal issue relates to the potential existence of liens and levies against an individual's income. For example, the IRS can require an employer to garnish an employee's wages to address unpaid taxes (IRS, 1995). In some states, wages can be garnished to satisfy unpaid child support. Under such circumstances, fully anonymous disbursements may not be legal, for they would circumvent the satisfaction of legal claims against the recipient's income.

Anonymous payments are legal in certain situations. Crime Stoppers, a 501(c)(3) non-profit corporation, can anonymously pay up to $1,000 for information leading to the arrest of a criminal. This information can be submitted anonymously, and Crime Stoppers guarantees anonymity by never requiring the identity of the individual, even when the reward is collected. At the time the original information is provided, the informant is given a secret code number. When claiming the reward, presentation of this code number identifies the individual as the one who provided the valuable information. In a separate context, the tax code allows an organization to distribute gambling winnings up to $1,000 without having to provide identification (USTR, 1995). In both cases the IRS views these payments as taxable income, but has exempted the distributing company from having to report the identity of the recipient. These are special exemptions, and they are not directly applicable to a for-profit corporation wanting to provide performance-based rewards in an anonymous environment.
4. This analysis is based on correspondence with IRS tax specialists and a review of U.S. tax code and publications. Every attempt has been made to be accurate, and the mechanisms proposed are presented to meet both the spirit and letter of the law. A given mechanism's legality will depend on local tax law, which varies throughout the world. Hence, individuals and groups are advised to consult a tax authority before implementing any of these mechanisms.
Thus, in general, there must be some way to enforce the proper reporting of reward payments to the tax authority. We have already discussed using a trusted intermediary as a means to authenticate authorship. It is possible to expand this intermediary's role to include the reporting of appropriate tax information. For example, consider using a bank as the distributing agent. For accounting and tax purposes, the rewarding organization can identify the distributing agent. Acting as the distributing agent, the bank would have the same reporting requirements as the rewarding organization5, and so must be informed of any and all liens which have been served against any of the potential recipients. The distributing agent would identify the claimant, validate claims of authorship, comply with known liens against the claimant, report the reward to the appropriate tax authorities, and finally distribute the balance of the reward to the claimant. The benefit of this approach is that the independent distributing agent shields the recipient's identity from the rewarding organization.

If a trusted intermediary is not used, there is no identifiable recipient that can be held responsible for taxes due. This could be addressed (subject to local tax laws) by having the rewarding agent withhold and pay the maximum marginal tax from the reward prior to distribution. Such an approach would not be legal in the United States, for the payment of another's taxes represents taxable income for that individual and must be reported (USTR, 1995). Thus, this is only an option when the taxing authority is not concerned with tracking and cross-checking money flow, but simply with the collection of the tax revenues. In environments where such an approach is legal, the mechanism can be extended to cover cases where participants are subject to different tax liabilities. It would be possible to withhold the maximum marginal tax for each taxing authority, although this would significantly reduce the net reward. This approach allows the distribution process to be anonymous while ensuring that all tax liabilities are satisfied.

Finally, some individuals may feel that their contributions have been undervalued in comparison to the other rewards, while those receiving the larger rewards may feel self-conscious about the relative magnitude of their reward.
5. The tax codes specifically indicate that a bank acting as a distributing agent for a payer must report the payer's name (i.e., the rewarding company) and include the recipient's social security number (USTR, 1990).
4. Summary and Conclusions

This paper looks at the anonymous, performance-based reward distribution problem – how to reward individuals in an anonymous environment based on their individual contribution to the group effort. It is shown that public key encryption schemes and digital signatures can be used to maintain the anonymity of participants while providing incentives to express innovative concepts and ideas.

The paper focuses on two facets that impact the dynamics of a group – the ability to act anonymously and the ability to motivate participation through appropriate incentives. The literature has shown that, individually, these factors can improve the effectiveness of a group. In the group decision support literature, anonymity has been shown to lead to a more open discussion and result in a more critical analysis of the issues. In the psychology and labor economics literature, appropriate incentives have been found to motivate individuals to cooperate and share valuable information with the group. This work is important because it demonstrates for the first time that it is possible to combine these two mechanisms, which have individually been found to be beneficial in group decision making. Failure to provide such an environment can reduce participant effort and lead to various process losses (Barua et al., 1995).

This work identifies factors which are critical to maintaining the integrity of the system. First, it must be possible for the rewarding company to prove that rewards have been distributed. Second, participants must have the ability to prove non-distribution of rewards. Specific mechanisms are presented which address these critical areas without breaching anonymity.

The support of anonymous rewards has important legal consequences, the chief one being the associated tax implications of anonymous reward distribution. U.S. tax regulations currently require that disbursements to individuals be reported to the IRS, with the recipient identified by his/her social security number. We show that if rewards are distributed through an intermediate agent, the required documentation can be generated, since the identity of this intermediate agent is known to the rewarding company and the agent knows the identity of the reward recipient. When an intermediate agent is not used, the required tax documentation cannot be completed. If local tax laws permit, the maximum marginal tax could be withheld and submitted to cover the tax liability. Although this may dramatically reduce the net reward received by the contributor, it serves the dual purpose of permitting anonymous reward distribution while ensuring that the tax liabilities are satisfied.
The ability to support anonymous, performance-based rewards is an important, counter-intuitive result. Future work is needed to see how these principles can be implemented and to determine how they impact individual and group performance. Work is currently under way examining the economic and productivity implications of using performance-based rewards in an anonymous environment (Gavish and Kalvenes, 1996).
Appendix A: Encryption Systems and Digital Signatures

Encryption is the primary mechanism used to maintain secrecy and confidentiality in modern communications. Two basic types of encryption algorithms exist, namely single key and dual key encryption.6 Single key (also known as secret key) encryption uses the same key (kx) to encrypt and decrypt a message. Dual key encryption uses two different keys, an encryption key (ex) and the corresponding decryption key (dx). In general the roles of the two keys cannot be reversed – the message must always be encrypted with ex and decrypted with dx. Reversing the roles will not yield an intelligible result (see Figure 7).
Single Key:            kx[kx[M]] = M
Dual Key:              dx ≠ ex;  dx[ex[M]] = M;  ex[dx[M]] ≠ M
Symmetric Dual Key:    ux ≠ vx;  ux[vx[M]] = vx[ux[M]] = M
Figure 7. Illustration of single, dual and symmetric dual key encryption schemes.
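The single and dual key relations of Figure 7 can be demonstrated directly. The sketch below is illustrative only: it assumes the pyca/cryptography Python package, with Fernet standing in for a single (secret) key scheme and RSA-OAEP for a dual key scheme. Note that with padded RSA the 'encrypt with the private key' direction of the symmetric dual key scheme is realized in practice as signing rather than literal encryption, so only the public-encrypt/private-decrypt direction is shown.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

M = b"a plain text message"

# Single key: the same key kx both encrypts and decrypts, so kx[kx[M]] = M.
k_x = Fernet.generate_key()
assert Fernet(k_x).decrypt(Fernet(k_x).encrypt(M)) == M

# Dual key: the encryption key ex (public) and decryption key dx (private) differ,
# and only dx[ex[M]] = M holds.
OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
d_x = rsa.generate_private_key(public_exponent=65537, key_size=2048)
e_x = d_x.public_key()
assert d_x.decrypt(e_x.encrypt(M, OAEP), OAEP) == M
```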
An important special case of the dual key system is the symmetric dual key system. In a symmetric dual key scheme, each key can decrypt messages encrypted by the other key. The symmetric dual key scheme is also referred to as public key encryption. This name stems from the practice of making one key publicly available, while the other key is privately held by the owner. To differentiate this scheme from the dual key, we use ux to represent the public key and vx to represent the private key. To communicate privately, a message is encrypted with the recipient's public key. The resulting packet can then be openly transmitted, since decryption requires the private key known only to the intended recipient, thus ensuring the privacy of the transmission. Let M represent a plain text message, and ux and vx represent the public and private keys (respectively) of entity X. The relationship between the plain text message and the encrypted cyphertext message under the various schemes is given in Figure 7.

6. This short introduction to encryption schemes is provided because of the important role cryptography plays in anonymous communication. The concepts are presented only in sufficient detail to provide a basis for the remainder of the paper. The interested reader is directed to (Brauer et al., 1990; Denning, 1982; Muftic, 1989) for a more complete treatment.

Digital Signatures

Digital signatures (utilizing public key encryption technology) provide a mechanism that verifies the authorship of a message and authenticates that it has not been altered in transit (Chaum, 1981; U.S. DOC, 1994). Analogous to an ink signature on a paper document, a digital signature uniquely identifies the author of the document to which it is attached. At the time of submission, the author appends to the message a digital signature derived from the original message and the author's private key. In principle, this signature could simply be the encrypted version of the message: any individual can then use the author's public key to decrypt the signature and compare it with the original message. If they are identical, the message is authenticated and authorship confirmed. In practice, a shorter version of the message (referred to as the message digest) is used instead of the whole message. The message digest is generated from the original message using a public hashing algorithm. The authenticator thus compares the decrypted signature to the result obtained by passing the message through this publicly available algorithm. If the results are identical, the message has been authenticated. Two well-known digital signature systems are the RSA scheme (ElGamal, 1985; Rivest et al., 1978) and the U.S. Digital Signature Standard (DSS) (U.S. DOC, 1994).

Appendix B: The Untraceable Return Address Protocol

Public key encryption makes it possible to communicate privately with another individual by encrypting the message with the recipient's public key. But what if you want to correspond privately with someone who has posted a message anonymously? Since the individual is not known, it is not known which public key to use. Also, how do you get the message to the individual without knowing their address? This is the type of problem solved by Chaum's untraceable return address protocol (Chaum, 1981), illustrated in Figure 8. Assume
that user Y wants to reply to an anonymous posting made by user X. Each transmission consists of four components, namely an address indicating the immediate destination of the message, a message packet readable only by the intended recipient, an encryption key to be used with any reply, and an encrypted return address readable only by the anonymity server.

In Step a, user X attaches an untraceable return address (URA) to her original message. The URA consists of a message-specific encryption key and the user's real address, all encrypted with the anonymity server's public key. Appended to this is a second message-specific encryption key. Thus the URA is

URA ≡ uAS[uM1, Ax], uM2

where uAS is the public key of the anonymity server, Ax is the real return address of user X, and uM1, uM2 are message-specific public keys chosen by user X. Also, let uS represent the session key held by all participants, and RM represent the response to message M.

[Figure 8 schematic: the six steps a-f of the exchange between user X, the anonymity server, and user Y over the communication bus, using URAM ≡ uAS[uM1, Ax], uM2 for the original message and URAR ≡ uAS[uM3, Ay], uM4 for the reply.]
Figure 8. Protocol to support Y’s untraceable reply to an anonymous posting by user X.
In Step b, the anonymity server strips off the user's header and broadcasts the message. User Y receives and decrypts the message in Step c, yielding the message and the URA. To reply, user Y uses the second public key (uM2) to encrypt the message and uses the URA as a blind address. As shown in Step d, user Y can also include a URA of his own so the original author can continue the anonymous dialog. In Step e, the anonymity server strips off the reply's header and decrypts the original message's URA, yielding user X's true address and the first public key (uM1). The anonymity server encrypts the reply with this key and transmits the message to user X. Finally, in Step f, user X receives
and decrypts the message using the private keys vM1 and vM2, yielding the reply along with the responder's URA.
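A compact sketch of the reply flow (Steps a, d, e, and f) may help make the protocol concrete. This is a simplified illustration rather than Chaum's exact construction: it assumes the pyca/cryptography Python package, uses a hybrid RSA-OAEP/Fernet 'seal' in place of the abstract u[...] encryption, models delivery as direct function calls, and omits the broadcast and header handling of Steps b and c. The address node-17, message text, and reward wording are hypothetical.

```python
import base64, json
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

def seal(pub, data):
    """Hybrid stand-in for u[data]: an RSA-wrapped Fernet key plus the Fernet ciphertext."""
    k = Fernet.generate_key()
    return base64.b64encode(pub.encrypt(k, OAEP)) + b"." + Fernet(k).encrypt(data)

def unseal(priv, blob):
    wrapped, payload = blob.split(b".", 1)
    return Fernet(priv.decrypt(base64.b64decode(wrapped), OAEP)).decrypt(payload)

def keypair():
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return priv, priv.public_key()

def pem(pub):
    return pub.public_bytes(serialization.Encoding.PEM,
                            serialization.PublicFormat.SubjectPublicKeyInfo)

# Keys: the anonymity server's pair and X's message-specific pairs (uM1, vM1) and (uM2, vM2).
v_AS, u_AS = keypair()
v_M1, u_M1 = keypair()
v_M2, u_M2 = keypair()

# Step a: X builds URA_M = (uAS[uM1, Ax], uM2) and posts it with her message M.
A_x = "node-17"   # X's real network address (hypothetical)
blind_address = seal(u_AS, json.dumps({"u_M1": pem(u_M1).decode(), "A_x": A_x}).encode())
posting = {"M": "anonymous comment", "blind_address": blind_address, "u_M2": pem(u_M2)}

# Step d: Y seals a reply R_M under uM2 and hands it, with the blind address, to the server.
R_M = b"Your comment has been allocated a $500 reward; instructions follow."
reply_for_X = seal(serialization.load_pem_public_key(posting["u_M2"]), R_M)

# Step e: the server opens the blind address, learns Ax and uM1, and re-wraps the reply
# with uM1 so that traffic between Y and X cannot be linked by an outside observer.
inner = json.loads(unseal(v_AS, posting["blind_address"]))
delivery = seal(serialization.load_pem_public_key(inner["u_M1"].encode()), reply_for_X)
print("anonymity server delivers to", inner["A_x"])

# Step f: X peels both layers with vM1 and then vM2 to recover the reply.
print(unseal(v_M2, unseal(v_M1, delivery)).decode())
```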
References

Banker, 'Out of Control', Banker, Vol. 145, No. 834, August 1995, pp. 15-16.
Barua, A., Lee, C. H. S., Whinston, A. B., 'Incentives and Computing Systems for Team-based Organizations', Organization Science, Vol. 6, No. 4, July-August 1995, pp. 487-504.
Bierhoff, H. W., Buck, E., and Klein, R., 'Social Context and Perceived Justice', in Bierhoff, H. W., Cohen, R. L., and Greenberg, J., Justice in Social Relations, 1986, New York: Plenum Press, pp. 165-185.
Brams, S., Fishburn, P., Approval Voting, Birkhauser, Boston, Mass., 1983.
Brauer, W., Rozenberg, G., and Salomaa, A. (Eds.), 'Public-Key Cryptography', EATCS Monographs on Theoretical Computer Science, Springer-Verlag, Berlin, 1990.
Chaum, D., 'Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms', Communications of the ACM, Vol. 24, No. 2, February 1981, pp. 84-87.
Clark, M. S., 'Record Keeping in Two Types of Relationships', Journal of Personality and Social Psychology, Vol. 47, 1984, pp. 549-557.
Denning, D. E. R., Cryptography and Data Security, Reading, MA, Addison-Wesley, 1982.
Denning, D. E., 'The Clipper Encryption System', American Scientist, Vol. 81, July-August 1993, pp. 319-322.
Dennis, A. R., Nunamaker, Jr., J. F., and Vogel, D. R., 'A Comparison of Laboratory and Field Research in the Study of Electronic Meeting Systems', Journal of Management Information Systems, Vol. 7, No. 3, Winter 1990-91, pp. 107-135.
Dummett, M., Voting Procedures, Clarendon Press, Oxford, 1984.
ElGamal, T., 'A Public-Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms', IEEE Transactions on Information Theory, Vol. IT-31, 1985, pp. 469-472.
Engstrom, Y., Engstrom, P., and Saarelma, M. D., 'Computerized Medical Records, Production Pressure, and Compartmentalization in the Work Activity of Health Center Physicians', Proceedings of the Conference on Computer Supported Cooperative Work, Portland, Oregon, September 1988, pp. 65-83.
Felsenthal, D. S., Topics in Social Choice, Sophisticated Voting, Efficacy, and Proportional Representation, Praeger, New York, 1990.
FIPS PUB 186, 'Federal Information Processing Standards Publication – Digital Signature Standard (DSS)', U.S. Department of Commerce, Technology Administration, National Institute of Standards and Technology, U.S. Government Printing Office, Washington, D.C., 1994.
Gavish, B., and Gerdes, Jr., J., 'CM3, Voting Mechanisms and their Implications', Annals of OR, Vol. 71, 1997, pp. 41-74.
Gavish, B., and Gerdes, Jr., J., 'Anonymity Mechanisms in Group Decision Support Systems Communication', Decision Support Systems, Vol. 23, 1998, pp. 297-328.
Gavish, B., Gerdes, J., and Sridhar, S., 'CM3, Looking into the Third and Fourth Dimensions of GDSS', in Integration: Information and Collaboration Models, NATO ASI Series, 1994, pp. 269-299.
Gavish, B., Gerdes, J., and Sridhar, S., 'CM3, A Distributed Group Decision Support System', IIE Transactions, Vol. 27, 1995, pp. 722-733.
Gavish, B., and Kalvenes, J., 'Anonymous Rewarding in Group Decision Support Systems and Its Implications', working paper, Owen Graduate School of Management, Vanderbilt University, 1996.
Greenberg, J., and Folger, R., 'Procedural Justice, Participation, and the Fair Process Effect in Groups and Organizations', in Paulus, P. (Ed.), Basic Group Process, New York, Springer-Verlag, 1983, pp. 235-256.
Heider, F., Psychology of Interpersonal Relationships, Wiley, NY, 1958.
Herdman, R. C. (Director), Information Security and Privacy in Network Environments, Office of Technology Assessment, Congress of the United States, September 1994.
Hogarty, D. B., 'New Ways to Pay', Management Review, Vol. 83, No. 1, January 1994, pp. 34-36.
Internal Revenue Service, Your Federal Income Tax for Individuals, Department of the Treasury, Publication 17, 1994.
Internal Revenue Service, Federal and State Gift Tax, Department of the Treasury, Publication 448, 1995.
Internal Revenue Service, Understanding the Collection Process, Department of the Treasury, Publication 594 (Rev. 1-95), Catalog Number 46596B, 1995.
Internal Revenue Service, Code Section 1099 Instructions, Publication 1099, 1995.
Ives, B., and Earl, M., 'Mondex International: Reengineering Money', London Business School, CRIM CS97/2, http://mis.uis.edu/ecomm2/mondex case/mondex.html.
Jessup, L. M., Connolly, T., and Tansik, D. A., 'Toward a Theory of Automated Group Work, The Deindividuating Effects of Anonymity', Small Group Research, Vol. 21, No. 3, August 1990, pp. 333-348.
McLeod, D., 'Federal Investigation Targets Leasing Firm', Business Insurance, Vol. 28, No. 9, February 29, 1994, pp. 3, 6.
McMillan, E. J., 'Is your Association Ripe for Embezzlement?', Association Management, Vol. 47, No. 3, March 1995, pp. 34-38.
Muftic, S., Security Mechanisms for Computer Networks, Ellis Horwood Limited, Chichester, England, 1989.
Nelson, B., 'Rewarding People', Executive Excellence, Vol. 11, No. 10, October 1994, pp. 11-12.
Nunamaker, J. F., Applegate, L. M., Konsynski, B. R., 'Facilitating Group Creativity: Experience with a Group Decision Support System', Journal of Management Information Systems, Vol. 3, 1987, pp. 5-19.
Nunamaker, J. F., Applegate, L. M., Konsynski, B. R., 'Computer-aided Deliberation: Model Management and Group Decision Support', Operations Research, Vol. 36, No. 6, 1988, pp. 826-848.
Nunamaker, J. F., Dennis, A. R., Valacich, J. S., Vogel, D. R., and George, J. F., 'Electronic Meeting Systems to Support Group Work', Communications of the ACM, Vol. 34, No. 7, July 1991, pp. 40-61.
O'Mahony, D., Peirce, M., and Tewari, H., Electronic Payment Systems, Artech House, 1997.
Quittner, J., 'Unmasked on the Net', Time, New York, March 6, 1995, pp. 72-73.
Rivest, R. L., Shamir, A., and Adleman, L., 'A Method for Obtaining Digital Signatures and Public-Key Cryptosystems', Communications of the ACM, Vol. 21, No. 2, February 1978, pp. 120-126.
Sagan, K., Pondel, M., Wittig, M. A., 'The Effect of Anticipated Future Interaction on Reward Allocation in Same- and Opposite-Sex Dyads', Journal of Personality, Vol. 49, 1981, pp. 438-449.
Schneier, B., Applied Cryptography, Second Edition, John Wiley & Sons, Inc., New York, 1996.
Shapiro, E. G., 'Effects of Expectations of Future Interaction on Reward Allocations in Dyads: Equity or Equality', Journal of Personality and Social Psychology, Vol. 31, 1975, pp. 873-880.
United States Tax Reporter, Research Institute of America, Inc., New York, Regulation 1.61-1, §61, ¶611, August 17, 1995, p. 13,021.
United States Tax Reporter, Research Institute of America, Inc., New York, Regulation 1.61-2, §61, ¶612, August 17, 1995, pp. 13,021-13,024.
United States Tax Reporter, Research Institute of America, Inc., New York, Regulation 1.61-14, §61, ¶612.14, August 17, 1995, p. 13,038, 1996.
United States Tax Reporter, Research Institute of America, Inc., New York, §3402, ¶34,025.26, August 17, 1995, p. 57,280.
United States Tax Reporter, Research Institute of America, Inc., New York, §6049, ¶60,495.01, August 17, 1995, pp. 61,670-61,673.
Valacich, J. S., Jessup, J. M., Dennis, A. R., and Nunamaker, Jr., J. F., 'A Conceptual Framework of Anonymity in Group Support Systems', Group Decision and Negotiation, Vol. 1, 1992, pp. 219-241.
Valacich, J. S., and Tansik, D. A., 'Decision Making in an Automated Environment: The Effects of Anonymity and Proximity with a Group Decision Support System', Decision Sciences, Vol. 22, 1991, pp. 266-279.
von Grumbkow, J., Deen, E., Steensma, H., and Wilke, H., 'The Effect of Future Interaction on the Distribution of Rewards', European Journal of Social Psychology, Vol. 6, 1976, pp. 119-123.
Watts, B. L., and Messe, L. A., 'The Impact of Task Inputs, Situational Context, and Sex on Evaluations of Reward Allocators', Social Psychology Quarterly, Vol. 45, 1982, pp. 254-262.