Studying Distributed Collaborations using the Resource Allocation Negotiation Task (RANT)

Vincent F. Mancuso1, Victor Finomore2, Gregory Funke2, Benjamin Knott2

1 Oak Ridge Institute for Science and Education, [email protected]
2 Air Force Research Lab, Wright-Patterson Air Force Base, Ohio
{victor.finomore,gregory.funke,benjamin.knott}@wpafb.af.mil

Abstract. When conducting team research, the experimental task can have major effects on the interpretation and overall success of the study. While some researchers choose to use high-fidelity tasks to simulate an operating environment, an alternative perspective uses more simplified, metacognitive tasks. The purpose of this paper is to present a new metacognitive task, the Resource Allocation Negotiation Task (RANT), which aims to elicit complex and rich collaborations between distributed team members.

Keywords: Distributed teams, consensus tasks, collaboration

1 Introduction

With an increased reliance on distributed teams in the workplace, understanding the cognitive and collaborative processes of such teams has become a critical need. To develop effective training mechanisms, interventions, and collaborative support systems, we must first achieve a holistic and complete understanding of the collaborative and cognitive work of these teams. In much of this research, it is not uncommon to see experiments rely on synthetic task environments to simulate the complexities and context of the target domain. While these types of tasks help maintain ecological validity, they also introduce numerous confounds, making interpretation of the results more difficult. To account for this, researchers can utilize metacognitive tasks to hone in on the collaborative and cognitive demands of the task. In metacognitive tasks, the context and complexities are stripped away, and greater emphasis is placed on replicating critical elements of the task- and team-work within the environment. Metacognitive tasks such as group consensus tasks [1-3], who-done-it murder mysteries [4], group recall [5], and collaborative assembly [6] have been successfully used to isolate various aspects of team cognitive behavior without the extra confounds of a complex target environment.

In this paper, we propose a new metacognitive task, the Resource Allocation Negotiation Task (RANT), designed to simulate team decision making in complex environments. In the following sections, we present an overview of the theoretical development of RANT, an initial evaluation, and finally a discussion of future research directions.

adfa, p. 1, 2011. © Springer-Verlag Berlin Heidelberg 2011

2 Resource Allocation Negotiation Task

2.1 Group Consensus Tasks

Group consensus tasks are a popular type of metacognitive task and have been used in numerous team research studies. In group consensus tasks, teams are given a fuzzy problem and asked to come to an agreement on the answer. Among the more popular variations of this task type are group survival tasks such as “Lost in the Desert” [1], “NASA Lost on the Moon” [2], and “Lost at Sea” [3]. These tasks have been used widely in team research, with various foci such as designing collaborative systems [7] and training interventions [3,8], and studying the impact of group goals and time pressure [9], cognitive resource theory [10], interpersonal behaviors [11], cognitive motivation [12], and leadership [13], to name a few. Group consensus tasks afford rich group discussions that require information sharing, negotiation, and team decision making. These critical elements make group consensus tasks an ideal platform for team cognition research.

2.2 Overview of Task

Building on previous work, we aimed to develop a new experimental task to enable basic team cognition research. In addition to affording the same rich collaborations, we wanted to develop a task that was more flexible for the purposes of experimental research. In pursuit of this objective, we developed the Resource Allocation Negotiation Task (RANT). RANT is a metacognitive task that combines aspects of strategic decision making and group consensus. In RANT, teams assign resources across a set of targets based on specific criteria. It is up to the group to come to an agreement on how many resources each target requires, as targets have varying costs and/or demands. Like previous group consensus tasks, in RANT teams must negotiate their individual perceptions to come to an agreement on a final team answer. While seemingly simple, the true utility of RANT comes from its flexibility. At its base, RANT is a simple consensus bidding task. Upon this base, however, more complexities and manipulations can be introduced. Compared to previous group consensus tasks, RANT offers more extensibility, as well as greater data collection opportunities.
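To make the task structure concrete, the core of a RANT trial can be sketched as a small data structure. This is our own illustrative sketch, not the authors' implementation; the names (`RantTask`, `record_consensus`) and the budget-checking rule are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class RantTask:
    """Illustrative sketch of one RANT trial (hypothetical structure)."""
    items: List[str]               # the targets to be priced / allocated to
    budget: Optional[int] = None   # finite-resources pool; None = unlimited
    # The team's consensus allocation for each item
    bids: Dict[str, int] = field(default_factory=dict)

    def record_consensus(self, item: str, amount: int) -> None:
        """Record the group's agreed allocation, enforcing any finite pool."""
        if self.budget is not None:
            # Total already committed to other items
            spent = sum(self.bids.values()) - self.bids.get(item, 0)
            if spent + amount > self.budget:
                raise ValueError("allocation exceeds the finite resource pool")
        self.bids[item] = amount
```

Under this framing, the finite vs. unlimited manipulation reduces to whether `budget` is set, which is one way the task stays easy to extend with further constraints.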

3 Initial Evaluation

3.1 Goals of Evaluation

In order to evaluate the utility of RANT, we designed and conducted an initial evaluation of various aspects of the task. Specifically, our goals were twofold: (1) to assess two different task manipulations, and (2) to ensure that the content of the task has minimal effect on performance.

3.2 Experimental Design

To evaluate the utility of RANT in facilitating distributed team collaborations, we conducted an initial pilot study using a 2 x 2 within-subjects design with independent variable manipulations for task type (finite vs. unlimited resources) and item set (item set 1 and item set 2). To manipulate task type, we varied whether or not the participants had a finite set of resources they could allocate. In the finite resources condition, participants were given a fixed pool of money they could allocate, while in the unlimited resources condition there was no such constraint. The item set manipulation was a variation on two unique sets of 15 items. The two lists were balanced to have similar total prices ($610 and $670) and items with similar prices.
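For clarity, the four cells of the design can be enumerated in a short sketch; the condition labels below are our own shorthand, not taken from the study materials:

```python
from itertools import product

# Our shorthand labels for the two independent variables
task_types = ["finite", "unlimited"]   # resource-pool manipulation
item_sets = ["set_1", "set_2"]         # the two price-balanced 15-item lists

# Full 2 x 2 crossing. Each team completes only two sessions, so pairings
# of task type and item set would presumably be balanced across teams so
# that all four cells are observed.
cells = list(product(task_types, item_sets))
print(len(cells))  # 4
```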

3.3 RANT Task

For the initial evaluation, participants received two sets of 15 unique items and were tasked with coming to a group consensus on the price of each item (rounded to the nearest dollar). In keeping with the theme of previous group consensus tasks (e.g., lost in the desert, lost in the wilderness), each of the 15 items related to camping or survival. This decision was made because camping equipment (e.g., backpacks, tents) was thought to be accessible enough for typical participants, but foreign enough to elicit discussion and disagreement.

3.4 Participants

For this evaluation, 16 participants were recruited from available Air Force personnel and a subject pool. Each session included 4 participants, representing 1 team in the final data set (4 teams total). During each experimental session, teams completed two unique RANT tasks across the two within-subjects conditions.

3.5 Measures

Performance was calculated using an average price ratio across all items for each of the conditions. The price ratio provides a normalized score, calculated as a percentage, based on the difference between the group's consensus price and the actual price of the item.
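The exact formula for the price ratio is not given in the text. One plausible form, in which an item scores 100% for an exact price match and loses a percentage point per percent of absolute error (floored at zero), might look like:

```python
from typing import Iterable, Tuple

def price_ratio(consensus_price: float, actual_price: float) -> float:
    """Normalized accuracy for one item, as a percentage (assumed form:
    100% when the consensus matches the actual price, decreasing with
    the absolute relative error; floored at 0)."""
    error = abs(consensus_price - actual_price) / actual_price
    return max(0.0, 1.0 - error) * 100.0

def performance(consensus_actual_pairs: Iterable[Tuple[float, float]]) -> float:
    """Average price ratio across all items in a condition."""
    ratios = [price_ratio(c, a) for c, a in consensus_actual_pairs]
    return sum(ratios) / len(ratios)

# Example: a team bids $50 on a $100 item and $90 on a $100 item
print(performance([(50, 100), (90, 100)]))  # 70.0
```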

In addition to performance, communications were captured and analyzed based on the total number of communications across all participants.

3.6 Procedures

Upon arrival, participants were assigned to a computer terminal. After an initial task briefing, one participant was asked to serve as the recorder and to track the team's final decision for each item. Prior to beginning the group discussion (via chat), each participant was given a paper packet that contained the directions for the task and the list of the 15 items they would be discussing. In the finite resources condition, the directions included a total price they were permitted to spend across all items. Each team participated in two separate RANT sessions.

3.7 Results

Manipulation Check. To ensure that there was no effect of task content on performance, the two item sets were compared as a manipulation check. We found minimal difference in performance between item list 1 (M = 50.90%, SD = 7.64%) and item list 2 (M = 54.85%, SD = 8.60%). Similarly, in a comparison of the total number of communications, we found minimal differences between item list 1 (M = 181.25, SD = 38.39) and item list 2 (M = 184.25, SD = 60.84).

Effect of Task Type. Across the two variations of the task, we found that teams, in general, performed better in the unlimited resources condition (M = 58.56%, SD = 2.44%) than in the finite resources condition (M = 44.65%, SD = 0.48%). On the other hand, while they did not perform as well, teams in the finite resources condition had more communications (M = 206, SD = 44.86) than teams in the unlimited resources condition (M = 159.5, SD = 28.62).

4 Discussion and Future Work

Based on these initial results, we can conclude that RANT offers a viable platform for future team research. While teams communicated more in the finite resources condition, due to the added complexity of the problem, for our own interests we feel that the unlimited resources variation will better support our research goals.

In our future research, we plan to investigate the effects of spearphishing cyberattacks on team cognitive biases. Spearphishing, a popular social-engineering attack, occurs when people are targeted by a message disguised as coming from a trustworthy source. Using RANT, during a team discussion of an item, we will inject seemingly trustworthy communications that suggest the item's price is either higher or lower than its actual value. In this regard, in the unlimited resources variation, each item can become its own unique trial. This will allow us to run a large number of trials and manipulations on the same participant base, as opposed to the finite resources manipulation, where each list would be a single trial.

Acknowledgements. The research conducted by author Vincent Mancuso was supported in part by an appointment to the Postgraduate Research Participation Program at the U.S. Air Force Research Laboratory, 711 Human Performance Wing, Human Effectiveness Directorate, Warfighter Interface Division, Applied Neuroscience Branch, administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and USAFRL.

References

1. Lafferty, J.C., Pond, A.W.: The Desert Survival Situation. Human Synergistics (1974)
2. Yetton, P., Bottger, P.: The relationships among group size, member ability, social decision schemes, and performance. Organizational Behavior and Human Performance 32(2), 145-159 (1983)
3. Nemiroff, P.M., Pasmore, W.A., Ford, D.L.: The effects of two normative structural interventions on established and ad hoc groups: Implications for improving decision making effectiveness. Decision Sciences 7(4), 841-855 (1976)
4. Stasser, G., Stewart, D.D., Wittenbaum, G.M.: Expert roles and information exchange during discussion: The importance of knowing who knows what. Journal of Experimental Social Psychology 31(3), 244-265 (1995)
5. Wegner, D.M.: Transactive memory: A contemporary analysis of the group mind. In: Theories of Group Behavior, pp. 185-208. Springer, New York (1987)
6. Liang, D.W., Moreland, R., Argote, L.: Group versus individual training and group performance: The mediating role of transactive memory. Personality and Social Psychology Bulletin 21(4), 384-393 (1995)
7. Adams, S.J., Roch, S.G., Ayman, R.: Communication medium and member familiarity: The effects on decision time, accuracy, and satisfaction. Small Group Research 36(3), 321-353 (2005)
8. Leshed, G., Hancock, J.T., Cosley, D., McLeod, P.L., Gay, G.: Feedback for guiding reflection on teamwork practices. In: Proceedings of the 2007 International ACM Conference on Supporting Group Work, pp. 217-220. ACM (2007)
9. Durham, C.C., et al.: Effects of group goals and time pressure on group efficacy, information-seeking strategy, and performance. Human Performance 13(2), 115-138 (2000)
10. Murphy, S.E., Blyth, D., Fiedler, F.E.: Cognitive resource theory and the utilization of the leader's and group members' technical competence. The Leadership Quarterly 3(3), 237-255 (1992)
11. Sundstrom, E., Busby, P.L., Bobrow, W.S.: Group process and performance: Interpersonal behaviors and decision quality in group problem solving by consensus. Group Dynamics: Theory, Research, and Practice 1(3), 241 (1997)
12. Scudder, J.N., Herschel, R.T., Crossland, M.D.: Test of a model linking cognitive motivation, assessment of alternatives, decision quality, and group process satisfaction. Small Group Research 25(1), 57-82 (1994)
13. Kickul, J., Neuman, G.: Emergent leadership behaviors: The function of personality and cognitive ability in determining teamwork performance and KSAs. Journal of Business and Psychology 15(1), 27-51 (2000)
