Decision Sciences Journal of Innovative Education © 2015 Decision Sciences Institute, Volume 13, Number 2, April 2015. Printed in the U.S.A.

Please cite as: Heidrich, B., Kasa, R., Shu, W., & Chandler, N. (2015). Worlds Apart But Not Alone: How Wiki Technologies Influence Productivity and Decision-Making in Student Groups. Decision Sciences Journal of Innovative Education, 13(2), 221–246.
EMPIRICAL RESEARCH
Worlds Apart But Not Alone: How Wiki Technologies Influence Productivity and Decision-Making in Student Groups

Balázs Heidrich and Richárd Kása†
Faculty of Finance and Accounting, Institute of Management and Human Resources, Budapest Business School, Budapest, Hungary, e-mail:
[email protected], [email protected]
Wesley Shu Department of Information Management, National Central University, Jung-li City, Taiwan, China, e-mail:
[email protected]
Nick Chandler Faculty of Finance and Accounting, Institute of Languages and Communication, Budapest Business School, Budapest, Hungary, e-mail:
[email protected]
ABSTRACT

Regardless of the size of an organization, collaboration has become a fundamental element of engagement between the organization and its internal and external stakeholders. With the rapid advance of communication technologies and the free flow of information, the concept of collaboration extends beyond physical locations and time zones in the form of globally connected virtual teams. This study considers how modern Web 2.0-based collaborative technologies (wikis) relate to higher decision quality and productivity, and identifies whether these collaborative technologies are better suited to tasks requiring extensive asynchronous collaboration in an educational setting. Controlled experiments involving student teams that worked in technologically and demographically diverse groups showed that wiki technologies do not suit all kinds of tasks, and do not always increase the productivity or decision quality of team collaboration.
Subject Areas: On-Line Collaboration, Web2 Technologies, Wikis.
For the Special Issue on “Multidisciplinary and Collaborative Education.”
† Corresponding author.

INTRODUCTION

The current pace of technological development is forcing both higher education instructors and trainers to keep up with the latest IT innovations in order to transform static learning processes into dynamic cognitive processes.
The driving forces for this transformation are technological innovations (a push effect) and students' increasing need to use technology in every aspect of their lives (a pull effect). Collaborative learning enables users to capitalize on each other's resources, skills, and ideas in an environment in which participants are engaged in a common task and goal, and every individual is dependent on and accountable to the others. Such collaboration can be effectively supported by the use of innovative technology. Despite the acknowledged importance of modern IT innovations in education, it is unclear how these technologies influence learning and problem-solving processes. Studies often investigate the use of technology with a focus on specific tasks and technologies, and are usually inadequate from a methodological and statistical point of view. The intent of this research is to overcome these deficiencies through the use of an extended experiment based on an original concept of Shu and Chuang (2011, 2012) and rigorous statistical methodology. The experiment separates tasks and technologies, and measures their respective fit and the evolution of productivity and decision quality during the solution of case studies under controlled conditions. The study seeks to illuminate the relationship between different types of tasks and technologies from several aspects.
RECENT TRENDS IN WEB-BASED COLLABORATION

Over the last 20 years, the Web has grown from being a group work tool for small teams of scientists at the European Organization for Nuclear Research (CERN) into a global collaboration space of almost three billion users worldwide (ITU, 2014). It is now returning to its origins as a read/write tool, and entering a new, social phase as a participatory tool. This second wave led Darcy DiNucci to coin the term Web 2.0 (or Web2), which refers to Web sites that allow users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community. Web 2.0 technologies enable remarkable interactivity, and create many new collaboration models such as Wikipedia or InnoCentive. What is encouraging about Web 2.0 is not only its popularity but also its idiosyncrasies: simple and parallel editing, version control, and real-time updates (Trkman & Trkman, 2009). This contrasts with the “bottleneck effect” referred to by Bean and Hott (2005), whereby updates are delayed by a centrally managed entry point.

Created by Ward Cunningham in 1995, wikis are Web-based hypertext applications intended for collaborative writing. In addition to writing and viewing their own pages in real time, users of a wiki can see pages others have published, and link to them using hypertext without having to wait for an editor to assemble the various components developed individually on multiple devices. Moreover, during the writing process, content can be immediately displayed to other team members, who can immediately add their own contributions and see others' revisions (Lin, Chuang, & Shu, 2012). McAfee argued that “the technologists of Enterprise 2.0 (e.g. wikis) are trying hard not to impose on users any preconceived notions about how work should proceed or how output should be categorized or structured. Instead, they are building tools that let these aspects of
knowledge work emerge” (McAfee, 2006). Wikis have evolved as a tool capable of matching these characteristics (Lin et al., 2012). Instead of serving as a control centre, a wiki serves as a collaboration and creation platform and a central repository. This makes asynchronous cooperation across time zones possible. Beyond the alleviation of physical constraints, the potential for greater collaboration is further augmented by the use of wikis in three particular ways. First, wikis provide an equal opportunity for all opinions to be heard. As Bean and Hott (2005) commented, rather than the back-and-forth exchanges of e-mail attachments or discussion boards, wikis allow direct exchanges of opinions that are centrally and permanently stored. This enhances the efficiency of the organization through the increased efficiency of teamwork, as shown by recent studies (Hadjerrouit, 2014; Li, 2013; Wang, Zou, Wang, & Xing, 2013). These studies, however, omit many aspects of wiki collaboration, and are lacking methodologically. Second, unlike the blogs or microblogs that are available today, wiki usage allows two-way communication, making their use a dynamic process that closely resembles real-life communication (Zhang, Fang, Wei, & He, 2013). Mattison (2003) pointed out that, in contrast to blogs, most wikis provide forums where authors can discuss and resolve conflicting opinions by seeing others' postings and responding with their own thoughts. Finally, the entire methodology of wikis is built on trust, which means all entries are assumed to be genuine and correct, and filters are established only when necessary. The assumed trust, and the way a wiki encourages the continuous enhancement of data, in turn harness the power of diverse individuals to create collaborative work globally (Shu & Chuang, 2012).
DIVERSITY AND EFFECTIVENESS IN WORK GROUPS

Traditional management techniques have assumed that team members were homogenous, and for a long time the techniques seemed to work. Nowadays, there are many elements at play that are catalysts for greater diversity in the workforce. We therefore extended traditional perspectives on group diversity, applying a multilevel approach that allows us to examine not only cultural, gender, and age differences, but also diversity in professional backgrounds and team members' attitudes to recent ICT applications. Many studies have focused on the effectiveness of group problem solving (Bettenhausen, 1991). However, most of these studies did not have a particular focus on diverse teams. Table 1 summarizes the task characteristics of effective diverse teams, and illustrates that the effectiveness of diversity depends on whether tasks are routine or innovative. Diversity seems to be an advantage only in the case of innovative tasks, whereas for routine tasks it appears to be a constraint on effectiveness. Studies that have focused on diverse teams have produced inconsistent results (Watson, Kumar, & Michaelson, 1993). While diversity within a team may increase the number of solutions or alternatives offered, it also appears to be a serious obstacle to smooth interaction processes, and often results in decreased performance (Adler, 1990; Maznevski, 1994). On the other hand, these teams can
Table 1: The effective management of diversity in work groups.

              Effective                                      Ineffective
Task          Innovative                                     Routine
Stage         Divergence (earlier)                           Convergence (later)
Conditions    Differences recognized                         Differences ignored
              Members selected for task-related abilities    Members selected on basis of ethnicity
              Mutual respect                                 Ethnocentrism
              Equal power                                    Cultural dominance
              Superordinate goal                             Individual goals
              External feedback                              No feedback (complete autonomy)

Source: Adler (1990).
help to minimize the risk of uniformity and the pressures of “group-think” that can easily occur in longstanding, homogenous teams (Schneider & Barsoux, 2003). In conclusion, studies have demonstrated that diverse groups have the potential to perform well; they can generate more and better alternatives and criteria than homogenous groups. However, when it comes to solutions and implementation, their performance is lower than that of homogenous groups (Kumar, Subramanian, & Nonis, 1991). Where diversity proved to enhance effectiveness, the common element seemed to be the conscious integration of that diversity (Maznevski, 1994). Once diversity is integrated, diverse teams can achieve their potential (Hurst, Rush, & White, 1989). In this study, mechanisms were expressly used to facilitate integration as a means of completing certain tasks.
VARIABLES FOR ANALYSIS

This study uses the methodology and experimental procedure developed by Shu and Chuang (2011, 2012), which modified the task/technology fit (TTF) model. Collaborative tasks (intellectual and preference) and collaborative technologies (wikis and nonwikis) were grouped. The degrees of TTF were then evaluated to predict team performance (decision quality and productivity). The following performance parameters and factors were used as a priori variables in this study.
Task/Technology Fit

According to McGrath (1984), collaborative tasks can be divided into two categories: intellectual tasks and preference tasks. The former refers to problem solving that has an anticipated outcome. The latter are tasks in which an outcome is uncertain and solutions depend on team members' values and beliefs. Zigurs and Buckland (1998) suggested that team members will, for different collaborative tasks and technologies, collaborate differently and cause differences in task fit. The nature of preference tasks makes them more likely to incite discussion within a group, the exchange of different opinions, and alternative interpretations of the task, thereby creating the need for a sound collaboration platform. In the TTF model (Goodhue, 2007), the usefulness of tools in performing a task is highly correlated with how well the task relates to the tools' functionality.
The main thrust of TTF theory is that a technology must harmonize with the demands of the task before measurable performance gains are possible. For example, collaboration between team members in a collaborative support system can solve problems efficiently because it is not subject to time and geographical constraints (Klein & Dologite, 2000). We believe that the TTF model is appropriate for our study, as wiki and traditional collaboration tools differ in synchronization. Furthermore, the TTF model differs from utilization models such as the unified theory of acceptance and use of technology (Venkatesh, Morris, Davis, & Davis, 2003) in that it directly measures performance; when technology use is not voluntary, utilization models are less credible measures of performance (Goodhue & Thompson, 1995). Goodhue (1988) argued that an information technology is a good fit whenever it is able to reduce operating costs, provide easier user experiences, and improve performance outcomes. In other words, a user must believe that the information system is useful and able to provide considerable benefits to his/her assigned task (Greenstein, 1998). Collaboration between team members in a collaborative support system that can solve problems without time and geographical constraints will be efficient and favourable (Dennis, Hayes, & Robert, 1999; Klein & Dologite, 2000). We use five variables to measure TTF, with the aim of aggregating them into a single factor.
Productivity

In mathematical terms, productivity can be calculated as the ratio of outputs to inputs (Belanger, Collins, & Cheney, 2001). However, not all aspects of inputs and outputs are measurable (Brynjolfsson & Yang, 1996). Consequently, respondents' beliefs about the effectiveness of using wikis in performing tasks and their perceived productivity are the preferred measure. Previous studies have shown strong links between self-assessments and the quality of performance (Kauffman & Weill, 1989; Kelley, 1994). Productivity in this study is measured using a questionnaire, based on Lin et al. (2012), that asks participants about the productivity of a collaboration and their task performance. We thus measure the perceived productivity of an output generation process, which is not directly related to output quality. Five variables are used and are aggregated into a single factor.

Decision Quality

Benbasat and Lim (1993) quantitatively integrated the results of several experimental studies on group support systems' (GSS) usage, and found that GSS use had a positive effect on decision quality. Quality is often used to measure the final results in collaborative research. It refers to team members' own feelings toward team output during each collaborative decision-making process (Chen, 2003), and can also equate to members' self-evaluations of final decision outcomes (Chizmar & Zak, 1983). In this sense, decision quality is not directly related to the final output of the collaboration, but to the decision-making processes involved during the collaboration. Decision quality is again measured using a questionnaire based on Lin et al. (2012). As with productivity and TTF, five variables are measured and aggregated into a single factor.
Satisfaction

Satisfaction, defined as the manifestation of good feelings that team members experience during the course of collaboration, is often used in studies of collaboration to measure the success of a process or outcome (Church & Gandal, 1992, 1993). It is often associated with group members' positive evaluations of their collaborative efforts. Satisfaction may also be defined as an application's ability to meet user expectations. It is a subjective term that varies with one's perceptions and attitudes toward eventual benefits and outcomes. Past studies have indicated that satisfaction is closely moderated by an application's ease of use or the precision of the user-machine interface (Adam Mahmood, Burn, Gemoets, & Jacquez, 2000). Jonscher (1983) suggested that care must be taken when formulating satisfaction metrics because satisfaction and quality are similar, but in practice, a high degree of satisfaction may not necessarily equate with high quality. Satisfaction is also measured using the questionnaire based on Lin et al. (2012), and the five measured variables are aggregated into a single factor.

Key Capabilities

Typical capabilities and competencies of team members with regard to teamwork and collaboration were measured based on the literature (Lin et al., 2012; Shu & Cheng, 2012; Shu & Chuang, 2012; Shu & Lee, 2003). The following capabilities were measured based on self-assessment: the ability to share knowledge and absorptive capacity; learning and knowledge transfer; shared knowledge creation; access to knowledge during teamwork; the integration of knowledge from different sources; filtering knowledge; and several issues related to efficiency, including learning efficiency, problem-solving efficiency, and learning from mistakes. Seventeen variables are measured with the goal of aggregating these into a single factor.

RESEARCH METHODOLOGY AND DESIGN

Research Model and Hypotheses

During the entire research process, the methodology presented by Lin et al. (2012), Shu and Lee (2003), and Shu and Chuang (2012) was adopted. This methodology is based on the separation of collaborative tasks into intellectual and preference type tasks on the one hand, and the separation of collaborative technologies into traditional (face-to-face meetings) and wiki Web 2.0-based technologies on the other. The aim was to measure the fit of the two dimensions (tasks and technologies) and team performance (Figure 1).

Zigurs and Buckland (1998) showed that groups adopting a GSS are more motivated to express their ideas than groups that do not. Additionally, systems that support parallel editing and allow multiple participants to instantly share and express their opinions, ideas, and information can be more efficient than conventional systems in which editing and expression are sequential (Berndt, 1992). We propose hypotheses regarding the effect of systems (wiki technologies) and TTF on productivity and decision quality. Although the variables satisfaction and key capabilities are used in the research, due to quantitative limitations, these aspects are beyond the scope of the study.
Figure 1: Research model. Collaborative tasks (intellectual and preference types) and collaborative technologies (wikis and traditional face-to-face collaboration) jointly determine task/technology fit, which drives team performance (decision quality, productivity, satisfaction, and key capabilities). Source: Shu and Lee (2003).
Christensen and Jorgenson (1969) argued that the amount of information a team member can contribute is an important indicator of the quality of group collaboration. Christensen and Greene (1976) defined productivity as the quantity of output data that a collaborative team can produce. In our model, neither the inputs nor outputs in either the intellectual or the preference task are measurable. We thus define productivity in the context of intergroup collaboration as the perceived adequacy of the acquired and assimilated information for completion of the task at hand. Based on the above argument, we formulate the following hypotheses.

H1: The better the TTF, the better the team productivity.
H1a: Wiki technologies support team productivity better for intellectual tasks.
H1b: Wiki technologies support team productivity better for preference tasks.

Decision quality is often used as a metric to measure the results of collaborative studies. Fan, Hu, and Xiao (2004) argued that decision-making quality is a good measure of group communication performance. Quality is defined as team members' feelings about the team's output during the decision-making process (Chen, 2003), and has also been used to measure the final results of collaborative research. Quality can also refer to team members' evaluations of the outcomes of the final group decision (Chizmar & Zak, 1983). With this in mind, we propose the following hypotheses.

H2: The better the TTF, the better the quality of team decisions.
H2a: Wiki technologies support the quality of team decisions better for intellectual tasks.
H2b: Wiki technologies support the quality of team decisions better for preference tasks.
We also consider whether experimental groups exhibit greater improvements in all collaboration performance factors than control groups. This is influenced by team members' previous attitudes to teamwork and wikis, as reflected in the following hypothesis.

H3a: If a team member's attitude to wiki usage is positive, they will achieve greater improvements in productivity and decision quality due to wiki use than those who did not use wikis, and vice versa.

We consider whether there is a direct (or indirect) causal relationship between the pre- and postexperiment attitudes of test subjects. If this is the case for experimental groups but not control groups, we can conclude that wiki usage can change perceptions regarding teamwork. Accordingly, we form the following hypothesis.

H3b: Wiki usage causes systematic changes in attitudes to teamwork.

We also assume that wiki usage induces a direct causal change in TTF, productivity, and decision quality. This means that good TTF will result in higher productivity and decision quality.

H3c: Wiki usage improves the decision quality of team collaboration.
Experiment Design and Procedure

The research took place at the Budapest Business School, Faculty of Finance and Accountancy, and involved part-time master's students in Finance and Accounting between 2012 and 2014 (one experiment per year). A demographic survey of participants was carried out prior to the experiment to identify their attitudes and habits regarding teamwork and wiki usage. The frequency of use of wiki platforms (social sites, cloud computing devices, and online collaboration tools) was measured for each participant, and dichotomous variables were defined as a way of classifying participants as wiki users or not. Teamwork habits were also measured using questions about how often and how many times participants had worked in teams. Dichotomous variables were also defined to classify whether participants were team players or not. Teams of four people were formed with special regard to their demographic characteristics (general attitude to IT and Web 2.0 tools). Details are presented in Table 2 and in the sample allocation and distribution section of the study. Pre-experiment surveys were then implemented to assess several dimensions of the teams' attitudes toward collaboration (general aspects of TTF, decision quality, productivity). Both perceptive and reflective questions were used to measure TTF and performance (productivity and decision quality). An example of a perceptive question is "I believe it is useful to complete a task by collaborating with my group members." An example of a reflective question is "I am satisfied with the data collection process while working with my group members." Questions on TTF were taken from Jarupathirun and Zahedi (2007), while questions on productivity were taken from Belanger et al. (2001) and Staples, Hulland, and Higgins (1998). The questions on decision quality and satisfaction were taken from Lowry and Nunamaker (2002). All questions were answered using 5-point Likert scales.
Table 2: Composition of participant teams.

The table cross-tabulates teams by teamwork experience (team workers: yes, no, other) and wiki familiarity (wiki users: yes, no, other) for each experiment year (2012, 2013, 2014); cells report the number of teams, with the number of participants in parentheses. In total, 97 teams (385 participants) took part.
Figure 2: Research process. Formulation of hypotheses → demographic questionnaire → team generation → pre-experiment attitude questionnaire → experiments → postexperiment attitude questionnaire → database construction → exploratory factor analysis, confirmatory factor analysis, structural equation modelling, and t-tests → hypothesis testing.
At this stage of the research, the teams were randomly divided into two groups based on their familiarity with wiki tools and teamwork. This yielded 48 experimental teams of 4 containing 190 (49.4%) participants (including 2 groups of 3), and 49 control teams of 4 with 195 (50.6%) participants (including 1 group of 3). For teams in the experimental group, a 60-minute training course on "Modern Web 2.0-based applications for online teamwork and mass collaboration" was conducted, and specific freeware applications (Skype, Dropbox, Google Drive, etc.) were demonstrated, as suggested by Wang et al. (2013). The intention was to promote commitment to solving team tasks by using these tools online and without face-to-face communication. To ensure this occurred, the description of the experiment required team members to take screenshots while using the online tools. This ensured that all teams in the experimental group used at least two different wiki tools. Teams in the control group had no information about the research, and only received one of the experiment's case studies to solve.

All experiment participants had to solve case studies. Five different case studies (Apple, Facebook, Nokia, Starbucks, and Google) were used with the same objectives, methods, and tasks (intellectual tasks for close-ended problems with a predictable solution, and preference tasks for open-ended problems). The use of different case studies encouraged separation of the teams and helped avoid intergroup cooperation and collaboration. The intellectual tasks related to: (i) the net present value of the company; (ii) the determinants of this value; and (iii) capitalization trends. Preference type tasks required more creative thinking and dealt with: (i) the market conditions and bargaining power of the company; (ii) the strengths and weaknesses of the companies; and (iii) the advantages and disadvantages of current brand strategies. The projects were distributed to teams randomly, and teams were given 4 weeks to complete the case studies and present the results in a 30,000-35,000 character report. The 4-week period gave teams time to carry out research, gather data, and meet either face-to-face or via Web 2.0 tools. After the experiment, questionnaires similar to those used before the experiment were distributed. However, questions were focused on the specific tasks in the projects. The experiment thus used two surveys: a pre-experiment survey of general attitudes, and a postexperiment survey of project experiences (Figure 2).
Sample Allocation and Distribution

Subjects in the research experiment were teams generated based on two dimensions. As described earlier, participants were identified based on their teamwork and wiki usage experiences and habits. This was used to generate two dichotomous variables: attitude to teamwork, and familiarity with wiki applications. According to these two dimensions, the 385 participants were divided into 97 teams of four members (3 groups had only 3 members). It was ensured, based on demographic information, that at least two team members were unfamiliar with each other or any other member of the team. Table 2 shows the distribution and number of teams in each category (and number of participants) according to the year of the experiment.

An analysis was performed to determine the optimal sample size for each cell of the study design using the formula described by List, Sadoff, and Wagner (2010):

n = 2(t_{α/2} + t_β)^2 (σ/δ)^2,   (1)

where n is the sample size, t_{α/2} is the t-value for the type I error rate, t_β is the t-value for the type II error rate, σ is the population standard deviation, and δ is the minimum detectable mean difference between experimental and control conditions. Setting the power at .8 and .9, and the level of α at .05, the ideal minimum treatment cell size was determined to be between 16 and 21. As the number of participants in each cell was a minimum of 16, it can be concluded that the sample was large enough to detect the desired effects.

Teams in the cells were randomly divided into two groups: 49 experimental teams (communicating using wiki technology) and 48 control teams (communicating using face-to-face interaction, e-mails, and phone), with 195 and 190 participants, respectively. All experimental groups had to use at least two wiki tools and provide evidence of usage with screenshots. All control group teams were encouraged to have face-to-face meetings. Using this method of sample allocation, it must be assumed and accepted that the subsamples (experimental and control groups) are random and share the same features, so that observed differences between the groups can be attributed to the experimental treatment.
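To illustrate Equation (1), the following minimal Python sketch reproduces the reported cell sizes of 16 and 21. It assumes that the minimum detectable difference δ equals one population standard deviation σ and approximates the t-values with standard normal quantiles; both assumptions are ours for illustration and are not stated in the study.

```python
from scipy.stats import norm

def min_cell_size(alpha: float, power: float, sigma: float = 1.0, delta: float = 1.0) -> float:
    """Minimum sample size per cell: n = 2 * (t_{alpha/2} + t_beta)^2 * (sigma/delta)^2.

    Standard normal quantiles stand in for the t-values, a common large-sample
    simplification of the List, Sadoff, and Wagner (2010) formula.
    """
    t_alpha = norm.ppf(1 - alpha / 2)  # two-sided type I error quantile
    t_beta = norm.ppf(power)           # quantile for power = 1 - type II error
    return 2 * (t_alpha + t_beta) ** 2 * (sigma / delta) ** 2

# With alpha = .05, powers of .8 and .9 give roughly 15.7 and 21.0,
# consistent with the reported minimum treatment cell size of 16-21.
print(round(min_cell_size(0.05, 0.8)), round(min_cell_size(0.05, 0.9)))
```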
DATA ANALYSIS METHODOLOGY, MODEL RELIABILITY, AND VALIDITY

To establish the model that best describes the observed reality during the experiment, a four-stage methodology was used, as described in Table 3. Based on the results of the pre- and postexperiment surveys, and in accordance with the research process described in Figure 2, exploratory factor analysis (EFA) was performed. Principal components extraction, VARIMAX rotation, and Kaiser normalization were used to identify latent factors from the 37 measured variables in each dataset (pre and post). Once the latent variables were identified [task/technology fit (FIT), productivity (PRO), decision quality (DQ), satisfaction (SAT), and key competences (KEY)], confirmatory factor analysis (CFA) was performed to test the fit of the causal models (Jöreskog, 1969).
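The factor-extraction step described above can be sketched with the open-source factor_analyzer package as follows. The file name and the DataFrame of questionnaire items are hypothetical placeholders; this is an illustrative sketch, not the authors' actual analysis script.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# One row per participant, one column per measured questionnaire item (hypothetical file).
responses = pd.read_csv("pre_experiment_items.csv")

# Sampling adequacy (KMO) and sphericity (Bartlett) checks reported in the reliability analysis.
_, kmo_total = calculate_kmo(responses)
chi_square, p_value = calculate_bartlett_sphericity(responses)

# Principal components extraction with VARIMAX rotation; five factors
# (FIT, PRO, DQ, SAT, KEY) are retained as in the study.
efa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
efa.fit(responses)

loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
_, _, cumulative_variance = efa.get_factor_variance()

print(f"KMO = {kmo_total:.3f}, Bartlett p = {p_value:.4f}")
print(f"Cumulative variance explained = {cumulative_variance[-1]:.3f}")
print(loadings.round(3))
```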
Table 3: Research methodology.

Method: Exploratory factor analysis (EFA)
Aim: Find latent structure of variables
Result: 5 factors from 37 variables
Test: Reliability (Cronbach's α, KMO, Bartlett, TVE)

Method: Confirmatory factor analysis (CFA)
Aim: Confirm the model and build a more valid, significant model
Result: Best fit model with factors
Test: Convergent validity (CR, AVE, factor loadings); discriminant validity (AVE, MSV, ASV); model fit (absolute, incremental, parsimonious)

Method: Structural equation modeling (SEM)
Aim: Identify mediation and moderation effects between factors
Result: Influence map with direct and indirect causalities
Test: Regression weights and significance, standard error, critical ratio, squared multiple correlation

Method: t-test (Shapiro-Wilk & Levene tests)
Aim: Compare experimental and control groups on several demographic dimensions
Result: Significant differences based on demographic variables, confirmation/rejection of hypotheses
Test: Significance level

KMO: Kaiser-Meyer-Olkin test; TVE: total variance explained; CR: composite reliability; AVE: average variance extracted; MSV: maximum shared variance; ASV: average shared variance.
Reliability Analysis

The best fit models, with their five latent factors, incorporated 25 (pre-experiment) and 20 (postexperiment) of the 37 measured variables. After this reduction, the internal consistency of the scales remains high based on the values of Cronbach's α: the lowest value is .584, which exceeds the desired values (Cronbach, 1951). Both measures of KMO were above .9, meaning that the use of factor analysis is appropriate (Kaiser, 1974). Moreover, Bartlett's sphericity tests are significant (Snedecor & Cochran, 1989) for all constructs, and values of TVE exceed minimum requirements. All constructs can thus be considered reliable (see Table 4).

Validity

To test for convergent validity, the recommendations of Fornell and Larcker (1981) were followed. Convergent validity was achieved for all constructs in both the pre- and postexperiment data sets (Table 5). Table 6 shows that conditions for discriminant validity were also met for all constructs.
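For reference, the reliability and convergent-validity statistics reported in Tables 4 and 5 (Cronbach's α, composite reliability, and AVE) can be computed roughly as in the sketch below. The item scores and standardized loadings are synthetic placeholders, not data from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_variance)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    squared_sum = loadings.sum() ** 2
    error_variance = (1 - loadings ** 2).sum()
    return squared_sum / (squared_sum + error_variance)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized factor loadings."""
    return (loadings ** 2).mean()

# Synthetic example: 100 respondents, five correlated indicators of one construct,
# and an assumed vector of standardized loadings for that construct.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
items = latent + rng.normal(scale=0.6, size=(100, 5))
loadings = np.array([0.82, 0.85, 0.83, 0.84, 0.83])

print(round(cronbach_alpha(items), 3))
print(round(composite_reliability(loadings), 3))
print(round(average_variance_extracted(loadings), 3))
```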
Table 4: Reliability analysis (pre- and postexperiment).

                      No. of Var. (Measured)    Cronbach's α
Construct             PRE        POST           PRE      POST
Technology/task fit   2 (5)      5 (5)          .607     .876
Productivity          2 (5)      2 (5)          .773     .584
Decision quality      3 (5)      3 (5)          .824     .84
Satisfaction          3 (5)      5 (5)          .857     .922
Key competences       15 (17)    5 (17)         .941     .872

KMO: .902 (PRE), .911 (POST); Bartlett sig.: .000 (PRE), .000 (POST); TVE: 64.926 (PRE), 72.569 (POST).
Table 5: Measures of convergent validity.

       Pre-experiment                                       Postexperiment
       Composite                  Range of                  Composite                  Range of
       Reliability    AVE         Item Loadings             Reliability    AVE         Item Loadings
SAT    .858           .668        .824-.850                 .922           .747        .679-.856
FIT    .831           .545        .872-.724                 .846           .647        .581-.860
PRO    .776           .634        .874-.889                 .709           .712        .696-.733
DQ     .837           .638        .709-.888                 .842           .640        .827-.854
KEY    .939           .510        .537-.802                 .870           .574        .677-.819
It is necessary to determine whether the model-in-use provides the best fit among available models (Fornell & Larcker, 1981). Details of the three types of model fit, absolute, incremental, and parsimonious, are presented in Table 7 along with threshold reference values (Mulaik et al., 1989; Schreiber, Stage, King, Nora, & Barlow, 2006; Tabachnick & Fidell, 2007; Wheaton, Muthen, Alwin, & Summers, 1977). The results of all tests met acceptable levels.
ANALYSIS AND HYPOTHESIS TEST

Postexperiment Analysis

Prior to testing the hypotheses, Shapiro-Wilk and Levene statistics were used to test the normality and homogeneity of variances of the sample, respectively. It can be seen in Table 8 that for overall and intellectual tasks, teams in the control group (using face-to-face technologies) outperformed teams in the experimental group (using wikis) for almost all demographic groups. The difference in performance is not significant for the preference task. For decision quality, however, the results are different. For almost all demographic groups, intellectual tasks solved by wiki methodologies produce better decisions than those solved by face-to-face technologies, while the opposite is true for preference tasks (Table 9).
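The group comparisons behind Tables 8 and 9 can be sketched with SciPy as follows; the file, column, and group names are hypothetical placeholders rather than the study's actual data structures.

```python
import pandas as pd
from scipy import stats

# Hypothetical postexperiment factor scores, one row per participant.
df = pd.read_csv("post_experiment_scores.csv")
experimental = df.loc[df["group"] == "experimental", "productivity"]
control = df.loc[df["group"] == "control", "productivity"]

# Normality of each subsample (Shapiro-Wilk) and homogeneity of variances (Levene).
print(stats.shapiro(experimental))
print(stats.shapiro(control))
levene = stats.levene(experimental, control)
print(levene)

# Independent-samples t-test; the equal-variance assumption follows the Levene result.
t_statistic, p_value = stats.ttest_ind(experimental, control, equal_var=levene.pvalue > 0.05)
print(f"t = {t_statistic:.3f}, p = {p_value:.4f}")
```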
Table 6: Measures of discriminant validity.

Pre-experiment
       AVE     MSV     ASV     SAT       FIT       PRO       DQ        KEY
SAT    .668    .201    .107    (.817)
FIT    .545    .271    .125    .400      (.738)
PRO    .634    .042    .018    .129      −.078     (.796)
DQ     .638    .271    .088    .226      .521      .092      (.799)
KEY    .510    .201    .082    .448      .252      .204      .144      (.714)

Postexperiment
       AVE     MSV     ASV     SAT       FIT       PRO       DQ        KEY
SAT    .747    .456    .303    (.864)
FIT    .647    .373    .240    .611      (.804)
PRO    .712    .561    .315    .675      .429      (.844)
DQ     .640    .213    .107    .253      .462      .247      (.800)
KEY    .574    .561    .290    .564      .436      .749      .299      (.758)

(Square roots of AVE are shown on the diagonal in parentheses.)
Table 7: Measures of model fit.

Statistic            PRE       POST      Threshold
Absolute fit
  χ²/df              1.888     2.669     ≤3
  GFI                .913      .902      >.8
  RMR                .02       .048
  RMSEA              .048      .066
Incremental fit
  TLI                .717      .67       >.5
  IFI                .813      .775      >.5
  CFI                .776      .751      >.5
Parsimonious fit
  PGFI
  PCFI
  PNFI
Performance Analysis by Structural Equation Modeling

With the results of the EFA, CFA, and hypothesis tests in hand, it is logical to display all teamwork performance factors in a single map to visualize their interdependencies. A path model was developed using structural equation modeling (SEM) to identify the multivariate relationships between the variables. It should be noted that the previously determined factors could not be used directly in the SEM; instead, single-factor models were evaluated for both the pre- and postexperiment data. All constructs were found to be reliable and fulfilled all requirements for SEM (Table 10). The pre-experiment factors are clearly separated from the postexperiment factors, and all causal links can be clearly seen in Figure 3.
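A path model of this kind can be estimated with the open-source semopy package, as in the rough sketch below. The model syntax and the factor-score file are illustrative assumptions and do not reproduce the exact specification estimated in the study.

```python
import pandas as pd
from semopy import Model

# Illustrative path specification: postexperiment fit (FIT) and satisfaction (SAT)
# predict productivity (PRO), which in turn mediates the effect on decision quality (DQ).
MODEL_DESC = """
PRO ~ FIT + SAT
DQ ~ PRO
"""

# Hypothetical per-participant factor scores for the postexperiment constructs.
data = pd.read_csv("post_experiment_factor_scores.csv")

model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # regression weights, standard errors, and p-values
```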
Table 8: Productivity differences within demographic groups in postexperimental data.

For each demographic group (team workers, nonteam workers, wiki users, nonwiki users, male, female, part time, full time, accounting, finance, 24 and under, over 24, and overall), the table reports the mean, standard deviation, t-value, and significance of productivity scores for experimental (wiki) versus control (face-to-face) teams, separately for the intellectual task, the preference task, and overall.

*p < .05, **p < .01, ***p < .001. † Control groups outperform experimental groups. †† Experimental groups outperform control groups.
Table 9: Decision quality differences within demographic groups in postexperimental data.

For each demographic group (as in Table 8), the table reports the mean, standard deviation, t-value, and significance of decision quality scores for experimental (wiki) versus control (face-to-face) teams, separately for the intellectual task, the preference task, and overall.

*p < .05, **p < .01, ***p < .001. † Control groups outperform experimental groups. †† Experimental groups outperform control groups.
Table 10: Reliability analysis of EFA (pre- and postexperiment).

                      No. of Var. (Measured)    Cronbach's α      KMO              Bartlett Sig.     TVE
Construct             PRE        POST           PRE      POST     PRE      POST    PRE      POST     PRE        POST
Technology/task fit   3 (5)      4 (5)          .696     .894     .671     .837    .000     .000     62.442     76.305
Productivity          2 (5)      3 (5)          .773     .796     .500     .681    .000     .000     81.560     73.110
Decision quality      4 (5)      3 (5)          .794     .840     .715     .725    .000     .000     62.114     75.944
Satisfaction          4 (5)      5 (5)          .804     .922     .780     .883    .000     .000     64.795     76.356
Key competences       11 (17)    7 (17)         .935     .891     .929     .864    .000     .000     60.963     60.824
Figure 3: SEM model of team performance factors.
The important connections to note are those between the pre-experiment and postexperiment factors, and the changes due to the experiments. It is interesting that productivity becomes a specific and significant factor in the SEM performance map after the experiment. It is directly caused by TTF (and satisfaction). This causality is higher in groups using wikis and when TTF is high. Productivity has a significant impact on the decision quality of team collaboration, with the greatest impact observed for teams using wikis. It is also significant that the direct effect of fit on decision quality ceases and is mediated by productivity. This means that fit has only an indirect (mediated) effect on decision quality after the experiment, with the mediation effect highest in teams using wikis. From the SEM model, it can also be observed that the regression weights corresponding to wiki use are in most cases greater than those corresponding to face-to-face communication (Table 11).
FINDINGS AND DISCUSSION

Overall, the results appear to be contradictory. The findings indicate that overall and for intellectual tasks, the control group (face-to-face technologies) outperformed the experimental (wiki) group for almost all demographic groups, though