Collective Efficacy as a Measure of Community

John M. Carroll, Mary Beth Rosson, Jingying Zhou
Computer-Supported Collaboration & Learning Laboratory
Center for Human-Computer Interaction, School of Information Sciences and Technology
The Pennsylvania State University, University Park, PA 16802 USA
e-mails: [email protected], [email protected], [email protected]

ABSTRACT

As human-computer interaction increasingly focuses on mediated interactions among groups of individuals, there is a need to develop measurement and analysis techniques that are scoped at the level of the group. Bandura's construct of perceived self-efficacy has been used to understand individual behavior as a function of domain-specific beliefs about personal capacities. The construct of collective efficacy extends self-efficacy to organizations and groups, referring to beliefs about collective capacities in specific domains. We describe the development and refinement of a collective efficacy scale, the factor analysis of the construct, and its external validation in path models of community-oriented attitudes, beliefs, and behaviors.

ACM Classification: H.5.3 Group and Organizational Interfaces; K.4.3 Organizational Impacts

Keywords: collective efficacy, community informatics, community computing, CSCW, evaluation

INTRODUCTION

The scope of research in HCI has broadened from a focus on individuals working with desktop displays to include studies of groups and organizations, often separated by time and space, working toward joint outcomes. One of the many challenges in evaluating computer-mediated group behavior is the assessment of group results. The direct approach is to observe, combine, and calibrate a mix of individual and collective outcomes (e.g., documents or decisions created individually or collectively). However, the costs of gathering such data, and the ambiguity in combining and interpreting a diverse set of results, are high, particularly when the groups are distributed or ad hoc [16]. For example, how would we measure a town's success in managing its own economic development, or that of a design team managing its own social capital? An attractive alternative is to use less direct measures.
For years, HCI researchers have used psychometric constructs like cognitive style or field dependence as predictors or surrogates of complex behavioral phenomena [9,11]. Of particular interest is perceived self-efficacy, a measure often used to predict domain-specific capacities [1]. In HCI, self-efficacy in computing has been used as a predictor of technology learning and achievement [19]. In this paper we develop and discuss the construct of community collective efficacy [5], a specialization of Bandura's [1] collective efficacy construct. We first describe how collective efficacy applies to community computing, a sub-domain of collaborative HCI. We then present research in which we explored the structure of community collective efficacy judgments, and in which we internally and externally validated the construct.

COLLECTIVE EFFICACY

Perceived self-efficacy refers to a person's beliefs about his or her capacity for specific achievements, given domain-specific obstacles [1]. We are investigating the social construct of collective efficacy, an extension of Bandura's original concept that captures a member's beliefs about the capacity of a group or organization. Self-efficacy scales can be developed for any domain that includes goals of achievement or accomplishment. In the domain of parenting, an item might be "I can make it on time to the School Board meeting, even if I must leave work a few minutes early"; an example from office work could be "I can complete a sales report on time even if the online database is unavailable and I need to work from hardcopy." (These examples are adapted from [2].) Two schematic components in self-efficacy items are (1) a specified capacity in the domain of interest, and (2) a potential obstacle to achieving the goal. Belief in one's capacity is operationalized by Likert-scale ratings of agreement with the assertion of the capacity, given the assumption of the obstacle. Note that efficacy judgments are not recollections of actual performance patterns ("I usually make it to School Board meetings"), nor are they specific predictions ("In this case, I think I will make it to the meeting on time."). Efficacy is distinct from the construct of self-esteem, which is concerned with judgments of self-worth rather than personal capability. Efficacy judgments predict goal selection and performance in a domain; self-esteem does not [1].
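
To make the item schema concrete, here is a minimal sketch (ours, not part of the EPIC instrument) of how capacity-plus-obstacle items and one respondent's Likert ratings might be represented and scored; the item texts reuse the adapted examples above and the ratings are invented.

    # Illustrative sketch: represent efficacy items as capacity/obstacle pairs
    # and score a respondent by averaging 1-5 Likert agreement ratings.
    # Item texts and ratings are examples, not data from any published scale.
    from statistics import mean

    items = [
        ("I can make it on time to the School Board meeting",
         "even if I must leave work a few minutes early"),
        ("I can complete a sales report on time",
         "even if the online database is unavailable and I need to work from hardcopy"),
    ]

    def item_text(capacity, obstacle):
        # Compose the statement a respondent rates from 1 (Strongly Disagree)
        # to 5 (Strongly Agree).
        return f"{capacity}, {obstacle}."

    ratings = [4, 3]                # one respondent's agreement ratings
    efficacy_score = mean(ratings)  # unweighted scale score for that respondent
    print(item_text(*items[0]), efficacy_score)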

Because efficacy is specific to a domain, it is a more powerful predictor than general-purpose measures like locus of control, perceived self-control, self-concept of ability, or cognitive competence ([1], pp. 47-48). Efficacy is relatively easy to assess, and correlates strongly with key aspects of performance, including setting challenging goals, working harder, learning more, and achieving more [1]. Thus, self-efficacy ratings offer a means for interrogating complex capacities where it would be difficult to measure actual performance directly. At the same time, efficacy can be seen as an important primary measure of the possibility of achievement: a person is unlikely to accomplish something he or she believes is beyond his or her capacity. Collective efficacy extends self-efficacy to beliefs about the shared capacities of the groups in which people participate; that is, to beliefs about joint endeavors and joint outcomes [10]. Bandura [1] shows that just as self-efficacy predicts personal performance, group members' beliefs of collective efficacy predict their performance as a group. The applications could range from physical capacities (e.g., beliefs that a basketball team could pull together to win a tough game even if its star is injured) to more subtle achievements involving learning (e.g., that members of a work group could adapt to unfamiliar new technology on their own) or negotiation (e.g., that a school district could raise funds for unplanned building maintenance or improvements). Like self-efficacy, collective efficacy is interesting for two reasons. First, the beliefs of members about their group's capacities for various sorts of achievement are a primary indicator of the group's possible trajectory toward those achievements. Second, Bandura's original work suggests that collective efficacy may be used as a valid and robust surrogate for group achievement [1]. In the case of collective performance, this heuristic value of the efficacy judgments may be even more significant, because the costs of initiating and measuring group outcomes directly increase with the number of members and tasks. The assessment of group performance is of a much higher order of complexity than that of individual performance.

COMMUNITY COLLECTIVE EFFICACY

Our interest in collective efficacy emerged as part of a project studying community computing [6,12,13]. In this project we are exploring the relation between people's use of Internet technologies (email, chat, web) and their feelings about and behaviors in their community. Many of our analyses have focused on individual attitudes and behavioral reports, but we are also interested in studying collective community phenomena. Bandura's discussion of collective efficacy [1] suggests that it may be an appropriate technique for assessing the capacities of a community:
People's beliefs in collective efficacy influence the futures they seek to achieve through collective action, how well they use their resources, how much effort they put into their group endeavors, their persistence when collective efforts fail to produce quick results or confront influential opposition, and their vulnerability to the discouragement that can beset people taking on tough social problems (p. 76).

Our expectation is that people's beliefs in community collective efficacy will influence their tendencies toward community-oriented behaviors, including planning and use of shared resources, and a willingness to persist in the face of internal conflicts, political challenges, or social concerns. In the context of our research project on community computing, we expected that measurements of collective efficacy would help us to understand the impacts of information technology on the community. Community networks offer a variety of mechanisms for collective action, from relatively indirect behaviors like browsing information about local issues to more direct behaviors like contacting officials by email. A plausible hypothesis is that individuals who believe that their community can address challenges together will recruit the Internet in support of community goals. We explored these possibilities by developing a community collective efficacy (CCE) scale, a "capacity analysis" of the community by the community. Like task analysis, the CCE scale decomposes community involvement into a set of specific concerns. However, it goes beyond mere task enumeration, probing people's beliefs about how well their community can succeed in such joint endeavors.

THE COMMUNITY COMPUTING CONTEXT

Our study of collective efficacy was carried out as part of the EPIC project (Experiences of People, Internet, and Community), a wide-ranging assessment of the use and impacts of the Blacksburg Electronic Village (BEV), a community network supporting the university town of Blacksburg, Virginia (population 47,000), and nearby areas of Montgomery and Giles counties. BEV is a mature community network, both in the sense that it has been operational for a decade, and in the sense that it has a high level of penetration into its community [6,14]. This level of technology adoption has helped to evoke and support a lively and diverse range of locally oriented Internet services and content [6,8]. The BEV hosts many community-oriented initiatives (community newsgroups, listservs, a town chat, a senior citizen informal history archive, public-access kiosks). The town provides online forms for surveys, house check requests, and e-mail to town officials, as well as dissemination of schedules and other documents. As in many other communities, the youth of Blacksburg use Internet services extensively to connect socially outside of school and to collaborate informally on homework and projects. Our study has focused on household use and impacts of the BEV and the Internet.
Data collection in the study was multi-faceted (Figure 1), comprising a two-wave survey, with the second round of surveys administered approximately 12 months after the first; a logging study, in which we monitored household email and Web activity; and an interview study, in which we carried out a series of four household interviews throughout a 12-month period. At the end of the project, an online discussion was created to share and discuss the study results within the community.

Figure 1. Overall research design of the Experiences of People, Internet, and Community (EPIC) study.

We constructed a stratified sample of 100 households, representing the actual population demographics of the town and surrounding region. To minimize self-selection, we began with a random sample of 1250 residential addresses purchased from Survey Sample, Inc. (SSI) for a previous research project; after pre-filtering to remove invalid addresses we were left with 870 households. We invited participation from this sample with a 10-item survey that allowed us to classify households with respect to location, whether and where they had access to the Internet, and education level of the head of household. We then recruited households such that these three stratification variables were represented in proportion to the actual population of the region, as described by census data and other demographic studies of the local area.
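
As a rough illustration of this kind of proportional recruitment, the sketch below allocates a target sample of 100 households across strata in proportion to population shares; the strata and proportions are invented placeholders rather than the census figures used in EPIC.

    # Sketch of proportional stratified allocation: given population shares for
    # strata defined by location, Internet access, and education, compute how
    # many of the 100 recruited households should come from each stratum.
    # The strata and proportions below are illustrative placeholders.

    census_proportions = {
        ("in town", "home Internet", "college degree"): 0.30,
        ("in town", "home Internet", "no degree"): 0.15,
        ("in town", "no home Internet", "no degree"): 0.10,
        ("surrounding area", "home Internet", "college degree"): 0.20,
        ("surrounding area", "no home Internet", "no degree"): 0.25,
    }
    target_sample = 100

    # Round each stratum's share, then adjust the largest stratum so the total
    # matches the target exactly.
    allocation = {s: round(p * target_sample) for s, p in census_proportions.items()}
    shortfall = target_sample - sum(allocation.values())
    largest = max(allocation, key=allocation.get)
    allocation[largest] += shortfall

    for stratum, n in sorted(allocation.items()):
        print(stratum, n)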

The EPIC survey asked participants (all household members aged 16 and older) about their community involvement, organizational memberships, informal group participation, Internet use, social circles, community collective efficacy, personal attributes like extroversion, recent life changes, and basic demographics like age and education. We drew upon existing survey instruments, particularly the HomeNet survey [15] and prior BEV surveys [14]. For more detail and background on sample design, survey construction, and statistical analyses, the project web site can be consulted at http://epic.cs.vt.edu.

THE COMMUNITY COLLECTIVE EFFICACY SCALE

One section of the EPIC survey was a scale measuring residents' beliefs about their collective capacities as a community, the community collective efficacy (CCE) scale. We constructed this scale through a process of iterative refinement over the two rounds of the survey.

A preliminary CCE scale

The first version of the CCE consisted of 13 items that were created by brainstorming key areas of community challenge and achievement (e.g., education, resource planning, social services). Challenges or achievements were phrased as collective capacities (e.g., "Despite our differences, we can commit ourselves to common community goals") and respondents indicated their agreement on a rating scale from 1=Strongly Disagree to 5=Strongly Agree. The items on the scale (Cronbach alpha=.86) appear in Table 1.

As a community, we can handle mistakes and setbacks without getting discouraged.
Despite our differences, we can commit ourselves to common community goals.
I am confident that we can be united in the community vision we present to outsiders.
I am convinced that we can improve the quality of life in the community, even when resources are limited or become scarce.
Our community can cooperate in the face of difficulties to improve the quality of community facilities.
The people of our community can continue to work together, even when it requires a great deal of effort.
We can resolve crises in the community without any negative aftereffects.
Our community can greatly improve the quality of education in Montgomery County without help from the Commonwealth of Virginia.
Our community can greatly improve services for senior citizens in Blacksburg and Montgomery County without help from the Commonwealth of Virginia.
I am confident that our community can create adequate resources to develop new jobs despite changes in the economy.
We can greatly improve the roads in Blacksburg and Montgomery County, even when there is opposition within the community.
Our community can present itself in ways that increase tourism.
Our community can enact fair laws, even when there is disagreement among people.

Table 1. Version 1 of the CCE scale with 13 items; the shading identifies items that loaded on each of three factors in a principal components factor analysis with varimax rotation.
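
The internal-consistency figure reported for the scale (Cronbach alpha) can be computed from a respondents-by-items matrix of ratings along the following lines; the response matrix in this sketch is randomly generated and stands in for the actual survey data.

    # Sketch of Cronbach's alpha for a respondents x items matrix of 1-5 ratings.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    # The simulated responses below are placeholders, not EPIC survey data.
    import numpy as np

    def cronbach_alpha(responses):
        # responses: 2-D array, rows = respondents, columns = scale items
        responses = np.asarray(responses, dtype=float)
        k = responses.shape[1]
        item_variances = responses.var(axis=0, ddof=1)
        total_variance = responses.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(0)
    simulated = rng.integers(1, 6, size=(157, 13))  # 157 respondents, 13 items
    print(round(cronbach_alpha(simulated), 2))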

Factor analysis of responses from the first wave of survey data (N=157) revealed a stable internal structure of three factors (see [5] for a detailed analysis of the first CCE scale; this paper will focus on the more refined version of the CCE scale administered in the second wave of the survey). The shading of items in Table 1 differentiates the three groups of items that loaded on each factor following varimax rotation. The item loadings suggested an interpretation of these three factors as "active cooperation" (the community pulls together as needed to make things better); "social services" (the community can meet its education and outreach needs); and "economic infrastructure" (the community can create and maintain an adequate physical and social infrastructure).
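
A principal components factor analysis with varimax rotation of the kind described here might be sketched as follows; the varimax step is implemented directly rather than assuming any particular statistics package, and the ratings matrix is simulated rather than drawn from the survey.

    # Sketch of a principal components analysis of the item correlation matrix,
    # retaining three components and applying Kaiser's varimax rotation to the
    # loadings. The simulated 1-5 ratings stand in for the survey responses.
    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-6):
        # Iteratively rotate an items x factors loading matrix toward simple
        # structure (Kaiser's varimax criterion).
        p, k = loadings.shape
        R = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            d_old = d
            L = loadings @ R
            u, s, vh = np.linalg.svd(
                loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L))))
            R = u @ vh
            d = s.sum()
            if d_old != 0 and d / d_old < 1 + tol:
                break
        return loadings @ R

    rng = np.random.default_rng(1)
    responses = rng.integers(1, 6, size=(157, 13)).astype(float)

    corr = np.corrcoef(responses, rowvar=False)     # 13 x 13 item correlations
    eigvals, eigvecs = np.linalg.eigh(corr)         # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:3]             # keep the three largest components
    loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

    print(np.round(varimax(loadings), 2))           # items x 3 rotated loadings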

However, some aspects of the scale were problematic. For instance, the item probing tourism had relatively weak and unstable loadings. We speculated that for many Blacksburg residents there are already too many tourists (Blacksburg is a picturesque college town, set in the mountains), and attracting more is not really desirable. As a result, we elected to remove this item in the revised version of the scale. The 13 items also varied with respect to the directness of the community achievement. For example, resolving crises and having a vision of the community are things the members of a community must do for themselves, whereas providing good schools and services for elders is something most towns do indirectly through local funding (in Virginia), although of course specific individuals can play a personal role in such services. Having better roads is something a community achieves even more indirectly via state or even federal projects; local communities have little direct control over these decisions and plans, and typically no one even knows the people who come and improve the roads. It seems likely that collective efficacy is more strongly indicated by achievements that are attained more directly. Two of the 13 items (improving roads and enacting fair laws) employed the obstacle of disagreement among community members. It is likely that, for collective efficacy, obstacles originating from within the group are critically different from obstacles originating from outside the group. The former may entail a kind of conflict or internal strife that competes with beliefs about collective capacity. This led us to reword some items to shift focus to obstacles coming from "outside" a community, those that would more reliably evoke a shared call to action. Finally, we observed that the three provisional first-order factors were not uniformly represented in the scale. Pedhazur [17] recommends that factors include 3-5 items or indicators. The first factor (active cooperation) included 7 items in the rotated solution; we decided to try to refine this "subscale" to a smaller number of items. We also decided to generate additional items that might be indicators for the two other factors, particularly the social services factor on which only two items (education and senior citizens) loaded in the rotated solution.

A refined CCE scale

The items used in the second version of the scale only partially overlap with those in the first version (9 items were exactly the same). The refined scale included 17 items that comprise a more complete analysis of goals and related obstacles for the community domain (the revised set of items is in Table 3). The 17 goals are (1) assist economically disadvantaged, (2) increase tourism, (3) improve roads, (4) improve quality of life,
(5) improve quality of education, (6) preserve parklands, (7) handle mistakes and setbacks, (8) improve quality of community facilities, (9) present united community vision, (10) quality of and access to services for disabled people, (11) commit to common community goals, (12) clean air and water, (13) work together, (14) resolve crises, (15) enact fair laws, (16) create resources for new jobs, and (17) improve services for senior citizens. The typical obstacles to the community's ability to attain these goals include (1) problems with the economy, (2) maintenance of unique character, (3) opposition from adjacent counties and states, (4) limited resources, (5 and 17) inadequate help from the state of Virginia, (6) population growth, (7) discouragement, (8) difficulties, (10) inadequate help from the federal government, (11) work and family obligations, (12) commercial development, (13) a great deal of effort, (14) negative aftereffects, (15) conflicts in the larger society, and (16) changes in the economy. Item (9) mentioned no explicit obstacle, though it might be assumed that social entropy would tend to undermine a united community vision. To investigate the underlying factors in the revised scale, we carried out a principal components factor analysis on the CCE data collected in the second wave of EPIC surveys (N=146). In screening the data, we found that 264 of the 272 bivariate item correlations were significant (p
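
The correlation screening described here can be sketched as follows, computing a Pearson correlation and p-value for each pair of items; the ratings matrix and the significance threshold are placeholders for illustration.

    # Sketch of screening bivariate item correlations: compute Pearson r and its
    # p-value for each pair of items and count how many pairs are significant.
    # Simulated ratings and a conventional threshold stand in for the real data.
    from itertools import combinations

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    responses = rng.integers(1, 6, size=(146, 17)).astype(float)  # 146 respondents, 17 items

    threshold = 0.05  # assumed significance level, for illustration only
    pairs = list(combinations(range(responses.shape[1]), 2))
    significant = sum(
        1 for i, j in pairs if pearsonr(responses[:, i], responses[:, j])[1] < threshold)

    print(f"{significant} of {len(pairs)} item correlations significant at p < {threshold}")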