
Comparing the Efficacy of Policy-Capturing Weights and Direct Estimates for Predicting Job Choice

Organizational Research Methods, Volume 9, Number 3, July 2006, pp. 285-314. © 2006 Sage Publications. DOI: 10.1177/1094428105279936. http://orm.sagepub.com

Jerel E. Slaughter University of Arizona

Erin M. Richard Florida Institute of Technology

James H. Martin University of Missouri–Rolla

When studying applicants’ job attribute preferences, researchers have used either direct estimates (DE) of importance or regression-derived statistical weights from policy-capturing (PC) studies. Although each methodology has been criticized, no research has examined the efficacy of weights derived from either method for predicting choices among job offers. In this study, participants were assigned to either a DE or PC condition, and weights for 14 attribute preferences were derived. Three weeks later, the participants made choices among hypothetical job offers. As predicted, PC weights outperformed DE weights when a noncompensatory strategy was assumed, and DE weights outperformed PC weights when a compensatory strategy was assumed. Implications for researchers’ choice of methodology when studying attribute preferences are discussed. Keywords: policy capturing; recruitment; job choice; research methods

Authors' Note: Erin M. Richard was a doctoral student at Louisiana State University when this study was conducted. We are grateful to Lisa Ordóñez for her helpful comments on a previous draft of this article and to Kajal Mehta for her assistance with data collection and entry. Portions of this article were presented at the annual conferences of the Society for Industrial and Organizational Psychology, Toronto, Ontario, Canada; and the Society for Judgment and Decision Making, Kansas City, Missouri.

For the past half century, researchers have been studying the degree to which individuals value various job attributes, commonly referred to as job attribute preferences. Job attribute preferences may be defined as "the extent to which people desire specific qualities and outcomes from paid work" (Konrad, Ritchie, Lieb, & Corrigall, 2000, p. 593). Researchers have studied these preferences extensively, asking a variety of different research questions. For example, researchers have examined sex differences in preferences (e.g., Bridges, 1989; Hollenbeck, Ilgen, Ostroff, & Vancouver, 1989), changes in preferences over time (e.g., Jurgensen, 1978), the relation between personality and preferences for particular attributes (Bretz & Judge, 1994), and the relative influence of attributes and recruitment activities on organizational attractiveness (e.g., Powell, 1984), among other issues (for more extensive reviews, see Barber, 1998, chap. 4; and Breaugh, 1992, chap. 5).

Researchers interested in job attribute preferences have used one of two different methodologies. One is the direct estimate (DE) method. DE investigations have required participants to rate (e.g., Bartol & Manhardt, 1979; Manhardt, 1972; Wiersma, 1990) or rank-order (e.g., Jurgensen, 1978; Lacy, Bokemeier, & Shepard, 1983) the importance of job attributes, or to allot points to attributes based on perceived importance. The other methodology often used in the study of attribute preferences is policy capturing (PC) (Einhorn, 1971; Zedeck, 1977). In PC, stimuli are described on numerous attributes that vary in level or type, and participants provide judgments of suitability on a Likert scale. By regressing each participant's suitability judgments on the attribute values, a researcher can then determine the influence of each attribute on that participant's ratings.

Considerable criticism has been directed toward each of these methods. For example, DE methods have been criticized as being subject to social desirability biases (Rynes, 1991; Schwab, Rynes, & Aldag, 1987). PC studies have been criticized because the large numbers of scenarios required can lead to participant fatigue and carelessness (Cooksey, 1996). Participants in PC studies also make judgments under conditions of perfect information (i.e., all attribute values are available for all stimuli), which is fairly unrepresentative of the conditions that job seekers face (Rynes, 1991; Slaughter & Highhouse, 2003). Moreover, several authors have noted inconsistent findings across studies using both DE (Jurgensen, 1978; Lacy et al., 1983; Turban, Eyring, & Campion, 1993) and PC methodologies (Bretz & Judge, 1994; Zedeck, 1977).

Although the critiques of both methods are certainly valid, we believe that there is another, more important issue to address. One assumption that appears to be implicit in studies of job attribute preferences is that valuation estimates derived from study participants are representative of the way those attributes will influence participants making choices among jobs at a later point in time. For example, suppose that in a study on job attribute preferences, a particular individual (a) gives a higher rating to meaningfulness of work than he gives to pay; (b) places meaningfulness of work higher than pay in the rank order; or (c) in making judgments, utilizes meaningfulness of work to a greater extent than pay. It then stands to reason that, in a choice situation—at a point in time separate from the study of attribute preferences—this individual will place more weight on meaningfulness of work than on pay. Although this implicit assumption likely guides the work of researchers who study job attribute preferences, no previous research has empirically tested it. The present investigation was therefore designed to determine (a) whether DE or PC weights could predict choices among hypothetical job offers made at a point in time following participation in a DE or PC task, and (b) whether one method was superior to the other in predicting choice.
Briefly, participants were randomly assigned to either a PC or DE condition in one experimental session. Three weeks later, in a second experimental session, participants were asked to make hypothetical choices among sets of job offers. It is important to provide an initial test to determine which attribute valuation method provides estimates most closely related to attribute valuation in choice tasks—even if those choices are hypothetical decisions. Existing literature suggests that different valuation methods may produce different rankings of attribute importance (e.g., Jurgensen, 1978; Zedeck, 1977). Moreover, there can be substantial differences between the two methods in terms of the amount of time required for respondents to produce valuation estimates. DE investigations typically require only a few minutes, whereas responses in PC investigations can take up to several hours, depending upon the number of scenarios participants are required to rate (e.g., Roose & Doherty, 1976, 1978). On one hand, if DE studies produce valuation estimates that are unrelated to choice, even the few minutes required to produce such estimates would be wasted time. On the other hand, if DE investigations produce estimates that approximate the predictive power of estimates produced by PC investigations, then it would make little sense to invest the time and resources necessary to undertake a PC study.

Making predictions about which type of weights will be better predictors of future choice requires us first to consider how attributes are used to make decisions. Therefore, the remainder of the introduction unfolds as follows. First, we discuss decision-making process models that suggest how attributes are utilized when making decisions, including available information on the application of these models to job choice. Following that, we discuss some of the evidence for imperfect relationships between weights derived from the PC and DE methods and the use of those weights during choice, which leads to the hypotheses for the present investigation.

Major Decision-Making Models and Job Choice

There is no consensus on how job seekers make decisions about jobs (Highhouse & Hoffman, 2001). However, considerable early research on job choice considered the issue of how individuals choose among job opportunities (Wanous & Colella, 1989). Below, we present three theoretical accounts of decision-making strategies and explain how applicants might use these strategies during job choice.

Compensatory decision making: Expectancy theory applied to job choice. Much of the early research on the job choice process was stimulated by Vroom's (1966) application of expectancy theory. In its simplest form, expectancy theory suggests that applicants will attempt to maximize the expected utility of their job choices by using a compensatory strategy. In doing so, job seekers multiply a subjectively determined importance weight by the subjectively assessed degree to which the job offers that particular attribute. The attractiveness of each job is arrived at by summing each of these products for all known job attributes. The job with the highest overall expected utility is then chosen. A review by Wanous, Keon, and Latack (1983) suggested that expectancy theory models were a reasonable approximation of the way individuals formed overall perceptions of job attractiveness and made decisions among jobs.
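In formula form, the compensatory rule gives each job j an overall utility U_j = sum over attributes i of w_i × v_ij, where w_i is the subjective importance of attribute i and v_ij is the perceived level of attribute i in job j; the offer with the largest U_j is accepted. The sketch below is a minimal illustration of that rule, not the authors' materials or analysis code; the attribute names, weights, and job profiles are hypothetical.

```python
# Minimal sketch of a compensatory (expectancy-style) choice rule.
# All weights and job profiles below are hypothetical illustrations.

def compensatory_choice(weights, jobs):
    """Choose the job with the largest sum of importance weight x attribute value."""
    def utility(job):
        return sum(weights[attr] * value for attr, value in job.items() if attr in weights)
    return max(jobs, key=lambda name: utility(jobs[name]))

# Hypothetical importance weights (e.g., points allotted out of 100).
weights = {"pay": 30, "interesting work": 25, "location": 20,
           "promotion opportunities": 15, "work hours": 10}

# Hypothetical offers: 1 = favorable level of the attribute, 0 = unfavorable.
jobs = {
    "Job A": {"pay": 1, "interesting work": 0, "location": 1,
              "promotion opportunities": 0, "work hours": 1},
    "Job B": {"pay": 0, "interesting work": 1, "location": 1,
              "promotion opportunities": 1, "work hours": 1},
}

print(compensatory_choice(weights, jobs))  # -> "Job B" (utility 70 vs. 60)
```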

Noncompensatory decision making: Lexicographic and elimination-by-aspects (EBA) strategies. Despite the support for expectancy theory found by Wanous et al. (1983), other researchers have questioned the assertion that decision makers use compensatory strategies in job choice situations (e.g., Baker, Ravichandran, & Randall, 1989; Einhorn, 1971; Naylor, Pritchard, & Ilgen, 1980). In fact, Osborn (1990) found that many jobs predicted to be chosen by expectancy theory formulas were eliminated on the basis of not meeting minimum requirements on the most important attributes. Thus, when testing the predictive efficacy of attribute weights, it is important to consider that decision makers could instead be using a noncompensatory strategy.

Two types of noncompensatory strategies are lexicographic and EBA. The lexicographic procedure is a simple strategy in which the decision maker determines the most important attribute and then examines the value of all alternatives on that attribute (Payne, Bettman, & Johnson, 1993). The alternative with the highest value on the most important dimension is then selected, unless there are two or more alternatives with the same or very similar values on this dimension. If this is the case, the decision maker eliminates the remainder of the alternatives, moves on to the next most important dimension, and selects the alternative with the highest value on this dimension. The process is repeated until a single alternative remains. The EBA heuristic (Tversky, 1972) also begins with the most important attribute (or "aspect," using Tversky's terminology). The decision maker determines a cutoff value for that attribute, and alternatives with values below the cutoff are eliminated. The process is repeated with the next most important attribute until a single alternative remains.

Note that the purpose of this investigation was not to determine whether decision makers use compensatory or noncompensatory decision making when making job choices using available information on job attributes. However, because it is possible that decision makers make choices using either compensatory or noncompensatory strategies, we tested the relation between attribute weights and choice under each assumed strategy. Moreover, as we discuss below, the predictive efficacy of each type of weight should depend on which of the two decision strategies is assumed.
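To make the two noncompensatory strategies concrete, the sketch below implements both rules over a small set of hypothetical offers. The attribute importance order, cutoff values, and job profiles are illustrative assumptions, not materials from the study.

```python
# Illustrative sketches of the lexicographic and elimination-by-aspects (EBA)
# strategies described above. Attribute order, cutoffs, and jobs are hypothetical.

def lexicographic_choice(importance_order, jobs, tie_tol=0.0):
    """Keep only the alternatives (near-)tied for the best value on each
    attribute, working from most to least important, until one remains."""
    remaining = dict(jobs)
    for attr in importance_order:
        best = max(job.get(attr, 0) for job in remaining.values())
        remaining = {name: job for name, job in remaining.items()
                     if best - job.get(attr, 0) <= tie_tol}
        if len(remaining) == 1:
            break
    return next(iter(remaining))  # first survivor if still tied at the end

def eba_choice(importance_order, cutoffs, jobs):
    """Eliminate alternatives that fall below the cutoff on each attribute,
    most important attribute first, until one alternative remains."""
    remaining = dict(jobs)
    for attr in importance_order:
        survivors = {name: job for name, job in remaining.items()
                     if job.get(attr, 0) >= cutoffs.get(attr, 0)}
        if survivors:               # never eliminate every remaining option
            remaining = survivors
        if len(remaining) == 1:
            break
    return next(iter(remaining))

importance_order = ["interesting work", "pay", "location"]   # hypothetical ranking
cutoffs = {"interesting work": 1, "pay": 1, "location": 1}    # hypothetical cutoffs

jobs = {"Job A": {"interesting work": 1, "pay": 0, "location": 1},
        "Job B": {"interesting work": 1, "pay": 1, "location": 0},
        "Job C": {"interesting work": 0, "pay": 1, "location": 1}}

print(lexicographic_choice(importance_order, jobs))  # -> "Job B"
print(eba_choice(importance_order, cutoffs, jobs))   # -> "Job B"
```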

Hypotheses

This investigation was aimed at comparing the predictive efficacy of weights produced by DE versus PC methodologies when compensatory versus noncompensatory decision strategies are applied to simulated job choice. Both methodologies have limitations that make it somewhat difficult to predict which of the two produces weights consistent with the weighting schemes used during choice. These predictions become clearer, however, when one considers the decision-process theories presented above.

When individuals make noncompensatory decisions, they are likely to decide on the basis of just one or two highly important attributes (Payne et al., 1993). One criticism of DE methods is that most people lack the self-insight to indicate accurately which attributes are most important to them (e.g., Nisbett & Wilson, 1977). Thus, when individuals are asked in a DE questionnaire to generate importance ratings reflecting how much weight they would put on an attribute in a choice situation, the weighting schemes they would actually use are not readily accessible to them. As a result, they tend to spread weights relatively uniformly across dimensions and exhibit too little variance in importance ratings (Shepard, 1964). Consequently, the DE method is generally unable to detect the one or two attributes likely to cause respondents to accept or reject an option. PC, on the other hand, requires less self-insight because individuals are simply asked to rate the suitability or probability of accepting individual jobs.

Another criticism of DE weights is that they may be tainted by social desirability (e.g., Brookhouse, Guion, & Doherty, 1986). Behind this critique is the idea that individuals weight culturally acceptable preferences higher than those that are less acceptable. Some evidence for this criticism comes from Jurgensen's (1978) finding that, on average, participants ranked type of work as the most important job attribute for themselves but ranked pay as the most important job attribute for other people who "are in your type of work . . . are like you in age, number of dependents, and education" (p. 268). This suggests that research participants may give the highest ratings to the most socially desirable attributes, even when those attributes would not actually determine their noncompensatory choices. Social desirability is less likely to be a problem with PC weights because ratings of suitability are made for entire job descriptions rather than individual attributes. As a result, the attributes most important to an individual—the attributes most likely to determine a noncompensatory decision—are more likely to be identified in a PC study.

Before stating this prediction formally, however, it is important to recognize its seemingly counterintuitive nature. Multiple regression is a compensatory modeling technique, and as a result, the regression-derived weights from PC studies are designed to be maximally predictive of compensatory choices. Intuitively, then, one might predict that PC weights should be better predictors of compensatory choices than DE weights. These predictions, however, rest on the assumption that individuals use compensatory processes to arrive at suitability ratings for each job in a PC study, and research suggests that this assumption may not be warranted. As Ogilvie and Schmitt (1979) suggested, as the number of cues in a PC study increases, so too does the likelihood of noncompensatory judgment processes. Indeed, researchers have found that participants in PC studies have used noncompensatory processes in performance evaluation (Brannick & Brannick, 1989) and in decisions to interview (Rynes, Schwab, & Heneman, 1983). Based on these ideas and the fact that participants in the present study would respond to a large number of cues (14), it seems reasonable to expect that they would employ noncompensatory processes and use information on only a few attributes to judge the suitability of each job in the PC portion of the study. Accordingly, we predicted that PC weights would outperform DE weights when a noncompensatory model is assumed.

Hypothesis 1: PC weights are better predictors of choice than DE weights when it is assumed that decision making is based on a noncompensatory strategy.

The advantage of PC weights should be less clear when compensatory strategies are assumed. PC studies have been criticized for producing fatigue and carelessness because participants are required to rate so many scenarios (Berk, 1995). Thus, as explained above, it is likely that many participants reduce their cognitive load by rating scenarios on the basis of only the few attributes they value most. Whereas PC weights may be quite useful for identifying the one or two most important attributes, they may produce artificial distinctions among attributes of lesser importance (e.g., the third, fourth, and fifth most important attributes). Because compensatory decision-making formulas assume that all available information is taken into account, these artificial distinctions among less important attributes are likely to adversely affect the ability of PC weights to predict attribute use during compensatory choice. Because DE studies can be completed rather quickly, the weights produced by DE methods for moderately important attributes should be less susceptible to this problem. Therefore, we hypothesized,

Hypothesis 2: DE weights are better predictors of choice than PC weights when it is assumed that decision making is based on a compensatory strategy.

Hypothesis 3: Type of weight and decision strategy assumption will interact to predict the validity of the weights such that (a) when a noncompensatory strategy is assumed, prediction of choice from attribute valuation will be stronger with PC weights; and (b) when a compensatory strategy is assumed, prediction of choice from attribute valuation will be stronger with DE weights.
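In operational terms, these hypotheses concern hit rates: for each participant, a predicted choice is generated from his or her weights under an assumed strategy (for example, using rules like those sketched earlier) and compared with the choice actually made. The snippet below is a minimal, hypothetical illustration of that scoring step; the choice labels are placeholders, not study data.

```python
# Sketch of the hit-rate logic implied by Hypotheses 1-3: compare predicted
# choices (from a given weight type under an assumed strategy) with actual
# choices across a participant's choice sets. All values below are hypothetical.
from statistics import mean

def hit_rate(predicted, actual):
    """Proportion of choice sets in which the predicted offer was the one chosen."""
    return mean(int(p == a) for p, a in zip(predicted, actual))

actual_choices = ["A", "C", "B", "A", "B"]             # one participant, five sets

pred_pc_noncompensatory = ["A", "C", "B", "A", "A"]    # PC weights, noncompensatory rule
pred_de_compensatory    = ["A", "B", "A", "A", "B"]    # DE weights, compensatory rule

print(hit_rate(pred_pc_noncompensatory, actual_choices))  # 0.8
print(hit_rate(pred_de_compensatory, actual_choices))     # 0.6
```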

Method

Selection of Attributes for Inclusion in the Current Study

Breaugh (1992) noted that one of the major problems with studies on attribute valuation is the number of attributes assessed. Aiman-Smith, Scullen, and Barr (2002) suggested that PC scenarios that fail to include important factors are likely to result in biased estimates of cue importance. Therefore, our goal in the current study was to be as inclusive as possible while avoiding redundant or very similar attributes. To ensure that we included all job attributes that might influence participants' choices among jobs, we used a two-step process. First, we inspected the recruitment and job choice literature for potential job attributes. The most complete categorization we could find was a list of 12 attributes from Konrad et al. (2000): income, challenging work, opportunity for leadership, work hours, power and authority, easy commute, opportunities for promotion, geographic location, freedom and autonomy, coworkers, prestige and recognition, and supervisor. Because it is impossible to determine a priori all of the potentially important factors that will influence choice, it is important to obtain information from multiple sources and use attributes for which there is consistent support (Karren & Barringer, 2002). Thus, we also conducted a small pilot study to determine whether there were additional attributes that could be influential in job choice. Undergraduate psychology students (N = 244, 68% female, 84% White) at a large university in the southern United States were asked to indicate up to 7 job attributes that they felt would be important when seeking a full-time job. Phrases that were mentioned often and did not overlap with the previous list could be subsumed under the categories "interesting work" and "dress code." Therefore, we added these two attributes, for a total of 14.

Participants

Participants were undergraduate psychology students (N = 398, 70% female) at a large university in the southern United States, who ranged in age from 18 to 42 (M = 19.2, SD = 1.8). On average, participants had held one part-time job and two full-time jobs. Participants reported a variety of different majors, with heaviest concentrations in biology (14.5%), psychology (8.3%), business (6.0%), and kinesiology (7.1%). Approximately 8% were working full time; among those who were not, 34% reported that they would be seeking full-time work within the next 6 months, and 66% reported that they would be seeking full-time work in more than 6 months. Individuals who completed both phases of the study were given course credit.

Materials for Phase 1

In the DE condition, participants responded to a questionnaire that included three types of direct estimates of the perceived importance of the 14 attributes. First, participants were asked to rate each attribute, based on perceived importance when deciding among job offers, on a 7-point scale (1 = extremely unimportant; 7 = extremely important). Next, they were asked to rank the attributes from 1 to 14 in order of perceived importance in a job choice situation. Finally, participants were asked to distribute 100 points among the 14 attributes, based on the amount of weight they thought they would give to each attribute in making choices among job offers.

Stimuli for the PC condition consisted of a packet of 70 descriptions of hypothetical jobs composed of either high or low levels of the 14 job attributes. Cooksey (1996) recommended that the number of scenarios in a PC study be at least 5 times the number of attributes (14 attributes × 5 = 70 scenarios). In the current study, there were actually 64 unique scenarios, with 6 scenarios repeated for the purpose of reliability checks. Although fully crossing the 14 attributes would not have been feasible (2^14 = 16,384 descriptions), we were able to manipulate the levels of the attributes so that they were uncorrelated, using the Orthogonal Design function in SPSS 10.0. This produced a fractional factorial design with uncorrelated cues. The procedure confounds all interactions (we were not interested in interactions between job attributes). Although using uncorrelated cues violates the tenet of representativeness, zero correlations facilitate interpretation of the cue weights (Brookhouse et al., 1986). Inspection of the scenarios revealed that they were reasonably representative and realistic. Of the 64 unique scenarios, the mean number of favorable attributes per scenario was 7.02 (SD = 1.90). There were 19 scenarios with 8 favorable attributes; 15 with 7; 11 with 9; 7 with 6; 5 with 5; 3 with 4; and 1 each with 0, 1, 3, and 10 favorable attributes. Full details of the design are available from the first author upon request.

After generating the 64 different scenarios, we randomly determined their presentation order; 6 of the scenarios were then randomly selected to serve as repeated scenarios and were placed at the end (becoming Scenarios 65 to 70). We also varied the order of scenario presentation across participants: half of the PC participants received the scenarios in the 1-to-70 order, and half received them in the 70-to-1 order. Participants were asked to rate each job description in terms of likelihood of job acceptance on a 7-point scale (1 = extremely unlikely; 7 = extremely likely). An example scenario is presented in Appendix A.
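For readers who want to see how such a design can be constructed without SPSS, the sketch below builds a generic 64-run, two-level fractional factorial for 14 attributes in which all main-effect columns are mutually uncorrelated. The particular generators are an illustrative assumption; they are not the SPSS design actually used in the study.

```python
# Sketch: a 2^(14-8) fractional factorial (64 runs, 14 two-level attributes)
# with mutually uncorrelated (orthogonal) main effects. The generator choices
# below are illustrative; they are not the design produced by SPSS for the study.
import itertools
import numpy as np

# Six base factors define the 64 runs (2**6 = 64), coded -1 / +1.
base = np.array(list(itertools.product([-1, 1], repeat=6)))  # shape (64, 6)

# Eight additional factors are aliased with interactions of the base factors,
# which is what "confounds all interactions" means in practice.
generators = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 1, 2), (0, 1, 3)]
extra = np.column_stack([base[:, list(g)].prod(axis=1) for g in generators])

design = np.hstack([base, extra])     # shape (64, 14), coded -1 / +1
design01 = (design + 1) // 2          # recode to 0 = low, 1 = high for scenario text

# Orthogonality check: off-diagonal correlations among the 14 columns are zero.
corr = np.corrcoef(design, rowvar=False)
print(np.abs(corr - np.eye(14)).max())  # -> 0.0 (up to floating-point error)
```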

Materials for Phase 2

For both the DE and PC conditions, materials for the second phase of the study consisted of a packet containing five choice sets of either two or three hypothetical job descriptions each. We felt it was important to use multiple choice sets for several reasons. First, any single choice set may have had a problem that was overlooked in the design phase (e.g., one job may be overwhelmingly preferred, providing too little variance on choice and low power to test hypotheses). Second, with only a single decision to make, a participant could have been careless (i.e., might not have read the job descriptions carefully) and still have chosen the option he or she was predicted to choose; this is unlikely to happen repeatedly over a series of decisions. Third, we wanted to avoid mono-operation bias, a common threat to validity in experimental research (Cook & Campbell, 1979).

We elected to use set sizes of two and three offers to be realistic and representative; we suspect that most job seekers in the enviable position of entertaining multiple job offers typically have two or three options. The job options were presented as lists of favorable (e.g., "flexible work hours") and unfavorable (e.g., "uninteresting work") job attributes (the same attributes for which valuation estimates were derived in Phase 1 of the study). The job descriptions within each choice set were constructed so that all jobs had equal numbers of favorable and unfavorable attributes. To keep the choices realistic and difficult, the descriptions often contained substantial missing information (i.e., information was available on a particular attribute for one job in the choice set but not for the other job[s]). For each choice set, participants were asked to choose the one job that they preferred. Order of presentation was counterbalanced, both for order of choice sets and for order of jobs within each choice set. An example of one of the job offer choice sets is presented in Appendix B.

Procedure

A coin flip was used to randomly assign one large class of undergraduate psychology students (n = 203) to the PC condition and to assign a second, similar class (n = 195) to the DE condition. Although it was not possible to randomly assign individuals to experimental conditions, we felt confident about the relative equivalence of the students in the two classes for three reasons. First, the classes were both morning classes; one met at 9:10 a.m. and the other at 10:40 a.m., the two earliest available sections for the course. Second, the instructors for the two sections were both given almost universally high ratings for this course in the online evaluation system. Third, the groups of participants that completed both phases of the experiment did not differ in terms of percentage female, percentage currently working full time, age, or number of full-time or part-time jobs previously held (all p > .05).

Because it was important that the time elapsed between Phase 1 and Phase 2 be exactly the same across the two conditions, it was necessary to use slightly different procedures for administering the PC and DE materials. Participants assigned to the PC condition were given the prepared packets of descriptions to take home. They were asked to read and rate 10 descriptions at a time (to prevent fatigue and boredom) and to return the completed packets one week later. Although this caused us to sacrifice some experimental control, it is not uncommon to ask respondents to judge profiles on their own time when the profiles cannot feasibly be completed in a single session (Roose & Doherty, 1976). Participants were asked to complete the packets by themselves in a quiet place and were warned that random responding could be checked and would result in a loss of credit. On the same day that the PC group returned their packets, participants in the DE condition completed their packets during the regular class period. Each group was given verbal and written instructions that stressed the importance of carefully reading the information and making thoughtful decisions.

Exactly 3 weeks after completion of the Phase 1 materials, each of the groups participated in Phase 2, in which they made five different choices between job offers. This exercise was completed during the normal class meeting time.

Results

Participant Attrition

A number of respondents participated in Phase 1 but not Phase 2 (i.e., they were not in class on the day that the Phase 2 materials were administered). Analyses comparing respondents who completed both phases with those who completed only the first phase suggested few differences. In the DE condition, the two groups differed in terms of age, F(1, 199) = 5.07, p < .05: those who did not complete Phase 2 (n = 37, M = 20.18, SD = 4.22) were about one year older than those who completed both phases (n = 166, M = 19.27, SD = 1.49). The two groups did not differ in terms of percentage female, number of full- or part-time jobs, whether they were currently employed, when they planned to begin looking for full-time work, or any of the attribute-valuation variables. In the PC condition, those who did not complete Phase 2 (n = 45) did not differ from those who completed both phases (n = 150) on any of the demographic or attribute-valuation variables.

Attribute Valuation

Table 1 presents the means, standard deviations, and intercorrelations for the attribute valuation estimates provided in the DE condition. Inspection of Table 1 reveals that participants were moderately consistent in assigning weights using the different methods of direct attribute valuation. Correlations between weights for the same attributes derived from the importance ratings, points allotted, and ranking methods were all statistically significant. The correlations involving ratings were lower than would be expected, with many of them falling in the .30 to .40 range (M = .37). This appears to be because many individuals assigned the same importance ratings to multiple attributes, restricting within-person variance. As Table 1 shows, the correlations between weights for the same attributes using points allotted and rankings are somewhat higher, with most in the .40 to .50 range (M = .46). In terms of the relative weights placed on the various attributes across the three methods, participants were very consistent: the Spearman rank-order correlation between ratings and points allotted was ρ = .92, p < .01; between ratings and rankings, ρ = .95, p < .01; and between rankings and points allotted, ρ = .95, p < .01.

Next, we computed 195 individual regression equations—one for each participant in the PC condition. These analyses were performed to estimate the weight assigned to each attribute by each participant in this condition. In each equation, the judgments of suitability provided by a participant were regressed on the 14 attributes constituting the job descriptions (coded as 0 = low, 1 = high). All 14 attributes were entered into the regression equation simultaneously; the standardized regression weight for a given attribute constituted the valuation estimate for that attribute for a given participant. Means and standard deviations of the weights derived from the 195 equations are presented in Table 2. Although the two groups of participants were fairly similar in terms of rank-order of attribute importance (ρ PC, ratings = .71, p < .01; ρ PC, points = .63, p < .05; ρ PC, ranking = .61, p
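A compact way to see what these per-participant regressions involve is sketched below: each participant's suitability ratings are regressed on the 14 binary attribute codes, and the coefficients are converted to standardized (beta) weights. This is a generic illustration with simulated ratings, not the study's data or analysis code.

```python
# Sketch of per-participant policy-capturing regressions: each person's
# suitability ratings are regressed on the 14 attribute codes (0 = low, 1 = high),
# and the standardized coefficients serve as that person's attribute weights.
# Ratings and attribute codes here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, n_attributes = 64, 14
X = rng.integers(0, 2, size=(n_scenarios, n_attributes))     # scenario design codes

def standardized_weights(X, y):
    """OLS coefficients rescaled to standardized (beta) weights."""
    X1 = np.column_stack([np.ones(len(X)), X])                # add intercept column
    b = np.linalg.lstsq(X1, y, rcond=None)[0][1:]             # drop the intercept
    return b * X.std(axis=0, ddof=1) / y.std(ddof=1)

# Simulate a few participants whose underlying policies differ, then recover weights.
pc_weights = []
for _ in range(5):                                            # e.g., 5 of the 195
    true_w = rng.normal(0, 1, n_attributes)
    y = X @ true_w + rng.normal(0, 1, n_scenarios)            # stands in for 1-7 ratings
    pc_weights.append(standardized_weights(X, y))

pc_weights = np.array(pc_weights)                             # (participants, 14)
print(pc_weights.shape)
```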
