PERSONNEL PSYCHOLOGY
1984, 37

JOB CLASSIFICATION APPROACHES AND THE IMPLEMENTATION OF VALIDITY GENERALIZATION RESULTS

EDWIN T. CORNELIUS, III
College of Business Administration, University of South Carolina

FRANK L. SCHMIDT
U.S. Office of Personnel Management, and George Washington University

THEODORE J. CARRON
Ethyl Corporation

This paper compares two job classification methods for showing the appropriateness of cognitive tests in settings that were not involved in supplying data for a validity generalization analysis. One approach was an elaborate quantitative procedure that involved a lengthy job inventory and a multivariate item analysis. This approach was shown to be highly successful when applied to the responses from 1179 job inventories collected in 54 petroleum-petrochemical plants from 30 different companies. The other procedure involved simple job classification judgments by supervisors and incumbents. This latter approach was shown to be as effective, but was much less time consuming and costly. Professional and legal implications of these findings are discussed.

The research reported here was funded by the American Petroleum Institute. The authors wish to thank the members of the API Subcommittee on Personnel Selection for their efforts and support of this project. The authors also wish to acknowledge the help of Richard Kennah in developing the preliminary inventory used in this study, and Frank Lane and Robin Shealy for helping to prepare the data for analysis. Requests for reprints should be addressed to Edwin T. Cornelius III, College of Business Administration, University of South Carolina, Columbia, South Carolina, 29208. Copyright © 1984 Personnel Psychology, Inc.

Meta analysis is a set of methods for quantitatively integrating findings across studies and determining whether variation in results between studies is real or attributable to various statistical and measurement artifacts. To date, meta analysis has been applied to research literature in a wide variety of areas and in each case has contributed to clarification of previously confusing findings (Glass, McGaw, and Smith, 1981; Hunter, Schmidt, and Jackson, 1982). In the study of employment procedure validity, meta analysis methods have been referred to as validity generalization procedures (Schmidt and Hunter, 1977). The results from several studies using validity generalization procedures have been published, and these results have cast doubt on the situational specificity hypothesis of test validity. The situational specificity hypothesis held that test validities were specific to the situation in which the validation study had been carried out and were not generalizable to other situations, even for the same jobs within the same industry. This hypothesis provided an explanation for the differences in validity coefficients that had been observed from one study to the next (e.g., Ghiselli, 1966). Meta analysis results have now established that these observed differences in validity coefficients are caused primarily by sampling error and other artifacts, and not by real differences in the validity of aptitude tests across different settings (e.g., Callender and Osburn, 1981; Pearlman, Schmidt, and Hunter, 1980; Schmidt, Gast-Rosenberg, and Hunter, 1980; Schmidt, Hunter, and Caplan, 1981; Schmidt, Hunter, and Pearlman, 1981; Dunnette, Houston, Hough, Toquam, Lamnstein, King, Boshardt, and Kays, Note 1; Pearlman, Note 2; Peterson, Note 3). These findings have important implications for personnel practitioners, for they mean that for some job-test combinations, the mean of the prior validity distribution is high enough to make a validity study unnecessary before implementing a testing program. All that is needed in these cases is to show that the jobs under study are the same as those included in the original validity generalization analysis. The major issue is thus one of job classification: the personnel specialist must decide if the job under study can be classified the same as the jobs upon which the Bayesian prior is based.
If so, the test is valid and can be used in this new setting. It is not clear what type of job classification procedure is best suited to making this type of decision. From a scientific perspective, there is emerging evidence that cognitive tests will show substantial validity across most occupational boundaries (Schmidt, Hunter, and Pearlman, 1981; Hunter, Note 4; Pearlman, Note 2); therefore, an elaborate procedure is probably not warranted. From a legal perspective, however, if a practitioner were to implement a testing program without conducting a validity study, EEO enforcement agencies might demand elaborate job analysis evidence to support the conclusion of job comparability. What is needed, then, is a defensible job analysis procedure that practitioners can use to make the job classification decision necessary to implement validity generalization results.

This study has two purposes. The first is to illustrate a quantitative job classification procedure that can be used to make use of validity generalization findings in new settings. The methodological approach described below involves administering an elaborate job inventory and then developing statistical classification decision rules based on an item analysis of the responses from incumbents in jobs for which validation studies are available. These rules may then be used to make job classification decisions in settings for which no validation study is available. We feel this procedure is psychometrically sound and may be helpful as a model for others who are faced with this type of job classification problem. A second purpose is to compare this fairly elaborate procedure with a much simpler holistic judgment approach for accomplishing the same purpose. Our expectation is that the simpler judgment approach will perform as well as the elaborate statistical procedure.

Method

Setting

Several years ago, test validation studies for various job-test combinations in the petroleum-petrochemical industry were accumulated and analyzed using meta analysis methods appropriate to the study of validity generalization. The results indicated that certain cognitive tests were valid across companies, locations, and even across a range of job titles within three very broad occupational groupings: "operations," "maintenance," and "laboratory" (for details, see Schmidt, Hunter, and Caplan, 1981; Callender and Osburn, 1981).¹ To make use of these results, personnel practitioners in this industry needed a sound job analysis procedure that could determine if a given job under study was similar to the jobs in one of the three occupational categories for which validated tests existed.

Job Inventory

Job inventory items were developed in two phases. In the first phase, a list of 130 items was generated after observations and interviews at two plants, and after studying job analysis results from studies conducted in the petroleum-petrochemical industry. In the second phase, pilot data were collected from a refinery and a chemical plant using the preliminary inventory. On the basis of the pilot data, a subset of items was identified for use. The final inventory contained three basic parts. Part I consisted of 39 fairly heterogeneous "Work Activities," which were task-oriented and worker-oriented descriptors of varying levels of specificity. Part II contained 26 "Ability Elements" that were taken primarily from descriptions of General Aptitude Test Battery (GATB) abilities found in the Handbook for Analyzing Jobs (U.S. Department of Labor, 1972). Part III of the inventory called for a job classification decision by the person filling out the questionnaire. The names of four occupational groupings were presented, as well as a listing of representative job titles in each category. The task of the rater was to indicate which category best described the job s/he was analyzing.

¹Actually, the tests were shown to be valid across all occupational areas. Despite this fact, we decided to preserve the occupational classifications and develop a job analysis method that could classify jobs into these families. There were several reasons for doing so. First, some variance in the distribution of validity coefficients remained after accounting for statistical artifacts; therefore, it was possible to increase slightly the level of validity by preserving these distinctions. Second, future test validation studies in these occupational areas might uncover tests that are differentially valid across the groupings, and it would then be helpful to have a job classification procedure available. Finally, for performance appraisal and other purposes, it would be helpful to have such a job classification capability.

Sample

Fifty-four plants from 30 different companies provided usable inventories. There were two types of plants: those for which test validation reports were available and used in the original validity generalization study (the PARTICIPATE sample), and those for which no validation studies were available (the NON-PARTICIPATE sample). The number of inventories collected from PARTICIPATE plants was 508; the number from NON-PARTICIPATE plants was 671. These plants were located in 23 different states, including Alaska and Hawaii. In terms of jobs sampled, 41 percent of the inventories described jobs in operations, 34 percent described jobs in maintenance, 15 percent described laboratory jobs, and 9 percent described "miscellaneous" jobs. In terms of raters, 45.4 percent of the inventories were filled out by job incumbents, 46.3 percent by supervisors, 6.1 percent by ex-incumbents, and 2.2 percent by ex-supervisors. Most raters were highly experienced in the jobs they were rating. The median experience was 78.5 months (about six and a half years). However, this was a highly skewed distribution: 40 percent of the sample reported 99 or more months (over 8 years) of job experience.

Procedure

Participating firms were solicited via direct mailing to human resource specialists in member companies of the American Petroleum Institute. There were no formal sampling rules; the purpose was to assemble as many companies as possible. A special effort was made to obtain data from plants that had supplied validity studies for the validity generalization data base. The responsibility for data collection in each plant was assigned to an on-site coordinator. Each coordinator was given a target number of jobs to analyze and was urged to find at least two different raters for each job title analyzed. The coordinator also collected the expert judgments that were used as the criterion for assignment of job titles into the operations, maintenance, laboratory, or "other" occupational categories. The procedure was as follows: coordinators first identified a panel of five people who were experts in all the jobs in the plant. These five experts independently filled out a questionnaire that called for them to categorize the job titles in the plant according to the four major occupational groupings outlined above. If there were any disagreements among the experts, the coordinator convened a meeting of the panel to discuss the differences and reach a consensus on the classification judgments.

Results

Reliability of the Job Inventory Responses

In many plants, the same job title was evaluated by two or more raters. For each job title with two raters, the Pearson r correlation was computed between the responses of the two raters. Whenever a job title in a plant was analyzed by more than two raters, all possible pairwise correlations were computed and then averaged. The resulting coefficients were interpreted as estimates of the reliability of the responses. Across all plants and occupational groupings there were 409 such agreement coefficients. Table 1 gives the mean and median of these agreement coefficients, broken down by occupational group and type of job element (activities or abilities).
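The agreement computation just described can be sketched as follows. The rating profiles below are hypothetical stand-ins for real inventory responses; only the procedure (all pairwise Pearson correlations among raters of the same job title, averaged) comes from the text. The pooled two-rater estimates in Table 1, Part B appear consistent with the Spearman-Brown formula, which is used here to step up an observed mean coefficient.

```python
# Sketch of the inter-rater agreement analysis described above.
from itertools import combinations
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two rating profiles."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def average_pairwise_r(profiles):
    """Mean Pearson r over all rater pairs for one job title."""
    return mean(pearson_r(p, q) for p, q in combinations(profiles, 2))

def spearman_brown(r, k=2):
    """Estimated reliability of the pooled judgments of k raters."""
    return k * r / (1 + (k - 1) * r)

# Three hypothetical raters describing the same job on five inventory items
profiles = [
    [4, 1, 3, 5, 2],
    [5, 2, 3, 4, 2],
    [4, 2, 2, 5, 1],
]
print(round(average_pairwise_r(profiles), 2))

# Stepping up the observed mean of .59 (operations, activities)
# reproduces the .74 shown in Table 1, Part B:
print(round(spearman_brown(0.59), 2))  # -> 0.74
```

The same stepping-up appears to reproduce every Part B entry of Table 1 from the corresponding Part A mean, which suggests this is how the estimated two-rater reliabilities were obtained.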

Statistical Classification Results

The primary goal of this research was to develop a quantifiable procedure for accurately assigning jobs to one of the three occupational groupings: operations, maintenance, and laboratory. The results of several analyses are described below, reported separately for activity elements and ability elements.

Results using activity elements. Table 2 presents the results of several Multiple Discriminant Analyses (MDA). In the first analysis, an MDA was performed on the job inventory responses from the PARTICIPATE

TABLE 1
Summary of Reliability Results Across All Plants

A. Observed Results: Average Reliability Coefficients

                              Activities      Abilities        Total
Group         No. of Jobs    Mean    Mdn     Mean    Mdn    Mean    Mdn
Operations        161         .59    .61      .49    .50     .58    .58
Maintenance       149         .57    .59      .46    .48     .56    .56
Laboratory         56         .54    .56      .46    .51     .53    .54
Misc. Jobs         43         .68    .69      .54    .53     .66    .69

B. Estimated Reliability Using Pooled Judgments from Two Raters

Group         Activities    Abilities    Total
Operations       .74           .66        .73
Maintenance      .73           .63        .72
Laboratory       .70           .63        .69
Misc. Jobs       .81           .70        .80

sample. For this analysis, the 39 activity elements were the profile variables, and the three occupational groups (operations, maintenance, laboratory) were the levels of the classification variable. All univariate F-ratios except one were statistically significant at the .05 level or better. The multivariate procedure extracted two highly significant roots, indicating that the three occupational groups could be separated statistically using two orthogonal discriminant functions. To determine the practical consequences of these results, the discriminant weights were applied to the inventory responses of the NON-PARTICIPATE sample, and the scores were entered into a maximum likelihood classification analysis. Use of the discriminant scores resulted in 96 percent correct classification, thus emphasizing the practical as well as statistical significance of the results.

Results using ability elements. The analyses were repeated using the 26 ability elements as dependent variables. It was believed a priori that the occupational groupings would show more overlap, and thus less separation, on the ability elements, since plant jobs were probably more similar in the underlying abilities required than in the work activities themselves. The results of the MDA are also reported in Table 2. All but one of the univariate F-ratios were statistically significant. The multivariate routine extracted two significant latent roots. A maximum likelihood classification analysis was then performed on the NON-PARTICIPATE sample using the discriminant weights from the PARTICIPATE sample. The total number of inventories correctly classified was 80 percent, substantially less than the 96 percent accuracy for activity elements. This means that there are fewer differences across job categories on ability

TABLE 2
Results of Various Multiple Discriminant Analyses

                             Wilk's     Chi-           Signifi-   Canonical
Function     Eigenvalue      Lambda     Square    df   cance      Correlation

A. Analysis Using Three Occupational Groups

39 Activity Elements as Dependent Measures
    1           4.44          .051       1174     78    .0001        .90
    2           2.62          .276        506     38    .0001        .85

26 Ability Elements as Dependent Measures
    1           1.59          .27         517     52    .0001        .78
    2            .41          .71         137     25    .0001        .54

B. Analysis Using Four Occupational Groups

39 Activity Elements as Dependent Measures
    1           3.97          .039       1422    117    .0001        .89
    2           2.48          .195        718     76    .0001        .84
    3            .47          .679        170     37    .0001        .57

26 Ability Elements as Dependent Measures
    1           1.46          .233        650     78    .0001        .77
    2            .41          .572        249     50    .0001        .54
    3            .24          .806         96     24    .0001        .44

elements than on activity elements.

Results predicting membership in four occupational groupings. Although the job analysis inventory was designed to separate jobs into only three major occupational groupings, job inventories and criterion classification judgments were nevertheless collected on a fourth sample of "miscellaneous" jobs. These jobs were not classifiable into the operations, maintenance, or laboratory categories. Examples include truck driver, security guard, fireman, and general laborer. An MDA was first carried out on the inventories from PARTICIPATE plants using the 39 activity elements as profile variables. There are three possible orthogonal discriminant functions in a four-group problem of this sort. All three roots were significant beyond the .0001 level of probability. The percentage of inventories correctly classified in the cross-validation sample was 90 percent. The corresponding results using the 26 ability elements were statistically significant, but the percent correctly classified in the cross-validation sample was only 72 percent.

Conclusions regarding statistical classification. The results of the statistical procedure were highly successful. The various activity elements on the job inventory can be differentially weighted and applied to a cross-validation sample of job analysis inventories with 96 percent accuracy of classification. Further, even though the elements on the inventory were not designed to separate jobs into four occupational groupings, the derived weighting procedure can do so at about 90 percent accuracy. In general, the activity elements on the inventory do a better job of classification than the ability elements.
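The cross-sample logic of this analysis — derive classification rules on the PARTICIPATE sample, then score the NON-PARTICIPATE sample with those frozen rules — can be sketched with synthetic data. Everything in the sketch except the logic is a hypothetical stand-in: the mean-shift data generator merely imitates inventory responses, and classification is done by assigning each profile to the nearest group in Mahalanobis distance using a pooled within-group covariance, i.e., maximum likelihood classification under equal priors and equal covariances, in the spirit of the MDA reported above.

```python
# Sketch of cross-sample discriminant/maximum-likelihood classification.
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS = 39  # the 39 activity elements
GROUP_SHIFTS = {"operations": 0.0, "maintenance": 2.0, "laboratory": -2.0}

def simulate(n_per_group):
    """Hypothetical inventory responses: one mean-shift pattern per group."""
    X, y = [], []
    for group, shift in GROUP_SHIFTS.items():
        X.append(rng.normal(loc=shift, scale=1.0, size=(n_per_group, N_ITEMS)))
        y += [group] * n_per_group
    return np.vstack(X), np.array(y)

def fit(X, y):
    """Estimate group means and a pooled within-group precision matrix."""
    classes = sorted(set(y))
    means = {c: X[y == c].mean(axis=0) for c in classes}
    resid = np.vstack([X[y == c] - means[c] for c in classes])
    pooled_cov = np.cov(resid, rowvar=False)  # equal covariances assumed, as in MDA
    return classes, means, np.linalg.inv(pooled_cov)

def classify(X, classes, means, precision):
    """Assign each row to the group with the smallest Mahalanobis distance."""
    dist2 = np.stack([np.einsum("ij,jk,ik->i", X - means[c], precision, X - means[c])
                      for c in classes])
    return np.array(classes)[dist2.argmin(axis=0)]

X_part, y_part = simulate(150)   # stand-in for plants that supplied validity studies
X_non, y_non = simulate(200)     # stand-in for plants with no validation studies

model = fit(X_part, y_part)      # weights derived on the PARTICIPATE sample only
accuracy = (classify(X_non, *model) == y_non).mean()
print(f"cross-sample classification accuracy: {accuracy:.0%}")
```

Because the weights are estimated on one sample and applied to the other, the accuracy figure is a cross-validated estimate, mirroring the 96 percent and 90 percent figures reported for the activity elements.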

TABLE 3
A Comparison of Percent Correctly Classified for Statistical and Holistic Judgment Procedures

                  Statistical Procedure
Group           Activities    Abilities    Holistic Judgment

Four Group Classification Results
Operations         94.3         69.5             98.7
Maintenance        91.9         81.9             91.4
Laboratory         93.9         68.5             99.4
Misc. Jobs         68.4         50.8             90.7
Total              90.9         72.0             95.6

Three Group Classification Resultsᵃ
Operations         95.6         81.4              --
Maintenance        97.8         84.0              --
Laboratory         93.4         67.7              --
Total              96.0         79.5             96.1

ᵃIt is not possible to calculate the percentage correctly classified in the three-group case using the holistic judgment procedure, since raters were never asked to make three-group classification judgments. The figure 96.1 was computed by counting the number of correct classifications in these three groups in the four-group holistic judgment task. For reasons outlined in the text, we believe this to be an extremely conservative estimate of the accuracy of holistic judgments in the three-group case.

Holistic Judgment Classification Results

A second research question concerned how well a fairly simple, holistic job classification judgment by incumbents and supervisors would compare to the more elaborate statistical procedure described in the last section. In Part III of the job inventory, each rater was asked to make such a judgment. The results for these judgments are presented in Table 3. As can be seen, judges are highly accurate (99 percent correct) in classifying operations jobs and laboratory jobs. They are less accurate (91 percent) in classifying maintenance and "miscellaneous" jobs. Most of the confusions in the holistic judgments involved classifying a "miscellaneous" job inappropriately as a maintenance job (probable reasons for this confusion are presented in the discussion section). Despite this problem, the results for the holistic classification procedure were very good (96 percent correct classifications averaged across all jobs) and compared quite well with the results from the more elaborate statistical procedure.

Discussion

In order to make use of validity generalization results, practitioners need a defensible job classification tool so they can determine if a given job is similar enough to the jobs that provided validity studies in a Bayesian validity generalization analysis. If the job can be classified the same as the jobs upon which the Bayesian prior is based, then the test can be considered valid in the new setting without a validity study having to be conducted. In this paper we illustrated two job classification approaches to this problem. In one approach, a 65-item job inventory was developed and administered to 1179 incumbents and supervisors in 54 plants from 30 companies located throughout the country. The responses from the job inventories were analyzed using a multivariate item analysis procedure. In the second approach, incumbents and supervisors provided simple, holistic job classification judgments. Two major conclusions from this study are:

1. A statistical classification based on job inventory responses is highly accurate. The combination of job inventory data and an item analysis methodology was successful in accomplishing the first purpose of this research. A sound and proven quantitative job analysis procedure is now available that practitioners may use to determine if any given job belongs to the operations, maintenance, or laboratory groupings for which validity generalizes. From an operational standpoint, the results from this study would be applied as follows. First, a personnel specialist would administer the job inventory to two or more experts in the job in question. Then, the discriminant weights developed in this study would be applied to the averaged responses from the inventory. Finally, well-known formulas would be applied to the resulting discriminant scores to determine the probability that the job in question belongs to the operations, maintenance, or laboratory grouping (see Overall and Klett, 1972, chapter 14, for the appropriate formulas). If the probability for any one group were high, then the specialist could use, in this new setting, the cognitive tests that had been shown to be valid for that particular occupation, without having to conduct a separate validation study.

2. Simple holistic job classification judgments perform as well as (if not better than) the elaborate statistical procedure. It is clear that direct job classification judgments from supervisors and incumbents are at least as accurate as the statistical procedure outlined above. This finding has potential legal and practical ramifications, since practitioners in the petroleum-petrochemical industry now have an even simpler, accurate, and inexpensive alternative to the elaborate job inventory approach. In fact, we feel that the 96 percent accuracy rate reported here is an underestimate of the true accuracy of global personal judgments. In hindsight, the quality of the judgments was probably hampered by inadequate instructions in the holistic judgment task. Most of the confusions in the holistic judgments involved classifying a "miscellaneous" job inappropriately as a maintenance job. This often occurred in plants where incumbents with job titles such as "laborer" or "truck driver" worked primarily in maintenance areas of the plant. In some instances these jobs were considered to be maintenance jobs according to a union contract. This problem could have been rectified with instructions to the rater that the type of work performed, and not the employee's functional area of the plant, should form the basis of the classification. Our guess is that if the study were repeated with improved instructions, the accuracy of the holistic judgments for the "maintenance" and "miscellaneous" categories would approach that achieved in the operations and laboratory categories (i.e., 99 percent).

Some readers of this paper may feel that this second purpose of the study, and the subsequent findings, are trivial in nature. That is, it might appear to some that incumbents obviously should be able to classify job titles into one of four occupational categories with a high degree of accuracy (after all, shouldn't holistic judgments by incumbents predict holistic judgments by job experts?). In truth, if it were not for today's legal climate, a study of this sort might not be necessary. However, as we point out below in detail, an analysis of court cases and government selection guidelines indicates that the type of holistic job classification judgment we used would most likely be considered by many to be indefensible legally. There is an apparent emphasis in the legal arena on detailed, task-oriented job analysis data when making personnel decisions (Thompson and Thompson, 1982). In this paper we illustrated how such a detailed procedure can be used to make job classification decisions for validity generalization purposes. However, and equally importantly, we also illustrated that this type of analysis is not necessary.
Although this detailed job analysis information may be useful for other purposes, it is "quantitative overkill" if the purpose is to make a job classification decision for selection purposes.
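The operational sequence described under conclusion 1 (administer the inventory to two or more experts, average their responses, apply the discriminant weights, and compute group membership probabilities) can be sketched as follows. This is an illustration only: the group mean profiles, precision matrix, and rater responses are invented stand-ins for the weights developed in the study, and the probability rule is the standard equal-prior normal-theory formula in the spirit of the Overall and Klett (1972) chapter cited above.

```python
# Sketch of the operational classification steps from conclusion 1.
import numpy as np

N_ITEMS = 4  # a tiny stand-in for the 39 activity elements
GROUP_MEANS = {  # hypothetical per-group mean inventory profiles
    "operations":  np.array([4.0, 1.0, 1.0, 3.0]),
    "maintenance": np.array([1.0, 4.0, 1.0, 3.0]),
    "laboratory":  np.array([1.0, 1.0, 4.0, 3.0]),
}
POOLED_PRECISION = np.eye(N_ITEMS)  # hypothetical inverse pooled covariance

def group_probabilities(profile):
    """Equal-prior probability of membership in each occupational group,
    from the Mahalanobis distance of the profile to each group mean."""
    d2 = {g: (profile - m) @ POOLED_PRECISION @ (profile - m)
          for g, m in GROUP_MEANS.items()}
    likes = {g: np.exp(-0.5 * v) for g, v in d2.items()}
    total = sum(likes.values())
    return {g: v / total for g, v in likes.items()}

# Two hypothetical expert raters describe the job; their responses are averaged.
rater_a = np.array([4.0, 2.0, 1.0, 3.0])
rater_b = np.array([3.0, 1.0, 1.0, 3.0])
probs = group_probabilities((rater_a + rater_b) / 2)
print({g: round(p, 3) for g, p in probs.items()})
```

If one group's probability is high, as it is for "operations" here, the practitioner would use the tests shown to be valid for that group; if no group dominates, the classification is ambiguous and a local study might still be warranted.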

Are extensive job analysis techniques necessary for selection? A major question prompted by these results concerns the extent of job analysis data necessary to justify the use of selection systems in organizational settings. There are at least three different ways in which job analysis is important for selection purposes: 1) determining whether jobs are similar enough to be combined into a single selection system; 2) identifying knowledges, skills, and abilities (or aptitudes) that are important for job performance; and 3) determining whether a test can be transported from a setting in which it has been shown to be valid to a new setting. The extensiveness of the job analysis data base needed to accomplish these three objectives is reviewed below from both scientific and legal perspectives.

The scientific perspective. Although comparative job analysis studies are only now beginning to emerge (Ash, Levine, and Sistrunk, 1983), the early evidence suggests that extensive and sophisticated approaches to studying jobs may not be appropriate for the three uses cited above. For example, this paper reports the first comparative study that provides data relevant to the third use of job analysis information cited above. On the basis of the results from the petroleum-petrochemical industry reported in this paper, a simple informed judgment by job experts (incumbents or supervisors) is enough to determine whether a test can be transported from one setting to another.

To determine whether jobs are similar enough to be combined into a single selection system, it likewise appears that simple judgments from informed experts (supervisors and incumbents) are sufficient. Sackett, Cornelius, and Carron (1981) compared an elaborate task-based approach and a simple global judgment approach to determining which foreman jobs in a chemical processing plant were similar enough that they could be combined and treated the same. The task-based approach took hundreds of man-hours to complete; the global judgment approach took about 15 minutes of time from supervisors and incumbents. The results from the two methods were essentially the same.

There is also evidence that extensive task-oriented approaches to job analysis may not be necessary for identifying the human attributes required in work. For example, Cornelius and Lyness (1980) generated complete task statements for each of 10 separate jobs. For each job, incumbents and analysts evaluated the individual tasks for evidence of several human attributes required to perform the tasks.
This elaborate procedure took some time, but nevertheless followed the kinds of procedures that have been interpreted by some to be implied by government guidelines. The authors then compared the results with a fairly simple procedure in which incumbents and analysts gave global judgments about whether or not an attribute was required for the job. The results indicated that there was no superiority for the elaborate, task-based approach: holistic judgments were adequate for identifying the underlying abilities and motivational components inherent in work.

In a similar vein, Hogan and Fleishman (1979) demonstrated that simple human judgments using a seven-point rating scale may be all that is required to establish the physical requirements of jobs. These authors compared the actual metabolic costs of performing 30 occupational tasks with perceived effort ratings by professional job analysts. The Pearson r correlation was .81. The study was then repeated using college students. The resulting correlation was r = .80 for male students and r = .70 for female students. These findings are important for the analysis of physically demanding jobs, for they mean that simple judgments by naive observers (college students) produce the same results as elaborate physiological measures of work.

The legal perspective. Contrary to the evidence presented above, complex, task-based or behavior-based procedures may be required to satisfy government guidelines and/or judges in EEO cases. For example, in the Supreme Court case of Albemarle v. Moody (1975), the company was faulted, in part, for not conducting a job analysis before combining various lines of progression into a single promotion system for which the same aptitude test was used. Recent findings on the robustness of aptitude test validity, as well as the Sackett et al. (1981) finding regarding the utility of global judgments, indicate that such requirements are scientifically and professionally unnecessary.

Thompson and Thompson (1982) recently analyzed several court cases in which job analysis data, or the lack of it, formed the basis of a legal decision in the EEO area. They maintain that the following standards have been applied by the courts, and suggest that practitioners follow them when analyzing jobs for the purpose of developing selection systems: 1) a job analysis must be performed on the exact job for which a selection device is to be used, and this analysis must be reduced to written form; 2) data for the job analysis must be collected from several up-to-date sources, and the data should be collected by an expert job analyst; and 3) tasks, duties, and activities must be identified. In addition, these authors reiterate the identification of tasks as the important prerequisite for conducting an "acceptable" job analysis. It is clear that the holistic job classification judgment used in our study and the global judgments used by Sackett et al.
(1981) do not satisfy any of the Thompson and Thompson criteria; yet in each case, data were collected to show that the simple judgment approach was valid for the purpose at hand, and the simple judgment approach produced the same results as the more complex and costly method. This means that professional and legal criteria for conducting job analyses are at odds. Until legal practice conforms with published scientific knowledge, personnel consultants and practitioners may be forced by legal pressures to perform multiple analyses as we have done in this study. That is, personnel specialists should perform both an elaborate job analysis and a simple judgment approach in the same study. If the two produce similar results, as research shows they do, then the simpler procedure may be implemented for cost and productivity reasons.

A Caveat

The results reported here do not imply that extensive "micro level" methods of job analysis are not useful for any purpose. Obviously, they can be useful. For example, task-based approaches may be necessary when developing performance feedback instruments. Likewise, detailed job analysis data may be useful when developing training programs for individual jobs. Even for selection purposes, there may be situations for which detailed job analyses are needed. For instance, it may be that task-based approaches are important for selection systems that involve constructing work samples, simulations, and highly specific job knowledge tests (as in Levine, Ash, and Bennett, 1980). However, if the purpose of the job analysis is to combine jobs for administering ability/aptitude tests, or for transporting tests from one setting to the next, any procedure other than simple judgments by incumbents and supervisors is likely to be quantitative "overkill."

REFERENCE NOTES

1. Dunnette, M. D., Houston, J. S., Hough, L. M., Toquam, J., Lamnstein, S., King, K., Boshardt, M. J., and Kays, M. (1982). Development and validation of an industry-wide electric power plant operator selection system. Personnel Decisions Research Institute Technical Report Number 72. Minneapolis.
2. Pearlman, K. (1982, August). Bayesian approach to validity generalization. S. Rains Wallace Dissertation Award presentation at the Ninetieth Annual Meeting of the American Psychological Association, Washington, D.C.
3. Peterson, N. G. (1982, October). Investigation of validity generalization in clerical and technical/professional occupations in the insurance industry. Conference on Validity Generalization, Personnel Testing Council of Southern California, Newport Beach, California.
4. Hunter, J. E. (1982, under review). Test validation for 12,000 jobs: An application of job classification and validity generalization analysis to the General Aptitude Test Battery (GATB).

REFERENCES

Albemarle Paper Co. v. Moody (1975), 10 FEP 1181.
Ash, R. A., Levine, E. L., and Sistrunk, F. (1983). The role of jobs and job-based methods in personnel and human resources management. In Rowland, K. M. and Ferris, G. R. (Eds.), Research in Personnel and Human Resources Management. Greenwich, Conn.: JAI Press.
Callender, J. C. and Osburn, H. G. (1981). Testing the constancy of validity with computer-generated sampling distributions of the multiplicative model variance estimate: Results for petroleum industry validation research. Journal of Applied Psychology, 66(3), 274-281.
Cornelius, E. T. and Lyness, K. S. (1980). A comparison of holistic and decomposed judgment strategies in job analyses by job incumbents. Journal of Applied Psychology, 65(2), 155-163.
Ghiselli, E. E. (1966). The validity of vocational aptitude tests. New York: Wiley.
Glass, G. V., McGaw, B., and Smith, M. L. (1981). Meta-Analysis in Social Research. Beverly Hills: Sage.
Hogan, J. C. and Fleishman, E. A. (1979). An index of the physical effort required in human task performance. Journal of Applied Psychology, 64(1), 197-204.
Hunter, J. E., Schmidt, F. L., and Jackson, G. B. (1982). Meta-Analysis: Accumulating Research Findings Across Studies. Beverly Hills: Sage.


Levine, E. L., Ash, R. A., and Bennett, N. (1980). Exploratory comparative study of four job analysis methods. Journal of Applied Psychology, 65, 524-535.
Overall, J. E. and Klett, C. J. (1972). Applied Multivariate Analysis. New York: McGraw-Hill.
Pearlman, K., Schmidt, F. L., and Hunter, J. E. (1980). Validity generalization results for tests used to predict job proficiency and training success in clerical occupations. Journal of Applied Psychology, 65, 373-406.
Sackett, P. R., Cornelius, E. T., and Carron, T. J. (1981). A comparison of global judgment vs. task oriented approaches to job classification. PERSONNEL PSYCHOLOGY, 34(4), 791-804.
Schmidt, F. L., Gast-Rosenberg, I., and Hunter, J. E. (1980). Validity generalization results for computer programmers. Journal of Applied Psychology, 65, 643-661.
Schmidt, F. L. and Hunter, J. E. (1977). Development of a general solution to the problem of validity generalization. Journal of Applied Psychology, 62, 529-540.
Schmidt, F. L., Hunter, J. E., and Caplan, J. R. (1981). Validity generalization results for two groups in the petroleum industry. Journal of Applied Psychology, 66, 261-273.
Schmidt, F. L., Hunter, J. E., and Pearlman, K. (1981). Task differences as moderators of aptitude test validity in selection: A red herring. Journal of Applied Psychology, 66, 166-185.
Thompson, D. E. and Thompson, T. A. (1982). Court standards for job analysis in test validation. PERSONNEL PSYCHOLOGY, 35(4), 865-874.
U.S. Department of Labor. (1972). Handbook for Analyzing Jobs. Washington, D.C.: Government Printing Office.
