Sampling in Industrial–Organizational Psychology Research: Now What?

Gwenith G. Fisher and Kyle Sandell
Colorado State University

Author note: Gwenith G. Fisher and Kyle Sandell, Department of Psychology, Colorado State University. Correspondence concerning this article should be addressed to Gwenith G. Fisher, Department of Psychology, 1876 Campus Delivery, Colorado State University, Fort Collins, CO 80523–1876. E-mail: [email protected]

We agree with the authors of the focal article that too little attention is paid to sampling in industrial–organizational (I-O) psychology research. Upon reflection and in response to the focal article by Landers and Behrend (2015), we answer three primary questions: (a) What is it about our training, science, and practice as I-O psychologists that has led to less focus on sampling issues? (b) Does it matter? (c) If so, then what should we do about it?

At the heart of research methods are two primary issues: sampling and measurement. There seems to be consistent agreement among I-O psychologists that we place a high value on, and devote a great deal of effort toward, measurement precision: What constructs do we aim to measure, and what evidence do we have for the reliability and validity of those measures (e.g., DeNisi, 2013; Guion, 1998; Hough & Connelly, 2013)? However, much less attention has been paid to sampling. Effective sampling involves clearly identifying the population of interest for a particular research study and applying appropriate strategies and best practices for sampling representatively from that population (Hanges & Wang, 2012).

Why Is Little Attention Paid to Sampling?

There are many reasons why we have not focused much on sampling in I-O psychology. First, we would like to add to the list of reasons offered by Landers and Behrend by stipulating that the lack of emphasis on sampling may be due to a lack of clear criteria regarding what constitutes "good" or even "acceptable" sampling.

For example, Rynes (2012) described "better" datasets as having "larger sample sizes, higher response rates, better measures, and fewer errors" (p. 410) and referred repeatedly to larger samples throughout her chapter as if size were the criterion for "better." However, does larger necessarily mean better? We need samples large enough to avoid Type II errors. Beyond that, however, we suggest that large samples are not useful if the larger numbers of people, organizations, or other sample elements are not representative of the population about which we aim to draw conclusions.

Second, we go further than Landers and Behrend to suggest that I-O psychologists do not emphasize sampling enough in the peer review process. Our evidence for this claim is based on a literature review we recently conducted in which we examined articles published between 2011 and 2013 in Journal of Applied Psychology, Academy of Management Journal, Journal of Management, and Personnel Psychology. We investigated the extent to which sample characteristics and response rates were reported in articles in these top journals, extending prior work by Shen et al. (2011). Research in these journals during this timeframe reported on 345 survey samples. Many articles included multiple samples, and these were coded as separate occasions rather than combined into one code. Of the 345 survey samples, only 32 (9.3%) were representative of either the organization from which the sample was drawn or the target population at large. Representativeness was coded liberally as a "Yes" if the author(s) indicated that the sample was representative of the organization from which it was obtained or of the population at large. This finding suggests that researchers either do not obtain representative samples or do not deem representativeness important enough to mention in their manuscripts. This finding is particularly troubling because, as Landers and Behrend asserted, organizational samples are subject to a host of idiosyncrasies that may not be present across the target population (e.g., organizational culture, industry, selection instruments used). If a sample used within a particular organization is not representative of that organization, much less the target population, it is possible that still more sample idiosyncrasies may affect the external validity of a study's findings (Hanges & Wang, 2012).

Our literature review also sought to investigate response rates in survey research studies. The average response rate across all 345 coded samples was 61.02%, well above the average for organizational research (53%; Baruch & Holtom, 2008). However, of the 345 survey samples coded, 105 (30.43%) were missing response rate information. This oversight is problematic because researchers do not consistently provide important information to help readers assess the quality of the sampling used in the research. If response rate information is missing, then it is also likely that the results of a nonresponse analysis to assess nonresponse bias are not included either.

Last, we assert that little attention is paid to sampling among I-O psychologists because it is not emphasized during graduate training at the onset of their careers. Current graduate training in I-O psychology typically requires a single course on research methodology, often taught to graduate students from different areas of psychology in the same course. Such a course likely provides an overview of the various types of sampling, their pros and cons, and the frequency with which they occur in psychological research. This information is usually reiterated to some extent in statistics and measurement courses also taken at the graduate level. Despite the recurrence of this topic, most teaching material rarely goes beyond the sort of dichotomous description discussed in the focal article; that is, sampling strategies are usually broadly painted as "good" or "bad." In stark contrast, measurement issues are studied much more deeply in graduate-level coursework in I-O psychology (Byrne et al., 2014). Therefore, measurement issues rise to the forefront, whereas sampling strategy takes a back seat. Students may not learn to recognize its importance and consequently apply only a limited focus to sampling in their work in science and/or practice. In contrast to psychology, other fields, such as sociology, economics, epidemiology, and survey methodology, place more emphasis on sampling error as an important component of overall survey error (e.g., Biemer, Groves, Lyberg, Mathiowetz, & Sudman, 2011).

Does Sampling Matter?

The next question is whether, and to what degree, sampling really matters in research studies. Grzywacz, Carlson, and Reboussin (2013) conducted simulations to examine the effect of overrepresenting women in samples for studies of work/family spillover. Their simulations demonstrated that sampling can affect the results of a research study. Specifically, they found that differences in the proportions of males and females in a study could lead to different results regarding work/family experiences, highlighting the importance of sample composition.

I-O psychologists traditionally rely on meta-analysis as a tool for estimating "true" relations among variables, correcting for measurement error, and averaging across methodological differences among studies. The results of Grzywacz et al. (2013) remind us that the outcome of a meta-analysis may depend on the sampling used in the studies it includes. Therefore, it is critical to assess the impact of sample characteristics as well as to adjust for sampling error when conducting meta-analyses (Schmidt & Hunter, 2014).
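To make the sampling-error adjustment concrete, the following minimal sketch implements the "bare-bones" form of the Hunter–Schmidt correction, in which the variance in correlations expected from sampling error alone is subtracted from the observed variance across studies. The study correlations and sample sizes below are hypothetical, and the sketch omits the corrections for measurement artifacts (e.g., unreliability, range restriction) that a full Schmidt and Hunter (2014) analysis would include.

```python
# Bare-bones Hunter-Schmidt correction: subtract the variance expected from
# sampling error alone from the observed variance of study correlations.
# All study correlations (rs) and sample sizes (ns) below are hypothetical.

def bare_bones_meta(rs, ns):
    total_n = sum(ns)
    # Sample-size-weighted mean correlation
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
    # Sample-size-weighted observed variance of correlations across studies
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    # Variance expected from sampling error alone, based on the average N
    n_bar = total_n / len(ns)
    var_error = (1 - r_bar ** 2) ** 2 / (n_bar - 1)
    # Residual variance attributed to real variation across studies
    var_rho = max(var_obs - var_error, 0.0)
    return r_bar, var_obs, var_error, var_rho

rs = [0.21, 0.35, 0.14, 0.28, 0.40]  # hypothetical study correlations
ns = [120, 85, 240, 60, 150]         # hypothetical study sample sizes
r_bar, var_obs, var_error, var_rho = bare_bones_meta(rs, ns)
print(f"mean r = {r_bar:.3f}; observed var = {var_obs:.5f}; "
      f"sampling-error var = {var_error:.5f}; residual var = {var_rho:.5f}")
```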

Recommendations for Sampling in I-O Psychology Research

We do not suggest focusing less on measurement; rather, we advocate that I-O psychologists pay more attention to sampling research participants properly, in order to increase external validity, and be clearer about sample composition. In particular, I-O psychologists need to develop clear criteria regarding what constitutes "good" or even "acceptable" sampling.

Next, I-O psychologists should focus more effort on the specification of the sampling frame for a particular study. This step follows the identification of the population of interest; the sampling frame specifies how all elements of the target population will be obtained. The goal is to obtain as much coverage of the target population as possible within a particular sampling frame (Groves & Couper, 2002; Groves et al., 2009). We suggest that sampling frame specification is often overlooked in I-O psychology research. As Landers and Behrend pointed out, we rely on the "best available" convenience sample rather than devote considerably more effort to designing the sampling frame for a study.

Sampling strategies include probability sampling (members or elements of a sampling frame are selected using chance methods); stratified sampling (classifying the frame into subpopulations on the basis of supplementary information, e.g., geographical location or individuals' demographic characteristics, such as age, race, or gender, and then selecting sample members separately from each stratum); proportionate stratification (sample elements are selected from each stratum in proportion to the stratum's frequency in the population); disproportionate stratification (units are selected from each stratum at a different rate than they occur in the population); cluster sampling (frame elements are selected jointly rather than individually; Groves et al., 2009); and various types of nonprobability sampling (e.g., convenience, snowball). In I-O psychology we should accurately identify the type of sampling strategy used and provide a proper justification for the chosen method when describing the research design.
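As one concrete illustration of these designs, the minimal sketch below draws a proportionate stratified random sample from a hypothetical employee sampling frame. The frame, the stratum variable (department), and the sample size are all hypothetical placeholders; the point is simply that each stratum is sampled at random in proportion to its share of the frame.

```python
import random
from collections import defaultdict

# Hypothetical sampling frame: (employee_id, department) pairs. In a real
# study the frame would enumerate every element of the target population.
rng = random.Random(42)
departments = ["Sales", "HR", "Engineering", "Operations"]
frame = [(i, rng.choice(departments)) for i in range(1, 1001)]

def proportionate_stratified_sample(frame, total_n, rng):
    """Draw a simple random sample within each stratum, with each stratum's
    allocation proportional to its frequency in the frame."""
    strata = defaultdict(list)
    for element, stratum in frame:
        strata[stratum].append(element)
    sample = []
    for stratum, elements in strata.items():
        # Rounding means the realized total can differ slightly from total_n.
        n_stratum = round(total_n * len(elements) / len(frame))
        sample.extend((e, stratum) for e in rng.sample(elements, n_stratum))
    return sample

sample = proportionate_stratified_sample(frame, total_n=100, rng=rng)
counts = defaultdict(int)
for _, stratum in sample:
    counts[stratum] += 1
print(len(sample), "elements sampled:", dict(counts))
```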

Next, we call for researchers to continue to examine evidence of sampling error, which occurs when not all elements of a sampling frame are measured. Typically in I-O psychology we assess sampling error by examining response rates to surveys. The results of our literature review described above suggest room for improvement. Response rates should be reported, nonresponse should be systematically investigated, and one should not assume that large samples, or samples obtained with a high response rate, are automatically representative of the target population. It is possible that even when the response rate is high, nonresponse may occur disproportionately in relation to key characteristics of sample members (Rogelberg & Stanton, 2007). Thus, missing data analyses that systematically investigate nonresponse become critically important. Per the recommendation of Rogelberg and Stanton (2007), journals should offer clear instructions to authors regarding the reporting of survey response rates as well as how authors ought to address nonresponse in their research.
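In its simplest form, such a nonresponse analysis compares respondents with nonrespondents on characteristics that are available for everyone in the sampling frame; a large gap on a frame variable flags potential nonresponse bias even when the response rate is high. The sketch below is a minimal illustration on simulated data: the tenure variable, the response mechanism, and the effect-size summary are hypothetical choices, not a prescribed procedure.

```python
import random
import statistics

# Simulated frame data: tenure (in years) is known for every invited
# employee, along with whether each person completed the survey.
rng = random.Random(7)
tenure = [rng.uniform(0, 30) for _ in range(500)]
# Hypothetical response mechanism: longer-tenured employees respond at a
# higher rate, which is exactly the kind of bias the analysis should detect.
responded = [rng.random() < 0.3 + 0.02 * t for t in tenure]

respondents = [t for t, r in zip(tenure, responded) if r]
nonrespondents = [t for t, r in zip(tenure, responded) if not r]

response_rate = len(respondents) / len(tenure)
# Standardized mean difference on the frame variable; values far from zero
# flag potential nonresponse bias even when the response rate looks healthy.
d = ((statistics.mean(respondents) - statistics.mean(nonrespondents))
     / statistics.pstdev(tenure))

print(f"response rate = {response_rate:.1%}")
print(f"mean tenure: respondents {statistics.mean(respondents):.1f} vs. "
      f"nonrespondents {statistics.mean(nonrespondents):.1f} (d = {d:.2f})")
```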

Last, we call for more emphasis on sampling in research methods training in graduate I-O psychology programs. The next generation of I-O psychologists should understand and attend more to sampling issues in research, including the use of proper methods (e.g., a focus on the sampling frame and the assessment of sampling error), as described above, and the reporting of procedures. Training may be more effective in I-O psychology if it addresses issues or challenges specific to organizational research rather than covering sampling in the same manner across all areas of psychology.

References

Baruch, Y., & Holtom, B. C. (2008). Survey response rate levels and trends in organizational research. Human Relations, 61(8), 1139–1160.
Biemer, P. P., Groves, R. M., Lyberg, L. E., Mathiowetz, N. A., & Sudman, S. (Eds.). (2011). Measurement errors in surveys. San Francisco, CA: Wiley.
Byrne, Z. S., Hayes, T. L., McPhail, S. M., Hakel, M. D., Cortina, J. M., & McHenry, J. J. (2014). Educating industrial–organizational psychologists for science and practice: Where do we go from here? Industrial and Organizational Psychology: Perspectives on Science and Practice, 7(1), 2–14.
DeNisi, A. S. (2013). An I/O psychologist's perspective on diversity and inclusion in the workplace. In B. M. Ferdman & B. R. Deane (Eds.), Diversity at work: The practice of inclusion (pp. 564–579). San Francisco, CA: Wiley.
Groves, R. M., & Couper, M. P. (2002). Designing surveys acknowledging nonresponse. In M. Ver Ploeg, R. A. Moffitt, & C. F. Citro (Eds.), Studies of welfare populations: Data collection and research issues (pp. 13–54). Washington, DC: National Academy Press.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). New York, NY: Wiley.
Grzywacz, J. G., Carlson, D. S., & Reboussin, B. A. (2013). A primer on sampling. In J. G. Grzywacz & E. Demerouti (Eds.), New frontiers in work and family research (pp. 110–132). New York, NY: Psychology Press.
Guion, R. M. (1998). Assessment, measurement, and prediction for personnel decisions. Mahwah, NJ: Erlbaum.
Hanges, P. J., & Wang, M. (2012). Seeking the Holy Grail in organizational science: Uncovering causality through research design. In S. Kozlowski (Ed.), The Oxford handbook of organizational psychology (pp. 79–116). New York, NY: Oxford University Press.
Hough, L. M., & Connelly, B. S. (2013). Personality measurement and use in industrial and organizational psychology. In K. F. Geisinger (Ed.), APA handbook of testing and assessment in psychology: Vol. 1. Test theory and testing and assessment in industrial and organizational psychology (pp. 457–476). Washington, DC: American Psychological Association.
Landers, R. N., & Behrend, T. S. (2015). An inconvenient truth: Arbitrary distinctions between organizational, Mechanical Turk, and other convenience samples. Industrial and Organizational Psychology: Perspectives on Science and Practice.
Rogelberg, S. G., & Stanton, J. M. (2007). Understanding and dealing with organizational survey nonresponse. Organizational Research Methods, 10(2), 195–209.
Rynes, S. (2012). The research–practice gap in I/O psychology and related fields. In S. Kozlowski (Ed.), The Oxford handbook of organizational psychology (pp. 409–452). New York, NY: Oxford University Press.
Schmidt, F. L., & Hunter, J. E. (2014). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.
Shen, W., Kiger, T. B., Davies, S. E., Rasch, R. L., Simon, K. M., & Ones, D. S. (2011). Samples in applied psychology: Over a decade of research in review. Journal of Applied Psychology, 96(5), 1055–1064.