IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 58, NO. 3, AUGUST 2011
Addressing Common Method Variance: Guidelines for Survey Research on Information Technology, Operations, and Supply Chain Management

Christopher W. Craighead, David J. Ketchen, Jr., Kaitlin S. Dunn, and G. Tomas M. Hult
Abstract—Common method variance (CMV) is the amount of spurious correlation between variables that is created by using the same method—often a survey—to measure each variable. CMV may lead to erroneous conclusions about relationships between variables by inflating or deflating findings. We analyzed recent survey research in IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, Journal of Operations Management, and Production and Operations Management to assess if and how scholars address CMV. We found that two-thirds of the relevant articles published between 2001 and 2009 did not formally address CMV, and that many of those that did relied on relatively weak remedies. These findings have troubling implications for efforts to build knowledge within information technology, operations, and supply chain management research. In an effort to strengthen future research designs, we provide recommendations to help scholars better address CMV. Given the potentially severe effects of CMV, authors should apply the recommended CMV remedies within their survey-based studies, and reviewers should hold authors accountable when they fail to do so.

Index Terms—Common method variance (CMV), common method bias, empirical research, information technology, methods, operations management, supply chain management, survey research.
I. INTRODUCTION

Survey research has played a prominent role in testing theorized relationships that have collectively enhanced the information technology (IT), operations management (OM), and supply chain management (SCM) bodies of knowledge [e.g., 18]. When examining theorized relationships, surveys have important strengths that are quite appealing, such as the ability to efficiently obtain large samples and to generalize findings across multiple populations. Yet, surveys are also prone to certain problems, including common method variance (CMV)—spurious correlation that arises from using the same method to
Manuscript received September 23, 2010; revised February 4, 2011; accepted March 14, 2011. Date of publication May 5, 2011; date of current version July 20, 2011. Review of this manuscript was arranged by Department Editor S. S. Talluri. C. W. Craighead and K. S. Dunn are with the Department of Supply Chain and Information Systems, Smeal College of Business, Pennsylvania State University, University Park, PA 16802 (e-mail: [email protected]; [email protected]). D. J. Ketchen, Jr., is with the College of Business, Auburn University, Auburn, AL 36849-5241 (e-mail: [email protected]). G. T. M. Hult is with the Eli Broad Graduate School of Management, Michigan State University, East Lansing, MI 48824-1121 (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TEM.2011.2136437
Fig. 1. Common method bias.
measure the independent and dependent variables within a relationship [41]. Because CMV may lead to wrong conclusions, the merits of research designs that do not address CMV have been questioned [25], [35], [56].

As shown in Fig. 1, a researcher who wishes to test a theorized relationship via a survey cannot do so directly. Instead, she/he must identify and measure variables that reflect the underlying, unobservable constructs. Then, based upon these observations, the researcher draws conclusions relative to the theorized relationships (i.e., supporting or not supporting one or more hypotheses). However, measurement is never perfect, and thus all studies are subject to measurement error. Measurement error consists of two parts: random error and systematic error [59]. Both random and systematic errors interfere with the ability to accurately capture a relationship, but the latter is considered the more severe threat. Random error is often addressed by using multiple items that seek to capture the same underlying construct [59]. Systematic error (i.e., method effects, or error introduced by the measurement method), on the other hand, can drastically inflate or deflate the measured relationships between independent and dependent variables [19], [56] and thus can significantly threaten the validity of research findings [50].

Compared to other problems that researchers can encounter when conducting survey research (e.g., nonresponse bias, reliability, or internal/external validity issues), CMV may be the most troublesome because it can account for a considerable portion of variance and is a leading contributor to systematic error within survey research. Podsakoff et al. [50], for example, found that more than a quarter (26.3%) of the variance
in a relationship can be attributed to CMV. Yet, scholars note that the magnitude of method variance is not the same in every field [67]. Cote and Buckley [16], for example, found that method variance is approximately 15.8% in marketing research, 28.9% in sociology and psychology research, and 30.5% in educational research. Overall, it is clear that “. . .CMV may be the most common and dangerous threat to correct interpretation of research results. . .” [47, p. 421] and that scholars need to more fully address CMV [9], [50].

Given the threat that CMV poses, each field should examine if/how CMV is addressed among its studies [19]. In March 2009, for example, the editorial team at Journal of International Business Studies revealed that only 38.92% of the empirical articles published in the journal between 2000 and 2009 mentioned CMV [11]. This led the editors to label CMV a “serious concern.” Further, the editors stated their intent to desk reject manuscripts that suffer from CMV and to ask the authors to use CMV remedies before resubmitting the manuscript [11]. OM scholars have previously discussed the potential of CMV to distort OM findings [33], [43]. Rungtusanatham et al. [56] found that only a small percentage of survey articles published between 1980 and 2000 considered CMV. Although this indicates that CMV is a potential problem, there has not been a systematic review of the literature to assess if and how CMV is being addressed within IT, OM, and SCM research. Moreover, it remains unknown whether practices since 2000 have been appropriate. Accordingly, our objectives are to: 1) describe the various CMV remedies that researchers may use; 2) assess the extent to which CMV is addressed within IT, OM, and SCM research; and 3) suggest strategies to minimize the potential effects of CMV within future research designs and thereby create more confidence in the results studies provide.

II. COMMON METHOD VARIANCE REMEDIES

We evaluated studies that used surveys as the primary data collection method and were published in the IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT (IEEE-TEM), Journal of Operations Management (JOM), and Production and Operations Management (POM) between 2001 and 2009. This choice of journals allows us to analyze pure OM journals (JOM and POM) as well as an IT-focused journal that is more interdisciplinary in nature (IEEE-TEM). IEEE-TEM also allows us to compare how CMV is addressed in OM versus IT research. We identified 320 relevant articles using a two-phase process: 123 articles (38.44%) appeared in IEEE-TEM, 33 (10.31%) in POM, and 164 (51.25%) in JOM. A keyword search was conducted to collect the majority of the articles. Because this approach can miss articles [5], however, each issue was also examined by hand to ensure that all relevant studies were uncovered. Although there are fluctuations over the 2001–2009 time period, the average number of survey articles per year was 35. During the most recent five years (2005–2009), the percentage of survey articles in IEEE-TEM, JOM, and POM combined averaged 28.17%, which appears to support the
contention that survey research has become a key methodology within IT, OM, and SCM research. These findings are consistent with the trends noted in Craighead and Meredith [18].

After the articles were collected, two OM researchers independently evaluated 30 articles to determine whether the author(s) had used CMV remedies. The two reviewers agreed on 29 of the 30 articles, resulting in a crude agreement of 0.96, or an almost perfect Cohen’s kappa [14] of 0.9 [40]. Given the high level of agreement, the remaining articles were evaluated primarily by one researcher, but questionable articles were discussed [46]. In the following, we describe our findings. As we discuss these findings, we briefly describe the remedies and possible avenues to enhance how CMV is addressed in future research. We organize our discussion into two categories: 1) remedies applied in single source studies (i.e., the independent and dependent variables are collected from the same survey); and 2) remedies applied in multiple source studies (i.e., the research used two sources of information to capture the independent and dependent variables). A large majority of the articles we examined used single source designs (248 out of 320, or 77.50%). The percentage of survey articles using single source designs exceeded 65% in 2001 and 2009, and exceeded 75% during the years 2002–2008.

III. SINGLE SOURCE STUDIES

Remedies that may be applied to single source studies can generally be divided into two groups. The first group involves remedies that attempt to statistically detect and/or control for CMV after the data have been collected. The second group involves remedies that are implemented prior to or during data collection; in essence, these remedies attempt to create some type of separation between the collection of the data for the independent and dependent variables. Table I contains the results for the single source studies by year. Each of the articles was classified based on if/how CMV was addressed.
Specifically, the articles were put into one of four categories; each category is discussed in the following.

A. Single Source Studies—No CMV Remedies

As shown in Table I, 71.77% of the single source survey articles did not address CMV. While the percentages improved during the nine-year period, they remain substantial even in the last four years (2006–2009 average = 59.10%). This is quite disturbing given the potential seriousness of CMV in survey research (e.g., [25]) coupled with the prominent role that survey research has played in testing theory. At a minimum, this should cause scholars to scrutinize our extant research and knowledge base and to reexamine key findings with various methods, such as meta-analyses and replication studies. Meta-analyses, which aggregate results across multiple studies, are effective in removing biases (artifacts) inherent in individual studies [30]. The ability of meta-analysis to remove systematic error (of which CMV is a major contributor) would be quite beneficial in reexamining prior studies where CMV was not addressed. However, a meta-analysis requires multiple studies in a topic area and therefore may not be appropriate for important, yet niche or emerging, areas—replication (i.e.,
duplication of a past study) could be used in this situation. If a replication study (addressing CMV) confirmed the prior research, more confidence could be placed on the findings [20]. However, to the extent the results differed, CMV and other concerns, methodological as well as theoretical, would be heightened [51].

TABLE I. CMV REMEDY CATEGORIZATION FOR SINGLE SOURCE STUDIES (n = 248) AND CMV REMEDIES FOR SINGLE SOURCE STUDIES IN IEEE-TEM (n = 106)

B. Single Source Studies—Statistical Remedies

As shown in Table I, the most common remedies applied in single source survey studies were statistical remedies (24.19%), which are post hoc statistical tools that attempt to detect or eliminate the presence of CMV. The latest two years (2008–2009) show a higher percentage of this remedy, with 40.63% and 45.00%, respectively. The most commonly used statistical remedy during the nine-year period was Harman’s single-factor test [23]. Harman’s single-factor test is by far the most commonly used remedy to control for CMV across all fields [44], including management (e.g., [2], [63]), business ethics (e.g., [53]), psychology (e.g., [15], [50]), and marketing (e.g., [1], [52]). For this test, all variables are subjected to exploratory factor analysis (EFA), and CMV is assumed to exist if one factor accounts for the majority of the variance in the variables or if a single factor emerges from the unrotated factor solution [50]. Although it is used extensively, scholars recommend this remedy be used as a last resort because of its inability to detect moderate to small levels of CMV (e.g., [41]). It should be noted that Harman’s single-factor test [23] is traditionally performed using
EFA, but confirmatory factor analysis (CFA) is becoming much more common (e.g., [27]). Using CFA is more robust because differences between the one-factor model and the multifactor model can be tested via a chi-square difference test; using EFA alone yields only an estimate based on the results, with no direct test of differences. Although we call for more replication research, these types of studies are quite uncommon within IT, OM, and SCM research, with the notable exception of a 2006 special issue of JOM [22]. Therefore, researchers should make every effort to instill confidence in their results, including applying remedies to combat CMV. We believe that Harman’s single-factor test (in its CFA form) should be the minimum standard for addressing CMV, which is the first recommendation shown in Table II. It is straightforward, does not require additional resources, and can be used in almost any research design. It is also an appealing remedy in that it can be applied after data collection, and therefore is a reasonable request that could be made by a reviewer. However, it should be noted that the use of (most) post hoc statistical remedies, such as Harman’s test or the unmeasured latent method factor [50], is increasingly being questioned [55]. Given these concerns, researchers should consider using the marker variable technique.

The marker variable technique [41] consists of incorporating into the study an additional variable that is theoretically unrelated to at least one other variable of interest [24]. The marker variable technique is a commonly used CMV remedy in a variety of disciplines, such as business ethics (e.g., [64]) and psychology (e.g., [65]). CMV is evaluated based on the correlation between the theoretically unrelated variables [44]. A recent example of the use of a marker variable is Yee et al. [68], which examined employee satisfaction’s effect on performance in high-contact service environments.
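To make these two post hoc checks concrete, the following is a minimal Python sketch on hypothetical data. It approximates the EFA form of Harman's single-factor test by examining how much of the total variance the first principal component of the item correlation matrix captures, and it applies a Lindell-and-Whitney-style marker adjustment, in which the correlation attributable to the marker is partialed out of an observed correlation. The function names, the simulated items, and the example correlations are illustrative assumptions, not the implementation of any study reviewed here.

```python
import numpy as np

def harman_single_factor_share(items: np.ndarray) -> float:
    """Proportion of total variance captured by the first factor of the item
    correlation matrix (principal components used as a simple EFA proxy)."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)      # eigenvalues in ascending order
    return eigvals[-1] / eigvals.sum()      # largest eigenvalue over the total

def lindell_whitney_adjust(r_xy: float, r_marker: float) -> float:
    """Partial the marker correlation out of an observed correlation,
    in the spirit of Lindell and Whitney's CMV correction."""
    return (r_xy - r_marker) / (1.0 - r_marker)

rng = np.random.default_rng(42)
items = rng.standard_normal((500, 6))       # hypothetical 6-item survey, n = 500

share = harman_single_factor_share(items)
adjusted = lindell_whitney_adjust(r_xy=0.30, r_marker=0.05)
```

For unrelated items such as these, the first factor captures far less than half of the total variance, so the single-factor criterion would not flag CMV; a share approaching or exceeding one half would be cause for concern.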
In addition to other remedies (e.g., Harman’s single-factor test), Yee et al. [68] used the marker variable technique by including on the questionnaire an additional, theoretically unrelated question about living environment satisfaction. The authors examined the correlations between the marker variable and five service quality indicators (the dependent variables of the study) and found no significant correlations, thus instilling more confidence in the results. As shown in Table II, we encourage scholars to consider using the marker variable technique, as it has been shown to be an effective remedy for CMV [44].

At the intersection of IT, OM, and SCM research, scholars often survey a manager about a process or characteristic of internal operations or external supply chains and then relate it to one or more performance metrics (e.g., quality, cost, flexibility, and delivery). Researchers are normally quite diligent about using respondents who are subject matter experts, which is appealing in that the manager chosen is often quite knowledgeable about the operations and/or supply chain. However, the chosen process and performance are often under the purview of the responding manager, and therefore there may be a tendency to inflate the constructs, a situation the marker variable technique is well suited to detect. An additional appeal of the marker variable technique is the flexibility inherent in how it may be implemented. The technique may be implemented as a correlated marker as described
by Lindell and Whitney [41] or as a CFA marker, as illustrated in Richardson et al. [55]. Further, the marker may be selected a priori or post hoc and may be operationalized as a multi-item scale, a single-item scale, or an objective item [67]. Despite the benefits of this flexibility, there is a challenge associated with the use of the technique: there are only a few examples (e.g., [68]) of marker variable usage in the extant IT, OM, and SCM literature. We therefore refer the interested reader to Williams et al. [67] for an exemplary review of the marker variable technique that may be adapted to a researcher’s specific context.

TABLE II. STRATEGIES TO ADDRESS CMV IN SINGLE SOURCE RESEARCH

C. Single Source Studies—Separation Remedies

Separation remedies were rarely used (6 out of 248 single source studies, or 2.42%—see Table I). As shown in Fig. 1, the problem inherent in CMV results from the independent and dependent variables being captured with the same method, which is certainly the situation in studies utilizing only one source. However, there are remedies, focused on how the data are collected, that may alleviate this situation; in essence, these techniques attempt to create some type of separation when collecting the data. Psychological separation of measurement, for example, is created when the researcher makes it appear that the independent and dependent variables are not related, via a cover story or other means [50]. A recent example of this technique in IT research is Chen, Shih, and Yang [12]. While potentially valuable for addressing CMV, this form of separation may lengthen the time to complete the survey and thus negatively affect the response rate. Another approach is to create methodological separation of measurement, in which participants respond to the independent and dependent variables in different formats, such as Likert scales and open-ended questions [24]. Further, CMV can
be reduced to the extent that some of the measures are objective rather than purely subjective (e.g., [29]). Hult et al. [28], for example, used both objective and subjective measures of cycle time, which mitigated the effect of CMV. However, scholars should carefully consider the statistical ramifications of this approach: when creating methodological separation of measurement, researchers should use various response formats rather than mixing measures that lie at different points on the objective–subjective continuum. Temporal separation of measurement is created when the independent and dependent variables are measured at different points in time [50]. Kowtha [36], for example, used a longitudinal survey design administered in three waves: data on control variables and gender were gathered in the first wave, and data on the predictor and criterion variables were collected in the second and third waves, respectively. Boyer and Frohlich [7] also created temporal separation of measurement by distributing a survey to customers of online/home delivery grocers and then following up five months later to collect post hoc data on their purchasing experience. By introducing a time lag between when the independent and dependent variables were measured, the authors created separation of measurement and alleviated the potential threat of CMV.

As shown in Table II, we recommend that researchers consider using these separation remedies whenever possible. Although all of the separation remedies may be beneficial, temporal separation seems particularly well suited to IT, OM, and SCM research. As described earlier, researchers in these areas often capture information via survey on some type of process or characteristic of an operation or supply chain that often represents some type of resource (tangible or intangible) or capability.
Indeed, the resource-based view of the firm [4] has been a widely adopted theoretical foundation in IT, OM, and SCM research (cf., [48]). Temporal separation is quite appealing in this context, as it (at least partially) accounts for the potential
lag effect of resources and capabilities on performance. However, temporal separation does present the challenge of attrition, wherein a respondent may complete the first, but not the second, portion of the survey. The lack of a response to the second portion renders the first response virtually worthless and thus presents an additional barrier to obtaining a reasonable response rate. Further, events could occur during the time lag between surveys that alter the dependent variable and therefore contaminate the results. While this technique is certainly appealing, IT, OM, and SCM scholars should therefore approach temporal separation with caution.
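As a concrete illustration of the matching problem, the sketch below, written in Python with hypothetical respondent ids and variable names, inner-joins two survey waves on respondent id so that attrition cases, which are unusable without their complement, are dropped automatically.

```python
# Hypothetical wave-1 (predictor) and wave-2 (outcome) responses, keyed by
# an anonymous respondent id that links a respondent's two questionnaires.
wave1 = {"r01": {"process_maturity": 4},
         "r02": {"process_maturity": 2},
         "r03": {"process_maturity": 5}}
wave2 = {"r01": {"delivery_performance": 3},
         "r03": {"delivery_performance": 5}}   # r02 attrited after wave 1

def merge_waves(first: dict, second: dict) -> dict:
    """Inner-join the two waves on respondent id; respondents missing from
    either wave (attrition) are dropped, since an unmatched response is unusable."""
    matched = first.keys() & second.keys()
    return {rid: {**first[rid], **second[rid]} for rid in matched}

merged = merge_waves(wave1, wave2)   # only r01 and r03 survive the join
```

In practice the join key would be whatever identifier the study uses to link a respondent's wave-1 and wave-2 questionnaires; the inner join makes the cost of attrition explicit in the retained sample size.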
D. Single Source Studies—Statistical and Separation Remedies Combined

While we have discussed statistical and separation CMV remedies for single source studies separately, it should be noted that they are not mutually exclusive—quite the contrary. Combining statistical and separation methods is the ideal situation because it helps overcome the limitations inherent in each approach and provides additional validation of the research conclusions. As shown in Table II, we suggest this combination of techniques be implemented in future research. This, unfortunately, has not been very common in past single source studies: as shown in Table I, only 1.61% of the studies between 2001 and 2009 used the combination of statistical and separation remedies. For example, Das and Joshi [20] examined the effect of differentiation strategies on process innovativeness in service organizations. To address CMV, the authors combined psychological separation with Harman’s single-factor test.
E. Single Source Studies Within IEEE-TEM

As discussed earlier, the extent to which CMV is addressed differs significantly across fields. As part of this analysis, we took a closer look at the single source studies published in IEEE-TEM, which allowed us to compare the CMV remedies used in OM-focused studies with those used in IT/other-focused articles. Comparing and contrasting the CMV remedies used in related areas helps scholars understand how other researchers are addressing the issue and may help scholars in both areas more fully address CMV. As shown in the bottom two rows of Table I, the IT/other studies did a better overall job of addressing CMV than the OM-focused studies. Almost 88% of the single source OM studies did not address CMV, compared to approximately 68% of the IT/other studies. The five OM articles that did address CMV relied exclusively on statistical remedies, whereas the IT/other studies also used separation remedies and combined statistical/separation remedies. In sum, while the majority of articles in both groups did not address CMV, the IT/other articles used more rigorous remedies. It is evident that IT scholars more fully address CMV in single source studies than OM scholars, but both groups need to use more rigorous CMV remedies.
F. Single Source Studies—Summary

While the results in Table I are alarming given the large percentage of studies not using CMV remedies, more remedies appear to have been applied in the later years (2008 and 2009, in particular—see Table I). However, researchers have relied almost exclusively on post hoc statistical remedies (i.e., Harman’s single-factor test). While this is a step in the right direction, future research should more fully embrace the various remedies that can be implemented. Table II summarizes our recommendations for future single source studies. It should be noted that the recommendations are not meant to be comprehensive; rather, they are remedies that may fit well with the IT, OM, and SCM contexts. Further, the recommendations should not be considered mutually exclusive. When appropriate to the research, the recommendations should be combined to instill more confidence in the ensuing contributions to the extant body of knowledge. However, this does not necessarily mean that more techniques are always better: researchers should consider their research context in light of the strengths and weaknesses of the techniques and choose accordingly.

In addition to traditional CMV remedies, we suggest five other approaches that may provide more confidence in a study’s results—see Table III. The first strategy, which we refer to as quantitative validation, is for researchers to use a secondary source of data to corroborate their results. If it is not reasonable to use secondary data as either the dependent or independent variable (and thus achieve complete methodological separation as described in Section IV), such data may still be used to validate the survey findings. Even showing a significant correlation between the survey measures and the secondary data may enhance confidence in the study’s results. The second strategy, referred to as subsample validation, is where additional survey responses are obtained from a subset of the sample.
If it is not reasonable to obtain a second source of information for the full sample (and thus be able to use a multiple source remedy), it may be reasonable to obtain an additional survey respondent to create a subsample for various robustness checks. Multitrait-multimethod (MTMM) analysis or escalating the unit of analysis (described in the next section) may be possible even on a subsample of additional responses. Researchers should, however, consider the statistical issues (e.g., the subsample size needed) involved in this approach. The third strategy, referred to as qualitative validation, is where researchers may be able to instill more confidence in their results by complementing the survey results with qualitative information (e.g., interviews, company annual reports). The fourth strategy involves the author(s) highlighting the use of interaction terms; interaction terms help alleviate CMV concerns, particularly when they are significant [60]. The final proposed strategy involves the author(s) highlighting any low correlations among the variables captured in the survey, as low correlations (particularly those near zero) may be an indicator that CMV is not prevalent [8]. These five strategies should by no means replace traditional CMV remedies; rather, they can be used to instill more confidence that CMV is not prevalent in the research.
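To illustrate the interaction-term strategy, the following Python sketch fits a regression containing an interaction term with ordinary least squares on synthetic, noiseless data. The variable names and coefficients are hypothetical; in a real study, the interaction coefficient would be tested for significance rather than simply recovered.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.standard_normal(n)                       # hypothetical predictor 1
x2 = rng.standard_normal(n)                       # hypothetical predictor 2
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 4.0 * x1 * x2     # synthetic outcome with a true interaction

# Design matrix: intercept, both main effects, and the interaction term.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # recovers (1, 2, 3, 4) on this noiseless data
```

A significant interaction coefficient is reassuring precisely because CMV tends to attenuate, rather than create, interaction effects [60].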
TABLE III. STRATEGIES TO INSTILL MORE CONFIDENCE IN THE RESULTS RELATIVE TO CMV
Because CMV poses a significant threat to a study’s conclusions [17], [21], [66], we hope that authors and reviewers will more fully address CMV in future research by bearing in mind the strategies summarized in Tables II and III when designing and evaluating research. Although multiple source studies (discussed in Section IV) may be better equipped to mitigate CMV, it should not be assumed that CMV makes all single source studies fatally flawed. CMV can certainly be rigorously addressed in single source studies using the techniques discussed in Tables II and III. However, if CMV is not addressed, the study may be flawed and thus its contribution should be questioned.

IV. MULTIPLE SOURCE STUDIES

Multiple source studies are in the minority, with a total of 72 out of 320 (22.50%) during the years 2001–2009. Single source studies are indeed beneficial in certain research contexts, such as those that focus on individual perceptions, decisions, and actions; from a practical standpoint, they are also more efficient in terms of time and cost. Despite these strengths, we believe that the use of multiple sources in IT, OM, and SCM research should be increased significantly. Research in these areas has evolved to encompass more macro level issues, such as supply chain management [48], [56], yet (based on our results) it continues to rely heavily on single source strategies. We are certainly not the first scholars to call for the use of multiple sources, particularly in the context of supply chain research. However, the basis for this call to action is not simply to move beyond single informant studies, but rather to more fully address CMV. As with single source remedies, multiple source remedies can be divided into remedies that attempt to statistically detect and/or control for CMV after the data have been collected and remedies that are implemented prior to or during data collection (i.e., separation remedies).
Table IV contains the results for the multiple source studies by year. Each of the articles was classified based on if/how CMV was addressed. Specifically, the articles were put into one of five categories; each category is discussed in the following.

A. Multiple Source Studies—No CMV Remedies

As shown in Table IV, 54.74% of the multiple source studies did not use any traditional CMV remedies. The articles that fell into this category obtained multiple survey responses for each unit of analysis. Using multiple responses to capture both the independent and dependent variables instills more confidence in the research findings [43], but the study’s results still suffer from the limitations inherent in single method studies [31]. For example, assume that two managerial responses are obtained from a manufacturing plant about a process and its effect on plant performance. If CMV is not addressed, both responses could be affected by it; assume, for argument’s sake, that the measures are inflated. If these two responses are averaged, the result is still an inflated value for the variables. Thus, CMV could remain problematic even if some type of interrater agreement (IRA) measure is used [33]. We certainly are not questioning the value of IRA or reliability measures, as they instill more confidence in a study’s results by capturing consistency and agreement between multiple sources [33]. However, these measures do not decrease CMV in a statistically significant manner [21]. Again, while the use of multiple sources is noteworthy, researchers should use a remedy that will detect/control for or eliminate CMV when utilizing multiple sources.

B. Multiple Source Studies—Single Source Statistical and Separation Remedies

As shown in Table IV, 11.11% of the multiple source studies published during the nine-year period used single source statistical and/or separation remedies, such as Harman’s single-factor test [23] or temporal separation. Using single source remedies
in multiple source studies certainly instills more confidence in the research findings than using no CMV remedies. However, in this situation, authors are likely not taking full advantage of their efforts, which may be better served by using multiple source statistical and/or separation remedies.

TABLE IV. CMV REMEDY CATEGORIZATION FOR MULTIPLE SOURCE STUDIES (n = 72) AND CMV REMEDIES FOR MULTIPLE SOURCE STUDIES IN IEEE-TEM (n = 17)

C. Multiple Source Studies—Multiple Source Statistical Remedies

As shown in Table IV, multiple source statistical remedies, which are statistical techniques that use both sources to detect and isolate CMV, were rarely used during the nine-year period: only 1.39% of the studies used them to control for CMV. More specifically, across the entire nine-year period, only one article (one of the six multiple source articles published in 2007) used a multiple source statistical remedy, namely the MTMM procedure. The MTMM procedure, which has historically been used in various other disciplines such as management (e.g., [39]) and marketing (e.g., [54]), consists of measuring each variable with multiple methods and then using those data to create a matrix that shows the correlation between each trait–method pair [10], [61]. Although more robust than IRA measures, the traditionally used MTMM has major limitations, such as the absence of formal guidelines for determining the level of CMV in a study [44]. The CFA-MTMM procedure builds on the traditional MTMM procedure and addresses this limitation [62]; it uses CFA to estimate trait, method, and error variance as well as the correlations between trait factors [66]. This remedy, which was demonstrated in Klein [34], allows a researcher to: 1) estimate
the true relationships (i.e., relationships that are free from method biases and random error) between latent factors; and 2) evaluate discriminant and convergent validity of measures [32]. Although it is seldom used in IT, OM, and SCM research, we encourage scholars to use the CFA-MTMM procedure to control for CMV in multiple source studies to instill more confidence in their findings (see Table V). Using this multiple source statistical remedy significantly increases the credibility of the study’s results and its contributions to knowledge. The CFA-MTMM procedure may be particularly appealing to researchers in situations where a capability is cocreated or shared (e.g., functional interfaces, supply chain integration), if information is captured from both co-creators (e.g., buyer and supplier). Despite the benefits of CFA-MTMM, there are challenges and caveats to this technique (cf., [38]). From a practical perspective, obtaining two or more responses and/or designing two methods to extract data can be problematic. The biggest challenge for scholars, however, may be the lack of its use in prior research. Yet, we believe that IT, OM, and SCM scholars may obtain necessary insights from other fields, such as psychology, which has extensively used MTMM. We, therefore, refer the interested reader to Lance et al. [38] for an exemplary review and discussion of the MTMM technique that may be adapted to the OM context. D. Multiple Source Studies—Multiple Source Separation Remedies As shown in Table IV, the most common remedy applied in multiple source studies was separation remedies. During the
TABLE V
STRATEGIES TO ADDRESS CMV IN MULTIPLE SOURCE RESEARCH
nine-year period, 29.17% of the multiple source studies used separation remedies to control for CMV. Single source separation remedies attempt to create some type of separation in how the data are collected but still rely on the same source for the dependent and independent variables; multiple source separation strategies, by contrast, do not. Two common remedies are: 1) escalating the unit of analysis; and 2) using an alternate method to capture either the independent or dependent variable (we refer to the latter as complete methodological separation). The first multiple source separation remedy, escalating the unit of analysis, consists of two phases [49]. First, a large sample of individuals (e.g., all employees in a company) is divided into smaller, but still meaningful, subunits (e.g., employees in individual departments). Next, a researcher randomly selects participants from each subunit to estimate values for certain variables and uses the remaining participants in the subunit to estimate values for the other variables [49]. Lai and Wong [37], for example, used this remedy by dividing their questionnaire into two parts. They had information system (IS) internal auditors answer questions about the effectiveness of global information system (GIS) strategy and had IS directors answer questions dealing with the determinants of GIS strategy. Similarly, Linderman et al. [42] divided their questionnaire into two parts and had certain employees respond to performance questions and other employees respond to questions about quality tools, goals, and methods. Escalating the unit of analysis decreases the potential for CMV and generally increases the overall response rate because there are fewer questions on each survey [42]. However, the challenge for scholars is the need to obtain matched pairs: a response without its complement is virtually worthless. As shown in Table V, we recommend that researchers consider using escalation of the unit of analysis.
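The two-phase logic of escalating the unit of analysis can be sketched in a few lines of Python. This is a minimal illustration on hypothetical department-level data; the function name, variable names, and scores are ours and are not drawn from the studies cited above.

```python
import random
from statistics import mean

def escalate_unit_of_analysis(responses, seed=42):
    """Aggregate individual responses to the subunit level, scoring the
    independent and dependent variables from disjoint random halves of
    each subunit's respondents (cf. Podsakoff and Organ [49]).

    responses: dict mapping subunit id -> list of (iv_score, dv_score)
               tuples, one tuple per respondent.
    Returns a dict mapping subunit id -> (iv_mean, dv_mean), keeping only
    subunits that yield a matched pair (raters in both halves).
    """
    rng = random.Random(seed)
    unit_scores = {}
    for unit, resps in responses.items():
        resps = list(resps)
        rng.shuffle(resps)
        half = len(resps) // 2
        iv_raters, dv_raters = resps[:half], resps[half:]
        if not iv_raters or not dv_raters:
            continue  # an unmatched response is "virtually worthless"
        iv_mean = mean(r[0] for r in iv_raters)  # IV from one half only
        dv_mean = mean(r[1] for r in dv_raters)  # DV from the other half
        unit_scores[unit] = (iv_mean, dv_mean)
    return unit_scores

# Hypothetical data: (quality_practices, performance) on 1-5 scales.
survey = {
    "dept_a": [(4, 5), (5, 4), (3, 4), (4, 4)],
    "dept_b": [(2, 3), (3, 2), (2, 2), (3, 3)],
    "dept_c": [(5, 5)],  # a lone respondent cannot form a matched pair
}
scores = escalate_unit_of_analysis(survey)
print(scores)
```

Because the IV and DV scores for each subunit come from different respondents, the spurious same-source correlation is broken at the cost of discarding subunits without a complement (here, dept_c).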
This technique is appealing to IT, OM, and SCM researchers because it may be appropriate for a wide variety of topics investigated in these areas, such as OM capabilities (e.g., quality, flexibility). Escalating the unit of analysis, for example, may be an appealing approach to control for CMV when a supply chain capability resides primarily in one of the firms (e.g., the supplier has implemented Six Sigma) rather than being co-created or shared among firms. In this situation, the supplier could complete the survey items relating to the capability (Six Sigma practices) and the buying firm could complete the survey items relating to performance (quality of products). However, to the extent that either the Six Sigma practices or the quality of the products could be captured with secondary information, the research could achieve complete methodological separation.

Complete methodological separation of measurement involves obtaining measures of the independent and dependent variables from different sources through different methods. Chesteen et al. [13], for example, distributed a survey to nursing assistants to assess the exchange between management and patients, performed random inspections of nursing homes to gather quality measures, and obtained patient disability measures and other operational data through reports submitted to the state of Utah. By collecting data from multiple sources with multiple methods, the authors completely eliminated the potential threat of CMV. Complete methodological separation of measurement is the most rigorous approach to combating CMV. Thus, as shown in Table V, we encourage scholars to design studies in which the independent and dependent variables are captured through multiple sources with multiple methods when possible. Within IT, OM, and SCM research, for example, this would often involve some type of organizational concept (e.g., knowledge) captured via survey and performance data captured via a secondary source (e.g., Compustat). Conversely, the roles could be reversed, with the secondary data used to capture the
independent variable (e.g., an OM researcher examines the effects of poor financial performance on OM practices). To the extent that research continues to move toward more macro-level issues [48], [56], the use of alternate methods (e.g., secondary data) appears to hold more appeal. However, there are certainly caveats to this remedy, such as the reliance on the accuracy and timing of the secondary data. Additionally, while it addresses CMV, secondary data may result in Type II errors by being a bit off target [47], whereas a survey could be designed to better capture the construct of interest.

E. Multiple Source Studies—Statistical and Separation Remedies
As shown in Table IV, 4.17% of the multiple source studies combined multiple source separation remedies (e.g., complete methodological separation of measurement) with single source statistical remedies (e.g., Harman's single-factor test). Although using an alternate method to capture either the independent or dependent variables alleviates most concerns about CMV, single source statistical remedies can still be applied when some predictor and criterion variables are not captured from the alternate source. Craighead et al. [17], for example, used complete methodological separation of measurement by capturing the independent variables through a large-scale survey and then relying on two objective measures, return on assets and Altman's Z, as their dependent variables. However, their model contained other predictor-criterion variables that were both captured in the survey. Therefore, to ensure CMV was not a serious threat in the survey dataset, the authors used Harman's single-factor test in a CFA setting.
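The exploratory variant of Harman's single-factor test can be sketched as follows. This is a minimal Python illustration on simulated data: it uses an eigenvalue (principal-components) check on the item correlation matrix, which approximates the classic test, whereas the CFA variant mentioned above instead fits a single-factor measurement model and inspects its fit. The data, function name, and 50% threshold are our illustrative assumptions, and the test only flags possible CMV; it does not remove it.

```python
import numpy as np

def harman_single_factor(items, threshold=0.50):
    """Exploratory Harman's single-factor test [23]: perform an unrotated
    principal-components extraction on all survey items and check whether
    a single general factor dominates. If the first component explains a
    majority of the total variance, CMV is a plausible concern (note this
    is widely viewed as a weak, detection-only remedy; see [50]).

    items: (n_respondents, n_items) array of item scores.
    Returns (share of variance of the first component, boolean flag).
    """
    corr = np.corrcoef(items, rowvar=False)   # item correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
    share = eigvals[0] / eigvals.sum()        # variance of 1st component
    return share, share > threshold

# Simulated data: two 3-item constructs driven by independent factors,
# so no single factor should dominate.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(200, 1))
f2 = rng.normal(size=(200, 1))
items = np.hstack([f1 + rng.normal(scale=1.0, size=(200, 3)),
                   f2 + rng.normal(scale=1.0, size=(200, 3))])
share, flagged = harman_single_factor(items)
print(f"first factor explains {share:.0%} of variance; CMV flag = {flagged}")
```

With two distinct underlying constructs, the first component explains well under half of the total variance and the flag stays off; a single dominant method factor would push the share toward the threshold.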
F. Multiple Source Studies Within IEEE-TEM

As highlighted earlier, it is important to compare and contrast the CMV remedies used in related areas to help scholars benchmark how other researchers are addressing CMV. Therefore, we performed a more detailed analysis of the multiple source studies published in IEEE-TEM to compare how the OM-focused articles address CMV relative to the IT/other-focused articles. As shown in the bottom two rows of Table IV, only 17 multiple source studies were published in IEEE-TEM between 2001 and 2009, four of which are OM-focused and 13 of which are IT/other-focused. Despite the lower number of multiple source studies published in IEEE-TEM, these studies appear to have more fully addressed CMV than the single source studies published in IEEE-TEM (see Table I). Once again, the IT/other-focused articles did a better job of controlling for CMV than the OM-focused articles. Only one OM-focused article and six IT/other-focused articles did not use any traditional CMV remedies. By far the most popular CMV remedy used in multiple source studies was separation remedies, in which the independent variable is captured through one method (e.g., a survey) and the dependent variable is obtained through a different method (e.g., secondary data). More specifically, three OM-focused articles and six IT/other-focused articles used this technique. The most rigorous CMV remedy (i.e., multiple source statistical and separation remedies) was used in only one article, which was IT/other-focused. Overall, the IT/other-focused articles used more rigorous CMV remedies than the OM-focused articles, but once again, scholars in both disciplines need to more fully address CMV.

G. Multiple Source Studies—Summary

While it is encouraging that researchers are collecting data from multiple sources, multiple source studies represent only 22.50% of the survey articles published between 2001 and 2009 in IEEE-TEM, JOM, and POM. Further, the majority of multiple source studies (54.17%; see Table IV) did not use any traditional CMV remedies. Although collecting data from multiple sources instills more confidence in the research findings [43], the results for multiple source studies (Table IV) reveal that scholars are not using the multiple data sources to combat CMV to the fullest extent. Additionally, the results in Table IV do not clearly show an improvement over time. Thus, we encourage scholars to be more cognizant of implementing multiple source separation and/or statistical remedies while designing and executing multiple source studies. Table V summarizes our recommendations for various remedies that can be implemented in future multiple source studies. Similar to the recommendations provided in Table II, these remedies are not meant to be comprehensive; rather, they are remedies that may be appealing for multiple source studies. Ideally, scholars will move toward multiple source strategies and, when doing so, will more fully address CMV in the future.
V. CONCLUDING THOUGHTS FOR FUTURE RESEARCH IN IT, OM, AND SCM

CMV can distort observed relationships, thereby causing researchers to reach erroneous conclusions. As such, the unchecked presence of CMV undermines a study's potential contributions to knowledge, which may be particularly problematic in survey research. This has troubling implications for research efforts in the IT, OM, and SCM areas given the prominent role that survey research plays in testing theorized relationships. The CMV problem has led prominent journals in other fields (e.g., Journal of Applied Psychology, Journal of International Business Studies) to adopt very skeptical stances toward CMV. Our recommendation is that the IT, OM, and SCM areas share this skepticism. Indeed, we hope that our results regarding CMV remedies serve as a call to action for authors and reviewers to strengthen future research designs. Given the potentially severe effects of CMV, authors should apply CMV remedies (summarized in Tables II, III, and V) within their survey-based studies, and reviewers should hold authors accountable when they fail to do so. Further, given the relative strengths of different CMV remedies, authors and reviewers should consider "raising the bar" for addressing CMV.
REFERENCES

[1] N. F. Ashill, M. Rod, P. Thirkell, and J. Carruthers, “Job resourcefulness, symptoms of burnout and service recovery performance: An examination of call centre frontline employees,” J. Serv. Marketing, vol. 23, no. 5, pp. 338–350, 2009. [2] B. J. Avolio, F. J. Yammarino, and B. M. Bass, “Identifying common method variance with data collected from a single source: An unresolved sticky issue,” J. Manage., vol. 17, no. 3, pp. 571–587, Sep. 1991. [3] S. Ba and W. C. Johansson, “An exploratory study of the impact of eservice process on online customer satisfaction,” Prod. Oper. Manage., vol. 17, no. 1, pp. 107–119, Jan./Feb. 2008. [4] J. B. Barney, “Firm resources and sustained competitive advantage,” J. Manage., vol. 17, no. 1, pp. 99–120, Mar. 1991. [5] C. Blaszczynski and J. C. Scott, “The researcher’s challenge: Building a credible literature review using electronic databases,” Inf. Technol., Learn. Perform. J., vol. 21, no. 2, pp. 371–384, Fall 2003. [6] T. D. Bock and P. V. Kenhove, “Consumer ethics: The role of self-regulatory focus,” J. Bus. Ethics, vol. 97, no. 2, pp. 241–255, Apr. 2010. [7] K. K. Boyer and M. T. Frohlich, “Analysis of effects of operational execution on repeat purchasing for heterogeneous customer segments,” Prod. Oper. Manage., vol. 15, no. 2, pp. 229–242, Jun. 2006. [8] M. T. Brannick, D. Chan, J. M. Conway, C. E. Lance, and P. E. Spector, “What is method variance and how can we cope with it? A panel discussion,” Organizational Res. Methods, vol. 13, no. 3, pp. 407–420, Jul. 2010. [9] J. Campbell, “Editorial: Some remarks from the outgoing editor,” J. Appl. Psychology, vol. 67, no. 6, pp. 691–700, 1982. [10] D. T. Campbell and D. W. Fiske, “Convergent and discriminant validation by the multitrait-multimethod matrix,” Psychological Bull., vol. 56, no. 2, pp. 81–105, Mar. 1959. [11] S. J. Chang, A. V. Witteloostuijn, and L. Eden, “From the editors: Common method variance in international business research,” J. Int. Bus. Stud., vol.
41, no. 2, pp. 178–184, Feb. 2010. [12] C.-J. Chen, H.-A. Shih, and S.-Y. Yang, “The role of intellectual capital in knowledge transfer,” IEEE Trans. Eng. Manage., vol. 56, no. 3, pp. 402– 411, Aug. 2009. [13] S. Chesteen, B. Helgheim, T. Randall, and D. Wardell, “Comparing quality of care in non-profit and for-profit nursing homes: A process perspective,” J. Oper. Manage., vol. 23, no. 2, pp. 229–242, Feb. 2005. [14] J. Cohen, “A coefficient of agreement for nominal scales,” Educ. Psychological Meas., vol. 20, no. 1, pp. 37–46, Apr. 1960. [15] A. Cohen, “Individual values and the work/family interface: An examination of high tech employees in Israel,” J. Managerial Psychology, vol. 24, no. 8, pp. 814–832, 2009. [16] J. A. Cote and M. R. Buckley, “Estimating trait, method, and error variance: Generalizing across 70 construct validation studies,” J. Marketing Res., vol. 24, no. 3, pp. 315–318, Aug. 1987. [17] C. W. Craighead, G. T. M. Hult, and D. J. Ketchen, “The effects of innovation-cost strategy, knowledge, and action in the supply chain on performance,” J. Oper. Manage., vol. 27, no. 5, pp. 405–421, Oct. 2009. [18] C. W. Craighead and J. Meredith, “Operations management research: Evolution and alternative future paths,” Int. J. Oper. Prod. Manage., vol. 28, no. 8, pp. 710–726, 2008. [19] S. M. Crampton and J. A. Wagner, “Percept-percept inflation in microorganizational research: An investigation of prevalence and effect,” J. Appl. Psychology, vol. 79, no. 1, pp. 67–76, 1994. [20] S. R. Das and M. P. Joshi, “Process innovativeness in technology services organizations: Roles of differentiation strategy, operational autonomy and risk-taking propensity,” J. Oper. Manage., vol. 25, no. 3, pp. 643–660, Apr. 2007. [21] D. H. Doty and W. H. Glick, “Common method bias: Does common methods variance really bias results?,” Organizational Res. Methods, vol. 1, no. 4, pp. 374–406, Oct. 1998. [22] M. T. Frohlich and J. R. 
Dixon, “Reflections on replication in OM research and this special issue,” J. Oper. Manage., vol. 24, no. 6, pp. 865–867, 2006. [23] H. H. Harman, Modern Factor Analysis, 3rd ed. Chicago, IL: Univ. Chicago Press, 1976. [24] D. A. Harrison, M. E. McLaughlin, and T. M. Coalter, “Context, cognition, and common method variance: Psychometric and verbal protocol evidence,” Organizational Behav. Human Decis. Processes, vol. 68, no. 3, pp. 246–261, Dec. 1996. [25] G. S. Howard, “Why do people say nasty things about self-reports?” J. Organizational Behav., vol. 15, no. 5, pp. 399–404, Sep. 1994.
[26] R. Hubbard, D. E. Vetter, and E. L. Little, “Replication in strategic management: Scientific testing for validity, generalizability, and usefulness,” Strategic Manage. J., vol. 19, no. 3, pp. 243–254, Mar. 1998. [27] G. T. M. Hult, D. J. Ketchen, S. T. Cavusgil, and R. J. Calantone, “Knowledge as a strategic resource in supply chains,” J. Oper. Manage., vol. 24, no. 5, pp. 458–475, Sep. 2006. [28] G. T. M. Hult, D. J. Ketchen, and S. F. Slater, “Information processing, knowledge development, and strategic supply chain performance,” Acad. Manage. J., vol. 47, no. 2, pp. 241–253, 2004. [29] G. T. M. Hult, D. J. Ketchen, and E. L. Nichols, “Organizational learning as a strategic resource in supply management,” J. Oper. Manage., vol. 21, no. 5, pp. 541–556, Dec. 2003. [30] J. E. Hunter and F. L. Schmidt, Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Thousand Oaks, CA: Sage Publications, 2004. [31] T. D. Jick, “Mixing qualitative and quantitative methods: Triangulation in action,” Administ. Sci. Quart., vol. 24, no. 4, pp. 602–611, Dec. 1979. [32] D. A. Kenny and D. A. Kashy, “Analysis of the multitrait-multimethod matrix by confirmatory factor analysis,” Psychological Bull., vol. 112, no. 1, pp. 165–172, 1992. [33] M. Ketokivi and R. G. Schroeder, “Perceptual measures of performance: Fact or fiction?” J. Oper. Manage., vol. 22, no. 3, pp. 247–264, Jun. 2004. [34] R. Klein, “Customization and real time information access in integrated eBusiness supply chain relations,” J. Oper. Manage., vol. 25, no. 6, pp. 1366–1381, Nov. 2007. [35] T. Kline, L. M. Sulsky, and S. D. Rever-Moriyama, “Common method variance and specification errors: A practical approach to detection,” J. Psychology, vol. 134, no. 4, pp. 401–421, Jul. 2000. [36] N. R. Kowtha, “Engineering the engineers: Socialization tactics and new engineer adjustments in organizations,” IEEE Trans. Eng. Manage., vol. 55, no. 1, pp. 67–81, Feb. 2008. [37] V. S. Lai and B. K. 
Wong, “The moderating effect of local environment on foreign affiliate’s global IS strategy—Effectiveness relationship,” IEEE Trans. Eng. Manage., vol. 50, no. 3, pp. 352–361, Aug. 2003. [38] C. E. Lance, B. Dawson, D. Birkelbach, and B. J. Hoffman, “Method effects, measurement error, and substantive conclusions,” Organizational Res. Methods, vol. 13, no. 3, pp. 435–455, Jul. 2010. [39] C. E. Lance, D. J. Woehr, and A. W. Meade, “Case study: A monte carlo investigation of assessment center construct validity models,” Organizational Res. Methods, vol. 10, no. 3, pp. 430–448, Jul. 2007. [40] J. R. Landis and G. G. Koch, “The measurement of observer agreement for categorical data,” Biometrics, vol. 33, no. 1, pp. 159–174, Mar. 1977. [41] M. K. Lindell and D. J. Whitney, “Accounting for common method variance in cross-sectional research designs,” J. Appl. Psychology, vol. 86, no. 1, pp. 114–121, Feb. 2001. [42] K. Linderman, R. G. Schroeder, and A. S. Choo, “Six Sigma: The role of goals in improvement teams,” J. Oper. Manage., vol. 24, no. 6, pp. 779– 790, Dec. 2006. [43] M. K. Malhotra and V. Grover, “An assessment of survey research in POM: From constructs to theory,” J. Oper. Manage., vol. 16, no. 4, pp. 407–425, Jul. 1998. [44] N. K. Malhotra, S. S. Kim, and A. Patil, “Common method variance in IS research: A comparison of alternative approaches and reanalysis of past research,” Manage. Sci., vol. 52, no. 12, pp. 1865–1883, Dec. 2006. [45] R. Narasimhan and S. W. Kim, “Effect of supply chain integration on the relationship between diversification and performance: Evidence from Japanese and Korean firms,” J. Oper. Manage., vol. 23, no. 3, pp. 303–323, Jun. 2002. [46] K. A. Neuendorf, The Content Analysis Guidebook. Thousand Oaks, CA: Sage Publications, 2002. [47] V. L. Pace, “Method variance from the perspectives of reviewers: poorly understood problem or overemphasized complaint?,” Organizational Res. Methods, vol. 13, no. 3, pp. 421–434, Jul. 2010. [48] A. 
Pilkington and J. Meredith, “The evolution of the intellectual structure of operations management—1980–2006: A citation/co-citation analysis,” J. Oper. Manage., vol. 27, no. 3, pp. 185–202, Jun. 2009. [49] P. M. Podsakoff and D. W. Organ, “Self-reports in organizational research: Problems and prospects,” J. Manage., vol. 12, no. 4, pp. 531–544, Dec. 1986. [50] P. M. Podsakoff, S. B. MacKenzie, J. Y. Lee, and N. P. Podsakoff, “Common method biases in behavioral research: A critical review of the literature and recommended remedies,” J. Appl. Psychology, vol. 88, no. 5, pp. 879–903, Oct. 2003.
[51] K. Popper, Conjectures and Refutations. London, U.K.: Routledge and Kegan Paul, 1978. [52] G. Prendergast, P. Liu, and D. T. Y. Poon, “A Hong Kong study of advertising credibility,” J. Consum. Marketing, vol. 26, no. 5, pp. 320–329, 2009. [53] A. Rego, S. Leal, M. P. Cunha, J. Faria, and C. Pinho, “How the perceptions of five dimensions of corporate citizenship and their inter-inconsistencies predict affective commitment,” J. Bus. Ethics, vol. 94, no. 1, pp. 107–127, Jun. 2010. [54] T. Reisenwitz, R. Iyer, D. B. Kuhlmeier, and J. K. Eastman, “The elderly’s internet usage: An updated look,” J. Consum. Marketing, vol. 24, no. 7, pp. 406–418, 2007. [55] H. A. Richardson, M. J. Simmering, and M. C. Sturman, “A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance,” Organizational Res. Methods, vol. 12, no. 4, pp. 762–800, Oct. 2009. [56] M. J. Rungtusanatham, T. Y. Choi, D. G. Hollingworth, Z. Wu, and C. Forza, “Survey research in operations management: Historical analyses,” J. Oper. Manage., vol. 21, no. 4, pp. 475–488, Jul. 2003. [57] M. H. Safizadeh, J. M. Field, and L. P. Ritzman, “An empirical analysis of financial services processes with a front-office or back-office orientation,” J. Oper. Manage., vol. 21, no. 5, pp. 557–576, Dec. 2003. [58] M. D. Santoro and A. K. Chakrabarti, “Corporate strategic objectives for establishing relationships with university research centers,” IEEE Trans. Eng. Manage., vol. 48, no. 2, pp. 157–163, May 2001. [59] D. P. Schwab, “Research methods for organizational studies,” in Measurement Foundations: Validity and Validation, 2nd ed. Evanston, IL: Routledge, 1999, pp. 31–48. [60] E. Siemsen, A. Roth, and P. Oliveira, “Common method bias in regression models with linear, quadratic, and interaction effects,” Organizational Res. Methods, vol. 13, no. 3, pp. 456–476, Jul. 2010. [61] D. Straub, “Validating instruments in MIS research,” MIS Quart., vol. 13, no. 
2, pp. 147–169, Jun. 1989. [62] D. Straub, M. C. Boudreau, and D. Gefen, “Validation guidelines for IS positivist research,” Commun. AIS, vol. 13, no. 24, pp. 380–427, 2004. [63] Z. Tang, P. M. Kreiser, L. Marino, and K. M. Weaver, “Exploring proactiveness as a moderator in the process of perceiving industrial munificence: A field study of SMEs in four countries,” J. Small Bus. Manage., vol. 48, no. 2, pp. 97–115, Apr. 2010. [64] L. M. Valenzuela, J. P. Mulki, and J. F. Jaramillo, “Impact of customer orientation, inducements and ethics on loyalty to the firm: Customers’ perspective,” J. Bus. Ethics, vol. 93, no. 2, pp. 277–291, May 2010. [65] J. D. Werbel and P. L. Henriques, “Different views of trust and relational leadership: Supervisor and subordinate perspectives,” J. Managerial Psychology, vol. 24, no. 8, pp. 780–796, 2009. [66] L. J. Williams, J. A. Cote, and M. R. Buckley, “Lack of method variance in self-reported affect and perceptions at work: Reality or artifact?” J. Appl. Psychology, vol. 74, no. 3, pp. 462–468, Jun. 1989. [67] L. J. Williams, N. Hartman, and F. Cavazotte, “Method variance and marker variables: A review and comprehensive CFA marker technique,” Organizational Res. Methods, vol. 13, no. 3, pp. 477–514, Jul. 2010. [68] R. Yee, A. Yeung, and T. C. Cheng, “The impact of employee satisfaction on quality and profitability in high-contact service industries,” J. Oper. Manage., vol. 26, no. 5, pp. 651–668, Sep. 2008.
Christopher W. Craighead received the Ph.D. degree from Clemson University. He joined the Smeal College of Business, The Pennsylvania State University, University Park, PA, in 2008. He is the author or coauthor of papers published in Journal of Operations Management, Production and Operations Management, Decision Sciences, Journal of Business Logistics, among others. He serves as an Associate/Area Editor of the Journal of Operations Management, Decision Sciences and Operations Management Research, and also serves on the Editorial Review Boards of Production and Operations Management, Journal of Business Logistics, and Production and Inventory Management Journal. His primary research interests include strategic sourcing and supply management, with a focus on global supply chain disruptions/risk and resilience.
David J. Ketchen, Jr. received the Ph.D. degree in business administration from The Pennsylvania State University, University Park, PA. He is currently a Lowder Eminent Scholar and a Professor of management at Auburn University, Auburn, AL. He is also the Executive Director of the Lowder Center for Family Business and Entrepreneurship. His current research interests focus primarily on entrepreneurship and franchising, methodological issues in organizational research, strategic supply chain management, and the determinants of superior organizational performance.
Kaitlin S. Dunn graduated summa cum laude and received the B.S. degree in finance from the University of Florida, Gainesville, and the M.S. degree in information and telecommunication systems for business from Johns Hopkins University, Baltimore, MD. She is a doctoral candidate in Supply Chain and Information Systems in the Smeal College of Business, The Pennsylvania State University, University Park, PA. Her research interests include supply chain knowledge and learning, supply chain risks and disruptions, and food supply chains. Kaitlin’s research has been published, or is forthcoming, in Journal of Business Logistics and IEEE Transactions on Engineering Management.
G. Tomas M. Hult received the Ph.D. degree from the University of Memphis, TN. He is the Eli Broad Professor of marketing and international business and the Director of the Center for International Business Education and Research in the Eli Broad College of Business at Michigan State University, East Lansing. He is also the Executive Director of the Academy of International Business. He is the author or coauthor of papers published in Academy of Management Journal, Strategic Management Journal, Journal of Marketing, Journal of the Academy of Marketing Science, Journal of International Business Studies, Journal of Operations Management, and Decision Sciences, among others. He is currently an Editor of the Journal of the Academy of Marketing Science.