
Problems With Formative And Higher-Order Reflective Variables

Nick Lee, Aston University
John W. Cadogan, Loughborough University

Accepted for publication in Journal of Business Research (March 2012)

Send correspondence to Nick Lee, Marketing, Aston Business School, Aston University, Birmingham, B4 7ET, Great Britain, telephone: +44 121 2043152 (Email: [email protected]) or John W. Cadogan, Marketing, School of Business and Economics, Loughborough University, Loughborough, Leicestershire, LE11 3TU, Great Britain, telephone: +44 1509 228832 (Email: [email protected]).

© John Cadogan and Nick Lee 2012. Manuscript in press at Journal of Business Research. Please do not cite or quote without express permission of the authors.

Abstract

Cadogan and Lee (this issue) discuss the problems inherent in modeling formative latent variables as endogenous. In response to the commentaries by Rigdon (this issue) and Finn and Wang (this issue), the present article extends the discussion on formative measures. First, the article shows that regardless of whether statistical identification is achieved, researchers are unable to illuminate the nature of a formative latent variable. Second, the study clarifies issues regarding formative indicator weighting, highlighting that the weightings of formative components should be specified as part of the construct definition. Finally, the study shows that higher-order reflective constructs are invalid, highlights the damage their use can inflict on theory development and knowledge accumulation, and provides recommendations on a number of alternative models which should be used in their place (including the formative model).

Keywords: Measurement, Reflective, Formative, Second-Order Factor, Higher-Order Construct, Unidimensional, Multidimensional, Dimensionality.


1. Introduction

The invited commentaries by Rigdon (this issue), and Finn and Wang (this issue), on the paper Improper Use of Endogenous Formative Variables (IFV), are greatly appreciated. Both comments throw important light on significant problems in contemporary understanding of the formative model and in current measurement practices in business research. The current authors share with Rigdon and with Finn and Wang a desire to expand understanding of measurement theory, and to make a positive impact on the application of measurement by practicing business researchers. Nevertheless, the commentaries contain a number of important points that require counter-comment or elaboration. The goals of the present paper are threefold. First, to reexamine the original intentions behind IFV, and in so doing, place a variety of Rigdon's comments in context, demonstrating that he is in agreement with the current authors in many places. In particular, the current paper clarifies why one can never know how a formative latent variable varies – regardless of statistical identification issues. Second, building on Rigdon's comments, the current paper demonstrates that, for formative variables to have utility in theoretical models, the weightings of the formative indicators should be specified as part of the construct definition prior to any analysis. Third, while the current authors share a number of important views with Finn and Wang, the authors show that the idea of a higher-order reflective construct makes no conceptual sense, and that the latter's use impedes theory development efforts and knowledge accumulation.

2. Response to Rigdon

Rigdon's commentary on IFV provides some important points which help make the case IFV presents even stronger. In fact, in essence, Rigdon agrees entirely with the key message underpinning IFV – that researchers should not model antecedents to formative
variables at the construct/aggregate level. Instead, antecedents should be modeled at the individual formative item level. There are, however, a number of aspects of Rigdon’s commentary that require clarification.

2.1. Equations versus pictures

First, Rigdon argues that IFV exhibits a 'lack of application of basic mathematics', since the arguments could be expressed using mathematical equations. The authors agree with Rigdon to this extent: a purposeful attempt was made to ensure that IFV is accessible to those who are not mathematically inclined. For better or worse, many applied researchers – those who go about the day-to-day business of creating and using measures – are more comfortable with diagrammatic representations of models.

2.2. It's too obvious, so why bother?

Second, Rigdon implies that the key thesis of IFV (that it is wrong to use formative endogenous variables) is self-evident, and so questions the need to explain the problem. Yet, in practice and in prestigious journals, applied researchers commonly model formative variables as endogenous (e.g., Dowling 2009; Hoetker and Mellewigt 2009; Jarvis, MacKenzie, and Podsakoff 2003; Klein and Rai 2009; van Riel, Berens and Dijkstra 2009; Verhoef and Leeflang 2009). As such, whether or not the illogical nature of formative endogenous variables is 'obvious', there is clearly a need to clarify the issues for researchers.

2.3. Can one ever know how a formative latent variable varies?

Third, Rigdon suggests that IFV 'errs' in some way 'when discussing issues of identification'. Rigdon states that the authors 'assert repeatedly that if a formatively measured construct has a structural error term with a free structural variance and no reflective
indicators, then the construct's total variance is not identified'. Rigdon notes that such a model would be statistically identified with the addition of two or more exogenous constructs. Of course, Rigdon is correct: such a model would be identified. However, the possibility of statistical identification should not be taken to invalidate the claim in IFV that one can never operationally use a formative latent variable in an empirical study. Researchers should not be misled into thinking that achieving statistical identification allows one to obtain information about the variance of a formative latent variable. Accordingly, the authors take the current opportunity to reframe their logic using a fictitious example to demonstrate the impossibility of determining the variance of a formative latent variable.

Figure 1a shows a formative latent variable model and, following Rigdon, socio-economic status (SES) is chosen as the variable of interest. In the example, SES (η1) is defined by three formative indicators: education (x1), job prestige (x2), and income (ζ1). Note that the formative variable is latent because, in the fictitious data set, data on income is not available. Thus, the conceptual model of SES contains two observed formative indicators (job prestige and education) and one unobserved formative indicator, income (which is represented by an error term, ζ1). In IFV the authors explain that the formative latent model is useful as a conceptual tool, potentially, but operationally it is incomplete: that is, the ζ1 term represents an operational error (variable(s) that have not been measured) and, as a consequence, one cannot accurately model variance in SES using the data at hand (job prestige and education). Only the model presented in Figure 1b is empirically testable.

Figure 1 about here

In Figure 1b, the two measured indicators, job prestige and education, completely define a composite variable, η2. Thus, η2 contains information on the variance of job prestige and education, but contains no information on the variance of income. Accordingly, η2 is not the
same thing as SES, and information on η2 does not necessarily provide information on SES. Under the logic of the formative model, η2 is error-free, and so in Figure 1b, the error term (ζ2) has a magnitude of zero. Drawing on the arguments in IFV, the authors suggest that, in cases where formative measures are used in empirical studies, one cannot draw conclusions about how SES relates to anything, since one does not know the variance of SES (information on income is missing): one can only draw conclusions about how η2 is related to other variables.

What happens if one follows Rigdon's advice and adds some endogenous variables to the model? Will this provide information about the variance of the formative latent SES construct (i.e., η1)? In order to know about the variance of SES, information is needed on x1 and x2 (which is available in the fictitious data set) and on ζ1 (which is missing from the fictitious data set). Rigdon's suggested addition of endogenous variables to the model initially seems to offer some assistance on this front. For example, Figure 1c uses fertility (y1) and life expectancy (y2) as potential endogenous variables of latent variable η3 (SES is believed to be associated with health-related outcomes such as these: Bollen, Glanville and Stecklov 2007; Burström, Johannesson and Diderichsen 2005). Adding such variables identifies the model, and provides an error estimate, ζ3. Accordingly, it appears that the addition of the endogenous variables in Figure 1c provides information on the variance of the formative latent SES construct, since the researcher has information on x1, x2 and ζ3.

However, it would be a mistake to assume that the estimate of ζ3 provides any information on ζ1, or any information on ζ1's relationship with SES (η1). Instead, on adding the endogenous outcomes, η3's meaning changes from that of η1. That is, η3 now is not a variable whose meaning is grounded in the xs and a ζ term: with the addition of the endogenous variables, η3 becomes a latent variable that has its meaning and variance
grounded in the covariance of (i.e. the common factor underpinning) fertility and life expectancy (Howell, Breivik and Wilcox 2007). The point being made becomes obvious when one considers that, in Figure 1c, the conceptual content of η1 has no impact on the value of ζ3 that is returned upon statistical analysis. The following example demonstrates this point.

Imagine that a researcher decides to change the conceptual definition of SES, so that it contains a fourth component, access to non-financial resources. However, the researcher does not collect new data and so only has information on job prestige and education: income and access to non-financial resources are unmeasured in the fictitious data set. Under this scenario, and in order to accommodate the change in the conceptual meaning of SES in Figure 1a, the value of ζ1 changes to include both income and access to non-financial resources. Yet, when one ripples this change in the definition of SES, and the consequent change in the value of ζ1, through to Figure 1c, the numerical solution produced by running Figure 1c on the fictitious data remains constant, regardless of whether SES's definition contains or excludes access to non-financial resources: none of the model estimates (including that for ζ3) change. This is because the value of ζ3 is not dependent on the meaning of SES, and because ζ3 does not provide information on ζ1. In fact, ζ3 merely represents the unexplained variance that would be observed as a result of using x1 and x2 to predict the common factor underpinning y1 and y2.

The upshot is that, while the addition of endogenous variables to Figure 1a may produce a new model that is statistically identifiable, the new model (Figure 1c) is not a model that can claim to measure SES. As such, one is still no closer to knowing anything about the variance of a formative latent variable.
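To see this numerically, consider the following minimal sketch (an illustration added here, not an analysis from IFV or the commentaries). Simulated data stand in for the fictitious data set, a standardized composite of fertility and life expectancy stands in for the common factor underpinning y1 and y2, and ordinary least squares stands in for a full SEM estimation; all variable names and coefficient values are hypothetical. The computation never refers to income or to access to non-financial resources, so changing the conceptual definition of SES cannot change any of the numbers it returns, including the ζ3 analogue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# The 'fictitious data set': only education (x1) and job prestige (x2) are
# observed; income (the conceptual content of zeta1) is never measured.
education = rng.normal(size=n)
job_prestige = 0.4 * education + rng.normal(size=n)

# Outcomes believed to relate to SES (fertility y1, life expectancy y2),
# generated from an arbitrary process purely for illustration.
latent_health = 0.5 * education + 0.5 * job_prestige + rng.normal(size=n)
fertility = -0.6 * latent_health + rng.normal(size=n)
life_expectancy = 0.7 * latent_health + rng.normal(size=n)

# Crude proxy for the common factor underpinning y1 and y2 (a full SEM would
# estimate this factor; a standardized composite is enough for the point).
def z(v):
    return (v - v.mean()) / v.std()

eta3_proxy = z(z(life_expectancy) - z(fertility))

# Regress the common-factor proxy on the two observed formative indicators.
X = np.column_stack([np.ones(n), education, job_prestige])
beta, *_ = np.linalg.lstsq(X, eta3_proxy, rcond=None)
residual_variance = np.var(eta3_proxy - X @ beta)  # plays the role of zeta3

print("weights for education and job prestige:", np.round(beta[1:], 2))
print("residual variance (the zeta3 analogue):", round(residual_variance, 2))

# Nothing above refers to income or to access to non-financial resources, so
# redefining SES to include or exclude them changes none of these estimates:
# zeta3 is silent about zeta1, and about the variance of SES itself.
```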


2.4. The weighting is the hardest part

Rigdon, in discussing the issue of formative indicator weighting, implies that researchers have the choice of either estimating weights via the operation of some statistical algorithm (e.g., PLS) on a data set, or fixing the weights to what he terms the 'true value'. Problematically, such a perspective implies that a formative latent variable must represent some kind of 'real entity', that this entity has a 'true score', and that this true score can be estimated through analysis of some data. The authors disagree with Rigdon on this front. Formative latent variables are simply composites, defined by researchers. As Law, Wong and Mobley (1998, p. 743) explain, a formative construct "does not exist at a deeper conceptual level than its dimensions". In short, a formative construct is merely a set of dimensions combined using some heuristic that is part of the construct definition (Hardin, Chang, Fuller and Torkzadeh 2011). By implication, it is the job of the researcher to define the relative weightings of the formative dimensions at the time that the researcher decides to create a formative variable. There is no single 'true' weighting profile to be discovered by empirical research – weighting allocation is something the researcher is responsible for. Indeed, the literature provides guidance on this front, arguing that where sufficient theoretical rationale exists, the "weights of each dimension in forming the final construct" should be specified a priori by the researcher, and that in cases where "theories concerning the construct are not detailed enough to prescribe the exact algebraic relation between the multidimensional construct and its dimensions", researchers should develop the necessary theory to come up with weightings a priori (Law et al. 1998, p. 751).

Continuing with Rigdon's example of SES, the contribution that the components of SES make to the construct should be part of the definition of SES (Hardin et al. 2011). If a researcher decides that, for example, the income component of SES is more important in determining SES than education and job prestige, they must defend this definition against an
alternative definition, perhaps that SES is a simple equally-weighted linear composite of its constituent elements. These decisions are in the hands of the research community, and should not be decided by a statistical algorithm which blindly allocates a set of weights to the formative components. Allowing an algorithm to determine formative indicator weightings is dangerous because it reduces the ability of the research community to compare research findings across studies: the weights obtained "are often context dependent, [and] will likely change when estimated in a different [setting]. In some instances, indicator weights that are significant in one setting may be insignificant in another, making results across settings difficult to compare" (Hardin et al. 2011, p. 297).

It is entirely possible that the weightings obtained via the analysis of any individual data set may make no sense in terms of the theoretical definition of the formative composite. At the extreme, for instance, analysis of a data set may return zero or near-zero regression weights for education and job prestige as formative dimensions of SES, leaving only income as a component of SES: in this case, the formative variable is no longer SES – it is just income (Howell et al. 2007). Further, in the case of a formative construct that is less well understood or researched (e.g., export coordination – see Diamantopoulos and Siguaw 2006), the researcher may be tempted to exclude formative indicators that return zero weightings, and this would lead to the problem of different studies containing formative variables that have the same name but that have different content.

Non-zero formative weightings that differ in magnitude across studies pose a similar problem. Continuing the SES example, analysis may return a standardized weighting for income in one study of 0.20, but in another study the weighting may be 0.80. As such, comparable scores on the dimensions that define SES would result in different SES scores across the studies, again rendering cross-study comparisons meaningless (Hardin et al. 2011).


In conclusion, it is important to understand that the only true scores for formative weightings are those defined by the researcher, not those 'discovered' by various statistical analyses, since the latter will differ across studies. It is important, therefore, that researchers understand the construct they are studying sufficiently, so that they are able to explicitly define the construct in terms of predetermined weights for the construct's formative dimensions (Hardin et al. 2011).

Yet, what should researchers do with studies that have statistically estimated formative indicator weightings in the past? How should reported findings be interpreted? One option is to simply reject prior research findings if they have allowed formative weightings to be determined by chance in this way. Yet this does seem rather extreme. An alternative is to: (a) derive a set of weights that can be applied to the formative composite each time it is used, either using theoretical criteria to decide the constant weighting profile, or perhaps using Hardin et al.'s (2011) meta-analytically derived weighting approach, or even using exploratory research approaches (e.g., Delphi techniques), and (b) assess past research in light of the derived weighting profile, to identify studies that have used weighting profiles that are similar to the derived weights. Studies that report weighting profiles that are closer to the derived weightings would therefore be more comparable.
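The instability problem is easy to illustrate numerically. The sketch below is not from the original paper; the variable names, coefficient values, and the use of ordinary least squares as a crude stand-in for weighting algorithms such as PLS are all illustrative assumptions. It simulates two hypothetical studies in which an algorithm is allowed to 'estimate' the weights of the SES components against a study-specific outcome: the estimated weights drift across contexts, whereas a composite built from a predetermined weighting profile retains the same meaning in every study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_study(n, b_income, b_education, b_job_prestige):
    """One hypothetical study: three SES components plus an outcome whose
    dependence on those components differs across study contexts."""
    income, education, job_prestige = rng.normal(size=(3, n))
    outcome = (b_income * income + b_education * education
               + b_job_prestige * job_prestige + rng.normal(size=n))
    return np.column_stack([income, education, job_prestige]), outcome

# Weighting profile fixed a priori as part of the construct definition
# (the values themselves are purely illustrative).
a_priori_weights = np.array([0.5, 0.25, 0.25])

for label, coefs in [("Study A", (0.8, 0.3, 0.3)), ("Study B", (0.2, 0.7, 0.1))]:
    X, y = simulate_study(2_000, *coefs)

    # 'Estimated' weights: let an algorithm decide what SES is in this sample
    # (OLS on the outcome, standing in for data-driven weight estimation).
    est, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
    estimated_weights = est[1:]

    ses_fixed = X @ a_priori_weights        # the same construct in every study
    ses_estimated = X @ estimated_weights   # a different composite in every study

    r = np.corrcoef(ses_fixed, ses_estimated)[0, 1]
    print(label, "estimated weights:", np.round(estimated_weights, 2),
          "| corr with a priori composite:", round(r, 2))
```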

3. Response to Finn and Wang

The authors are also grateful for Finn and Wang's commentary on their work, since it raises a number of key issues for business researchers. In particular, the authors strongly support Finn and Wang's call for researchers to devote far more effort to construct conceptualization, and in turn to theorizing about the link between constructs and measures. An area of particular interest, mentioned repeatedly in Finn and Wang's comment, concerns the idea of higher-order constructs. It is certainly true that sources of variation at the
construct and item level need to be carefully considered, and that consideration must also extend to second-order factors as well as first-order measures. In fact, while second-order factors appear to be a convenient way of dealing with certain data structures, they remain the subject of much confusion in the literature. As such, the authors appreciate the opportunity to provide clarification on the logical fallacies underlying the idea of higher-order reflective constructs in social science research.

3.1. Higher-order reflective constructs? No such thing

Specifically, some researchers choose to model multiple constructs as reflecting a "higher-order" entity. That is, as Finn and Wang point out, if a set of reflective items that are purported to measure a single construct are not unidimensional (i.e. the items reflect more than one construct), a reflective first-order specification can only be maintained if a more complex second-order measurement model is specified. Finn and Wang go on to say that if the reflective first-order factors are correlated, the second-order structure can still be reflective (as in Figure 2a). The authors are uncomfortable with this position.

Figure 2a about here

Rather, the authors argue that higher-order reflective constructs are, at worst, misleading, and at best meaningless. Researchers should, therefore, avoid the use of higher-order reflective constructs. To explain why the authors make this statement, the meaning of reflective constructs is briefly reviewed. For a first-order reflective construct, the following equation expresses the relationship between the latent construct and an observed item (Bollen 2002):

yi = ηi + εi                                                                                (1)
where yi is the observed variable y for the ith individual, ηi is the value of the true score of the latent variable η for the ith individual, and εi is an error component. The items in a first-order construct must be redundant, since they are measures of the same single construct (DeVellis 1991). Indeed, the notion that the measures of a construct all measure just one thing "is the most critical and basic assumption of measurement theory" (Hattie 1985, p. 49). As Edwards (2011, p. 4) says, "[r]eflective measures are assumed to represent a single dimension, such that the measures describe the same underlying construct, and each measure is designed to capture the construct in its entirety…[B]ecause they describe the same dimension, reflective measures are conceptually interchangeable, and removing any one of the measures would not alter the meaning or interpretation of the construct… [R]eflective measures exhibit what DeVellis (1991) calls useful redundancy, such that the items have the same meaning without relying on the same terminology or grammatical structure."

By implication, the unidimensional imperative underpinning reflective measurement means that a reflective measure does not contain multiple dimensions and, as such, reflective measures should not be designed to contain items that capture different 'facets' of the construct (there should only be the one facet or dimension underpinning a reflective measure). The idea that a reflective measure can have multiple facets is a significant (and sadly common) misinterpretation of the key premises of classical measurement theory, and may be the root cause of the use of higher-order reflective constructs. If a researcher believes that a construct contains multiple facets or dimensions, then they are making an error in defining the construct as reflective.

With this clarification of classical measurement theory in hand, the notion of higher-order reflective constructs can be introduced. In the latter, the logic of first-order reflective constructs is shifted up one or more levels of abstraction (Law et al. 1998). Accordingly, one can rewrite equation (1) for a higher-order construct:
ηi = ξi + ζi                                                                                (2)

where ηi is a first-order construct score for the ith individual, ξi is the true score for the higher-order construct for the ith individual, and ζi is an error term. If the higher-order construct is reflective, and if it has multiple first-order constructs that measure it, the first-order constructs (the ηs) must be identical in meaning to each other because they reflect the same construct (ξ). The unidimensional imperative does not disappear in the higher-order reflective model, and so a higher-order reflective measure must not contain multiple dimensions. If the various first-order constructs tap multiple dimensions (that is, if the ηs have different conceptual content), then the higher-order construction is not a valid reflective construct.

Looking at Figure 2a, which provides a representation of a second-order reflective measure, if the first-order constructs conform to the laws of reflective measurement, then η1, η2 and η3 are interchangeable, and are redundant. Also, each of the measurement items (the ys) of the various ηs is conceptually interchangeable, and redundant, since they all measure the same construct (ξ). Simply substituting equation 2 into equation 1 demonstrates this fact:

yi = ξi + δi                                                                                (3)

where δi represents an error term (= ζi + εi). Equation 3 demonstrates that all observed items (the ys) of any first-order construct (η) are also measures of the second-order construct (ξ), and that there is no need to include the first-order construct in a model that contains ys and ξ. The appropriate way of modeling these items in a measurement model is shown in Figure 2b. There is no need for, or benefit gained from, inclusion of the ηs, and so a reflective higher-order model is a needless, non-parsimonious measurement approach: if the measures are really reflective, a first-order model (as shown in Figure 2b) should account for the variance in items. According to this logic, researchers have no need to model higher-order constructs as reflective.
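For readers who wish to see the substitution written out step by step, the following restates equations (1)-(3) in standard notation (no new assumptions are introduced; this simply makes the algebra behind equation (3) explicit):

```latex
\begin{align}
  y_i &= \eta_i + \varepsilon_i
      && \text{equation (1): item } y \text{ reflects the first-order construct } \eta \\
  \eta_i &= \xi_i + \zeta_i
      && \text{equation (2): } \eta \text{ reflects the second-order construct } \xi \\
  y_i &= (\xi_i + \zeta_i) + \varepsilon_i
       = \xi_i + \underbrace{(\zeta_i + \varepsilon_i)}_{\delta_i}
      && \text{equation (3): } y \text{ is a direct measure of } \xi
\end{align}
```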

3.2. Alternatives to the higher-order reflective construct

What about the case where the various first-order constructs contain distinct conceptual content, and so are non-redundant, and cannot be interchanged? A researcher may be tempted to say: "I need to use a higher-order reflective construct because my first-order constructs tap different facets of ξ1". This scenario may well typify current approaches to the use of higher-order constructs. The problem here, of course, is that the first-order constructs do not meet the necessary defining conditions of a reflective measure: they are not conceptually identical. In the following, the alternative ways that a researcher might more correctly model their first-order constructs are presented.

Looking again at Figure 2a, if the ηs are different constructs from each other, then the simplest model is one which assumes that the ηs are independent from each other (see Figure 2c). This approach is conceptually clean, and potentially adds richness to the research model relative to Figure 2a, since the antecedents and consequences of the three η constructs can differ. Of course, the approach presented in Figure 2c also adds complexity to the research model, requiring the generation of a greater number of hypotheses relative to a conceptual model containing a higher-order reflective ξ1 construct (Figure 2a). These increased demands in terms of theory generation may, perhaps, explain why researchers are tempted to seek ways of aggregating first-order constructs into higher-order models.
However, as demonstrated above, higher-order reflective models are not valid when the first-order constructs are not conceptually identical. In the latter case, if the researcher feels impelled to combine first-order constructs into a single "thing", then the only logical way forward is to combine the dimensions formatively as in Figure 2d. Here, the researcher would use an aggregation heuristic (see the discussion on predetermined weightings above) so that the first-order reflective constructs ξ1, ξ2 and ξ3 define the formative latent composite, η1.

The difference between the Figure 2a and 2d models is not trivial, and has a serious impact on how one would go about developing and testing theory. For example, if one adopts the higher-order reflective model (Figure 2a), and develops a theory about its antecedents, then one would automatically model the antecedent variable as a direct predictor of ξ1: one would not directly predict the ηs that are, supposedly, measures of ξ1. However, as IFV demonstrates, the antecedents to an endogenous formative variable cannot operate at the level of the formative construct (η1 in Figure 2d), but must operate by predicting the first-order constructs, ξ1, ξ2 and ξ3. IFV also demonstrates that predicting the first-order constructs (ξ1, ξ2 and ξ3) directly can provide very different results, and very different research conclusions, relative to a model that predicts the higher-order construct directly. Researchers who use the higher-order reflective approach to aggregate their data, and then predict the higher-order reflective variable in their theory testing are, therefore, increasing their likelihood of drawing erroneous conclusions regarding relationships between model variables.
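The practical consequence can be illustrated with a brief numerical sketch (not taken from IFV; the antecedent, the coefficient values, and the equal weighting profile are hypothetical choices made purely for illustration). The antecedent raises one component, lowers another, and leaves a third untouched: the component-level paths are substantial, yet the path to the equally weighted aggregate is approximately zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Hypothetical antecedent and three conceptually distinct first-order constructs:
# the antecedent raises xi1, lowers xi2, and has no effect on xi3.
antecedent = rng.normal(size=n)
xi1 = 0.6 * antecedent + rng.normal(size=n)
xi2 = -0.6 * antecedent + rng.normal(size=n)
xi3 = rng.normal(size=n)

# Formative aggregate built with an (illustrative) equal weighting profile,
# as in the Figure 2d specification.
eta1 = (xi1 + xi2 + xi3) / 3

def ols_slope(y, x):
    """Simple OLS slope of y on x, standing in for a structural path estimate."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print("antecedent -> xi1 :", round(ols_slope(xi1, antecedent), 2))   # ~  0.6
print("antecedent -> xi2 :", round(ols_slope(xi2, antecedent), 2))   # ~ -0.6
print("antecedent -> xi3 :", round(ols_slope(xi3, antecedent), 2))   # ~  0.0
print("antecedent -> eta1:", round(ols_slope(eta1, antecedent), 2))  # ~  0.0

# The aggregate-level path is near zero even though two of the component-level
# paths are substantial and opposite in sign: theorizing and testing only at
# the aggregate level hides what is happening among the components.
```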

3.3. Missing antecedents and 'ghost' variables

In some instances, the researcher may, through a process of exploratory analyses, observe that a higher-order reflective model appears to fit the data. The model, because of its apparent parsimony, may seem to be more appealing than a model such as that presented in
Figures 2c or 2d. However, knowing that the reflective higher-order model is invalid, it is instructive to consider why the observed data points towards a higher-order reflective model. The answer resides in two broad classes of explanation, both of which should be of concern to the researcher.

First, the ξ1 construct may be a proxy for an unmeasured common cause of η1, η2 and η3 (see Figure 2e): that is, ξ1 is an antecedent variable to the ηs. The problem here is that the unmeasured ξ1 construct does not have the same conceptual meaning as η1, η2 or η3 (i.e. η1, η2 and η3 do not act as measures of ξ1). Failure to recognize this possibility is a failure to recognize that the nomological network of ξ1 is not the same as the nomological networks of the ηs. Inevitably, theory testing under this scenario will produce misleading study conclusions.

Second, ξ1 might be a 'ghost variable', an empirical manifestation of the covariances between the first-order η1, η2 and η3 constructs. Specifically, the first-order factors may covary, not because they are identical in conceptual meaning (they are not), nor because they are outcomes of a common cause but, rather, because that is how the world is: they just happen to covary (Cattell, 1978). The meaning of ξ1 in this situation is hard to fathom, because the shared variance defining the ξ1 variable is not representative of any common theoretical construct. Using the higher-order reflective variable to do any theory testing would be ill-advised under these conditions, because the results one would obtain would not map back to a conceptually meaningful variable. And, of course, whenever there is covariance between ηs, a higher-order reflective measure would appear to be present in the data, despite the fact that such measures are not logical.
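The 'ghost variable' point has a simple algebraic illustration (this sketch is an addition for illustration, not part of the original argument; the correlation values are hypothetical). With three first-order constructs, Spearman's classic one-factor algebra reproduces any pattern of positive correlations exactly (barring Heywood cases), so a single 'higher-order factor' will always appear to fit, whether or not any common theoretical construct underlies the covariation.

```python
import numpy as np

# Hypothetical correlations among three first-order constructs (eta1-eta3)
# that merely happen to covary; no common cause is assumed to exist.
r12, r13, r23 = 0.40, 0.30, 0.35

# One-factor (Spearman) algebra: loadings implied by the three correlations.
loadings = np.sqrt([r12 * r13 / r23,   # loading of eta1 on the 'ghost' factor
                    r12 * r23 / r13,   # loading of eta2
                    r13 * r23 / r12])  # loading of eta3

implied = np.outer(loadings, loadings)      # correlations implied by one factor
np.fill_diagonal(implied, 1.0)

observed = np.array([[1.0, r12, r13],
                     [r12, 1.0, r23],
                     [r13, r23, 1.0]])

print("loadings on the apparent higher-order factor:", np.round(loadings, 3))
print("one-factor model reproduces the correlations exactly:",
      np.allclose(implied, observed))   # True

# The 'second-order factor' is an empirical manifestation of the covariances,
# not evidence that eta1-eta3 measure a common underlying construct.
```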


4. Conclusions

The authors are grateful to have the opportunity to provide a rejoinder to the comments Rigdon and Finn and Wang make on the IFV paper. Both commentaries provide important support for, and suggested extensions to, the core thesis underpinning IFV (that formative endogenous variables should not be predicted at the construct/aggregate level), and the current rejoinder uses these commentaries as points of departure into other critical areas of measurement theory, closely related to the original thesis. The main conclusions arising from the authors' reflections on the commentaries are:

(a) One can never know how a formative latent variable varies, not even when endogenous variables are used to identify the model.

(b) Formative indicator weights should be specified as part of construct definition, and should not fluctuate from study to study (e.g., as a result of 'estimation' through statistical analyses).

(c) A higher-order measurement model that is truly unidimensional and conforms to the reflective measurement model is entirely redundant. As a result, unidimensional higher-order measurement models should be modeled as first-order measurement models.

(d) When a higher-order measurement model is not unidimensional (i.e., its lower-order measures capture conceptually different latent variables), its lower-order measures do not conform to the demands of the reflective measurement model. The reflective measurement model should not be used in this situation, and instead, the lower-order constructs should be treated as separate variables, or aggregated in a formative model.


The authors remain concerned about the problems inherent in the measurement models of studies even in the finest business journals. No matter how topical, exciting, or original the research, or how technically accomplished the hypothesis-testing, empirical results are of limited meaning without good measurement. The authors hope the current exchange of ideas provides stimulus for greater interest in measurement issues, and are optimistic that the discipline is moving towards a state where measurement is not treated as an afterthought, but as a central concern, vital to the veracity of any conclusions which are drawn from an empirical analysis.


References

Bollen K. Latent variables in psychology and the social sciences. Annual Review of Psychology 2002; 53: 605-634.

Bollen K, Glanville JL, Stecklov G. Socio-economic status, permanent income, and fertility: A latent-variable approach. Population Studies: A Journal of Demography 2007; 61(1): 15-34.

Burström K, Johannesson M, Diderichsen F. Increasing socio-economic inequalities in life expectancy and QALYs in Sweden 1980-1997. Health Economics 2005; 14(8): 831-850.

Cadogan JW, Lee N. Improper use of endogenous formative variables. Journal of Business Research; this issue.

Cattell RB. The scientific use of factor analysis in behavioral and life sciences. 1978; New York, NY: Plenum Press.

DeVellis RF. Scale development: theory and applications. 1991; London: Sage.

Diamantopoulos A, Siguaw JA. Formative versus reflective indicators in organizational measure development: a comparison and empirical illustration. British Journal of Management 2006; 17(4): 263-282.

Dowling C. Appropriate audit support system use: the influence of auditor, audit team, and firm factors. The Accounting Review 2009; 84(3): 771-810.

Edwards J. The fallacy of formative measurement. Organizational Research Methods 2011; 14(2): 370-388.

Finn A, Wang L. Formative vs. reflective measures: facets of variation. Journal of Business Research; this issue.

Hardin AM, Chang JC-J, Fuller MA, Torkzadeh G. Formative measurement and academic research: In search of measurement theory. Educational and Psychological Measurement 2011; 71(2): 281-305.

Hattie J. Methodology review: assessing unidimensionality of tests and items. Applied Psychological Measurement 1985; 9(2): 139-164.

Hoetker G, Mellewigt T. Choice and performance of governance mechanisms: matching alliance governance to asset type. Strategic Management Journal 2009; 30: 1024-1044.

Howell RD, Breivik E, Wilcox JB. Reconsidering formative measurement. Psychological Methods 2007; 12 (June): 205-218.

Jarvis CB, MacKenzie SB, Podsakoff PM. A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research 2003; 30: 199-218.

Klein R, Rai A. Interfirm strategic information flows in logistics supply chain relationships. MIS Quarterly 2009; 33(4): 735-762.

Law KS, Wong CS, Mobley WH. Toward a taxonomy of multidimensional constructs. Academy of Management Review 1998; 23: 741-755.

Rigdon EE. Comment on "improper use of endogenous formative variables". Journal of Business Research; this issue.

van Riel CBM, Berens G, Dijkstra M. Stimulating strategically aligned behaviour among employees. Journal of Management Studies 2009; 46(7): 1197-1226.

Verhoef PC, Leeflang PSH. Understanding the marketing department's influence within the firm. Journal of Marketing 2009; 73(2): 14-37.


Figure 1 Formative models and error term identification


Figure 2 A second-order reflective measurement model and its alternative model specifications

