Multivariate Behavioral Research, 38 (1), 1-23 Copyright © 2003, Lawrence Erlbaum Associates, Inc.
Validation of Cognitive Structures: A Structural Equation Modeling Approach

Dimiter M. Dimitrov, Kent State University
and Tenko Raykov, Fordham University
Determining sources of item difficulty and using them for selection or development of test items is a bridging task of psychometrics and cognitive psychology. A key problem in this task is the validation of hypothesized cognitive operations required for correct solution of test items. In previous research, the problem has been addressed frequently via use of the linear logistic test model for prediction of item difficulties. The validation procedure discussed in this article is alternatively based on structural equation modeling of cognitive subordination relationships between test items. The method is illustrated using scores of ninth graders on an algebra test where the structural equation model fit supports the cognitive model. Results obtained with the linear logistic test model for the algebra test are also used for comparative purposes.
Determining sources of item difficulty and using them for analysis, development, and selection of test items necessitates integration of cognitive psychology and psychometric models (e.g., Embretson, 1995; Embretson & Wetzel, 1987; Mislevy, 1993; Snow & Lohman, 1984). The cognitive structure of a given test is defined by the set of cognitive operations and processes, as well as their relationships, required for obtaining correct answers on its items (e.g., Riley & Greeno, 1988; Gitomer & Rock, 1993). Knowledge about cognitive and processing operations in a model of item difficulty prediction allows test developers to (a) construct items with difficulties known prior to test administration, (b) avoid piloting of individual items in study groups and thus reduce costs, (c) match item difficulties to ability levels of the examinees, and (d) develop teaching strategies that target specific cognitive and processing characteristics. Previous studies have integrated cognitive structures with (unidimensional and multidimensional) item response theory (IRT) models, allowing prediction of item difficulty from cognitive and processing operations that are hypothesized to underlie item
solving (e.g., Embretson, 1984, 1991, 1995, 2000; Fischer, 1973, 1983; Spada, 1977; Spada & Kluwe, 1980; Spada & McGaw, 1985). The validation of hypothesized cognitive components with IRT models is focused on the accuracy of item difficulty prediction. While such tests are important for prediction purposes, they do not tap into cognitive relationships among items that logically relate to the validity of a cognitive structure. Therefore, the purpose of this article is to discuss a method that (a) furnishes an overall validation of cognitive structures and (b) provides diagnostic feedback about cognitive relationships among items. It should be noted that this method and previously used IRT methods do not compete, since they address the validation of cognitive structures from different perspectives. To illustrate this, a brief description of the IRT linear logistic test model (LLTM; Fischer, 1973, 1983) is provided below. The LLTM is suitable for such an illustration due to its relative simplicity and frequent use in previous IRT studies on cognitive test structures (e.g., Dimitrov & Henning, in press; Embretson & Wetzel, 1987; Fischer, 1973; Medina-Diaz, 1993; Spada, 1977; Spada & Kluwe, 1980; Spada & McGaw, 1985; Whitely & Schneider, 1981). In the LLTM, the item difficulty parameter δi for each of n items (i = 1, ..., n) is presented as a linear combination of cognitive operations (Fischer, 1973, 1983):

\delta_i = \sum_{k=1}^{p} w_{ik} \alpha_k + c,    (1)
where p ≤ n − 1, αk represents the impact of the kth cognitive operation on the difficulty of any item from the target set of items, wik (the weight of αk) adjusts the impact of αk for item i, and c is a normalization constant (k = 1, ..., p). The matrix W = (wik) is referred to as the "weight matrix" [i = 1, 2, ..., n; k = 1, ..., p; in the remainder, the symbol (.) is used to denote a matrix]. The LLTM is obtained by replacing the item difficulty parameter, δi, in the Rasch measurement model (Rasch, 1960) with the linear combination on the right-hand side of Equation 1:
P(X_{ij} = 1 \mid \theta_j, \alpha_k, w_{ik}) = \frac{\exp\left(\theta_j - \sum_{k=1}^{p} w_{ik} \alpha_k - c\right)}{1 + \exp\left(\theta_j - \sum_{k=1}^{p} w_{ik} \alpha_k - c\right)},    (2)
where Xij is a dichotomous observed variable taking the value of 1 if person j correctly solves item i and 0 otherwise, and P(.|.) denotes the probability that a person with ability θj answers correctly the ith item with difficulty δi as obtained from Equation 1 (i = 1, ..., n; j = 1, ..., N; N being the sample size). In an IRT context, the term ability connotes a latent trait that underlies the persons' performance on a test (e.g., Hambleton & Swaminathan, 1985). The ability score, θ, of a person relates to the probability for this person to answer correctly any test item. Ability and item difficulty are measured on the same scale. The units of this scale, called logits, represent natural logarithms of odds for success on the test items. With the Rasch model, log[P/(1 − P)] = θ − δ, where P is the probability of a correct response on an item with difficulty δ for a person with ability θ. Thus, when a person and an item share the same location on the logit scale (θ = δ), the person has a .5 probability of answering the item correctly. Also, using that exp(1) = e = 2.72 (rounded to the nearest hundredth), a difference of one point on the logit scale corresponds to a factor of 2.72 in the odds for success on the test items.
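To make Equations 1 and 2 concrete, the following minimal Python sketch computes LLTM item difficulties and the corresponding response probabilities for a small hypothetical weight matrix; the W, alpha, c, and theta values are illustrative only and are not estimates from any data set discussed in this article.

```python
import numpy as np

# Hypothetical weight matrix for 3 items and 2 cognitive operations:
# w_ik = 1 if item i requires operation k, 0 otherwise.
W = np.array([[1, 0],
              [1, 1],
              [0, 1]])
alpha = np.array([-0.5, 1.2])   # hypothetical impact of each operation (logits)
c = 0.3                         # normalization constant

# Equation 1: item difficulty as a linear combination of operation impacts.
delta = W @ alpha + c           # delta_i = sum_k w_ik * alpha_k + c

# Equation 2: probability of a correct response for a person with ability theta_j.
theta_j = 0.8
p_correct = np.exp(theta_j - delta) / (1 + np.exp(theta_j - delta))
print(delta, p_correct)

# Logit interpretation: one additional logit of ability multiplies the odds of
# success on every item by e = 2.72 (rounded).
odds = p_correct / (1 - p_correct)
p_plus = np.exp(theta_j + 1 - delta) / (1 + np.exp(theta_j + 1 - delta))
print((p_plus / (1 - p_plus)) / odds)   # approximately 2.718 for each item
```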
The LLTM relates test items to cognitive operations inferred from a theoretical model of knowledge structures and cognitive processes. For example, Embretson (1995) proposed seven cognitive operations required for correct solution of mathematical word problems by operationalizing three types of knowledge (factual/linguistic, schematic, and strategic) defined in the theoretical model of Mayer, Larkin, and Kadane (1984). It should be noted that the LLTM has been found to be very sensitive to specification errors in the weight matrix, W (Baker, 1993). This implies the necessity of careful and sound validation of the cognitive model that underlies an LLTM application.

It is important to note that the LLTM is a Rasch model with the linear restrictions in Equation 1 imposed on the Rasch item difficulty parameter, δi. Therefore, testing the fit of an LLTM requires two steps: (a) testing the fit of the Rasch model (which includes testing for unidimensionality) and (b) testing the linear restrictions in Equation 1 (Fischer, 1995, p. 147). While there are numerous tests for the Rasch model (e.g., Glas & Verhelst, 1995; Smith, Schumacker, & Bush, 1998; Van den Wollenberg, 1982; Wright & Stone, 1979), testing the restrictions in Equation 1 is typically performed with the likelihood ratio (LR) test for the relative goodness of fit of the LLTM and Rasch models (e.g., Fischer, 1995, p. 147). The pertinent test statistic is asymptotically chi-square distributed with degrees of freedom equal to the difference in the number of independent parameters of the two compared models. There is also a simple graphical test for the relative goodness of fit of the LLTM and Rasch model (Fischer, 1995). The logic of this test is that if the LLTM fits well, the points whose coordinates represent estimates of item difficulty with the LLTM and the Rasch model, respectively, should scatter around a 45° line through the origin (see Figure 4). As Fischer (1995) noted, the LR test very often turns out significant in empirical research, thus leading to rejection of the LLTM even when the graphical goodness-of-fit test indicates a good match between Rasch and LLTM item difficulties. Medina-Diaz (1993) applied the quadratic assignment technique for analysis of cognitive structures with the LLTM, but her approach similarly failed to provide good agreement with this LR test and, in order to proceed, required data on the response of each examinee on each cognitive operation – information that is difficult or impossible to obtain in most testing situations.

While LLTM tests are important for accuracy in the prediction of Rasch item difficulty from cognitive operations and processes, they do not tap into cognitive relationships among items that are logically inferred from the cognitive structure. The idea underlying the method proposed in this article is to reduce the cognitive weight matrix, W, to a diagram of cognitive subordination relationships between items, and then test the resulting model for goodness of fit using structural equation modeling (SEM; Jöreskog & Sörbom, 1996a, 1996b). Triangulations with logical fits in the diagram of cognitive subordinations among items can provide additional heuristic evidence in the validation process. It is worthwhile stressing that the LLTM tests and the proposed SEM method do not compete, because they target different aspects of the validation of cognitive structures. LLTM tests relate to the accuracy of prediction of Rasch item difficulty from cognitive operations, while the SEM method relates to the validation of cognitive relations among items for a hypothesized cognitive structure. The proposed SEM method can be used to justify, not replace, applications of the LLTM or other models for prediction of item difficulty, such as multiple linear regression (e.g., Drum, Calfee, & Cook, 1981) or artificial neural network models (Perkins, Gupta, & Tammana, 1995). The information about cognitive relations among items provided by the proposed SEM method can also be used for purposes other than prediction of item difficulty (e.g., task analysis, content selection, teaching strategies, and curriculum development).

A Structural Equation Modeling Based Procedure for Validation of Cognitive Structures

Oriented Graph of Cognitive Structures

Given a set of m cognitive operations required for solution of the n items comprising a considered test, the weight matrix W represents a two-way
table with elements wik = 1 indicating that item Ii requires cognitive operation k, and wik = 0 otherwise (i = 1, ..., n; k = 1, ..., m). In this article, an item Ij is referred to as "subordinated" to item Ii if and only if the cognitive operations required for the correct solution of item Ij represent a proper subset (in the set-theoretic sense) of the cognitive operations required for the correct solution of item Ii. For purposes of symbolizing the subordination relationships between items, the W matrix can be reduced to a matrix of item subordinations, denoted S = (sij), with elements sij = 1 indicating that item Ij is subordinated to item Ii and sij = 0 otherwise (i, j = 1, ..., n). We note that two arbitrarily chosen test items need not necessarily be in a relation of subordination; if they are not, then sij = 0 holds for their corresponding entry in the matrix S. One can now represent the matrix S graphically as an oriented graph in which an arrow from Ij to Ii (denoted Ij → Ii in the graph) indicates that item Ij is subordinated to item Ii.

Figure 1 presents a hypothetical weight matrix W, its corresponding S matrix, and the oriented graph associated with them. An examination of the matrix W shows, for example, that item I1 is subordinated to item I4, as the set of cognitive operations required by item I1 (viz. operations O1 and O3) is a proper subset of the cognitive operations required by item I4 (viz. operations O1, O3, and O4). Therefore, we have s41 = 1 in the matrix S and an arrow from item I1 to item I4 in the graph diagram. In the matrix S, one can also see that all entries in the column that corresponds to item I2 equal 0. This is because item I2 is not subordinated to any of the other items. Indeed, the set of cognitive operations required by item I2 (viz. operations O2 and O3) does not represent a proper subset of the cognitive operations required by any other item. Thus, si2 = 0 (i = 1, ..., 5) and there are no arrows from item I2 to other items in the graph diagram. Similarly, one can check the other entries in the matrix S. The transitivity of the subordination relationship holds when successive arrows in the oriented graph connect more than two items. For example, the three-item path I3 → I1 → I4 indicates that (a) I3 is subordinated to I1, (b) I1 is subordinated to I4, and, by transitivity, (c) I3 is subordinated to I4. Relationships of subordination by transitivity are not represented by arrows in the diagram, in order to simplify the graphical representation.

Relationship to Structural Equation Modeling

The presently proposed method of cognitive structure validation makes the assumption that for each test item there exists a continuous latent variable, denoted η (appropriately sub-indexed if necessary), which represents the ability required for correct solution of the item. Persons solve
an item correctly if they possess this ability to a sufficient degree, that is, if their ability relevant to the item exceeds or equals a certain minimal level (denoted τ) required for its correct solution. Thus, formally, the assumption is

X_{ij} = \begin{cases} 1 & \text{if } \eta_{ij} \ge \tau_i \\ 0 & \text{if } \eta_{ij} < \tau_i, \end{cases}    (3)

where ηij denotes the jth person's level of the ability required for solving the ith item, and τi is a threshold parameter above/below which a correct/incorrect response results for the ith item (i = 1, ..., n; j = 1, ..., N). We further assume that the n-dimensional vector of latent variables η = (η1, ..., ηn) is multivariate normal.
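The assumption in Equation 3 can be illustrated with a small simulation. The following Python sketch (with arbitrary thresholds and latent correlations, not values from the study reported below) generates multivariate normal abilities and dichotomizes them at the item thresholds, showing how the thresholds relate to the proportions of correct responses.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n_items, n_persons = 3, 1000
tau = np.array([-0.5, 0.0, 0.8])      # hypothetical thresholds (harder items have larger tau)
corr = np.array([[1.0, 0.6, 0.5],     # hypothetical correlations among the latent abilities
                 [0.6, 1.0, 0.7],
                 [0.5, 0.7, 1.0]])

# Draw multivariate normal abilities and apply Equation 3: X_ij = 1 iff eta_ij >= tau_i.
eta = rng.multivariate_normal(mean=np.zeros(n_items), cov=corr, size=n_persons)
X = (eta >= tau).astype(int)

# With standard normal marginals, the expected proportion correct for item i is
# 1 - Phi(tau_i), so the thresholds can be recovered from observed proportions.
print(X.mean(axis=0))                 # observed proportions correct
print(1 - norm.cdf(tau))              # model-implied proportions
print(norm.ppf(1 - X.mean(axis=0)))   # threshold estimates recovered from the simulated data
```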
W matrix
Item   O1  O2  O3  O4
I1      1   0   1   0
I2      0   1   1   0
I3      0   0   1   0
I4      1   0   1   1
I5      1   0   0   0

S matrix
Item   I1  I2  I3  I4  I5
I1      0   0   1   0   1
I2      0   0   1   0   0
I3      0   0   0   0   0
I4      1   0   1   0   1
I5      0   0   0   0   0

[Graph diagram not reproduced]

Figure 1
Graph Diagram of Item Subordinations for a Hypothetical W Matrix
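The reduction of W to S described above amounts to a proper-subset test on the rows of W. The following Python sketch carries out this reduction for the hypothetical W matrix in Figure 1 and reproduces the S matrix shown there.

```python
import numpy as np

# Hypothetical W matrix from Figure 1 (rows I1-I5, columns O1-O4).
W = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
    [1, 0, 0, 0],
])

def subordination_matrix(W):
    """s_ij = 1 exactly when the operations required by item j form a proper
    subset of the operations required by item i."""
    n = W.shape[0]
    S = np.zeros((n, n), dtype=int)
    ops = [set(np.flatnonzero(row)) for row in W]   # operation set of each item
    for i in range(n):
        for j in range(n):
            if i != j and ops[j] < ops[i]:          # strict (proper) subset test
                S[i, j] = 1
    return S

S = subordination_matrix(W)
print(S)   # reproduces the S matrix of Figure 1; each s_ij = 1 is an arrow I_j -> I_i
```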
The assumptions in this subsection are identical to those made when analyzing ordinal data using structural equation modeling (SEM; Jöreskog & Sörbom, 1996b, ch. 7). Thus we are now in a position to evaluate the degree to which a set of cognitive subordination relationships among given items is plausible, by using the corresponding goodness-of-fit test available in SEM. That is, we can validate the cognitive structure represented by an oriented graph by applying the popular SEM methodology. Specifically, in this framework we consider the item subordination relationships to be the building blocks of a structural equation model relating the items. Accordingly, the model of interest is

\eta = B\eta + \zeta,    (4)

where B = (βrs) is the n × n matrix of relationships between the abilities required for successful solution of the rth and sth items, while ζ is the vector of corresponding zero-mean residuals, each assumed unrelated to those variables that are predictors of its pertinent ability variable (r, s = 1, ..., n). Hence, in modeling subjects' behavior on, say, the rth item, it is assumed that, up to an associated residual ζr, the ability required for its correct solution, ηr, is linearly related to those of corresponding other items (r = 1, ..., n; existence of the inverse of the matrix In − B is also assumed, as in typical applications of SEM; e.g., Jöreskog & Sörbom, 1996b).

Figure 2 represents the oriented graph of item subordinations for the weight matrix W from the example in the next section. The structural equation model corresponding to Equation 4 is represented in Figure 3. Its path diagram follows the widely adopted graphical convention for displaying structural equation models. In this diagram, each item is represented by a circle denoting the latent ability required for its correct solution. Other than this detail, which is inconsequential for the following developments, the model in Figure 3 symbolically differs from the oriented graph in Figure 2 only in that residuals are associated with latent variables receiving one-way arrows that represent the cognitive subordination relationship of the item at its end to the item at its beginning. With the proposed SEM approach, this model is fitted to the n × n tetrachoric correlation matrix of the observed dichotomous variables Xi that each evaluate whether the ith item has been correctly solved or not (i = 1, ..., n; cf. Equation 3).

Thus, with the SEM method used in this article, the validity of a cognitive structure is tested by the following three-step procedure. First, reduce the weight matrix W to a matrix S of cognitive subordinations. Second, as described in the previous section, represent graphically the cognitive subordinations among items in the matrix S as an oriented graph. Third, using
the ordinal data SEM approach, fit the latent relationship model based on the last graph (as its path diagram) to the tetrachoric correlation matrix of the items (Jöreskog & Sörbom, 1996b, ch. 7). The goodness-of-fit test available thereby represents a test of the plausibility of the originally hypothesized set of cognitive subordination relationships among the studied items. As a byproduct of this approach, one can also estimate the thresholds τi of all items (i = 1, ..., n), which allow comparison of their difficulty levels. Evaluation of the significance of elements of the matrix B in Equation 4 indicates whether, under the model, one can dispense with the assumption of cognitive subordination for certain pairs of items (viz. those for which the corresponding regression coefficient is nonsignificant). Further, in case of lack of model fit, modification indices may be consulted to examine whether model-data mismatch may be considerably reduced by introduction of substantively meaningful subordination relationships between items that were initially omitted from the model. (This use of modification indices would initiate an exploratory phase of analysis; Jöreskog & Sörbom, 1996c.)

The discussed SEM approach is based on the tacit assumption that subjects do not guess on any of the items. This is an important assumption, made also in many applications of item response models, and it is justified by the observation that guessing would imply for the pertinent item(s) more than one trait underlying their successful solution. Further, the approach is best used when the sample of subjects having taken all items of the test under consideration is large (Jöreskog & Sörbom, 1996b). Moreover, for this SEM approach, the assumption of an underlying multinormal latent ability vector is relevant, as well as that of linear relationships among its components, as reflected in the underlying model Equation 4 that is tested in an application of the discussed SEM method (Jöreskog & Sörbom, 1996a).
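The three-step procedure can be summarized programmatically. The following Python sketch generates a structural model specification from a subordination matrix S; the "outcome ~ predictors" syntax shown follows the regression convention of SEM packages such as lavaan or semopy and is offered only as an illustration. The analyses in this article were carried out with PRELIS/LISREL, and the handling of residual terms and the computation of the tetrachoric correlation matrix are left to the fitting software.

```python
def model_syntax(S, item_names):
    """Turn a subordination matrix S into regression-style model lines:
    each row i of S with at least one 1 yields a regression of item i on the
    items subordinated to it (residual terms are added by the SEM software)."""
    lines = []
    for i, row in enumerate(S):
        predictors = [item_names[j] for j, s in enumerate(row) if s == 1]
        if predictors:
            lines.append(f"{item_names[i]} ~ " + " + ".join(predictors))
    return "\n".join(lines)

# Hypothetical 4-item example (not the algebra test below): I1 is subordinated
# to I3 and I4, and I2 is subordinated to I4.
S = [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 1, 0, 0]]
print(model_syntax(S, ["I1", "I2", "I3", "I4"]))
# I3 ~ I1
# I4 ~ I1 + I2
```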
Triangulation with Logical Fits

Logical fits in the path diagram also lend support to the validity of a cognitive structure. One possible logical fit, referred to here as "path-wise increase of item difficulty," is based on the assumption that the difficulty of items increases with the number of processing tasks required for item success. Under this assumption, item difficulty is expected to increase down the arrows (path-wise) for any path of the oriented graph. In the example provided later in this article, the items are simple linear algebra equations whose difficulty logically increases with the number of production rules required for their correct solution (see Appendix). For this example, there was an increase of Rasch item difficulty down the arrows for each path of the oriented graph, thus providing triangulation support to the validation based on the SEM goodness-of-fit test. However, the assumption of path-wise increase of item difficulty may not be appropriate for other cognitive structures. Depending on the W matrix, cognitive subordinations among items may not translate into ordered difficulty values. In such cases, other logical fits may better reflect the cognitive relationships among items in the path diagram of the W matrix. One can even decide to modify the working definition of cognitive subordinations among items to accommodate the substantive context of the W matrix. The validation of cognitive structures is, after all, a flexible logical process of collecting evidence that may include, but is not limited to, an isolated statistical act.

Example

This example illustrates the proposed validation method for the cognitive structure of an algebra test. The test consisted of 15 linear equations to be solved (see Appendix) and was administered to N = 278 ninth-grade high school students in northeastern Ohio. The students were required to show their work toward the solution in order to avoid guessing on the test items. The cognitive structure of this test was determined by 10 production rules (PRs) required for the correct solutions of the 15 equations. An earlier version of the test and the PRs were developed in a previous study (Dimitrov & Obiekwe, 1998) in an attempt to improve the system of production rules for an algebra test of linear equations proposed by Medina-Diaz (1993). Table 1 presents the W matrix of 15 items and 10 production rules. Table 2 gives the S matrix of item subordinations determined from W. The path diagram corresponding to the matrix S is given in Figure 2. As an illustration, item I2 is the algebra equation "5x - 3 = 7 + 4x". The production rules (see Appendix) for its correct solution are: PR6 (balancing): 5x - 4x = 7 + 3; PR3 (collecting numbers): 5x - 4x = 10; and PR4 (collecting two terms): x = 10. On the other hand, the correct solution of the equation "2(x + 3) = x - 10" (item I1) requires the same three production rules plus PR7 (removing parentheses with a positive coefficient). Thus, the PRs required by item I2 represent a proper subset of the PRs required by item I1 (cf. Table 1). This indicates that item I2 is cognitively subordinated to item I1: I2 → I1 (Figure 2). The tetrachoric correlation matrix resulting from the data is presented in Table 3 (obtained with PRELIS2; Jöreskog & Sörbom, 1996a).
Table 1
Matrix W for Fifteen Algebra Items and Ten Production Rules

Item   PR1  PR2  PR3  PR4  PR5  PR6  PR7  PR8  PR9  PR10
I1      0    0    1    1    0    1    1    0    0    0
I2      0    0    1    1    0    1    0    0    0    0
I3      1    1    0    0    0    1    1    0    0    0
I4      1    0    1    1    0    1    1    0    0    0
I5      1    0    0    0    0    0    0    0    0    1
I6      1    1    1    1    0    1    0    0    0    0
I7      1    0    1    1    0    1    1    0    1    0
I8      0    0    1    0    0    1    0    0    0    0
I9      0    0    1    1    0    0    0    0    0    0
I10     1    0    1    1    1    1    0    0    0    0
I11     1    0    1    1    0    1    0    0    0    1
I12     1    0    1    1    0    1    1    1    0    0
I13     1    0    1    1    1    1    1    1    0    0
I14     1    1    1    1    1    1    1    1    0    0
I15     1    0    1    1    1    1    1    0    0    0
Table 2
Matrix S for the W Matrix of the Algebra Test

Item   I1  I2  I3  I4  I5  I6  I7  I8  I9  I10  I11  I12  I13  I14  I15
I1      0   1   0   0   0   0   0   1   1    0    0    0    0    0    0
I2      0   0   0   0   0   0   0   1   1    0    0    0    0    0    0
I3      0   0   0   0   0   0   0   0   0    0    0    0    0    0    0
I4      1   1   0   0   0   0   0   1   1    0    0    0    0    0    0
I5      0   0   0   0   0   0   0   0   0    0    0    0    0    0    0
I6      0   1   0   0   0   0   0   1   1    0    0    0    0    0    0
I7      1   1   0   0   0   0   0   1   1    0    0    0    0    0    0
I8      0   0   0   0   0   0   0   0   0    0    0    0    0    0    0
I9      0   0   0   0   0   0   0   0   0    0    0    0    0    0    0
I10     0   1   0   0   0   0   0   1   1    0    0    0    0    0    0
I11     0   1   0   0   1   0   0   1   1    0    0    0    0    0    0
I12     1   1   0   0   0   0   0   1   1    1    0    0    0    0    0
I13     1   1   0   0   0   0   0   1   1    1    0    1    0    0    0
I14     1   1   1   1   0   0   0   1   1    1    0    1    1    0    1
I15     1   1   0   1   0   0   0   1   1    1    0    0    0    0    0
Table 3
Tetrachoric Correlation Matrix of the Algebra Test Items (N = 278)

       I1    I2    I3    I4    I5    I6    I7    I8    I9    I10   I11   I12   I13   I14   I15
I1    1.00
I2    0.78  1.00
I3    0.52  0.48  1.00
I4    0.54  0.53  0.63  1.00
I5    0.55  0.48  0.56  0.56  1.00
I6    0.51  0.45  0.77  0.63  0.71  1.00
I7    0.58  0.41  0.44  0.40  0.51  0.57  1.00
I8    0.54  0.58  0.31  0.57  0.33  0.60  0.49  1.00
I9    0.56  0.51  0.44  0.50  0.51  0.55  0.66  0.61  1.00
I10   0.56  0.53  0.56  0.64  0.50  0.61  0.53  0.58  0.60  1.00
I11   0.41  0.29  0.32  0.26  0.54  0.53  0.34  0.53  0.48  0.33  1.00
I12   0.63  0.66  0.68  0.65  0.48  0.64  0.53  0.72  0.68  0.71  0.44  1.00
I13   0.55  0.50  0.43  0.47  0.49  0.55  0.47  0.51  0.57  0.46  0.25  0.82  1.00
I14   0.29  0.34  0.56  0.56  0.45  0.61  0.51  0.33  0.44  0.64  0.21  0.81  0.34  1.00
I15   0.41  0.47  0.46  0.60  0.39  0.55  0.45  0.58  0.66  0.74  0.41  0.74  0.59  0.57  1.00
Note. N = sample size.
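Table 3 was computed with PRELIS2. For readers without access to that program, the following Python sketch shows how a single tetrachoric correlation can be estimated by maximum likelihood from the 2 × 2 table of two dichotomously scored items under the bivariate normal model assumed in this article; the cell counts used here are hypothetical and are not taken from the study data.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

# Hypothetical 2 x 2 table for two items (rows: item A incorrect/correct,
# columns: item B incorrect/correct).
table = np.array([[60, 30],
                  [25, 163]])
N = table.sum()

# Thresholds from the margins of the underlying standard normal variables.
tau_a = norm.ppf(table[0, :].sum() / N)   # P(A incorrect) = Phi(tau_a)
tau_b = norm.ppf(table[:, 0].sum() / N)   # P(B incorrect) = Phi(tau_b)

def neg_loglik(rho):
    # Multinomial log-likelihood of the table under a bivariate normal with
    # correlation rho and the thresholds estimated above.
    bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    p00 = bvn.cdf([tau_a, tau_b])         # both responses incorrect
    pa0 = norm.cdf(tau_a)                 # A incorrect (marginal)
    pb0 = norm.cdf(tau_b)                 # B incorrect (marginal)
    probs = np.array([[p00,       pa0 - p00],
                      [pb0 - p00, 1 - pa0 - pb0 + p00]])
    return -(table * np.log(probs)).sum()

rho_hat = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x
print(round(rho_hat, 3))   # estimated tetrachoric correlation for this table
```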
We note that, in the general case of application of the method of this article, residuals are associated with each dependent latent variable unless theoretical considerations allow one to hypothesize the lack of such a residual for certain latent variable(s). In the present case, in which an algebra test is used to illustrate the method, no residual term is assumed to be associated with the ability corresponding to item I2. This is because the ability that underlies the correct solution of item I2 is determined by the abilities required for solving items I8 and I9, which are related to item I2 in the considered model. Indeed (see Table 1), the production rules required for the correct solution of item I2 (PR3, PR4, and PR6) are fully represented by those required for the correct solutions of item I8 (PR3 and PR6) and item I9 (PR3 and PR4). Similarly, no residual is assumed to be associated with the latent variable corresponding to item I13. Indeed, an examination of the solution of item I12, which is related to item I13 in the model, suggests that the ability underlying the correct solution of the latter is determined by that required to solve item I12 correctly. This is because items I12 and I13 require the same PRs, with the exception that item I13 requires a slightly higher frequency of use for two of these PRs (namely, "collecting terms" and "removing parentheses").

SEM Test for Goodness-of-Fit

Figure 2 represents the path diagram of the initial structural equation model resulting from the matrix S of item subordination relationships. This model was
fitted to the tetrachoric correlation matrix of the dichotomously scored test items presented in Table 3, and was found to be associated with a chi-square value of χ² = 162.81 with degrees of freedom df = 84 and a root mean square error of approximation (RMSEA) of .058, with a 90% confidence interval of (.045; .071). In this model, the relationship between items I14 and I15 turned out to be nonsignificant: β14,15 = -.28, standard error = .29, t value = -.96. Inspecting the cognitive processes required for correct solution of the (model-assumed) related items I3, I13, I14, and I15, we noticed that the operations necessary for solving items I3 and I13 were in fact those required for correct solution of item I14 (see Table 1). In the context of correct solution of items I3 and I13, therefore, the cognitive operations underlying item I15 are inconsequential for correct solution of item I14. This suggests that we can dispense with the earlier assumed relationship between items I14 and I15. Indeed, fixing the path from item I15 to item I14 to zero in the model displayed in Figure 2 resulted in a nonsignificant increase of the chi-square value, to χ² = 163.89, df = 85, RMSEA = .058 (.044; .071) (Δχ² = 1.08, Δdf = 1, p > .10). In addition, these model fit indices are themselves acceptable (e.g., Jöreskog & Sörbom, 1996b), and hence the last model version can be considered a tenable means of data description and explanation.

The parameter estimates and standard errors associated with this final model are presented in its path diagram provided in Figure 3. We comment next on the elements of this solution by comparing path estimates to their standard errors (the ratio of a parameter estimate to its standard error equals the t value; Jöreskog & Sörbom, 1996b). First, moving from top to bottom in Figure 3, we note the relatively strong relationship of each of items I8 and I9 with item I2. This is not unexpected, given the similarity of these items in content and in the production rules they require (see Appendix). At the same time, we note the nonsignificant relationships between item I11 and each of items I2 and I5. One possible explanation is the lack of a linear relationship between the abilities to solve each of items I2 and I5 and that of item I11. Indeed, an inspection of the students' test work on these items showed that those who did not solve item I11 correctly failed at the initial step, PR10 (removing denominators), because of difficulty with finding the least common denominator. Therefore, success of these students on the subsequent steps (production rules PR1, PR3, PR4, and PR6) did not lead to a correct solution of item I11. When solving item I2, however, the successful application of PR3, PR4, and PR6 led to a correct solution because PR10 (removing denominators) was not required by item I2. Further, the weak relationship between items I11 and I5 can be explained by the fact that most students who failed on the least common denominator with item I11 avoided this problem with item I5 by applying the cross-multiplication rule.
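The global fit summaries reported earlier in this section can be checked from the chi-square statistics alone. The following Python sketch recomputes the RMSEA point estimates, using the common sample formula based on χ², df, and N (which may differ in minor details from the formula implemented in LISREL), and the p value of the chi-square difference test for dropping the I15 → I14 path.

```python
from math import sqrt
from scipy.stats import chi2

N = 278   # sample size reported in the Example section

chisq_initial, df_initial = 162.81, 84
chisq_final, df_final = 163.89, 85

def rmsea(chisq, df, n):
    # RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

print(round(rmsea(chisq_initial, df_initial, N), 3))   # ~0.058
print(round(rmsea(chisq_final, df_final, N), 3))       # ~0.058

# Nested model comparison for dropping the I15 -> I14 path.
delta_chisq = chisq_final - chisq_initial
delta_df = df_final - df_initial
print(round(delta_chisq, 2), chi2.sf(delta_chisq, delta_df))   # 1.08, p ~ .30 (> .10)
```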
Moving to the lower parts of the model solution in Figure 3, we notice that item I2 is strongly related to items I1, I6, and I10. This indicates that the ability associated with item I2 plays a central role for solving each of items I1, I6, and I10. This is not unexpected, because the production rules required by item I2 represent the major part of the production rules required by items I1, I6, and I10. Similarly, the ability associated with item I1 plays a central role for correctly solving items I4, I7, and I12 – the paths leading from item I1 into each of these three items are significant. In this context, the ability to solve item I10 is only weakly related to that of solving item I12. This indicates that there is no unique contribution of the ability underlying item I10 to the correct solution of item I12 after controlling for the contribution of the ability underlying item I1 to correctly solving item I12. The relationships of items I4 and I10 to item I15 are significant and thus indicate that, in the context of correctly solving item I4, the ability to correctly solve item I10 is also relevant (i.e., has a unique contribution) for successful solution of item I15. An explanation of this finding is that removing parentheses, which is required by item I15, can be avoided in the solution of item I4 by collecting the two terms within parentheses: "5(2x - 5x) ... " (see Appendix). Also, collecting more than two terms is required by items I10 and I15, but not necessarily by item I4.

Focusing on the lowest part of the model depicted in Figure 3, it is stressed that there is a strong relationship between the ability underlying item I12 and that associated with item I13. Thus, according to the model, the ability to solve the latter item is essentially proportional to that for solving item I12. This is not surprising given the fact that the same production rules are required by both items I12 and I13, with the exception that "collecting terms" and "removing parentheses" are more frequently required with item I13. Last but not least, the relationship between items I3 and I14 is nonsignificant, while that between items I13 and I14 is significant. Thus, in the context of the ability necessary for handling item I13, there is no unique (linear) contribution of item I3 to successful solution of item I14. This can be explained by the fact that, out of the eight production rules required for correctly solving item I14, seven are required also by item I13. There is only one production rule required by item I14 that is required also by item I3 but not by item I13 (PR2; see Table 1).

For triangulation purposes, if one assigns to each item in Figure 2 its Rasch difficulty reported in Table 4, there is a path-wise increase of item difficulty for all paths. This can be illustrated, for example, with the four-item path I8 (-2.487) → I2 (-1.632) → I1 (-1.305) → I7 (0.793), where the Rasch difficulty (in logits) is given in parentheses. This logical fit is consistent with the results from the SEM goodness-of-fit test for the validation of the weight
matrix W in Table 1. In this example, the path-wise increase of item difficulty can be expected to occur due to the orderly structure of production rules required for the solution of simple linear algebra equations (see Appendix). However, as noted earlier in this article, this may not be true in the context
of other cognitive structures. Yet, depending on the W structure, one may find other plausible heuristic triangulations of the SEM goodness-of-fit test for the validation of the hypothesized cognitive structure.

Figure 2
Graph Diagram of Item Subordinations for the W Matrix in Table 1
[Graph diagram not reproduced]
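The "path-wise increase of item difficulty" check is easily automated once item difficulties are attached to the nodes of the oriented graph. The following Python sketch verifies the monotone increase along the four-item path discussed earlier (Rasch difficulties as quoted in the text; cf. Table 4). For a full check, the edges would be read off the S matrix in Table 2.

```python
# Rasch difficulties (logits) for the items on the path I8 -> I2 -> I1 -> I7.
rasch_difficulty = {"I8": -2.487, "I2": -1.632, "I1": -1.305, "I7": 0.793}
path = ["I8", "I2", "I1", "I7"]

# Difficulty should increase down the arrows of the path.
increases = all(rasch_difficulty[a] < rasch_difficulty[b]
                for a, b in zip(path, path[1:]))
print(increases)   # True for this path
```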
Figure 3
Path Diagram (final model) for the SEM Analysis with the W Matrix in Table 1
[Path diagram not reproduced]
* Statistically significant beta estimates (p < .01)
The SEM treatment of the oriented graph of cognitive subordinations may provide useful validation feedback for subsequent applications of the W matrix in various measurement and substantive studies. In the next section, this is illustrated with an application of the LLTM for the test of simple linear algebra equations (see Appendix). The LLTM, if it fits the data well, provides estimates of the αk parameters in Equation 1 for the prediction of the difficulty of simple linear algebra equations. The accuracy of this prediction is the LLTM criterion for validity of the W matrix. Evidently, the LLTM validation can complement, but cannot substitute for, the validation of the same W matrix provided by the SEM method. Moreover, the latter will be the only validation perspective in this study should the LLTM fail to fit the data.

LLTM Application

The LLTM analysis was conducted with the W matrix in Table 1 using LPCM-WIN 1.0, a computer program for structural Rasch modeling (Fischer & Ponocny-Seliger, 1998). As noted earlier, the LLTM can be used only if the Rasch model fits the data. The test of model fit [χ²(14) = 22.83, p > .05] indicates a good fit of the Rasch model. The estimates of the LLTM parameters are provided in Table 4, with αk indicating the contribution of the kth production rule to the difficulty of any item that requires this production rule (k = 1, ..., 10). Estimates of the Rasch difficulty and the predicted (LLTM) difficulty for the algebra test items are also provided in Table 4. The graphical goodness-of-fit test (Fischer, 1995) in Figure 4 indicates very good fit, with a Pearson correlation of .975 between the LLTM and Rasch item difficulties. However, the LR chi-square difference test does not lend support to the fit of the LLTM relative to the Rasch model [χ²(4) = 46.91, p < .01]. This is not surprising, given the reported strictness of this LR test in the literature (e.g., Fischer, 1995, p. 147). Thus, while the graphical test supports the validation of the W matrix in terms of accurate prediction of Rasch item difficulty with the LLTM, the strict LR test does not. In this situation, the SEM validation of the W matrix in the previous section can "moderate" in support of subsequent applications of Equation 1 for the LLTM prediction of item difficulty in a test of simple linear algebra equations from 10 production rules (see Appendix).
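The fit summaries just reported can be verified from the tabled statistics. The following Python sketch computes the p values of the two chi-square tests and the Pearson correlation between the LLTM and Rasch item difficulties listed in Table 4, which should agree with the reported r = .975 up to rounding of the tabled values.

```python
from scipy.stats import chi2, pearsonr

# Rasch model fit test reported above.
print(chi2.sf(22.83, 14))    # ~ .06, i.e., p > .05 (Rasch model not rejected)

# LR test of the LLTM restrictions against the Rasch model.
print(chi2.sf(46.91, 4))     # far below .01, i.e., the strict LR test rejects the LLTM

# Companion to the graphical test in Figure 4: correlation between the LLTM and
# Rasch item difficulties from Table 4.
lltm = [-1.5659, -1.1484, 1.0500, -0.4557, 0.3042, 0.7432, 1.0813, -2.9167,
        -1.5659, -0.4245, 1.9139, -0.3993, 1.0828, 1.8541, 0.4469]
rasch = [-1.3050, -1.6323, 1.1704, -0.2654, -0.1923, 0.7265, 0.7931, -2.4871,
         -1.6323, -0.1440, 2.3913, -0.6997, 0.9262, 2.0282, 0.3225]
r, _ = pearsonr(lltm, rasch)
print(round(r, 3))
```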
Table 4
LLTM Analysis with the W Matrix of the Algebra Test

Parameter Estimates
PR      αk (SE)^a
PR1    -0.735 (0.151)**
PR2     2.401 (0.205)**
PR3    -1.536 (0.271)**
PR4    -0.773 (0.175)**
PR5     0.426 (0.178)*
PR6    -0.950 (0.115)**
PR7    -0.314 (0.110)**
PR8    -2.328 (0.145)**
PR9     0.388 (0.157)*
PR10    0.458 (0.138)**

Item Difficulty
Item     LLTM      Rasch
I1      -1.5659   -1.3050
I2      -1.1484   -1.6323
I3       1.0500    1.1704
I4      -0.4557   -0.2654
I5       0.3042   -0.1923
I6       0.7432    0.7265
I7       1.0813    0.7931
I8      -2.9167   -2.4871
I9      -1.5659   -1.6323
I10     -0.4245   -0.1440
I11      1.9139    2.3913
I12     -0.3993   -0.6997
I13      1.0828    0.9262
I14      1.8541    2.0282
I15      0.4469    0.3225

^a SE = standard error of the LLTM parameter αk (k = 1, ..., 10).
* p < .05. ** p < .01.
Figure 4
Graphical Goodness-of-Fit Test for the LLTM with the W Matrix in Table 1
[Scatter plot of LLTM versus Rasch item difficulties not reproduced]
Discussion and Conclusion

In previous research, validation of cognitive structures was addressed primarily with LLTM applications for predicting item difficulties. As is well known, LLTM goodness-of-fit tests focus on the similarity in fit between the LLTM and the Rasch model. However, the associated LR chi-square difference test rather frequently turns out significant, thus leading to rejection of the LLTM even when the graphical goodness-of-fit test indicates a good match between the estimates of the Rasch item difficulties and their LLTM-predicted values (e.g., Fischer, 1995). The quadratic assignment test for the LLTM (Medina-Diaz, 1993) requires data on the subjects' responses on each cognitive operation – information that is difficult or impossible to obtain in most testing situations. The LLTM tests are important for accuracy in the prediction of Rasch item difficulties, but they do not tap into cognitive relationships among items for the validation of the hypothesized cognitive structure. In addition, the LLTM works under relatively strong assumptions: (a) the test is unidimensional, (b) the Rasch model fits the data well, and (c) the restrictions in Equation 1 hold.

The SEM approach discussed in this article provides an overall goodness-of-fit test as well as valuable information about cognitive subordinations among test items. An item is referred to here as cognitively subordinated to another item if the cognitive operations required by the first item represent a proper subset of the cognitive operations required by the second item. With the proposed validation method, the W matrix of a cognitive structure is reduced to a matrix S of item subordinations. The model corresponding to the path diagram associated with the matrix S is then submitted to SEM analysis for goodness of fit by fitting it to the tetrachoric correlation matrix of the dichotomously scored items. It should be emphasized that the SEM validation model is not tied to specific assumptions of the LLTM or other IRT models that focus on prediction of item difficulty from cognitive operations. Lack of unidimensionality for the test may preclude the use of the LLTM, but not the applicability of the SEM method proposed in this article. Conversely, lack of fit of the SEM model has no direct implications for the Rasch model, but it may call into question the substantive validity of Equation 1 with the LLTM. Indeed, just as accuracy of prediction with a multiple linear regression equation does not exclude specification problems, accuracy of item difficulty prediction with Equation 1 does not exclude the possibility of misspecification of cognitive components.

The path diagram corresponding to the matrix S can also be used for additional substantive and logical analyses of cognitive structures. With the cognitive structure of some tests (e.g., orderly structured production rules in solving algebra equations), it is sound to expect that the more processing operations are required by the items, the more difficult the items become. Under this assumption, the increase of item difficulty along the paths of the diagram provides heuristic support to the validity of the cognitive structure. This was
illustrated in the previous section by associating the items of the path diagram in Figure 2 with their Rasch difficulties. When the Rasch model does not fit the data, one can associate the items in the path diagram with difficulty estimates produced by other IRT models that assume "no guessing" (e.g., the two-parameter logistic model; see Embretson & Reise, 2000, p. 70). With some cognitive structures, however, cognitive subordinations between items do not necessarily parallel the item difficulty estimates. In such cases, one may use other logical fits in the path diagram to triangulate the validation provided by the SEM goodness-of-fit test. As noted earlier, one can even modify the working definition of cognitive subordinations among items to accommodate the substantive context of the W matrix. Combining the SEM goodness-of-fit test with logical fits for the path diagram is a dependable process of collecting evidence about the validity of a hypothesized cognitive structure.

It should be noted that the path diagram corresponding to the matrix S displays paths of subordinated items generated from the matrix W, but does not include partial overlaps of cognitive operations between items. Despite this limitation, the SEM goodness-of-fit test and triangulation logical fits of the paths have strong potential to provide evidence for the validity of a cognitive structure. As illustrated in the previous section with the LLTM for the algebra test, the strict LR chi-square difference test can be soundly counterbalanced by consistent results of the LLTM graphical goodness-of-fit test, the SEM goodness-of-fit test, and logical fits for the path diagram. In addition, the joint analysis of cognitive substance and SEM path coefficients in the oriented graph can provide valuable feedback about linearity of relationships, redundancy, and uniqueness of contribution for the abilities that underlie the correct solution of items. This was illustrated with the joint analysis of SEM path estimates and substantive implications for the abilities that underlie the correct solution of items in the algebra test. Also, the information about cognitive relations among items provided by the path diagram and its SEM analysis can be used, for example, in task analysis, content selection, teaching strategies, and curriculum development.

When using the method of this article, we emphasize that repeated modification of the initial structural equation model effectively represents elements of an exploratory analysis along with the confirmatory ones that are inherent in a typical utilization of SEM. In this type of application of the method, it is important that researchers validate the final model on data from an independent sample before more general conclusions are drawn. Similarly, such a validation is recommended also when the model that was fitted turns out to be associated with acceptable fit indices and presents a substantively plausible means of describing the relationships between the analyzed items. Even in the latter, likely rare, cases in empirical work, it is highly recommended to replicate the model in other studies aiming at cognitive validation of the same
test items. We also stress that the method of this article necessitates large samples (Jöreskog & Sörbom, 1996b). This essential requirement is justified on two related grounds. One, the analytic approach underlying the method is the SEM methodology, which is itself a large-sample modeling technique. Two, since the analyzed data are dichotomous (true/false solution of test items), application of the weighted least squares method of model estimation and testing is necessary, which requires as a first step the estimation of a potentially rather large weight matrix (Jöreskog & Sörbom, 1996b). Further, with other than large samples, the polychoric correlation matrix to which the structural equation model is fitted may contain entries with intolerably large standard errors, thus weakening the conclusions reached with the SEM approach. Since we are not aware of explicit, specific, and widely accepted guidelines for determining sample size in the dichotomous item case of relevance here, based on recent continuous-variable simulation research reviewed and presented by Boomsma and Hoogland (2001; see also Hoogland, 1999; Hoogland & Boomsma, 1998), we would like to suggest that a sample size of at least several hundred is needed even with a relatively small number of items (up to a dozen, say). In this regard, we would generally discourage use of the method of this article with a sample size of less than 200 subjects even with a smaller number of items; with more than a dozen items, say, we would view samples considerably larger than this as minimally needed, and perhaps larger than a thousand as required with larger models. In conclusion, the SEM method discussed in this article combines confirmatory and exploratory procedures in a flexible process of collecting statistical and heuristic evidence about the validity of hypothesized cognitive structures.

References

Baker, F. B. (1993). Sensitivity of the linear logistic test model to misspecification of the weight matrix. Applied Psychological Measurement, 17(3), 201-210.
Boomsma, A. & Hoogland, J. J. (2001). The robustness of LISREL modeling revisited. In R. Cudeck, S. H. C. du Toit, & D. Sörbom (Eds.), Structural equation modeling: Present and future. A festschrift in honor of Karl Jöreskog. Chicago: Scientific Software International.
Dimitrov, D. M. & Henning, J. (in press). A linear logistic test model of reading comprehension difficulty. In R. Hashway (Ed.), Annals of the Joint Meeting of the Association for the Advancement of Educational Research and the National Academy for Educational Research 2000. Lanham, MD: University Press of America.
Dimitrov, D. M. & Obiekwe, J. (1998, February). Validation of item difficulty components for algebra problems. Paper presented at the meeting of the Eastern Educational Research Association, Tampa, FL.
Drum, P. A., Calfee, R. C., & Cook, L. K. (1981). The effects of surface structure variables on performance in reading comprehension tests. Reading Research Quarterly, 16, 486-514.
Embretson, S. E. (1984). A general latent trait model for response processes. Psychometrika, 49, 175-186.
Embretson, S. E. (1991). A multidimensional latent trait model for measuring learning and change. Psychometrika, 56, 495-516.
Embretson, S. E. (1995). A measurement model for linking individual learning to process and knowledge: Application to mathematical reasoning. Journal of Educational Measurement, 32, 277-294.
Embretson, S. E. (2000). Multidimensional measurement from dynamic tests: Abstract reasoning under stress. Multivariate Behavioral Research, 35, 505-543.
Embretson, S. E. & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Erlbaum.
Embretson, S. & Wetzel, C. D. (1987). Component latent trait models for paragraph comprehension tests. Applied Psychological Measurement, 11, 175-193.
Fischer, G. H. (1973). The linear logistic test model as an instrument in educational research. Acta Psychologica, 37, 359-374.
Fischer, G. H. (1983). Logistic latent trait models with linear constraints. Psychometrika, 48, 3-26.
Fischer, G. H. (1995). The linear logistic test model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Foundations, recent developments, and applications (pp. 131-155). New York: Springer-Verlag.
Fischer, G. H. & Ponocny-Seliger, E. (1998). Structural Rasch modeling: Handbook of the usage of LPCM-WIN 1.0. Groningen, The Netherlands: ProGAMMA.
Gitomer, D. H. & Rock, D. (1993). Addressing process variables in test analysis. In N. Frederiksen, R. J. Mislevy, & I. Bejar (Eds.), Test theory for a new generation of tests (pp. 243-268). Hillsdale, NJ: Erlbaum.
Glas, C. A. W. & Verhelst, N. D. (1995). Testing the Rasch model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Foundations, recent developments, and applications (pp. 69-95). New York: Springer-Verlag.
Hambleton, R. K. & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston: Kluwer-Nijhoff.
Hoogland, J. J. (1999). The robustness of estimation methods for covariance structure analysis. Unpublished doctoral dissertation, University of Groningen, The Netherlands.
Hoogland, J. J. & Boomsma, A. (1998). Robustness studies in covariance structure modeling: An overview and a meta-analysis. Sociological Methods and Research, 26, 329-367.
Jöreskog, K. G. & Sörbom, D. (1996a). PRELIS2: User's reference guide. Chicago, IL: Scientific Software International.
Jöreskog, K. G. & Sörbom, D. (1996b). LISREL8: User's reference guide. Chicago, IL: Scientific Software International.
Jöreskog, K. G. & Sörbom, D. (1996c). LISREL8: The SIMPLIS command language. Chicago, IL: Scientific Software International.
Mayer, R., Larkin, J., & Kadane, P. (1984). A cognitive analysis of mathematical problem solving. In R. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 2, pp. 231-273). Hillsdale, NJ: Erlbaum.
Medina-Diaz, M. (1993). Analysis of cognitive structure using the linear logistic test model and quadratic assignment. Applied Psychological Measurement, 17(2), 117-130.
Mislevy, R. J. (1993). Foundations of a new theory. In N. Frederiksen, R. J. Mislevy, & I. Bejar (Eds.), Test theory for a new generation of tests (pp. 19-39). Hillsdale, NJ: Erlbaum.
Perkins, K., Gupta, L., & Tammana, R. (1995). Predicting item difficulty in a reading comprehension test with an artificial neural network. In A. Davies & J. Upshur (Eds.), Language testing, Vol. 12, No. 1 (pp. 34-53). London: Edward Arnold.
Rasch, G. (1960). Probabilistic models for intelligence and attainment tests. Copenhagen: Danmarks Paedagogiske Institut.
Riley, M. S. & Greeno, J. G. (1988). Developmental analysis of understanding language about quantities and of solving problems. Cognition and Instruction, 5, 49-101.
Smith, R. M., Schumacker, R. E., & Bush, M. J. (1998). Using item mean squares to evaluate fit to the Rasch model. Journal of Outcome Measurement, 2, 66-78.
Snow, R. E. & Lohman, D. F. (1984). Implications of cognitive psychology for educational measurement. In R. Linn (Ed.), Educational measurement (3rd ed., pp. 263-331). New York: Macmillan.
Spada, H. (1977). Logistic models of learning and thought. In H. Spada & W. F. Kempf (Eds.), Structural models of thinking and learning (pp. 227-262).
Spada, H. & Kluwe, R. (1980). Two models of intellectual development and their reference to the theory of Piaget. In R. Kluwe & H. Spada (Eds.), Developmental models of thinking (pp. 1-32). New York: Academic Press.
Spada, H. & McGaw, B. (1985). The assessment of learning effects with linear logistic test models. In S. Embretson (Ed.), Test design: New directions in psychology and psychometrics (pp. 169-193). New York: Academic Press.
Van den Wollenberg, A. L. (1982). Two new statistics for the Rasch model. Psychometrika, 47, 123-139.
Whitely, S. E. & Schneider, L. (1981). Information structure for geometric analogies: A test theory approach. Applied Psychological Measurement, 5, 383-397.
Wright, B. D. & Stone, M. H. (1979). Best test design. Chicago: MESA Press.
Accepted April, 2002.

Appendix
Test of Algebra Linear Equations

Item I1: 2(x + 3) = x - 10
Item I2: 5x - 3 = 7 + 4x
Item I3: 2(x - n) = 5
Item I4: 5(2x - 5x) = 20x
Item I5: 3x/7 = 10
Item I6: nx - a = 5a
Item I7: 4[x + 3(x - 2)] = 10
Item I8: -7 + x = 10
Item I9: 20 = 5x - 5x
Item I10: 2x + 7 - 10x + 3 = 12 - 2x
Item I11: 4x/5 + 2 + 2x/3 - 10 = 0
Item I12: -5(8 - 2x) = 2x - 2
Item I13: 5 - 2(x + 3) = x + 5(2x - 1) + 10
Item I14: x - 4(5x - 4) + 5x = 10 - n + 2n
Item I15: 6(x - 4) + 2x = 5x - 10

Production rules (PRs) required for the correct solution of the algebra equations:

PR1: Solve for variable with only numeric coefficients. Example: If 5x = 20 then x = 20/5.
PR2: Solve for variable with nonnumerical coefficients. Example: If nx = a + 5 then x = (a + 5)/n.
PR3: Collecting numbers. Example: If 2x = 5 - 10 then 2x = -5.
PR4: Collecting two terms. Example: If 2x - 5x = 10 then -3x = 10.
PR5: Collecting more than two terms. Example: If 2x - 5x + 8x = 10 then 5x = 10.
PR6: Balancing. Example: If 5 + 2x = 3 then 2x = 3 - 5.
PR7: Removing parentheses with a positive coefficient. Example: If 5(2x + 3) = 5 then 10x + 15 = 5.
PR8: Removing parentheses with a negative coefficient. Example: If -5(2x + 3) = 5 then -10x - 15 = 5.
PR9: Removing brackets. Example: If 4[x + 3(x - 2)] = 10 then 16x - 24 = 10.
PR10: Removing denominators. Example: If 3x/7 = 1/2 then 6x = 7.

Note. The examination of items and PRs shows that (a) PR2 subsumes PR1, (b) PR4 subsumes PR3, (c) PR5 subsumes PR4, and (d) PR8 subsumes PR7. For example, PR4 subsumes PR3 because "collecting terms" (PR4) also includes "collecting numbers" (PR3), which represent the coefficients of the terms involved in PR4. Therefore, when an item requires PR4, this item "automatically" requires PR3 (see Table 1).