Journal of Applied Psychology
2001, Vol. 86, No. 5, 1022-1033
Copyright 2001 by the American Psychological Association, Inc. 0021-9010/01/$5.00 DOI: 10.1037//0021-9010.86.5.1022
Knowledge Structures and the Acquisition of a Complex Skill

Eric Anthony Day
Valparaiso University and The Ohio State University

Winfred Arthur Jr. and Dennis Gettman
Texas A&M University
The purpose of this study was to examine the viability of knowledge structures as an operationalization of learning in the context of a task that required a high degree of skill. Over the course of 3 days, 86 men participated in 9 training sessions and learned a complex video game. At the end of acquisition, participants' knowledge structures were assessed. After a 4-day nonpractice interval, trainees completed tests of skill retention and skill transfer. Findings indicated that the similarity of trainees' knowledge structures to an expert structure was correlated with skill acquisition and was predictive of skill retention and skill transfer. However, the magnitude of these effects was dependent on the method used to derive the expert referent structure. Moreover, knowledge structures mediated the relationship between general cognitive ability and skill-based performance.
Formal evaluation is essential to establishing the effectiveness of training programs. One commonly used criterion for evaluating training is measuring the amount of learning that has occurred (Kirkpatrick, 1976, 1996). Recently, much discussion has focused on the empirical relationships shared between different types of training criteria (Alliger & Janak, 1989; Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997), and more sophisticated techniques for operationalizing learning have been articulated (Kraiger, Ford, & Salas, 1993). Kraiger et al. (1993) provided a theoretically based model of training evaluation. The primary assumptions of the model are that learning outcomes are multidimensional and that a construct-oriented approach should be taken in the development of evaluation measures. Regarding the first assumption, Kraiger et al. proposed that evidence of learning may be gleaned from changes in cognitive, skill, and affective capacities. Kraiger et al. maintained that learning outcomes should be examined within the framework of a nomological network in which the interrelationships among constructs are identified and empirically tested. Furthermore, hypotheses concerning the operationalization, or measurement, of learning outcomes should be similarly developed and tested.
The purpose of the present study was to examine the viability of knowledge structures as an operationalization of learning in the context of complex skill acquisition. Accordingly, we examined the relationship between trainees' knowledge structures and skill-based performance. Furthermore, we examined the extent to which measures of trainees' knowledge structures at the end of an acquisition period were predictive of skill retention and skill transfer. Consistent with Kraiger et al.'s (1993) call for researchers to better identify nomological networks of multiple concepts pertinent to training, including outcome measures, we also examined the extent to which general cognitive ability (g) was related to trainees' knowledge structures. Given that g has been well established as a valid predictor of job performance and training outcomes (Ree & Earles, 1991; Schmidt & Hunter, 1998), demonstrating an empirical relationship between knowledge structures and g can be considered additional evidence of the validity of knowledge structures as operationalizations of learning. We were particularly interested in the accuracy of trainees' knowledge structures. To measure accuracy, we used structural assessment and assessed the similarity of trainees' structures to an expert referent structure. Within this approach, we also examined the comparative efficacy of two different techniques for aggregating multiple expert structures into a single referent structure.
Eric Anthony Day, Department of Psychology, Valparaiso University and Department of Psychology, The Ohio State University; Winfred Arthur Jr. and Dennis Gettman, Department of Psychology, Texas A&M University. This research was funded by grants from the Office of Creative Work and Research and the Office of the Dean of Arts and Sciences, Valparaiso University. We thank Jason Grebasch, Robyn Sterrett, Dena Mirenic, and Amy Cavanaugh for their assistance in data collection. We especially thank Glen Westlund for his assistance in the design and construction of the complex skill acquisition laboratory at Valparaiso University. Correspondence concerning this article should be addressed to Eric Anthony Day, who is now at the Department of Psychology, The University of Oklahoma, 455 West Lindsey Street, Dale Hall Tower, Room 705, Norman, Oklahoma 73019-2007. Electronic mail may be sent to eday@psychology.psy.ou.edu.
Learning and Knowledge Structures

Knowledge structures are based on the premise that people organize information into patterns that reflect the relationships that exist between concepts and the features that define them (Johnson-Laird, 1983). Knowledge structures can be distinguished from declarative knowledge. Knowledge structures represent the organization of knowledge, whereas declarative knowledge reflects the amount of knowledge or facts learned. Measures of cognitive learning outcomes traditionally have involved achievement tests that assess declarative knowledge (Alliger et al., 1997; Goldsmith & Kraiger, 1997). However, current thinking in cognitive science suggests that, in addition to the amount of knowledge stored in
memory, the organization of knowledge stored in memory is perhaps of equal or greater importance (Johnson-Laird, 1983; Kraiger et al., 1993; Rouse & Morris, 1986). Memory organization provides individuals with a responsive mechanism for organizing information and for retrieving information from long-term storage. The retrieval of information enables faster and more complete comprehension, enhanced capability to make inferences, prediction of future events, and determination of optimal actions that will influence current and future events in a desired way (Collins & Gentner, 1987). In addition to the term knowledge structures, several other labels have been attached to the construct of knowledge organization, including mental models, schemas, and conceptual frameworks (Dorsey, Campbell, Foster, & Miles, 1999). Likewise, there are many different techniques available for measuring knowledge structures, including (a) accuracy and time measures, (b) interviews, (c) process tracing or "think-aloud" protocols, and (d) structural assessment (SA; Rowe, Cooke, Hall, & Halgren, 1996). SA involves judgments of similarity among a set of critical concepts and is considered to be the modal technique for measuring knowledge structures (Kraiger et al., 1993). In the present research, we demonstrate how SA can be used to quantify knowledge structures. In addition, we illustrate how SA yields representations of organized knowledge that reflect comprehension of domain concepts and can be related to behavioral actions. Although tests of declarative knowledge traditionally have been used to assess learning after training, empirical evidence over the past 20 years suggests that measures of knowledge structures also hold considerable promise. In a training context, knowledge structures reflect the degree to which trainees have organized and comprehended the content of training. 
Indeed, research has indicated that measures of knowledge structures using SA can differentiate between experts and novices (e.g., Schvaneveldt et al., 1985), predict classroom learning and achievement (e.g., Goldsmith, Johnson, & Acton, 1991), reflect training manipulations designed to influence learning (Kraiger, Salas, & Cannon-Bowers, 1995), and predict transfer performance on tactical decision-making tasks (Kraiger et al., 1995). The present study extends this body of research by examining the relationship between trainees' knowledge structures and the acquisition of a complex skill that has both strong cognitive and psychomotor requirements. We expected the accuracy of trainees' knowledge structures to have a positive correlation with skill acquisition, retention, and transfer. As individuals develop expertise in a domain, their knowledge structures converge toward a true representation of that domain (cf. Acton, Johnson, & Goldsmith, 1994). Assuming that experts' organization and comprehension of domain knowledge are a close approximation of the true representation of that domain, then similarity to an established expert structure can be considered an indicator of skill development. Specifically, we tested the following hypotheses:

Hypothesis 1: Trainees whose knowledge structures are more similar to an expert structure will have higher levels of skill acquisition than trainees whose structures are less similar to an expert structure. That is, there will be a positive correlation between knowledge structure accuracy and skill acquisition.

Hypothesis 2: Trainees whose knowledge structures are more similar to an expert structure at the end of skill acquisition will score higher on tests of retention and transfer than trainees whose structures are less similar to an expert structure. That is, there will be a positive correlation between knowledge structure accuracy and skill retention and transfer.
General Cognitive Ability, Skill Acquisition, and Knowledge Structures

It has been well established that g is a valid predictor of job performance (Barrett & Depinet, 1991; Ree & Earles, 1992; Ree, Earles, & Teachout, 1994; Schmidt & Hunter, 1998), training success (Ree, Carretta, & Teachout, 1995; Ree & Earles, 1991), and complex skill acquisition (Ackerman, 1992; Ackerman, Kanfer, & Goff, 1995; Kanfer & Ackerman, 1989; Rabbitt, Banerji, & Szymanski, 1989). However, a review of the extant literature did not reveal any empirical investigations relating g to knowledge structures. Hence, in our treatment of knowledge structures as a type of learning outcome, we were especially interested in the relationship that g might share with trainees' knowledge structures at the end of skill acquisition. Given the weight of the evidence that has consistently demonstrated the importance of g to skill acquisition and learning in general, we expected g to be correlated with the accuracy of trainees' knowledge structures. Thus, we tested the following hypothesis:

Hypothesis 3: Trainees who have higher levels of g will have knowledge structures that are more similar to an expert structure. That is, g will be positively correlated with knowledge structure accuracy.
Moreover, it has been demonstrated that g is related to job performance through the acquisition of job knowledge (Hunter, 1983; Ree et al., 1995; Schmidt & Hunter, 1992; Schmidt, Hunter, & Outerbridge, 1986). In other words, persons with greater g acquire greater comprehension of job-task knowledge, and this greater comprehension of job-task knowledge leads to greater performance. On the basis of this rationale, we also tested the following hypothesis:

Hypothesis 4: Knowledge structures assessed at the end of acquisition will mediate the relationship between g and performance on tests of retention and transfer.
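A mediation hypothesis of this form is conventionally evaluated by comparing the total effect of g on performance with its direct effect after controlling for the mediator. The sketch below is purely illustrative (the function names and the simple ordinary-least-squares decomposition are our own, not the analyses reported in this article):

```python
import numpy as np

def ols_slope(x, y):
    """Simple OLS slope of y regressed on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def mediation_paths(g, knowledge, performance):
    """Illustrative mediation decomposition: total effect of g on
    performance, the path from g to the mediator (a), the mediator's
    effect controlling for g (b), the direct effect of g controlling
    for the mediator (c'), and the indirect effect (c - c')."""
    c = ols_slope(g, performance)               # total effect of g
    a = ols_slope(g, knowledge)                 # g -> mediator path
    X = np.column_stack([np.ones_like(g), g, knowledge])
    coefs = np.linalg.lstsq(X, performance, rcond=None)[0]
    c_prime, b = coefs[1], coefs[2]             # direct and b paths
    return {"total": c, "a": a, "b": b, "direct": c_prime,
            "indirect": c - c_prime}
```

Full mediation corresponds to a direct effect near zero once knowledge-structure accuracy is in the model.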
Consensus Versus Mechanically Derived Referent Structures

One method for determining the accuracy of trainees' knowledge structures is to compare the trainees' structures with a referent structure that is considered to be a "true score." Typically, subject-matter experts (SMEs) are used to derive the referent structure. Although a number of studies have used a single SME to derive a referent structure (e.g., Goldsmith et al., 1991; Kraiger et al., 1995), others have used ratings from several SMEs (e.g., Acton
et al., 1994; Dorsey et al., 1999). In fact, within the performance appraisal literature, judgments from several SMEs are frequently used to derive a true score for purposes of measuring rating accuracy (see Sulsky & Balzer, 1988). The assumed advantage to using multiple SMEs is that personal biases can be overcome through aggregation. In contrast, in using multiple SMEs to derive a referent structure, one can potentially be faced with substantial variability among expert opinions (Acton et al., 1994). One must then choose a method of aggregating expert opinions so as to eliminate this variability. Two methods for aggregating SME judgments include (a) having SMEs complete the knowledge structure measure together and reach consensus and (b) having SMEs independently complete the knowledge structure measure and then mechanically combining (averaging) the structures. The issue of expert consensus versus a mechanical combination of SME judgments is similar to discussions regarding whether trained experts' intuitive global predictions are better than a statistical combination, or averaging, of relevant predictors. Research has consistently indicated that experts' intuitive judgments are frequently outpredicted by a statistical combination (Dawes, Faust, & Meehl, 1989; Meehl, 1954; Sawyer, 1966). The principal factor underlying the superiority of statistical combination is the greater consistency in combining and optimally weighting (i.e., unit weights) multiple pieces of information (Dawes et al., 1989). Similar findings have been reported in the assessment center literature concerning the best method for integrating raters' judgments concerning ratees' performance. Despite the popularity of integrating raters' judgments through consensus, no single study has shown that the validity of ratings reached by means of consensus is superior to the validity of ratings derived through a mechanical combination (Pynes & Bernardin, 1992).
However, judgments made in the measurement of knowledge structures are substantially different from those made by clinicians and assessment center raters. Measuring knowledge structures entails similarity judgments between concepts from a specified task domain. In contrast, clinical judgments and those made by assessment center raters involve diagnoses and predictions concerning future behavior based on observations of people. Accordingly, these differences warrant examination of the best technique for aggregating judgments from multiple SMEs to derive referent knowledge structures. Therefore, we developed two indices of knowledge structure accuracy. For the first index, trainees' knowledge structures were compared with a referent derived through a mechanical combination of SME models. The second index involved comparing trainees' structures with a referent structure derived through SME consensus. On the basis of the extant literature, we tested the following hypothesis:

Hypothesis 5: An expert referent structure derived through a mechanical aggregation will yield higher correlations with g and skill-based performance than a referent derived through expert consensus.
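The mechanical combination described above, a unit-weighted average of the experts' ratings, can be sketched as follows. This is an illustrative sketch only; the function name is ours, and the study itself performed the averaging within the Pathfinder program:

```python
def mechanical_referent(expert_matrices):
    """Element-wise, unit-weighted average of several experts'
    proximity (relatedness) matrices. The averaged matrix would then
    be submitted to network derivation to yield a single referent."""
    k = len(expert_matrices)
    n = len(expert_matrices[0])
    return [[sum(m[i][j] for m in expert_matrices) / k
             for j in range(n)] for i in range(n)]
```

Unit weighting gives each expert equal influence, which is the consistency advantage attributed to statistical combination in the literature cited above.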
The training protocol used in this study was Shebilske, Regian, Arthur, and Jordan's (1992) active-interlocked modeling. Active-interlocked modeling increases training efficiency by simultaneously training two people to achieve the same performance level as a single person with no increase in training time or machine cost (Arthur, Day, Bennett, McNelly, & Jordan, 1997; Arthur et al., 1995; Shebilske et al., 1992). Active-interlocked modeling has led
to training innovations for Israeli Air Force pilots, U.S. navigators, and Aer Lingus airline pilots (Johnston, Regian, & Shebilske, 1995; Shebilske, Goettl, & Regian, 1999). The performance task used in the present study was the video game Space Fortress (SF), which has a strong history as an excellent research tool in the area of complex skill acquisition (Donchin, 1989; Gopher, 1993; Mane & Donchin, 1989). SF includes important information-processing and psychomotor demands that are present in aviation and other complex tasks (Gopher, Weil, & Bareket, 1994; Gopher, Weil, & Siegel, 1989; Hart & Battiste, 1992).

Method
Participants

An initial sample of 92 right-handed male volunteers from a small midwestern university and its community was recruited through posted notices on campus, announcements made in classes, and advertisements in local newspapers. Participants were paid $65 to participate in a total of 4 days of training. They competed for three bonuses of $50, $30, and $20, which were awarded to the trainees with the three highest total scores, respectively. Participants who did not complete all training sessions were excluded from the analyses. The final number of trainees who participated in the study was 86. The mean age of the sample was 21.11 years (SD = 3.21 years).
Materials

Raven's Advanced Progressive Matrices (APM; Raven, Raven, & Court, 1994). The APM is a measure of g with a low level of culture loading, which has led some experts to argue that it is the purest available measure of g or analytical (fluid) intelligence (e.g., Carpenter, Just, & Shell, 1990; Humphreys, 1984; Jensen, 1980; Raven, 1989). It consists of 36 matrix or design problems arranged in an ascending order of difficulty and is scored by summing the number of problems that are correctly answered. We used an administration time of 40 min. The test manual reports a test-retest reliability of .91. A Spearman-Brown odd-even split-half reliability of .86 was obtained for the present study. The APM was administered to all participants on the 2nd day of participation, before SF training.

Pathfinder (Schvaneveldt, 1990; Schvaneveldt, Durso, & Dearholt, 1989). Pathfinder, a computerized SA technique that generates concept similarity maps, was used for the elicitation and analysis of knowledge structures. Pathfinder is a network scaling procedure (Schvaneveldt, 1990)
1 Acton et al. (1994) compared an average expert referent structure with individual expert referent structures in predicting students' exam performance and discriminating between various levels of expertise (i.e., novice and advanced students). They indicated that, despite substantial variability between individual expert structures, the average expert referent structure provided predictive validities that were stronger than the average of the predictive validities for the individual expert referent structures and greater discrimination between novice and advanced students. They also indicated that the average referent structure provided more stable predictive validities than the validities of the individual expert referent structures. Furthermore, the individual expert structure that provided the strongest validities was not meaningfully superior to the average expert referent structure in predicting exam scores. They concluded that "experts do not have to agree that highly in an absolute sense to allow for averaging" (Acton et al., 1994, p. 310). Also, they advocated averaging individual expert structures to form a referent structure over searching for the ideal individual expert structure. However, Acton et al.'s investigation did not include a referent structure derived by means of expert consensus.
that is used to summarize and graphically display relatedness ratings. The Pathfinder procedure can be contrasted with multidimensional scaling, which uses spatial representation to capture global relationships. Pathfinder renders network structures that capture local relationships among concepts. The resulting networks are rich representations that can be quantified and compared (Goldsmith & Davenport, 1990). Two parameters, r and q, determine how network distance is calculated and affect the density of the network. For the present study, the networks were derived with the parameters set to r equals infinity and q equals the number of concepts (or nodes) minus one. The literature suggests that Pathfinder networks represent the structure of conceptual domains better than multidimensional scaling (Acton et al., 1994; Cooke, Durso, & Schvaneveldt, 1986; Goldsmith et al., 1991). Kraiger and Wenzel (1997) also noted that Pathfinder networks have a higher validity than multidimensional scaling representations. Pathfinder generates a class of networks on the basis of estimates of distances between pairs of concepts. Each concept in the set is represented as a node in the network, and each link between nodes has a weight value that is determined by the distance between the two linked concepts. Thus, Pathfinder produces network structures that capture local relationships among concepts within a particular domain. Pathfinder initially links all concepts and then assigns to each link a weight that represents the proximity rating assigned to the concept pair. The Pathfinder algorithm then systematically removes direct links if a shorter path between the concepts can be found through an indirect route in order to obtain the most efficient linkage and the ultimate knowledge representation.
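Under the parameters used here (r equals infinity, q equals n minus 1), the cost of a path is the maximum link weight along it, and a direct link survives only if no indirect route offers a lower cost; under finite r (e.g., r = 1), path cost is instead the sum of link weights. A minimal sketch of this pruning rule, assuming a symmetric distance matrix with a zero diagonal (this is our own simplified illustration, not the Pathfinder software):

```python
import itertools

def pathfinder_network(weights):
    """Prune a fully connected proximity matrix into a network under
    Pathfinder parameters r = infinity, q = n - 1. Path cost is the
    maximum link weight along the path (minimax), so a direct link is
    removed whenever some indirect route has a strictly lower cost."""
    n = len(weights)
    # Minimax distances via a Floyd-Warshall-style pass.
    d = [row[:] for row in weights]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    # Keep a link only when its weight matches the minimax distance.
    return {(i, j) for i, j in itertools.combinations(range(n), 2)
            if weights[i][j] <= d[i][j]}
```

For example, with links A-B = 2, B-C = 2, and A-C = 6, the indirect route A-B-C costs max(2, 2) = 2, so the direct A-C link is pruned.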
As an example, "if the link weight between concepts A and C (A-C) is 6, but the sum of the link weights between A-B and B-C is only 4, then the direct A-C link is removed" (Acton et al., 1994, p. 305). See Figure 1 in Goldsmith et al. (1991) for an illustrative example. For a complete description of Pathfinder networks, see Schvaneveldt et al. (1989). For the present study, we used a set of 14 SF concepts (see Table 1) that were developed by first examining the concepts and principles based on Frederiksen and White's (1989) cognitive task analysis of SF. Next, these
concepts were reviewed and revised with the assistance of three individuals who were considered to be SF experts. These SMEs worked in research labs that used SF. Each had more than 2 years of experience with the research tool and consistently scored between 4,750 and 5,000 points each time he played. In the administration of Pathfinder, trainees first read the instructions and then made similarity (i.e., relatedness) ratings on all possible pairs of the 14 SF concepts, which were presented sequentially and randomly to the trainees, resulting in a total of n(n - 1)/2 = 91 ratings. For each pair of concepts, trainees were asked to indicate the extent to which they were related by using a 9-point Likert scale ranging from 1 (not at all related) to 9 (highly related). To test for the accuracy of trainees' knowledge structures, we assessed the degree of similarity between the trainees' structures and an expert referent structure by computing closeness (C; Goldsmith & Davenport, 1990). C is roughly equal to the ratio of the number of links shared between two models to the total number of links. The values of C can range from 0 to 1, with 1 representing perfect similarity. Two types of C were assessed: one for an expert referent structure that was derived mechanically (CM) and the second for an expert referent structure that was derived through consensus (CC). For CM, SMEs completed Pathfinder independently from one another. Subsequently, the three structures were averaged within the Pathfinder program to yield one referent structure. After completing Pathfinder independently, SMEs completed a second Pathfinder assessment together. This time, SMEs were instructed to reach consensus for each similarity judgment (CC).

Space Fortress. The primary objective of SF was to fly a ship in frictionless space while battling a space fortress and avoiding being damaged or destroyed by the fortress and foe mines.
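Returning to the closeness index C described above: the sketch below follows the paper's rough characterization (shared links over total links). Note that the exact C statistic of Goldsmith and Davenport (1990) is computed from node neighborhoods, so this Jaccard-style version is only an approximation; the pair-count helper simply verifies that 14 concepts yield 91 ratings.

```python
from itertools import combinations

def n_pairs(n_concepts):
    """Number of relatedness ratings for n concepts: n(n - 1)/2."""
    return len(list(combinations(range(n_concepts), 2)))

def closeness(links_a, links_b):
    """Approximate closeness C between two networks given as sets of
    undirected (i, j) link pairs: links shared by both models divided
    by the total (union of) links. Ranges from 0 (no shared links)
    to 1 (identical networks)."""
    a, b = set(links_a), set(links_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

A trainee network sharing one of three total links with the referent would thus score C of roughly .33 under this approximation.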
Trainees controlled a spaceship's flight path by using the joystick and shot missiles with a trigger on the joystick. A fortress was located in the center of the screen with two concentric hexagons surrounding it. An information panel at the bottom of the screen indicated the fortress vulnerability, which changed with each
Table 1
Space Fortress Concepts With Descriptions

1. Control ship speed and distance: Maintenance of proper speed and distance from the fortress
2. Change trajectory of ship: Direction of the ship in frictionless space
3. Correct press of IFF (mouse): Accurate identification and resultant response to the appearance of a mine
4. Incorrect press of IFF (mouse): Inaccurate identification and resultant response to the appearance of a mine
5. Select points bonus (mouse): Choice of this bonus increases points score
6. Select missiles bonus (mouse): Choice of this bonus increases missile supply
7. Friend or foe identifier (instrument panel): Preestablished mine identifiers, shown on the instrument panel
8. INTRVL (instrument panel): Interval in milliseconds between mouse button presses indicating when a foe mine is vulnerable, shown on the instrument panel
9. Shots counter less than 50 (instrument panel): Important information regarding selection of bonus, shown on the instrument panel
10. Shots counter more than 50 (instrument panel): Important information regarding selection of bonus, shown on the instrument panel
11. Recognize second $: Important to the acquisition of bonuses
12. Scoring or losing points: Objective indicator of task performance
13. Destroy or avoid mines: Handling of friend and foe mines
14. VLNER (instrument panel): Number of hits the fortress has suffered that primes the player to apply a "double shot" to destroy the fortress, as shown on the display panel

Note. IFF = identify friend or foe; INTRVL = interval; VLNER = vulnerability.
missile hit. Friend and foe mines flew in the space surrounding the fortress and were identified by a mine indicator in the information panel. To destroy foe mines, trainees were required to first push an "identify friend or foe" (IFF) mouse button at the correct time. Symbols appeared on the screen just below the fortress to indicate opportunities to gain bonus points or additional missiles by pushing either a "points" or "missiles" mouse button at the appropriate time. Also, the information panel showed the number of available missiles, the battle score, and component scores based on ship velocity, ship control, and the speed of dispatching mines. The screen displayed the total score, which was a composite of the others, at the end of each game. A more detailed description of SF can be found in Arthur et al. (1995).
Design and Procedure

Participation took place on Tuesday, Wednesday, and Thursday of the same week and Monday of the following week. The first 3 days served as acquisition, with trainees participating in three sessions per day. The 4th day served as a test of retention, reacquisition, transfer, and return from transfer. At the onset of the study on Tuesday, trainees signed a consent form and an employment contract. Trainees were then given 20 min of videotaped instructions that explained the rules of SF and strategies on how to play SF that were determined to be optimal from previous studies (e.g., Frederiksen & White, 1989). These strategies gave details on how to best control one's ship, use bonus opportunities, and handle friend and foe mines. All trainees then played, as individuals, four 3-min baseline SF games (Session 0) followed by a 5-min videotape review of the instructions and strategies. The nine acquisition sessions each involved 10 games and consisted of two parts: eight 3-min practice games and two 3-min test games. During each practice game, trainees practiced with a partner such that one trainee controlled all functions related to the mouse and the other trainee controlled all functions related to the joystick and trigger. Communication between trainees was encouraged. Trainees exchanged roles after every practice game; thus, each trainee controlled each set of functions only half of the time. After the 8 practice games, all trainees then performed 2 test games, as individuals, in which they controlled both joystick and mouse functions. Each of the nine acquisition sessions was identical. At the end of the ninth session, all trainees completed the Pathfinder program to assess their knowledge structures of SF. On the following Monday, trainees completed a 2-game retention session (Session 10), as individuals. Following the test of retention, trainees completed an individual 10-game reacquisition session (Session 11) and an individual 2-game transfer session (Session 12). The transfer task was a version of SF in which the joystick functions were replaced by keyboard controls. Next, individual trainees completed a 2-game return-from-transfer session (Session 13) in which they played the standard, joystick, version of SF. Skill-based performance measures were the average scores from the four 3-min baseline games (Session 0), the 2 individual test games at the end of acquisition (Session 9), the 2-game test of retention (Session 10), the last 2 games from reacquisition (Session 11), and the 2-game tests of transfer and return from transfer (Sessions 12 and 13). Table 2 provides a summary of the training activities.

Table 2
Summary of Training Procedures

Day and session   Activity                Number of practice games(a)   Number of individual test games
Tuesday
  --              Instructional video     --                            --
  0               Baseline                4                             --
  --              Review video            --                            --
  1               Acquisition             8                             2
  2               Acquisition             8                             2
  3               Acquisition             8                             2
Wednesday
  --              APM                     --                            --
  4               Acquisition             8                             2
  5               Acquisition             8                             2
  6               Acquisition             8                             2
Thursday
  7               Acquisition             8                             2
  8               Acquisition             8                             2
  9               Acquisition             8                             2
  --              Pathfinder              --                            --
4-day nonpractice interval
Monday
  10              Retention               0                             2
  11              Reacquisition           8                             2
  12              Transfer                0                             2
  13              Return from transfer    0                             2

Note. APM = Advanced Progressive Matrices.
a For Sessions 1 through 9, practice games were performed with a partner under the active-interlocked modeling protocol; for Session 11, practice games were performed individually.

Results

Referent Structures

As shown in Figure 1, there appeared to be one primary similarity and several notable differences between the consensus and mechanical referents. Regarding similarity, both structures were partitioned into two sections. One reflected "scoring or losing points" (Concept 12), and the other reflected "destroy or avoid mines" (Concept 13). Within each section, both referents contained similar concepts. However, differences were noticeable. In the consensus structure, the concepts of "VLNER" (Concept 14), "control ship speed and distance" (Concept 1), and "change trajectory of ship" (Concept 2) were more closely linked to concepts pertaining to mines, whereas in the mechanical structure, these concepts were more closely linked to scoring or losing points. In the mechanical structure, relationships between bonus concepts were more distinct and reflected functional similarities (i.e., allocating bonuses). In the consensus structure, bonus concepts were more interrelated and reflected both functional and superficial similarities (i.e., mouse control and instrument panel). Overall, the most salient difference between the two referents was parsimony. The mechanical referent (13 links) was more parsimonious than the consensus referent (21 links). For example, "destroy or avoid mines" (Concept 13) had 7 direct links in the consensus referent and only 2 links in the mechanical referent. Likewise, in the consensus referent, "shots counter less than 50" (Concept 9), "INTRVL" (Concept 8), and "change trajectory of ship" (Concept 2) all had 4 direct links with other concepts, whereas each had only 1 direct link in the mechanical structure.
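The per-concept link counts used in the parsimony comparison are simply node degrees; given a network expressed as a set of undirected links, they can be tallied as follows (an illustrative helper of our own, not part of the Pathfinder software):

```python
from collections import Counter

def node_degrees(links):
    """Number of direct links per concept (node degree) in a network
    given as a set of undirected (i, j) link pairs."""
    deg = Counter()
    for i, j in links:
        deg[i] += 1
        deg[j] += 1
    return dict(deg)
```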
Relationships Between the Referent Structures and Trainees' g and Knowledge Structures

Table 3 presents descriptive statistics and intercorrelations for all the variables of interest. Figure 2 shows the trainees' mean
[Table 3: descriptive statistics and intercorrelations for all variables of interest]