
Copyright 1990 by the American Psychological Association, Inc. 0022-0663/90/$00.75

Journal of Educational Psychology 1990, Vol. 82, No. 1, 141-149

Metacomponential Development in a Logo Programming Environment

Douglas H. Clements

This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

Graduate School of Education, State University of New York at Buffalo

Effects of a theoretically based Logo environment on executive-level abilities were investigated. Forty-eight third graders were tested to assess pretreatment level of achievement and were randomly assigned to one of two 26-week treatments: Logo computer programming or control. Posttesting with a dynamic interview instrument revealed that the Logo programming group scored significantly higher on the total assessment of executive processing. Features of the instructional environment, such as explicitness and completeness, help account for these effects. Structural coefficients were meaningful for three of four individual processes. The type of Logo environment used may have less effect on planning processes than on those processes that construct elaborated mental schemata for problems. Classroom tasks may provide substantial experience only with the former; in contrast, the Logo environment may have emphasized the expression of the latter in general, as well as domain-specific, terms.

I gratefully acknowledge the cooperation of the students and staff of the Hudson Local School District and the comments of James Hiebert, Bonnie Nastasi, Steven Silvern, and two anonymous reviewers on a draft of this article. Correspondence concerning this article should be addressed to Douglas H. Clements, State University of New York at Buffalo, 593 Baldy Hall, Buffalo, New York 14260.

Contradictory research results regarding the use of the Logo computer programming language to enhance higher-order thinking abilities appear in no small part attributable to differences in instructional environments. Unfortunately, these environments are not usually described in sufficient detail, based on a theoretical foundation, or closely linked to expected cognitive consequences. The purpose of this study was to investigate the effects of a theoretically based Logo environment on the executive-level abilities of third-grade children.

A Theoretical Foundation

A felicitous theory on which to base a Logo environment intended to promote metacognitive abilities would contain specifically identified components and a hierarchical organization that would facilitate its application and interpretation. Sternberg (1985) hypothesizes that different types of problem-solving processes are carried out by separate components of people's information-processing systems. Components are elementary processes that operate on internal representations of objects. Highest in the cognitive hierarchy are metacomponents—executive processes that control the operation of the system as a whole and plan and evaluate all information processing. They include deciding on the nature of the problem, choosing and combining performance components relevant to the solution of the problem, selecting a representation, and monitoring solution processes.

Sternberg has posited that cognitive development results to a large extent from the metacomponents' ability to adjust their functioning on the basis of feedback they receive from other components. They gather information about where, how, and especially when the various components might best be applied. The cognitive monitoring metacomponent plays a central role in this process.

Logo Environments and Metacomponential Functioning

The proposal that certain Logo programming environments can strengthen metacomponential abilities is based on two complementary rationales (Clements, 1986b). The first is that Logo environments can serve as catalysts of (unconscious) componential use. The second is that the environments can encourage children's explicit reflection on their own problem-solving processes.

The first rationale is based on the assumption that there are features of particular Logo environments that educe metacomponential processing. For example, both the theory on which Logo was based (Minsky, 1986; Papert, 1980) and Sternberg's (1985) theory attribute central importance to the role of cognitive monitoring in learning. Monitoring is prevalent when children actively infer consequences of causal sequences, enact instructions, and find and fix problems (cf. Markman, 1981). Logo programming involves operations of transforming incoming information in the context of constructing, coding, and modifying such causal sequences. Although the nature of programming errors ("bugs") and their rectification are often not palpable, Logo does provide aids for such activity in its graphic depiction of errors, explicit error messages, and simple editor. Certain educational environments can be expected to facilitate the implicitly required cognitive monitoring—for example, those in which teachers model the processes of debugging, encourage children to use Logo's aids in finding and correcting errors rather than to quit, and elicit cognitive monitoring in its most general sense through questioning.

An emphasis on turtle graphics allows children to pose problems of varying levels of complexity, generate ideas for their own projects, represent these as goals, and identify the specific problems involved in reaching these goals. In addition, these problems lie between those that are clearly formulated and amenable to solution with a known algorithm (and for which there is therefore no need to decide on their nature; e.g., simple mathematics word problems) and problems that lack both a clear formulation and a known solution procedure (and which thus have almost no constraint framework). Logo problems are embedded in a context in which the range of problems and solutions is constrained, but decisions regarding specific problem formulation remain the children's responsibility. Environments most likely to support the development of the pertinent metacomponent, deciding on the nature of the problem, are those in which children create a substantial proportion of their own projects and in which they are challenged to analyze and compare problem types.

Turtle graphics problems allow representations of the problem goal, as well as partial solutions (and errors), in forms that are accessible to children, because such problems have analogues in children's noncomputer experiences, such as moving their bodies or drawing (Papert, 1980). Programming in turtle graphics also promotes representation of the solution process internally as an initial and a goal state (often expressed in pictures), as an intended semantic solution whose organization is frequently verbalized for others (e.g., when working in pairs), and as machine-executable code. Such opportunities for the development of the metacomponent of selecting a representation would be enhanced through encouragement and support for the use of a multiplicity of representations.
Programming requires the explicit selection and ordering of instructions in solving problems. Logo's modular nature allows students to combine procedures that they develop in various ways to solve graphic problems. Therefore, Logo programming may support children in choosing and combining performance components, especially if they are encouraged to use analysis and advanced planning.

The second rationale for the proposal that Logo might strengthen metacomponential processing is that these environments can foster componential cognizance. Children are normally not conscious of their own componential functioning. A unique claim is that Logo fosters explicit awareness of cognition. Papert (1980) maintains that while programming, children reflect on how they might do the task themselves and therefore on how they themselves think. It may be possible for children to learn simple notions about the components, then use that knowledge in solving problems, and finally begin to use the knowledge automatically, without conscious direction. Their use of these processes—initially unconscious and ineffective—may become first conscious and more effective (albeit slow) and, ultimately, unconscious and expert. That is, metacognitive experiences fostered by learning about the metacomponents and by programming in Logo would provide declarative knowledge that would originally be interpreted by general procedures (Anderson, 1983; Minsky, 1986; Sternberg, 1985).

Characteristics of certain Logo environments may facilitate the occurrence of metacognitive experiences (Flavell, 1981): (a) children consciously solve problems using strategies unfamiliar to them; (b) they "communicate" their organization of the task and solution processes to each other (if working in pairs), to the teacher, and to a machine; (c) problems are often self-selected, and children feel that they "own" Logo problems; and (d) errors are salient and frequent, but correctable (Clements, 1986a, 1986b).

In addition, the isomorphism between the information-processing framework in which Sternberg's componential theory is embedded and Logo's computer science framework allows the act of procedural programming to serve as a metaphor for componential functioning. First, children's solutions in Logo have been externalized; they are now the turtle's solutions. Logo procedures can be used as metaphors for mental schemata representing solutions to problems; thus, the latter become "more obtrusive and more accessible to reflection" (Papert, 1980, p. 145) and more likely to encourage "thinking about thinking," or, in Piagetian terms, reflective abstraction. Second, this process-oriented use of procedures itself can serve as a metaphor for componential functioning; for example, "debugging" Logo procedures serves as a metaphor for cognitive monitoring. Educational environments would, of course, have to encourage such reflection, make metacomponential processing salient, and guide the construction and application of Logo programming/cognitive-processing metaphors.

Previous and Present Research

Although some research indicates that metacomponential functioning can be enhanced through programming, there is also evidence that such Logo environments may not affect all components equally.
Across several studies using similar Logo environments, evidence is consistent only for enhancement of the metacomponent of cognitive monitoring (Clements, 1986a; Clements & Gullo, 1984; Lehrer & Randle, 1987; Miller & Emihovich, 1986; Silvern, Lang, McCary, & Clements, 1987). There is mixed support for the development of the metacomponents of deciding on the nature of the problem and selecting a representation (Clements, 1986a; Lehrer & Randle, 1987; Silvern et al., 1987) and little support for choosing and combining performance components (Clements, 1986a; Silvern et al., 1987). It may be that classroom tasks provide relatively more practice with the latter metacomponent. Research is needed that compares the efficacy of a single theoretically grounded treatment in developing different metacomponents.

Weaknesses of the tests used in previous studies also need to be ameliorated. For example, most assessed a constrained application (e.g., comprehension monitoring) of a particular metacomponent (cognitive monitoring). Parallels between programming and such tasks indicate another potential limitation in generalizability, in that both Logo and the comprehension monitoring tasks involved sequences of directions.

For these reasons, I investigated the effects of a Logo programming environment based on Sternberg's componential theory on the metacomponential abilities of third-grade children. It was hypothesized that these effects would be stronger for three metacomponents: deciding on the nature of the problem, selecting a representation, and monitoring solution processes. A new assessment of metacomponential functioning was used.

Method


Subjects

Subjects for the study were 48 children from a middle-class school system. From a pool of all children who returned a parental permission form, 20 boys and 28 girls in the third grade (mean age = 8 years 9 months) were randomly selected from the classrooms of seven teachers. Children were randomly assigned to one of two conditions, Logo computer programming or control.

Procedure

Scores from a standardized test administered schoolwide were used to determine pretreatment level of achievement. The computer activities were implemented over a period of 26 weeks (children came in 45-min shifts during the last period of the school day). At the end of these sessions, after a delay of 1 week, children were interviewed individually to determine their use of metacomponential processes. Interviews lasted from 60 to 90 min.

Instruments

Metacomponential assessment. Clements and Nastasi (in press) designed a dynamic interview instrument to measure the metacomponential functioning of third-grade children in problem-solving situations. The basic strategy is to use problems whose successful solution depends on intensive use of a single metacomponent. Children read each problem. They are allowed to solve it with no help. If they are unsuccessful, they are provided a series of five successively more specific prompts. Their raw score is the number of prompts required; these scores are transformed so that a higher score indicates that fewer prompts were required (i.e., 6 indicates success without the aid of prompts; 0 indicates lack of success after all prompts were provided). Transformed scores are summed over the items assessing each metacomponent; these raw scores are also converted to z scores to facilitate interpretation. Items measuring each metacomponent are presented to children in random order.

Two basic assumptions concerning the prompts are made. First, if children are successful on an item emphasizing a certain metacomponent, or if they are successful given one or more prompts, then they are using that metacomponent. Second, the number of prompts needed with a given metacomponent is inversely related to the degree of retrievability of that metacomponent.

Two different measures are taken for each item: (a) utilization—the number of prompts necessary for the children to exhibit use of the metacomponent (i.e., to "get the idea")—and (b) correctness—the number of prompts necessary for the children to respond correctly. Scoring for correctness is straightforward: The score is the prompt number at which children give the correct answer. The criteria for scoring the utilization measure are as follows.
For deciding on the nature of the problem (nature), children must exhibit a sign that they are asking the right question (or questions) to solve the problem and that they understand the structure of the specific problem (or type of problem). They may subdivide the problem, redefine goals in keeping with the problem, or start a correct solution process. For choosing and combining performance components relevant to the solution of the problem (performance), children must start to choose and combine processes (i.e., two or more steps) that may lead to a correct solution. They must indicate a systematic strategy for combining the selected processes. For selecting a representation (representation), children must show evidence of using a mental model related to the problem—for example, mental imagery or a drawn figure, or a semantic or arithmetic structure. For cognitive monitoring (monitoring), children must show evidence indicating their belief that something is wrong.

Criteria for selecting items to measure each component were twofold. First, a logical criterion was established for each metacomponent. Second, empirical data had to demonstrate that children presented with each item benefited most from prompts directed at the metacomponent to be measured. (This was established during a pilot study.) The logical criteria, along with example items, follow.

The criterion for nature was that items be difficult to solve "because people tend to misdefine their nature" (Sternberg, 1985, p. 44). For example, young children often use association instead of mapping relations in analogy tasks; that is, they misdefine the problem (cf. Clements, 1987; Sternberg, 1985). One analogy problem was: boy pulling wagon is to girl pushing child on swing as car pulling trailer is to (ski lift, bulldozer pushing dirt, horse pulling cart, dogs pulling sled). (Note that these were presented to children in a pictorial, matrix format.) The prompts were as follows: (a) "What do you think I want you to do?" (b) "What kind of problem is this?" (c) "Look! These pictures are related, or go together, in a certain way." (d) "We give you these two pictures. You need to find what goes here so that these two (indicate bottom two, globally) go together in the same way as these two (indicate top two)." (e) "The boy pulling the wagon and the girl pushing the swing are related to each other; they are doing the opposite.
The car pulling the trailer goes through in the same way with one of these (indicate answers)." The instrument included three items measuring this metacomponent.

The criterion for performance was that items demand not just the choice of performance components but also their combination into a workable strategy. For example, several mathematics problems were included that required children to choose both the operations to use and the order in which to execute them. One problem asked, "John wanted to know how much his cat weighed. But the cat wouldn't stay on the scale unless he was holding it. How could he figure out the cat's weight?" The prompts were as follows: (a) "What could you do to solve the problem?" (b) "What plan would you use?" (c) "Could John get on the scale with the cat and get on again alone?" (d) "How would John find out how much the cat weighed alone?" (e) "Think of the difference between his weight with the cat and his weight alone." The instrument included six items measuring this metacomponent.

The criterion for representation was that successful solutions be contingent on the construction of an appropriate representation. For example, syllogisms frequently are solved easily if there are but three elements, but young children require an internal, or most probably external, representation, such as a vertical or linear array, to solve those with more than three elements. One syllogism used was, "Bill is faster than Tom. Pete is slower than Tom. Jack is faster than Bill. Jack is slower than Fred. Who is fastest?" The prompts were as follows: (a) "What could you picture in your head or on paper to help solve the problem? Think of pictures or words that tell or show you which is the fastest." (b) "What picture or diagram could you make on the paper? How would you tell which one was the fastest?" (c) "Could you make a picture or a line for Bill? Could you put his name next to it to remember which child is which?" (d) "Could you make a line or picture for Tom? When someone is faster than someone else, would the picture or line be shorter or longer?" (e) "Make a picture of all the children. Each line will be a child with a name next to it. Longer lines are faster children."
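As an aside (not part of the original instrument), the five-element syllogism above can be checked mechanically: encode each statement as a (faster, slower) pair; the fastest child is the only one who never appears as the slower member of any pair. A minimal Python sketch:

```python
# Pairwise "faster than" facts from the syllogism item, as (faster, slower) pairs.
faster_than = {
    ("Bill", "Tom"),   # Bill is faster than Tom
    ("Tom", "Pete"),   # Pete is slower than Tom
    ("Jack", "Bill"),  # Jack is faster than Bill
    ("Fred", "Jack"),  # Jack is slower than Fred
}

children = {name for pair in faster_than for name in pair}

# The fastest child is the only one who is never the slower member of a pair.
fastest = (children - {slow for _, slow in faster_than}).pop()
print(fastest)  # Fred
```

Chaining the pairs yields the linear array the prompts aim at (Fred, Jack, Bill, Tom, Pete), which is exactly the external representation the item is designed to elicit.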

Another example is the problem, "Four children, A, B, C, and D, call each other on the telephone. Each talks to every other child on the phone. How many calls are there?" The five prompts ranged from "What could you picture in your head or on paper to help solve the problem? Or, what numbers could you use?" to "Would it help to put the children in a square? Could you show all the calls with lines? Each pair of children makes one call. Keep track of what you found in a chart or table if you like." The instrument included seven items measuring this metacomponent.

The criterion for monitoring was that items induce errors. Children were purposely misled in some way by erroneous information. For example, "When Albert was 6 years old, his sister was 3 times as old as he. Now he is 10 years old and he figures that his sister is 30 years old. How old do you think his sister will be when Albert is 12 years old?" The prompts were: (a) "Do you have to watch out for mistakes when you do this problem?" (b) "Is there something in the problem that could trick you if you weren't careful?" (c) "Is Albert right when he figures that when he is 10 years old his sister is 30 years old?" (d) "Will his sister always be 3 times as old as Albert? Is that a mistake? Should you multiply or add years?" (e) "Don't make a mistake. When Albert was 6, his sister was 18. Twelve years older. What would his sister be 1 year later, when Albert was 7? One year later? (continue)." The instrument included five items measuring this metacomponent.

Pretreatment assessment of achievement. The California Achievement Test, Level 13 (CAT; CTB/McGraw-Hill, 1979) is a series of test batteries in reading, language arts, and mathematics. The mathematics scores of the CAT were recorded (K-R 20 reliability = .95) along with the total battery scores (r = .98) because items on the metacomponential assessment tended to be logical or mathematical.
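The prompt-based scoring described under Instruments (a raw score of prompts used, transformed so that 6 means unaided success and 0 means failure after all five prompts, summed per metacomponent, then converted to z scores) can be sketched as follows. This is an illustrative reconstruction from the text, not the authors' code:

```python
from statistics import mean, stdev

MAX_PROMPTS = 5  # five successively more specific prompts per item

def item_score(prompts_used, solved):
    # Transformed score: 6 = success with no prompts; 0 = unsolved after all five.
    return (MAX_PROMPTS + 1) - prompts_used if solved else 0

def subtest_score(items):
    # items: (prompts_used, solved) pairs for one metacomponent's items.
    return sum(item_score(p, s) for p, s in items)

def z_scores(raw_scores):
    # Convert the children's raw subtest scores to z scores for interpretation.
    m, sd = mean(raw_scores), stdev(raw_scores)
    return [(x - m) / sd for x in raw_scores]
```

For example, a child who solved one item unaided (6), solved a second after two prompts (4), and failed a third (0) would receive a raw subtest score of 10 before standardization.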

Treatments

Logo children met for three sessions per week for a total of 78 sessions. Children's absences ranged from 0 to 15 sessions, with a mean of 5.1. During each session, six pairs of children worked on six Apple computers under the guidance of one or two adults (the "teacher"—a graduate assistant experienced in teaching with Logo and other computer tools—and the author). Both adults were present for about two-thirds of the lessons; one or the other was present for the remainder.

It was planned that sessions would consist of four phases: (a) an introduction that included a review and discussion of the previous day's work, questions, and, about once per week, a "programmers' chair" (in which a pair of students presented a completed program); (b) a teacher-centered, whole-group presentation that offered new information (e.g., a new Logo command) or a structured problem; (c) a phase in which students worked independently on either teacher-assigned problems (about 25%) or self-selected projects (including projects for which the teacher introduced "themes" but students were responsible for selecting the specific problem); and (d) a phase in which the teacher provided a summary and encouraged sharing with the whole group.

The sessions commenced with an explanation of the purpose of the treatment (to develop problem-solving abilities) as well as the substance of the treatment (programming in Logo). Children played games (first off, then on, computers) that familiarized them with basic turtle movements and estimation of the measures of these movements. Children were challenged to determine the exact length and width of the screen in turtle steps and to create as many ways as they could to get to a given location. For all challenges, students discussed their solutions and other situations in which such strategies would be useful.
Procedural thinking was introduced—first through discussions of children's experiences of learning and teaching new routines, ideas, and words, then through the notion of teaching Logo procedures. Children used a support program that allowed them to define a procedure and simultaneously watch it being executed while editing whenever necessary (Clements, 1983/84). They were challenged to create a stairway via dramatizations, then paper and pencil, and finally with Logo, constructing a stair procedure as a subprocedure for stairway. Thus, problem decomposition and the use of procedures were introduced from the beginning. Different solutions were compared, and children were encouraged to construct and discuss variations of their procedures (e.g., What was altered in the procedure to create what effect?). In this way, there was an attempt to help children construct mappings between components of procedures and their effects. Finally, students were asked to plan what they could make with stairway (i.e., how stairway itself could be used as a subprocedure). These programs were in turn analyzed.

At this point, children were introduced to the "homunculi," cartoon anthropomorphisms of the metacomponential processes. The homunculi were represented and introduced as follows:

1. The problem decider was a person thinking about what a problem means (via a "think cloud"). The problem decider often asked questions such as, "What am I trying to do?," "Am I doing what I really want to do?," "Have I done a similar problem before?," "How do the parts of the problem fit together?," and "What information do I have or need?"

2. The representer was an artist with her arm extended in front of her and thumb raised, looking off into the distance. She was surrounded by a piece of paper with a graph or chart, another piece of paper with writing, a drawing, and a three-dimensional model. These served as metaphors for various ways to represent a problematic situation. Specific representations were introduced when appropriate (e.g., drawing a diagram or picture).

3. The strategy planner was an intelligent-looking man with pencils and pens in his pocket, holding a notebook.
Spaced over the remaining sessions, useful strategies in the strategy planner's repertoire were introduced, such as specific programming steps (described subsequently in this section), decomposing a problem, and guessing and testing (systematically).

4. The debugger was an exterminator—a metaphor for cognitive monitoring (which is more omnipresent in problem solving than is "debugging" proper). To develop this more general cognitive monitoring, students were frequently asked, "What exactly are you doing?" ("Can you describe it?"), "Why are you doing it?" ("How does it fit into the solution?"), "How does it help you?" ("What will you do with it when you're done?"), and "Does this make sense?" (from Schoenfeld, 1985).

These homunculi were introduced as a part of the Logo programming and problem-solving process. They aided four teaching methods: explication, modeling, scaffolding, and reflection. The goal of explication was to bring problem-solving processes to an explicit level of awareness for the children. The teacher used the homunculi to describe processes in which one had to engage to solve many types of problems. When the whole class solved problems, the teacher would use the homunculi metaphor to describe problem-solving processes and make them salient. The teacher would also model the use of the homunculi-based processes in solving actual problems. In scaffolding students' independent work with Logo, the teacher would try to ascertain the process with which the student was having difficulty and would offer prompts and hints focusing on this particular process (e.g., "Might there be a pattern you could find that would help? What could you write down to try to find it?"). If necessary, the teacher would model the use of the process directly.
Finally, reflection was used in the first phase of the lesson, as teachers elicited group discussion about a pair of students' use of the homunculi in solving programming problems, and in the third phase, as students were asked to reflect on their use of strategies in terms of the homunculi.

A general programming strategy for the planner's repertoire was introduced briefly to the group, then elaborated as teachers worked with pairs:

1. Make a "creative drawing"—a free-hand picture of your project. Remember to keep it simple and label its parts.

2. Make a planning drawing. Using a planning sheet (paper turned broadside with a turtle drawn at home), draw the turtle where it starts the procedure; draw and label each line, turn, or procedure; use a ruler and circular protractor for measurements; and have the turtle end in the same location and same heading at which it started (i.e., construct a "state transparent" procedure). For each new procedure, make a separate planning sheet (i.e., start at the beginning of Step 2 for each new procedure).

3. Have one partner read the instructions in order as the other records them at the right-hand side of the planning sheet to construct procedures.

4. Type these procedures into the computer.

Use of the metacomponents was reviewed and encouraged as children worked on projects that emphasized basic geometric figures (e.g., square, equilateral triangle, and rectangle), variables, and regular polygons; seasonal interests (e.g., writing valentine heart procedures using arcs); contests (e.g., writing the shortest, or most elegant, program for a "stacked rectangle pyramid" or duplicating given figures and using them as many ways as possible in the creation of a picture); list-processing projects (e.g., writing a "madlibs" generator or a simple conversationalist program); and collaborative work on a mural.

The purpose of the comparison group was to serve as a placebo control; that is, because children were volunteers treated as special by the school, there was a threat of a Hawthorne effect. Thus, these children also received computer experience under the same conditions as the experimental group (i.e., six pairs of children working with the same teachers), with two important differences.
First, the content, designed to develop creative problem solving and literacy, included composition using Milliken's Writing Workshop (an integrated package of prewriting programs, a word processor, and postwriting, or editing, programs), as well as drawing programs. A composition process model (based on Calkins, 1986), including the processes of prewriting (e.g., brainstorming), writing, revision (conferencing), and editing (e.g., checking spelling and grammar), served as a framework for instruction. Thus, several characteristics of the Logo treatment were paralleled in the control group, including self-selection of topics and interpersonal interaction; however, the integration of Logo programming and anthropomorphic instruction in metacomponential functioning was unique to the experimental group. The second difference between the groups was that the control group met only once per week for a total of 26 sessions (mean absences = 0.9). All children received minimal exposure to Logo (2 weeks at 20 min per day) as part of the regular school program. One control child acquired a computer equipped with Logo at home during the study.

Results

Pretreatment Achievement

Table 1 presents the means and standard deviations of the pretreatment scores of the two groups, Logo programming and control. Pretreatment mathematics achievement, as measured by the CAT's mathematics score, was nearly identical, t(46) = -.04, p = .96. There was also no significant difference on the CAT's total battery score, t(45) = -1.22, p = .23. Correlations between the total battery score and the total metacomponential score were moderate (r = .42, p < .01 for utilization; r = .61, p < .001 for correctness). Sharing a moderate amount of variance with a measure of academic achievement is typical of a measure of intelligence and provides evidence of criterion-related validity for the metacomponential assessment.
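To make "a moderate amount of variance" concrete: squaring a Pearson correlation gives the proportion of variance two measures share. This is the standard r-squared interpretation applied to the correlations reported above, not a computation from the article itself:

```python
# r squared as the proportion of variance shared between the CAT total battery
# and the total metacomponential score, for the two reported correlations.
for label, r in [("utilization", 0.42), ("correctness", 0.61)]:
    print(f"{label}: r = {r:.2f}, shared variance = {r * r:.0%}")
```

That is, the assessment shares roughly 18% (utilization) and 37% (correctness) of its variance with the CAT total battery.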

Table 1
Means and Standard Deviations for Treatment Groups on Pretreatment Achievement (CAT)

                      Logo               Control
Measure            M       SD         M       SD
Mathematics      692.46   38.20     692.92   32.29
Total battery    696.78   26.76     706.29   26.67

Note. CAT = California Achievement Test; scores are standardized to a single, equal-interval scale from 000 to 999. For each group, n = 24.
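As a check on the pretreatment comparison reported above, the t statistic can be reproduced from Table 1's summary statistics alone. The sketch below is a minimal pure-Python illustration (group sizes of 24 per the table note), not the study's analysis software.

```python
import math

def pooled_t(m1, sd1, n1, m2, sd2, n2):
    """Independent-samples t statistic with pooled variance; returns (t, df)."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# Table 1 mathematics means and SDs: Logo vs. control, n = 24 each.
t, df = pooled_t(692.46, 38.20, 24, 692.92, 32.29, 24)
# t comes out close to the reported value of -.04 with df = 46
```

The same function applied to the total battery row recovers a t near the reported -1.22 as well.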

Metacomponents

Table 2 presents the means and standard deviations of the posttest scores of the two conditions, as well as reliability estimates. Results revealed high interrater agreement for all interview scores; internal consistency was high for total test scores but lower for subtest scores (Table 2). Intercorrelations among the subtests were moderately high, indicating from 14% to 49% shared variance between pairs of subtests (Table 3). Differential use of metacomponents on items within the subtests was validated by categorizing children's problem-solving behaviors (recorded during test administration) according to metacomponent. For each metacomponential category, the highest percentage of behaviors was elicited by the corresponding subtest (Table 4; see Clements & Nastasi, in press, for complete descriptions of these analyses). This provides evidence of the construct validity of the instrument and its potential for differentiating metacomponential processes. Moderate to low reliability on some subtests and moderate amounts of shared variance among the subtests, however, suggest that one must exercise caution in interpreting results concerning individual subtests.

To test differences between the groups on the four scores of the metacomponential assessment simultaneously, a multivariate analysis of variance (MANOVA) was performed on the standardized scores for each measure, correctness and utilization. For correctness, analyses revealed a significant omnibus treatment effect, F(4, 43) = 3.15, p < .05, in favor of the Logo group. To identify specific variables on which the groups differed meaningfully and to indicate the relative contribution of subtests to the treatment effect, a stepwise discriminant analysis was performed (Pedhazur, 1982). This analysis is appropriate for mutually correlated variables in that one variable is included in the discriminant function at each step (this variable being the one that results in the most significant F value after variables already included in the model have been adjusted for). This indicates whether a subtest becomes superfluous because of the relationship between it and subtests already in the model. Structural coefficients greater than or equal to .30 were considered meaningful (Pedhazur, 1982). Structural coefficients for the four correctness variables were monitoring, .58; representation, .43; nature, .26; and performance, -.11. On this basis, it was determined that only monitoring and representation had meaningful structural coefficients on the correctness measure.
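A structural coefficient of the kind reported here is, by the standard definition, the correlation between a predictor (subtest) and scores on the discriminant function. The following minimal sketch illustrates that computation on made-up data for two hypothetical subtests; it is not the study's analysis, and the discriminant weights are invented for illustration.

```python
from statistics import fmean

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = fmean(x), fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical subtest scores for six children (toy data).
subtests = {
    "nature":     [3, 5, 4, 8, 7, 9],
    "monitoring": [2, 4, 5, 7, 8, 9],
}
# Toy discriminant-function scores; in practice the weights come from
# the discriminant analysis itself.
disc = [0.6 * a + 0.4 * b
        for a, b in zip(subtests["nature"], subtests["monitoring"])]

# Structural coefficient = correlation of each subtest with the function.
structural = {name: round(pearson_r(x, disc), 2)
              for name, x in subtests.items()}
```

Because structural coefficients are correlations, they range from -1 to 1, which is why a cutoff such as .30 (Pedhazur, 1982) can be applied uniformly across subtests.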


Table 2
Means, Standard Deviations, and Reliability Estimates for Treatment Groups on Metacomponential Measures

                         Standard score                       Raw score              Total
                     Logo          Control            Logo           Control        possible
Measure             M      SD     M       SD         M      SD      M      SD      raw score    α     %

Correctness
Nature            0.154  1.088  -0.154   0.900      6.83   4.85    5.46   4.01       18       .29    99
Performance      -0.057  0.938   0.057   1.076      9.90   5.88   10.25   6.73       36       .55    90
Representation    0.222  0.818  -0.222   1.128     21.46   7.45   17.42  10.28       42       .73    98
Monitoring        0.296  1.048  -0.296   0.874     13.33   5.61   10.17   4.68       30       .49   100

Utilization
Nature            0.365  0.915  -0.365   0.964     11.33   4.33    7.88   4.56       18       .36    88
Performance       0.106  0.943  -0.106   1.064     23.08   7.62   21.38   8.60       36       .70    88
Representation    0.343  0.625  -0.343   1.186     36.75   3.45   32.96   6.56       42       .62    88
Monitoring        0.284  0.906  -0.284   1.027     16.58   4.98   13.46   5.64       30       .44    86

Note. α = coefficient alpha; % = percentage of interrater agreement.
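The coefficient alpha reported in Table 2 is the usual internal-consistency estimate, α = k/(k-1) · (1 - Σσ²_item / σ²_total). A minimal sketch of that formula on made-up item scores (not the study's data):

```python
def cronbach_alpha(items):
    """items: one score list per item, each with one entry per examinee.
    Returns coefficient (Cronbach's) alpha."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-examinee totals
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical scores on three items for five examinees.
alpha = cronbach_alpha([
    [1, 2, 2, 3, 3],
    [1, 1, 2, 3, 3],
    [0, 2, 2, 2, 3],
])
```

Alpha rises when items covary strongly relative to their individual variances, which is why the short subtests in Table 2 show lower values than the total scores.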

For utilization, analyses revealed a significant omnibus treatment effect, F(4, 43) = 3.35, p < .05, in favor of the Logo group. Structural coefficients for the four variables were nature, .71; representation, .66; monitoring, .54; and performance, .19. Thus, it was determined that nature, representation, and monitoring had meaningful structural coefficients on the utilization measure.

To validate the implementation of the treatment, informal observations of the teacher's interaction with the children were conducted no less than once per week. It was found that the sessions followed the four-phase plan with one exception: The fourth phase, summary and sharing with the whole group, was seldom realized (fewer than 10 sessions), as students left for their classrooms or homes at different times. The observations also confirmed that children were encouraged to create their own problems, think through these problems on their own, and use metacomponential processing as an aid to posing and solving problems (via the "homunculi" and the teaching methods of explication, modeling, scaffolding, and reflection). For example, the teacher used the homunculi metaphor to make metacomponential processes salient by remarking,

Table 3
Intercorrelations of Metacomponential Interview Subtest Scores

Subtest            Nature   Performance   Representation

Correctness
Performance         .40*
Representation      .54**      .69**
Monitoring          .39**      .52**          .53**

Utilization
Performance         .42*
Representation      .50**      .69**
Monitoring          .38*       .66**          .67**

*p < .05. **p < .01.
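The shared-variance figures quoted earlier (14% to 49% between pairs of subtests) are simply squared intercorrelations expressed as percentages. A quick illustration, using a few r values of the same magnitude as those in Table 3:

```python
# Shared variance between two measures = r squared, as a percentage.
shared = {r: round(100 * r * r) for r in (0.38, 0.53, 0.69)}
# e.g., r = .38 corresponds to about 14% shared variance,
# and r = .69 to nearly half the variance.
```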