
Journal of Mathematics Teacher Education (2006) DOI 10.1007/s10857-006-4084-1

© Springer 2006

HELENA P. OSANA, GUY L. LACROIX, BRADLEY J. TUCKER and CHANTAL DESROSIERS

THE ROLE OF CONTENT KNOWLEDGE AND PROBLEM FEATURES ON PRESERVICE TEACHERS’ APPRAISAL OF ELEMENTARY MATHEMATICS TASKS

ABSTRACT. The objective of this study was to examine the nature of preservice teachers’ evaluations of elementary mathematics problems using the Mathematical Tasks Framework (MTF), a model designed to discriminate among tasks according to their cognitive complexity. We also tested the relationship between mathematics content knowledge and problem length on the preservice teachers’ evaluations. Twenty-six undergraduate students enrolled in an elementary mathematics methods course at a large urban university were introduced to the MTF and cognitive complexity during a class lecture and were subsequently required to sort 32 mathematics problems according to the framework. Results demonstrated that, overall, the preservice teachers had more difficulty accurately classifying problems considered to represent high levels of cognitive complexity than problems that were less complex. Those with strong mathematics content knowledge, as measured by a standardized test, were able to sort the problems more accurately than those with weaker content knowledge. Two open-ended items assessing content knowledge were not related to sorting performance. Finally, the preservice teachers were influenced by the surface characteristic of task length; the data indicated that the teachers tended to label short problems as less cognitively demanding and long problems as more so. Implications for preservice professional development include an increased emphasis on mathematics content knowledge as well as expert modeling of the identification of deep conceptual principles at the heart of the mathematics curriculum.

KEY WORDS: cognitive complexity, mathematics content knowledge, preservice teacher education, preservice teacher thinking, surface versus deep features of mathematics problems

Among the components of mathematics knowledge for teaching (Ball & Bass, 2003b; Hill, Schilling, & Ball, 2004) is mastery of the structure and processes of mathematics as a discipline (Ball, 1996; Fennema & Franke, 1992; Kilpatrick, Swafford, & Findell, 2001; Leung & Silver, 1997). A growing body of research indicates that the deeper teachers’ understanding of mathematics, the more they engage their students in thinking productively about the conceptual nature of the
subject (e.g., Fennema & Franke, 1992). Several scholars in the mathematics education community have made clear, however, that although knowledge of content is a necessary condition for effective teaching in mathematics, it is not sufficient (e.g., Kahan, Cooper, & Bethea, 2003; Kilpatrick, Martin, & Schifter, 2001). Shulman (1986) pointed to the importance of “pedagogical content knowledge,” which includes teachers’ knowledge of student conceptions and of ways that ideas and principles can be represented for students to grow in their disciplinary understanding.

Further, researchers have recently been paying closer attention to teachers’ conceptions of mathematics curricula and to the subsequent effects of these conceptions on instructional practice. A focus on teachers’ curriculum knowledge (Shulman, 1986) is valuable because, as Ball (2000) stated, “acquiring the ability to think with precision about mathematical tasks and their use in class can equip teachers with more developed skills in the ways they select, modify, and enact mathematical tasks with their students” (p. xii). Furthermore, case studies (e.g., Clark & Elmore, 1981; Clark & Peterson, 1986; Yinger, 1980) have shown how curricular materials and guides play a pivotal role in the instructional planning processes of elementary teachers (those who teach children in the primary grades). Making decisions about classroom practice, that is, deciding what to teach and how to teach it by selecting content, texts, materials, and activities, is the “stuff of the teaching profession” (Hawthorne, 1992, p. 1), and Lappan (1993) claimed that a teacher’s selection or creation of tasks has the greatest impact on students’ learning and perceptions of mathematics.
Recent research has shown that teachers who use more cognitively demanding tasks tend to engage their students in conceptually oriented mathematical problem solving more than teachers who routinely engage students in low-level procedural tasks (Campbell, 1996; Silver & Stein, 1996; Stein & Smith, 1998). The type of task is not the final arbiter of cognitive engagement in the classroom, however; the ways students and teachers interact with the task ultimately shape the type of mathematical thinking that occurs. Many times, teachers will lower the cognitive level of a task either by outlining specific steps or procedures or by taking over parts of the task (Doyle, 1988; Henningsen & Stein, 1997; Kilpatrick et al., 2001; Stein, Grover, & Henningsen, 1996). Nevertheless, a teacher’s selection of tasks appears to be a critical first step in providing students the opportunity to engage in high-level mathematical thinking: while teachers’ actions and the classroom environments they create can turn a cognitively demanding task into a
low-level procedural one, low-level tasks are almost never conducive to high levels of student engagement (Henningsen & Stein, 1997; Stein et al., 1996).

STUDY RATIONALE

Because of its central position in effective mathematics teaching, we argue that the ability of teachers to judge the appropriateness of elementary mathematics tasks for classroom use is a critical area for further study. As it stands, too little is known about teachers’ precise interpretations and perceived use of mathematics tasks, particularly in the planning stages of their practice, removed from the pressures and constraints within classroom and school contexts (Ball, 2002). Much of the research conducted to date has examined teachers’ interpretations of reform curricula as packages of philosophies and visions, specific classroom activities and intended objectives, instructional techniques, and ways of assessing whether targeted learning goals have been met. In so doing, researchers have drawn conclusions about the conditions under which teachers implement such mathematics curricula as intended (Lloyd, 1999; Manouchehri & Goodman, 1998; Reys, Reys, Barnes, Beem, & Papick, 1997; Romberg, 1997). Clearly, these researchers view curriculum as more complex than a collection of classroom tasks, which is in line with current thinking in this area of educational research (Bay-Williams, Reys, & Reys, 2003; McCaslin & Good, 1996). While understanding the factors that impact the likelihood that teachers will adopt reform initiatives is of considerable value, we argue that it is critical to attempt to isolate the different aspects of teachers’ understandings of elementary curriculum, such as task evaluation, so that a systematic and more fine-grained examination of mathematics knowledge for teaching is possible.

Even less is known about prospective teachers’ thinking in the domain of task evaluation, and the present study attempts to shed some light on this area in mathematics education. In particular, we investigated the ability of preservice teachers to evaluate mathematics tasks according to the cognitive requirements they impose on elementary students.
We propose that studying preservice teachers is a critical area of research for a number of reasons: in general, because teachers have few opportunities to refine their knowledge and skills once they join the workforce (National Commission on Teaching and America’s Future, 1997) and because their experiences during
their university training are likely to have the greatest impact on their development as mathematics teachers. Moreover, it appears that there are particular skills that should receive greater attention in preservice education programs. For example, studies of practicing teachers have demonstrated that evaluating curricular tasks and activities is not a trivial exercise (Arbaugh & Brown, 2002; Kleve, 2005; Prawat, 1992). Kleve (2005) and Senger (1999) found that teachers had difficulty aligning their assessments of reform curricula with their attempts to implement activities in the classroom; as such, Kleve claimed that there is a pressing need for preservice education to focus on the interpretation of mathematical tasks as well as on ways that the tasks can be translated into effective practice (see also Noddings, 1990). Clearly, research on the teaching and learning of preservice teachers promises to yield significant implications for how to enhance existing teacher education programs.

An additional focus of our investigation was to study some of the factors that affect preservice teachers’ evaluations of mathematical problems. In particular, we were interested in examining the relationship between preservice teachers’ mathematics content knowledge and their task evaluations as well as the impact of task features on their assessments. A teacher’s content knowledge in mathematics involves more than a mastery of procedures and algorithms; it also involves understanding the connections among key mathematical ideas and principles (Franke & Grouws, 1997; Ma, 1999). Ma (1999) found that, in contrast to the predominantly procedural nature of U.S. teachers’ knowledge, Chinese teachers possessed “knowledge packages” that contained important mathematical ideas that were richly interconnected and organized to cultivate these ideas in students’ minds.
In the area of subtraction with regrouping, for example, the Chinese teachers’ understanding of the topic went beyond the isolated skill of “borrowing”; their knowledge packages included nodes, or concepts, related to addition and subtraction within 10, addition and subtraction within 20, subtraction of numbers between 20 and 100, and composing and decomposing a higher value unit. Ma’s research demonstrates that expertise in elementary-level content, including knowledge of algorithms and procedures, is necessary for effective mathematics teaching, but that a keen awareness of the ways in which concepts are interrelated and build on each other differentiates teachers who are effective from those who are not. Further, without a deep understanding of mathematics, a teacher cannot effectively engage in activities that are at the core of
expert teaching, such as selecting tasks that are appropriate for students, predicting students’ specific difficulties, and representing concepts in ways that will enhance their mathematical understanding (Ball & Bass, 2000b; Cohen, 1990; Heaton, 1992; Ma, 1999; Putnam, 1992). Ma (1999) found, for instance, that the Chinese teachers were more comfortable than the U.S. teachers in allowing their students to generate their own problem solutions and in orchestrating classroom discussions about the mathematical ideas common to the students’ strategies. Ma argued that the strong content knowledge of the Chinese teachers allowed them to recognize the mathematics in the students’ explanations and to direct them to more refined understandings of the concepts involved. In another study, Putnam (1992) found that an incomplete understanding of the concept of average hindered one fifth-grade teacher’s ability to recognize incorrectly computed averages. Similarly, Heaton (1992) found that one fifth-grade teacher was not able to “carry the mathematics” to her students because of her weak understanding of the subject matter. In an authentic activity in which her students were to create a park within a $5000 budget, the teacher encouraged her students to multiply the dimensions of a rectangle to find its perimeter and ignored the concept of proportion when presenting the idea of scale. Although the teacher in Heaton’s study engaged her students in a pedagogically sound activity, the activity itself could not make up for her lack of mathematical content knowledge.

Another issue we addressed in the present study is the impact of task features on preservice teachers’ evaluations. In particular, we studied the effect of problem length on their perceptions of the tasks’ cognitive complexity. There is reason to believe that surface characteristics would interfere with the participants’ evaluations of the tasks’ underlying complexity.
Previous research has shown that, in the area of physics, novices tend to use surface features to sort problems (e.g., whether the problem contains pulleys), whereas experts use deep features, such as the rules or underlying logic that allow the problem to be solved (e.g., F = ma) (Chi, Feltovich, & Glaser, 1981). Similarly, Reusser (1986) found that the presence of numbers irrelevant to the solution hindered the performance of both high school and college students attempting to solve physics problems. Finally, Schoenfeld and Herrmann (1982) demonstrated that a course in problem solving helped novices sort college-level mathematics problems more like experts, which entailed classifying the problems according to their solutions as opposed to their surface features.


THEORETICAL FRAMEWORK

Research on Academic Tasks in Mathematics

Doyle (1988) made clear the importance of research on academic tasks in mathematics. He found that student learning and teacher expectations were to a large degree dependent on the types of tasks assigned in elementary mathematics classes. Doyle categorized mathematics tasks as “familiar” and “novel.” Familiar problems were those that required the student simply to recall a solution (such as a multiplication fact) or to use an immediately retrieved and previously mastered procedure for problem solution. Novel tasks, in contrast, required students to tap into the structure and conceptual underpinnings of mathematics to make decisions about possible solutions. Doyle found that teachers most often use familiar tasks in their classrooms, mainly because work on familiar tasks results in smoothly functioning classroom activity, where students know precisely what is expected of them and can be seemingly productive for an entire class period, focusing on computational accuracy and fluency. Of course, if mathematics reform-oriented objectives are to be met, such an instructional approach is not adequate (Ball & Bass, 2003a; Fuson, 2003; Hiebert, 2003; Hiebert et al., 1997; National Council of Teachers of Mathematics, 2000). Novel tasks, if maintained as such by the teacher, require students to struggle with meaning, which is more conducive to genuine understanding of mathematical concepts and principles (Carpenter & Lehrer, 1999; Gardner, 1991; Hiebert, 2003).

The Mathematical Tasks Framework (MTF)

More recently, Stein and her colleagues (Stein & Smith, 1998; Stein, Smith, Henningsen, & Silver, 2000) developed the MTF, which constitutes an expansion of Doyle’s (1988) original framework (the MTF is presented in Table I).
Like Doyle’s original conception, Stein and Smith’s (1998) four-level framework serves to classify mathematical tasks according to cognitive complexity, that is, according to the types of cognitive opportunities they afford the students. The two low-level types of tasks are named Memorization and Procedures without Connections. At the lowest level, Memorization tasks are unambiguous for the student and involve exact reproductions of previously known information or facts. Such tasks cannot be solved using a procedure, either because no appropriate procedure exists to solve the problem or because a procedure would render the solution process too inefficient.


TABLE I
Mathematical Tasks Framework (adapted from Stein, Smith, Henningsen, & Silver, 2000)

Lower-level demands (Memorization). Such tasks:
- Involve reproducing or memorizing learned facts, rules, formulas, or definitions.
- Cannot be solved using procedures, because a procedure does not exist.
- Are not ambiguous; they involve the exact reproduction of previously seen material.
- Have no connection to the concepts underlying the facts or rules being learned or reproduced.

Lower-level demands (Procedures without Connections). Such tasks:
- Are algorithmic; use of the procedure either is specifically called for or is evident.
- Require limited cognitive demand; little ambiguity exists about what needs to be done.
- Have no connection to the concepts or meaning that underlie the procedure being used.
- Are focused on producing correct answers, not on developing mathematical understanding.
- Require no explanations, or only explanations that focus solely on describing the procedure that was used.

Higher-level demands (Procedures with Connections). Such tasks:
- Focus on the use of procedures for the purpose of developing deeper levels of understanding.
- Suggest, explicitly or implicitly, pathways that are broad procedures with close connections to underlying concepts, as opposed to algorithms that are opaque with respect to concepts.
- Usually are represented in multiple ways, such as visual diagrams, manipulatives, and symbols; making connections among multiple representations helps develop meaning.
- Require some degree of cognitive effort; general procedures cannot be followed mindlessly, and students need to engage with the concepts that underlie the procedures to complete the task.

Higher-level demands (Doing Mathematics). Such tasks:
- Require complex and nonalgorithmic thinking; a predictable, well-rehearsed approach or pathway is not explicitly suggested by the task, task instructions, or a worked-out example.
- Require students to explore the nature of mathematical concepts, processes, or relationships.
- Demand self-monitoring or self-regulation of one’s own cognitive processes.
- Require students to access and use relevant knowledge and experiences.
- Require students to analyze the task and actively examine task constraints.
- Require considerable cognitive effort and may involve some level of anxiety for the student.

Finally, it is not necessary to invoke the concepts or principles related to the problem facts in order to retrieve the solution. Stein, Smith, Henningsen, & Silver (2000) proposed, for example, that “What is the
decimal and percent equivalent for the fraction 1/2?” is a Memorization task. Tasks labeled Procedures without Connections are those that require previously learned algorithms for solution, either specifically called for or clearly evident in the task, and no ambiguity exists with respect to which algorithm to use. Such tasks do not require the student to make connections to the concepts that underlie the algorithms, nor do they require a great deal of cognitive effort for their solution. Stein et al. proposed that “Convert the fraction 3/8 to a decimal and a percent” is an example of a Procedures without Connections task.

The two high-level tasks are labeled Procedures with Connections and Doing Mathematics. For both high-level tasks, the students are effortfully engaged in problem solving and investigate the interconnections among mathematical concepts. Procedures with Connections tasks focus the students’ attention on the necessary procedures, but these procedures are not immediately evident from the task. Such tasks require the student to engage with the concepts and principles that underlie the procedures, and, as such, they demand some degree of cognitive effort on the part of the student. An example of such a task, offered by Stein et al., is “Using a 10 × 10 grid, identify the decimal and percent equivalents of 3/5.” Finally, the highest-level task, Doing Mathematics, is complex and requires non-algorithmic thinking. Several solutions are possible for such tasks, and a previously learned pathway for solution is not explicitly suggested. For these reasons, heuristics are needed, and mathematical concepts are often used in novel ways during problem solving. To arrive at a correct or reasonable solution, the student must not only be actively engaged with relevant mathematical concepts, but he or she must also invoke metacognitive skills to examine task constraints and to handle the unpredictable nature of the solution process.
The following represents an example of a Doing Mathematics task: Shade 6 small squares in a 4 × 10 rectangle. Using the rectangle, explain how to determine each of the following: (a) the percent of area that is shaded, (b) the decimal part of area that is shaded, and (c) the fractional part of area that is shaded (Stein et al., 2000, p. 13).
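For concreteness, the three quantities the task asks for can be computed directly (our arithmetic, given only to orient the reader; the pedagogical interest of the task lies in the reasoning path, not the answers):

```latex
% 6 shaded squares out of 4 x 10 = 40 total squares
\text{fraction} = \frac{6}{40} = \frac{3}{20}, \qquad
\text{decimal} = \frac{6}{40} = 0.15, \qquad
\text{percent} = \frac{6}{40} \times 100\% = 15\%.
```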

One solution described by Stein et al. for part (a) of this problem involves recognizing that one column (4 shaded squares) would be 10% of the whole, and an additional two squares would be half a column (5%), so the six squares in all would make up 15% of the entire grid. An alternative solution might entail seeing 10 squares in a row as a quarter of the
grid (25%) and then calculating each square as comprising 2.5% of the whole; six squares would then be 6 × 2.5%, or 15%. Stein et al. maintain that one of the key differences between this Doing Mathematics problem and the Procedures with Connections problem described earlier is the non-standard whole of 40 squares; the use of a grid that is not subdivided into 100 squares requires students to think about fractions, decimals, and percents in novel ways. For example, once the six squares are shaded, students must think of ways in which these six relate to the total number in the grid, and there are several possible ways to approach this problem.

It is important to note that Stein et al. (2000) caution teachers and teacher educators to beware of task features that may give the false appearance of demanding high levels of mathematical engagement. Tasks that require the use of diagrams or manipulatives, or those that are couched in real-world contexts, are not necessarily high-level tasks. Similarly, Stein et al. explain that the number of words in the problem and the number of steps or processes involved in finding a solution also do not in and of themselves determine cognitive level. Thus, features of the problem per se are not indicators of cognitive complexity; rather, the quality of thinking required of the students by the task (as outlined in Table I) is the essential criterion for determining its level.

The Present Study

The research cited above suggests that many teachers have difficulty interpreting curricular reforms and discerning the learning objectives of mathematical tasks. These findings were based largely on case-study methodologies with small groups of teachers, and many of the conclusions were derived from self-report data. Additionally, little of the research directly examined teachers’ interactions with mathematical tasks outside of the context of complex classroom practice.
Isolating teachers’ thinking about tasks is critical, first, because it can reveal the difficulties teachers may have when interpreting the intended objectives of specific tasks, and second, because it can uncover the factors that influence their interpretations of mathematical problems, key elements in teacher planning (Clark & Peterson, 1986; Randi & Corno, 1997). Furthermore, previous researchers have identified several key elements that explain teachers’ conceptions, appraisals, and implementation of mathematics curriculum, such as content knowledge (e.g., Fennema & Franke, 1992), elements of the professional context (Porter et al., 1979; Reys, Reys, Barnes, Beem, & Papick, 1997), task
characteristics (e.g., Arbaugh & Brown, 2002), beliefs about thinking and learning (e.g., Keiser & Lambdin, 1996), and past experience (e.g., Pennell & Firestone, 1996), but their methodologies were such that they could not determine precisely which of these elements were involved in teachers’ evaluation of mathematical tasks, nor to what extent the factors explained teachers’ behaviors. Finally, to our knowledge, researchers have yet to systematically study the conceptions that preservice teachers have of the elementary mathematics curriculum; most of the research in this area has been conducted with practicing teachers, many of whom were experienced, and mainly at grade levels beyond elementary school. The key factors involved in curriculum evaluation, for both prospective and practicing teachers, thus remain an open question.

Based on our review of the literature, we hypothesized that content knowledge and task characteristics are two important influences on preservice teachers’ ability to evaluate elementary mathematics curriculum. In this study, we attempted to test this relationship more directly than has been done to this point by examining (a) preservice teachers’ ability to assess mathematical tasks according to the MTF (Stein et al., 2000) and (b) the relationship between content knowledge and task characteristics (i.e., problem length) on their assessments. We measured the preservice teachers’ ability to use the MTF to judge the cognitive complexity of mathematical tasks by administering a card sort activity in which the participants were required to place each of 32 problems in one of four levels of complexity as specified by Stein et al. (2000). The technique of task classification was pioneered by Chi et al. (1981), who used this methodology to differentiate between the problem-sorting performance of expert and novice physicists.
Schoenfeld and Herrmann (1982) and Silver (1979) used the same methodology to measure participants’ perceptions of mathematical problem similarity. Just as these researchers used the sorting technique to assess participants’ ability to grasp the mathematical structure of problems, we used it to measure the participants’ ability to assess mathematical tasks according to their cognitive complexity. The prior success of this methodology justified our decision not to collect problem-solving data from the participants. Our specific research questions were the following: (1) How well do preservice teachers classify mathematics tasks using the MTF? (2) What factors are involved in preservice teachers’ classification processes? In particular, is mathematics content knowledge related to the
accuracy of their classifications? In addition, does task length affect the preservice teachers’ ability to accurately classify the mathematics tasks?

METHOD

Participants

Participants were 26 preservice teachers enrolled in two separate elementary mathematics methods courses at a large urban university in Canada. The same instructor offered both courses, which were identical in curriculum, objectives, and organization. The first 11 preservice teachers were recruited from one section of the course and comprised 55% of the class. The remaining 15 participants were recruited from the same course offered three months later; these 15 constituted 63% of all the students in the class. Overall, then, 59% of the students agreed to participate in the study. All participants gave their informed consent to allow the researchers to use their work as data for the study.

All undergraduate students in the university’s teacher education program had taken at least one mathematics course in each year of their secondary education. Secondary school (also known as “high school”) marks the final phase of compulsory education in Canada. In Québec, the Canadian province in which the study was conducted, students enter high school after six years of elementary (or primary) school and graduate from high school after they have successfully completed five years of study. In each year of high school, students are required to take and pass at least one year-long mathematics course, which covers fundamentals of number and operation sense, algebra (after the first year), geometry, and statistics and probability. In addition, for the study cohort taken together with its two predecessors, 56% of students had taken and passed at least one mathematics course at the postsecondary level (beyond high school), exclusive of science courses. In Canada, postsecondary education is either college- or university-level study. Mathematics courses at the college level are typically calculus, linear algebra, or statistics.
Many students entering the teacher education program will have taken at least one introductory statistics course at the college level, in which topics such as tables and graphs, descriptive measures, relationships among variables, and sampling concepts are covered. Students having taken mathematics at the university level before entering the program typically will have enrolled in courses emphasizing such topics as algebra, functions, and statistics.


Materials

Task classification. A set of 32 mathematical problems was created. Initially, we created a set that contained 8 cards at each of the 4 cognitive demand levels described by Stein et al. (2000) (i.e., Memorization, Procedures without Connections, Procedures with Connections, and Doing Mathematics). The process of constructing the problems for the card sort activity entailed several meetings of our research team, at which careful readings of the MTF and discussions about a large number of problems took place. The research team carefully examined each level described in Table I, making constant reference to the descriptions and examples in Stein’s publications (e.g., Stein & Smith, 1998; Stein et al., 2000). All 32 problems were independently corroborated by three experts: (a) a mathematician with 30 years of college-level teaching experience, (b) a physicist with 30 years of college-level teaching experience, and (c) a professor of mathematics education. Of the 32 problems, there were initial disagreements between the research team and the three experts (collectively) on 11 problems. After discussion, we reached agreement with the experts on all but 6 problems, which were subsequently re-classified. No problem was reclassified as higher or lower by more than one level. The final card sort set contained 9 cards at Level 1, 7 cards at Level 2, 7 cards at Level 3, and 9 cards at Level 4.

The problems on the cards were appropriate for upper-elementary (grades 4–6) mathematics classrooms and covered four content areas: geometry, measurement, fractions, and ratio. The participants were highly familiar with these content areas. Sample cards for each level are presented in Figure 1. Card 3 is considered a Memorization task (Level 1) because the only requirement for the student to complete the task is to recall standard representations for ratios.
Card 20 is a Procedures without Connections task (Level 2) because it involves the use of the ‘‘cross and multiply’’ algorithm for finding the missing variable. For this problem, there is little ambiguity about what procedure is required or how to perform it. Card 11 is a Procedures with Connections task (Level 3) because it requires some degree of cognitive effort and no algorithm is evident from the task itself. Furthermore, the person attempting to solve the problem can use a variety of strategies (e.g., use the perimeter of each square and subtract the overlapping sides: (60  3) ) (15  4) = 120, or calculate that each side is of length 15 and multiply by 8, the number of units around the entire cake). Nevertheless, once the structure of the problem is grasped, a clear-cut

ROLE OF CONTENT KNOWLEDGE AND PROBLEM FEATURES

CARD 3: Maria wanted to find the ratio of boys to girls at the local arena on Tuesday afternoon. She counted that there were 7 boys and 3 girls that afternoon. Which of the following would not represent the ratio she observed?
a) 7 to 3   b) 7:3   c) 7/3   d) 7.3

CARD 20: 2/3 = x/15

CARD 11: Maria wants to make a long rectangular cake for her cousin's birthday. All she has is a square cake pan, so she plans on making 3 cakes and putting them next to each other, as shown in the diagram [diagram not reproduced]. She wants to buy a strip of gold ribbon to go around the entire cake. If a strip 60 cm long could fit exactly around each square cake, what length of ribbon does she need for the larger, rectangular cake?

CARD 32: Originally, three-fourths (3/4) of the drama club members were girls. Then, 2 more boys joined. Now, five-sevenths (5/7) of the drama club members are girls. If there were between 20 and 40 girls in the club to start with, how many members are in the club now?

Figure 1. Sample card sort items.

procedure can be used to determine the solution. Finally, Card 32 can be considered a Doing Mathematics task (Level 4) because the task suggests no predictable solution pathway. Specifically, heuristics are needed to think through the problem, such as trial and error or using a similar problem as a model. In this case, problem solving is effortful, and mathematical concepts related to fractions and proportions, as well as estimation skills, are required.

In addition, problem length was counterbalanced: approximately half of the problems at each level of cognitive complexity were short (4 lines or less) and the other half were long (between 5 and 11 lines). Most of the short problems consisted of equations or formulae presented in isolation from authentic contexts. In constructing the long problems, we lengthened shorter tasks by embedding mathematical quantities and relationships in real-world contexts. The extra sentences and phrases added to each long problem did not explain the problem itself; they only provided a setting connected to real-world application. By creating the long problems in this way, we ensured that length was connected as little as possible to cognitive complexity.
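The equivalence of the two solution strategies described for Card 11 can be checked with a short script (our illustration, not part of the study materials):

```python
# Card 11: each square cake has a 60 cm perimeter, so each side is 15 cm.
side = 60 // 4

# Strategy 1: add the three square perimeters, then subtract the four sides
# hidden where the cakes touch (two internal joins, each hiding two sides).
strategy_1 = (60 * 3) - (side * 4)

# Strategy 2: the combined 45 x 15 cm rectangle is 8 side-lengths around.
strategy_2 = side * 8

print(strategy_1, strategy_2)  # 120 120
```

Both computations reduce to the perimeter of the 45 × 15 cm rectangle, which is why the task admits multiple routes to the same clear-cut answer.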

HELENA P. OSANA ET AL.

Our initial set of 32 cards was created so that level of cognitive complexity and length were perfectly counterbalanced: 8 cards at each level of cognitive complexity, with 4 long problems and 4 short problems at each level. After reclassification, there were 5 short and 4 long problems at Level 1, 3 short and 4 long at Level 2, 4 short and 3 long at Level 3, and 4 short and 5 long at Level 4.

Mathematical content knowledge. The preservice teachers' mathematical content knowledge was measured in two ways. First, we used the mathematics portion of the TerraNova, Second Edition (CTB/McGraw-Hill, 2001), a 40-minute, multiple-choice measure of basic mathematical skill in the grade range of 10.6–12.9. The TerraNova is a commercial standardized test that has undergone several rounds of rigorous development and norming, resulting in an instrument that is reliable and valid (Schulte, Elliott, & Kratochwill, 2001). Assessment specifications were created and reviewed by educators across the United States, and a number of standards documents, textbooks, and curriculum guides were also consulted. In designing the items, the test developers controlled for vocabulary level, sentence structure, passage length, readability, and conceptual load. In both the art and text, careful attention was paid to the ways in which ethnicity, gender, disabilities, and age were represented. Pilot tests were conducted in classrooms across the United States, and feedback from teachers and students was taken into account. A national tryout study was then conducted with 100,000 students in several grades in both public and private schools, ensuring a wide variety of student samples. Finally, a norming study was conducted with 350,000 students in the United States from Kindergarten to Grade 12, with a sample selected to be representative of the various regions, communities, minorities, and socioeconomic groups in the nation.
The mathematics portion of the TerraNova measures understanding of broad mathematical principles and real-world problem-solving skill and is aligned with the NCTM Standards (NCTM, 2000). The mathematics subtest measures basic performance and conceptual understanding in the following areas: number and number relations; computation and numerical estimation; operation concepts; measurement; geometry and spatial sense; data analysis, statistics, and probability; patterns, functions, and algebra; problem solving and reasoning; and communication.


Second, our research team created a 20-minute Doing Mathematics test that contained two open-ended items. Each item required the preservice teachers to describe their thinking by writing out a solution. The items, presented in Figure 2, were non-algorithmic problems that required the participants to combine and apply their understanding of several mathematical concepts. The first problem (the ‘‘Volume problem’’) was taken from Reys, Lindquist, Lambdin, Smith, and Suydam (2003) and combined concepts of geometry, measurement, and spatial reasoning. The second item (the ‘‘Pizza problem’’) was taken from the Concept Institute on Fractions (n.d.) and assessed the application of fractions. Our reason for adding the Volume and Pizza problems to our content knowledge measure was to ensure that we obtained an index of the participants’ ability to engage in the type of mathematical thinking advocated by the NCTM (2000), which can be characterized by nonroutine reasoning and problem solving that involves connections to important concepts and principles in mathematics. Both these problems were similar to those described by Romberg and Wilson (1992),

1) Volume Problem
You can change one dimension of this rectangular solid by 1 cm.
[Diagram of a rectangular solid: Length = 8 cm, Width = 3 cm, Height = 4 cm]
Which dimension would you change to change the volume: (a) the most? (b) the least?

2) Pizza Problem
At a pizza party, there were 18 pizzas for 24 people. At the restaurant, the people were seated at tables of 4, 6, 6, and 8. How many pizzas should be served to each table so that the pizzas are fairly distributed?

Figure 2. Doing mathematics test items (Volume and Pizza problems).


who illustrated several test items that were aligned with the NCTM Standards and, as such, met the criterion of complex mathematical reasoning requiring knowledge at the conceptual level. We thus argue that the Volume and Pizza problems rendered our content knowledge measure more comprehensive. The three experts who rated the 32 cards for the card sort task rated the Volume and the Pizza problems as Doing Mathematics. Moreover, our analysis of pilot data for both problems indicated that the pilot participants used a variety of strategies to solve each problem, spent considerable time analyzing each problem, and actively examined the parameters that constrained the multiple strategies they generated. Thus, the mathematical knowledge necessary to solve the Volume and Pizza problems successfully goes beyond an understanding of isolated pieces of information (such as the formula for calculating volume) and procedural skill; solving the two Doing Mathematics problems necessitates an understanding of the ways in which key mathematical ideas are interrelated and an ability to reason through a complex problem space.

Procedure

The classification task took place during class time immediately following a 45-minute lecture on the MTF. The lecture included a description of each of the four categories in the MTF as well as a whole-class discussion of four problems, one exemplifying each of the four levels (the four problems were taken from Stein et al., 2000). During the discussion of each problem, the instructor modeled correct evaluations of cognitive complexity using level-appropriate features of the MTF, such as whether a known procedure was called for, whether the problem required a close examination of underlying concepts, and whether one or more strategies were appropriate for its solution.
The instructor had included the MTF as part of the mathematics methods course in several semesters prior to the study and had established that 45 minutes is a sufficient amount of time for the students to understand and apply the MTF to elementary mathematics problems. This decision was based on past students’ responses on examination questions about the MTF as well as small-group and whole-class discussions with students about the framework. Each participant was then provided with the set of 32 randomly ordered cards with a mathematical problem presented on each. The instructor asked the participants to read each card carefully and to


evaluate its cognitive complexity; the instructions included no directive to think about how to solve the problems. The preservice teachers were also told to assess only the cognitive complexity of each problem and not the way in which the problem was presented on the card, such as the vocabulary used or other non-mathematical features. Finally, the participants were told to evaluate the cognitive complexity of each problem as if it were given to elementary students in grades 4–6. This point is particularly important because the prior knowledge of students solving mathematical problems is an important predictor of the level at which tasks are implemented in the classroom (Smith & Stein, 1998; Stein et al., 2000). Giving the participants a grade range for which to evaluate the problems defined the expected prior knowledge; if the participants over-estimated the prior knowledge of the students for whom the problems were intended, the result would be lower classifications of higher-level tasks. Because the problems were calibrated for children in fourth, fifth, and sixth grades, we did not ask the undergraduate students who took part in the study to solve the problems themselves, but merely to evaluate their complexity. Given the number of content courses the participants had taken in high school, and given that over half of the students in the teacher education program take at least one mathematics course at the postsecondary level before enrolling in mathematics methods, we argue that they would have been able to solve the problems easily. On an answer sheet, the participants circled the number from 1 to 4 corresponding to the level of complexity that they believed best matched each problem. No books or notes were permitted during the card sort task, except for a handout containing brief descriptions of each level, as covered in the lecture. There was no time limit for completing the task.
The content-knowledge test was administered outside of class time. Both the mathematics portion of the TerraNova and the open-ended Doing Mathematics problems were administered in the same one-hour session. Participants worked independently and were not permitted to use any books or calculators. There was a time limit of 40 minutes to complete the TerraNova and a limit of 20 minutes for the two Doing Mathematics problems.

Coding and Scoring

Card sort. Individual performances on the sorting task were computed in two different ways. First, a score based on the number of correctly


sorted problems was calculated for each participating preservice teacher. A second score, based on the average distance from the agreed-upon classification, was also calculated. For each participant, a correct answer was given a score of 0, an answer that was one level of complexity away from the correct answer (e.g., if the correct answer was 2 and the participant responded 1 or 3) was given a score of −1, an answer that was two levels away was given a score of −2, and an answer that was three levels away was given a score of −3. The benefit of this approach was that participants received credit for being close to the target level for each problem, yielding a more sensitive measure of sorting accuracy. An average distance score per participant was then obtained by calculating the mean of these scores. For problems at Levels 1 and 4, the maximum distance is 3; for problems at Levels 2 and 3, the maximum distance is 2. Thus, the average maximum distance is (3 + 2 + 2 + 3)/4 = 2.5, yielding 0 to −2.5 as the possible range of scores.

Content knowledge. The preservice teachers' responses on the TerraNova standardized measure were scored according to the instructions in the guide that accompanied the test materials. There were 25 items on the test, and scores were calculated by tabulating the number of correct answers. Each correct answer was awarded one point; thus, the maximum score on this measure was 25. A pilot study was conducted with 11 university students to construct a rubric for scoring each of the two Doing Mathematics problems. Two members of the research team met several times to discuss and compare codes for the pilot data; any necessary changes to the rubric were made at each round of code comparisons. When all discrepancies were resolved, a third member of the research team coded the pilot data with the rubric. Final revisions were made to the rubric when all disagreements were resolved among all three coders.
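Returning to the card sort, the distance-based score can be sketched in a few lines (an illustrative implementation with hypothetical responses; the study's actual analysis code is not described):

```python
def distance_score(correct_level, response):
    """Negative distance between a response and the correct level (0 = exact match)."""
    return -abs(correct_level - response)

def average_distance(correct_levels, responses):
    """Mean distance score across one participant's card classifications
    (0 to -2.5 is the possible range for this card set)."""
    scores = [distance_score(c, r) for c, r in zip(correct_levels, responses)]
    return sum(scores) / len(scores)

# Hypothetical participant: correct on two cards, one level off on a third.
print(round(average_distance([1, 4, 2], [1, 4, 3]), 2))  # -0.33
```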
One trained rater used the final rubric to code the responses of the 26 preservice teachers. Each solution was coded for the final answer, mathematical concepts necessary for solution, and strategy used. Cai, Lane, and Jakabcsin (1996) argued that two important criteria for judging students’ responses to open-ended mathematical tasks are mathematical (or conceptual) knowledge and strategic knowledge. Thus, because a major component of the rubrics for the Volume and Pizza tasks included these two forms of mathematical knowledge and did not take students’ linguistic or argumentation skills into account, we argue that the approach we adopted with respect to rubric design was conceptually appropriate.


The central concept for the Volume problem was the change in volume that resulted from a change in one dimension. We noticed that some of the pilot participants did not take volume change into account when solving the problem. For example, one participant's solution entailed reducing the width by one unit to "get the least amount of its volume" without considering the resulting change from the original volume. With respect to strategy, it is possible to arrive at a solution to the Volume problem by examining the cuboid and realizing that only the dimensions of length and width need to be considered, for changes in volume of 12 and 32 cm³, respectively. We labeled this the "two calculation strategy." A slightly less efficient strategy (the "three calculation strategy") involves calculating either the addition or the subtraction of 1 cm from each of the three dimensions, knowing that the same change in volume would result in either direction. Finally, the least efficient strategy entails performing all six calculations. Although the concept of volume, whether represented in the traditional "length × width × height" form or in less standard ways¹, was critical to the solution of the problem, we did not include it as a separate component of the rubric because the concept was subsumed under the strategies portion of the rubric. A correct answer to the Volume problem (i.e., changing the width would affect the volume the most and changing the length would change it the least) was assigned one point. One additional point was awarded when the concept of volume change was used in the solution. The two-calculation strategy was awarded one point, the three-calculation strategy 0.667 points, and the six-calculation strategy 0.333 points. The maximum score for the Volume problem was 3.
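The arithmetic behind these strategies can be sketched as follows (our illustration, not the authors' code; it computes the volume change for each of the three dimensions, in the spirit of the three-calculation approach):

```python
# Rectangular solid from the Volume problem: length 8 cm, width 3 cm, height 4 cm.
length, width, height = 8, 3, 4

# Changing one dimension by 1 cm changes the volume by the product of the
# other two dimensions, regardless of whether 1 cm is added or subtracted.
change = {
    "length": width * height,   # 3 * 4 = 12 cm^3
    "width":  length * height,  # 8 * 4 = 32 cm^3
    "height": length * width,   # 8 * 3 = 24 cm^3
}

most = max(change, key=change.get)   # "width": largest change (32 cm^3)
least = min(change, key=change.get)  # "length": smallest change (12 cm^3)
print(most, least)  # width length
```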
The key concept in the Pizza problem was the notion of equal distribution—that is, partitioning the pizzas so that each table would receive proportionately the same amount. Although the problem specifically stated that the pizzas must be fairly distributed, we noticed that some of the pilot participants ignored this constraint as they struggled with distributing the pizzas among all 24 people. We coded for three specific strategies: person perspective, table perspective, and trial and error. The person perspective strategy can take different forms. One way to use the person perspective entails using ratios to calculate the fraction of a pizza that each person should receive and then multiplying by the number of people at each table.


Another person-perspective strategy does not involve ratio in such a straightforward sense. Specifically, one can cut each pizza into 24 slices; from each of the 18 pizzas, each person receives one slice, which makes 18 slices in all for each person. Consequently, for the table of 4, there would be a total of 18 + 18 + 18 + 18 slices (72), which is, in turn, 3 pizzas. The same calculation² applies for the tables of 6 and 8. Alternatively, the table perspective strategy involves calculating the proportion of the total number of people seated at each table and then multiplying by the total number of pizzas. Finally, participants who used trial and error made repeated guesses at the number of pizzas that should be distributed to each table and checked to see whether each person would then receive the same amount of pizza. A correct answer to the Pizza problem (i.e., 3, 4.5, 4.5, and 6 pizzas at the four tables, respectively) was awarded one point. The key concept of equal distribution was awarded an additional point. Finally, a person perspective strategy was awarded one point, a table perspective strategy was awarded one point, and trial and error was assigned 0.5 points. The maximum score for the Pizza problem was 3.
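Both perspectives can be verified with exact fraction arithmetic (our sketch, not part of the study's materials):

```python
from fractions import Fraction

pizzas, people = 18, 24
tables = [4, 6, 6, 8]

# Person perspective: each person's share is 18/24 = 3/4 of a pizza,
# so a table's share is 3/4 times the number of people seated there.
per_person = Fraction(pizzas, people)
person_view = [per_person * seats for seats in tables]

# Table perspective: each table seats a fraction of the 24 people
# and receives that fraction of the 18 pizzas.
table_view = [Fraction(seats, people) * pizzas for seats in tables]

print([float(x) for x in person_view])  # [3.0, 4.5, 4.5, 6.0]
print(person_view == table_view)        # True
```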

RESULTS AND DISCUSSION

Descriptive Statistics

Content knowledge measures. The mean score on the TerraNova was 16.81 (SD = 4.65). The mean score for the Volume problem was 1.91 (SD = .84), and the mean score for the Pizza problem was 1.94 (SD = 1.18). For the Volume problem, 14 of the 26 participants (54%) obtained the correct answers for both parts of the question (i.e., that the width would change the volume the most and the length would change the volume the least). Of the participants who arrived at correct solutions, none used the two calculation strategy, 11 (79%) used the three calculation strategy, and 2 (14%) used the six calculation strategy; one participant who obtained the correct solution did not show her work. For the Pizza problem, 19% of the participants could neither solve the problem nor demonstrate the key concept of equal distribution (that each person should receive the same amount of pizza). In contrast, 42% of the participants solved it correctly using either the table or the person perspective, and 8% arrived at a correct solution using trial and error.


TABLE II
Frequencies of sorting responses by problem level

                          Problem level
Response       1               2               3               4
1          172 (73.8%)     36 (20.0%)      15 (8.6%)       16 (7.0%)
2           43 (18.5%)    101 (56.1%)      74 (42.3%)      62 (27.0%)
3           15 (6.4%)      32 (17.8%)      67 (38.3%)      99 (43.0%)
4            3 (1.3%)      11 (6.1%)       19 (10.9%)      53 (23.0%)

Note. Percentages are calculated by dividing the number of observed responses by the total number of valid classifications for problems at that level (e.g., for Level 1, the maximum number of classifications would be 26 participants × 9 cards = 234; missing data slightly reduce the column totals).
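As a check, the 48% overall accuracy reported in the text can be recovered from the table's diagonal (our verification script, not the authors' analysis code):

```python
# Counts from Table II: rows are response levels 1-4, columns are problem levels 1-4.
freq = [
    [172, 36, 15, 16],
    [43, 101, 74, 62],
    [15, 32, 67, 99],
    [3, 11, 19, 53],
]

total = sum(sum(row) for row in freq)        # 818 valid responses
correct = sum(freq[i][i] for i in range(4))  # diagonal = correct classifications: 393
print(round(100 * correct / total))          # 48 (percent overall accuracy)
```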

Sorting performance. We examined the preservice teachers' ability to sort the problems correctly according to cognitive complexity. Of the 832 possible responses on the card sort task (32 card classifications for each of 26 participants), 2% were dropped because of missing or multiple answers. The valid data are shown in Table II. Overall classification accuracy was 48%, and response accuracy decreased as the cognitive complexity of the problems increased. Level 1 problems were classified with an accuracy of 74%, followed by 56% for Level 2 problems. Accuracy continued to decrease for Levels 3 and 4; over 50% of the Level 3 problems were classified as either Level 1 or Level 2, and 77% of Level 4 problems were classified as Level 1, 2, or 3. A chi-square analysis showed that this tendency was significant, χ²(9, N = 818) = 408.67, p < .001. The observed pattern indicates that the participants had the most difficulty identifying the complexity of the Procedures with Connections and Doing Mathematics problems. One plausible explanation for this result is that evaluating mathematical tasks according to cognitive complexity is particularly difficult for preservice teachers because of their lack of familiarity with children's thinking in mathematics. Indeed, research has shown that both experienced and novice teachers have incomplete and fragmented knowledge of their students' thinking in mathematics, often resulting in difficulty translating mathematical ideas into forms that are conducive to learning (Borko & Putnam, 1996; Carpenter, Fennema, Peterson, & Carey, 1988; Civil, 1993; National Center for Research on Teacher Education, 1991). By extension, we speculate that preservice teachers' limitations in their


knowledge of children's thinking may hinder their ability to classify mathematical tasks accurately, particularly tasks specifically designed to stimulate genuine mathematical thinking. Another possible explanation for this finding is that the preservice teachers who participated in our study had themselves learned mathematics, at the K-12 or college level, through low-level tasks and were rarely or never exposed to the structure or nature of genuine mathematical problem solving. This possible discomfort with more cognitively complex problems may have reduced the accuracy of their classifications. While our data cannot speak directly to the participants' actual exposure to cognitively complex problems, it is likely that their experiences are related to their mathematical content knowledge, about which we can be more conclusive. In the next section, we address our major research question regarding the impact of the participants' mathematical content knowledge on their sorting performance.

The Relationship between Mathematical Content Knowledge and Sorting Performance

To assess the relationship between mathematical content knowledge and sorting performance, we correlated each measure of mathematical ability (i.e., TerraNova, Volume problem, and Pizza problem) with each measure of sorting performance (i.e., total number of correct card classifications and average distance from correct classification). The intercorrelations are shown in Table III. When the standardized test was the measure of mathematical knowledge, the correlation with the number of correctly sorted problems was r(26) = .34, p
