
Reading Research and Instruction Summer 1997, 36 (4) 267-286

State efforts to improve students' reading and language arts achievement: Does the left hand know what the right is doing?

Samuel D. Miller, Colleen T. Hayes, and Terry S. Atkinson
University of North Carolina at Greensboro

ABSTRACT

This study examined officials' efforts to improve third-grade students' reading and language arts performances with newly developed curriculum guides and assessment. We focused on (a) why officials developed their guidelines and assessment, (b) how they informed principals, curriculum directors, and teachers of their reform goals, and (c) practitioners' understanding of how they should change their instruction to prepare students for the assessment. State officials (n = 4), third-grade teachers (n = 21) from seven schools, their principals (n = 7), and building-level curriculum directors (n = 7) were interviewed. Interviews indicated that officials assumed that the assessment would change teachers' instruction, thereby improving students' performances. Principals, curriculum directors, and teachers did not understand how they should prepare students for the assessment because officials failed to inform them of their new expectations. Those practitioners who attended the state's training workshops discounted their utility because the workshops focused on the assessment's design and scoring procedures, with little attention given to the specific instructional changes that teachers needed to make to prepare their students. Moreover, none of the practitioners received the newly revised curriculum guidelines prior to the first administration of the new assessment. State officials dismissed practitioners' negative reactions and discounted their requests for more assistance: officials did not think that teachers would change their practices until after the first testing. Our discussion focuses on the sincerity of the state's efforts to inform practitioners and questions the likelihood of their reform's success.

State officials commonly use large-scale assessments to document the success of their reading and language arts programs (Baker & Stites, 1991; Brown, 1992; Cohen & Spillane, 1992; Darling-Hammond & Wise, 1985; Hiebert & Calfee, 1992). As these programs switched their emphasis from "basic skills" to "higher-level thinking," many states developed evaluation instruments to assess their reform efforts (Darling-Hammond, 1994; Valencia & Pearson, 1987; Wixson, Peters, Weber, & Roeber, 1987). These instruments share many features: they include longer passages obtained from authentic texts, followed by multiple-choice and open-ended questions to assess students' content and metacognitive knowledge (Cohen & Spillane, 1992; Greer, Pearson, & Meyer, 1990; Roeber & Dutcher, 1989; Valencia & Pearson, 1987; Wixson et al., 1987).

Once states develop an assessment instrument, officials then need to inform practitioners of their new expectations. They traditionally have used one of two approaches (Cohen & Spillane, 1992; Darling-Hammond, 1988; Linn, 1986; Madaus, 1994; Porter, 1989; Porter, Archbald, & Tyree, 1991). The first, a policy directive approach, is the primary means by which state officials have held practitioners accountable for students' performances during the past fifteen years (Darling-Hammond, 1988; Porter et al., 1991). Consistent with this top-down approach, officials identify reform goals and design policies to monitor their implementation (Darling-Hammond, 1988; Porter et al., 1991; Rowan, 1990). Examples of this approach include the endorsement of classroom texts and the development of curriculum guidelines and teacher assessment instruments.

An alternative approach is for officials to require practitioners to participate in the development and implementation of any reform agenda. This approach avoids policy directive practices because they are viewed as obstacles to educational innovation and creativity (Porter et al., 1991; Rowan, 1990). This bottom-up approach views practitioners as professionals who need to exercise discretion and judgment if they are to respond to students' needs and interests. Examples of this approach include the use of site-based management principles and the endorsement of certification criteria to increase teacher autonomy.

Despite differences between the two approaches, the goal is for various stakeholders to develop a consensus so that they can act in concert to implement common goals. This path from intent to practice, however, requires countless decisions, any one of which could undermine or limit a program's success (Cohen & Spillane, 1992; Darling-Hammond, 1988, 1994; Darling-Hammond & Wise, 1985). Decisions influence a program's success by establishing norms of rationality: criteria which teachers and administrators use to define problems, ask questions, and talk with one another (Brown, 1989a, 1989b, 1992).

We used this framework to examine state officials' decisions as they attempted to inform principals, curriculum directors, and third-grade teachers of their state's new reform. The study focused on how officials attempted to prepare practitioners for the new reform and whether practitioners understood their state's new expectations.

State's accountability history

This southeastern state developed its first curriculum guidelines in reading and language arts in 1985. Its purpose was to assist educators in the planning, development, and implementation of various instructional activities. It defined expertise as the mastery of hierarchical skills and offered reading objectives for grades K-3, 3-5, 6-8, and 9-12. Districts used basal criterion-referenced tests and norm-referenced achievement tests to evaluate students' progress. Officials revised their curriculum document in 1992 to emphasize constructivist learning theories (Cambourne, 1988; Goodman, Watson, & Burke, 1987).

The revised document had four goals: it recommended that all learners should use (1) strategies and processes to enhance communication skills development, and that learners should use language (2) for the acquisition, interpretation, and application of information, (3) for critical analysis and evaluation, and (4) for aesthetic and personal responses. Officials developed a new assessment to evaluate students' ability to meet these goals.

This study occurred three months prior to the assessment's first administration and focused on (a) why officials revised their curriculum guides and developed their own assessment, (b) how state officials informed principals, curriculum directors, and teachers of their new agenda, and (c) principals', curriculum directors', and teachers' understanding of what they needed to do instructionally to prepare students.

METHOD

Subjects

Four state department of public instruction officials participated: they represented the departments of testing and accountability (n = 1), communication skills (n = 2), and an advisory committee (n = 1). Practitioners included principals (n = 7), building-level curriculum directors (n = 7), and third-grade teachers (n = 21) from seven schools in a district located outside a major metropolitan area. The district generally scores in the top ten percent of all districts on the state's yearly academic report card. Rankings on this report were based on students' performances on a yearly administered norm-referenced achievement test. The district demonstrated this performance across schools which varied by racial composition (6% to 35% African-American), parent education (19% to 73% with post-high school education), and size (50 to 132 third graders). We selected these schools because they consistently demonstrated academic excellence relative to other districts. We wanted to limit the possibility that any misunderstanding of the state's new agenda was caused by a lack of resources or commitment. This district displayed such qualities in previous years and there was no reason to assume that it would not continue to do so with the new reform. Also, the district's diversity allowed us to examine whether the reform placed unequal demands on some sites and not others.

Materials

Interviews. An interview evaluated state officials', principals', curriculum directors', and teachers' understanding of the state's new reform measures and the significance they attached to their development. The framework we used to develop interview questions assumed that individuals operate under certain conditions, each of which affects how an individual responds to a problem; when responding, individuals may use one or more strategies, each of which has consequences for their subsequent interactions with other stakeholders (Strauss, 1987). We viewed this framework as appropriate because officials needed to successfully coordinate their efforts if they were to reach the goal of improved academic performances.

Moreover, any stakeholder's behaviors were not predetermined: each had options regarding how he or she implemented his or her responsibilities and communicated expectations to colleagues (Brown, 1989a). This framework allowed for an evaluation of the relationships among various decision-making levels and their effects on daily practices.

The officials' interview focused on the assessment's development ("Why was the new test developed?" "Is its format consistent with the basal and language arts texts teachers are using?" "How will test results be distributed?") and field testing ("What have you learned from your field testing?" "How many students are expected to end up in each of the four scoring levels?"). It also examined how practitioners were prepared ("How were teachers, principals, and curriculum directors prepared for the new test?" "What should a teacher do to help students?" "What should a teacher tell a parent?") and how officials expected practitioners to respond to the assessment ("How do you think teachers and principals will respond to the new test?"). No reference was made to the new curriculum guidelines since practitioners had not received their copies at this point in time.

The practitioners' interviews focused on their understanding of the new assessment ("What have you heard about the new state test?" "What kinds of information do you expect to receive?" "Are you in favor of end-of-grade testing?") and beliefs about whether the state's new reading and language arts texts were appropriate ("Will your commercial reading program help you to prepare students for this assessment?"). The interviews then examined their reactions to the preparation they received ("What preparation have you received?" "Was it adequate?" "What preparation would be most appropriate?"). The last section focused on their reactions to the sample set of state-distributed assessment passages (a narrative and a poem) and their questions (open- and closed-ended). Principals, curriculum directors, and teachers evaluated each passage and its questions ("What is your reaction to the story/poem?" "What did you think about the questions?" "What will you need to do to help your students do well on this passage?"). They then examined the grading criteria for the open-ended items (a 4-point scale: 0, 1, 2, and 3) and predicted how well students would do ("A student will receive one of four scores on this passage: 0 [not related to the passage], 1 [general response], 2 [average response], or 3 [excellent response]. What percentage of your students will fall into each of the four categories?").

Copies of questions and sample assessment materials were sent to the respondents prior to the interviews. The first author conducted phone interviews with state officials, principals, and curriculum directors; the second author interviewed teachers individually at their schools. Responses were taped and later transcribed. Interviews were conducted during October and November; each lasted about forty-five minutes. Appendix A includes the sample assessment passages and graded questions.

CODING

We coded the interviews by comparing individual responses to form inductive categories (Lincoln & Guba, 1985). Each author read half of the interviews and noted common responses to each question by state officials, principals, curriculum directors, and teachers. We then evaluated whether each individual identified the same set of responses. We discussed differences and developed coding categories to represent the responses. The second and third authors then read all of the interviews, placed individual responses on cards, and coded each according to the previously determined categories. Agreement among the researchers on the placement of responses into categories was greater than 90%, with disagreements settled by consensus among the three researchers.
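The article does not specify how the agreement figure was computed. As a rough illustration only, a simple percent-agreement calculation for two coders might look like the following sketch in Python; the function, variable names, and category labels are hypothetical and are not taken from the study (the toy data below yield 75% merely to show the computation, whereas the study reports agreement above 90%).

def percent_agreement(coder_a, coder_b):
    # Proportion of responses that two coders placed in the same category.
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must code the same set of responses.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings of four interview responses (labels are illustrative only).
coder_a = ["test design", "scoring rubric", "instructional change", "test design"]
coder_b = ["test design", "scoring rubric", "instructional change", "scoring rubric"]

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # prints "Agreement: 75%"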

RESULTS

We first examined whether differences existed among the schools on the interview questions. We found a strong consensus among schools regarding their perceptions of the expectations for the new assessment. This finding showed that schools responded similarly to state efforts to implement a new reform. Whenever differences existed, they primarily occurred among officials, principals and curriculum directors, and teachers. Unless otherwise stated, the quotations included in the following sections represent at least 90% of the respondents' opinions.

State officials

The state's reform agenda was initiated by a legislative mandate to revise the first curriculum guides and develop a new assessment to match the new expectations, actions consistent with the policy directive approach. The new assessment represented a deliberate move away from the previously endorsed norm-referenced testing because officials believed such tests primarily assessed lower-level cognitive skills. The new instrument would evaluate students' ability to apply higher-level skills to authentic learning situations.

I think we came to realize that our children were never able to put things together and come out with a real product. It (this assessment) goes against the idea of teaching isolated objectives . . . it's not that they (the teachers) are going to have to stop teaching and teach to the test. This is measuring what should be going on in classrooms. And it is hard to deny that!

The need to change instructional practices so that they would be more consistent with the new higher standards was the underlying message. Students' performances would improve once teachers changed their practices. Officials cited poor performances on various state and national measures as evidence for their claim.

The revised curriculum guidelines set the reform's parameters; the new assessment monitored its implementation. The revised curriculum guidelines endorsed a teaching model which contrasted with the one that officials previously endorsed. The new document emphasized constructivist teaching practices, in that students were required to apply reading knowledge to authentic situations. Officials assumed that teachers would find this model difficult to implement. If implemented correctly, however, students' performances would improve.

I always believed that children can read. I never believed in teaching skills in isolation. I probably was more of a holistic person right from the beginning. I believe in depth rather than breadth. This is sort of my personal philosophy . . . I guess I thought most people thought that way and I found out they didn't. My personal bias is that we finally have a curriculum that is difficult for teachers to use because it's not sequential.

Others stated:

It (the test) will tell us a lot about how students are actually performing rather than how they answer multiple-choice tests. It will tell us whether they can apply the skills we are teaching, whether they can solve problems, whether they can elaborate on the information they already have, whether they can analyze, as opposed to can they recognize the right answer out of four choices. What we've been doing isn't working because we're last in practically every indicator in the nation, in terms of achievement.

I feel it will give us a much clearer picture of what the students can do in terms of their comprehension, a more realistic picture because it matches the type of reading we would like them to do in real life. I would say it is more holistic, more of a constructivist model.

The selection of state-approved reading texts occurred in isolation from decisions related to the assessment's design and implementation. While officials viewed the texts as consistent with the revised course of study, they thought the new assessment was more demanding. "I think the open-ended questions go beyond the basals, actually into a more in-depth discussion than perhaps they do." These texts were not selected to help teachers prepare students for the new assessment. One official said:

It's (new assessment) more consistent with the basals the teachers are using now than in the past . . . but it has longer passages, more authentic texts; it's more consistent with the NAEP passages. Basals were not a primary consideration when the tests were developed.


While state officials intended to report within a few weeks the results from the multiple-choice tests for individual students, they did not plan to provide such feedback on the open-ended questions for a few years. They planned to delay their feedback on the open-ended items to individual students because they had not developed adequate scoring procedures. Until they developed reliable procedures (estimated to take two to three years), officials planned to report open-ended test scores by district. As a result, teachers would not know how individual students had performed on these items.

State officials did not think students would do well on the open-ended questions. They assumed, as did teachers, that students were not accustomed to thinking in this manner.

With the open-ended questions we do expect the scores for the first year to be quite low. I think we will see very few at the top score point. The children are not accustomed to this type of responding. They are not accustomed to explaining their answers in writing.

Pilot results (1991-1992) confirmed their prediction; less than one percent of students received the highest score and over thirty percent received the lowest score on the open-ended items. One official said it was unrealistic to expect that more than ten percent of the students would ever achieve the highest score. Another stated that in a few years 60 to 70 percent of students would achieve the highest score. None of the officials would offer specific predictions about the percentage of students who would fall into each of the four scoring categories. Teachers expressed much concern about how well students would do on the open-ended items and were not aware that officials would not provide feedback for individual students. State officials also used a new taxonomy to develop the assessment's items; principals, curriculum directors, and teachers also were not aware of this decision.

Officials informed principals, curriculum directors, and teachers of the state's reform goals by asking districts to send representatives to workshops. The training focused on the test's design and scoring rubric. Practitioners read sample passages and scored sample answers with a state-designed rubric. The attendees were expected to return to their districts to share their training with colleagues.

One of the things we've tried to do this summer is to train some of these local people with institutes . . . We explained the scoring system to them and gave them sample questions so they'd see the format of the test and they could see the types of things the test was asking.

I think that the best staff development that we have done was when we brought teachers in to score the field test and actually trained them on the rubric, let them read the passages, let them score them, and actually get the hands-on experience. It was a real eye opener for those teachers. I think if they (teachers) had more of that it would be very helpful because I think in analyzing the papers, as far as scoring goes, you start understanding writing and what makes good writing.

We questioned the utility of this approach by sharing one principal's comments; he said the training showed him how to score the test, but he still didn't know what to tell his teachers about how they should prepare students. A state official responded:

I would say that he needs to get his teachers to read the new curriculum and to go to workshops, go back to school, read the philosophy in the front sections of the standard course of study, to read the literature in the bibliography, and to try to learn something new. It's there!

This comment placed responsibility for change at the building level. If teachers needed assistance they should read the new curriculum guidelines. Officials offered few instructional recommendations when asked how teachers should prepare students: teachers should simply offer more opportunities for students to read and write. The few recommendations they did offer came from the sample activities in the revised curriculum guidelines, e.g., use reading logs or story maps. They also recommended that teachers use practice assessment passages and questions to prepare students. Their comments focused on the need for teachers to increase the amount of time students read and wrote prose, without any specific suggestions as to how this should be done. They offered similar recommendations to parents:

Turn off the television set. Take the video control out of their hands, and have them read. Take them to the library, give them books, have books in the house, have magazines that they're interested in, talk to them about reading. Get them hooked on reading, some kind of book that they are interested in. Read, read, read, read. There's no substitute for that.

We asked officials why they didn't delay the assessment to give teachers more time to prepare students. They believed the teachers would not take advantage of any further assistance or additional time. A few teachers might benefit from further training, but the majority needed to experience the first assessment in order to change their instructional practices. The assessment would provide the necessary impetus for instructional change.

No one would pay any attention to it if it weren't for the test. I hate to be that blunt, but curriculum documents sit on the shelf and no one bothers to open them unless in some way they are to be measured . . . I think it's a very positive force for motivation and for evaluating whether or not we are following the curriculum.


I don't want a teacher who doesn't think. I think thinking teachers produce thinking students and if you can't do that then you should find another job. They (teachers) haven't received a lot of preparation. I think until you let them go through the first testing, it doesn't matter. We could give them five weeks and they'd still feel it wasn't enough.

Officials realized the teachers were reacting poorly to news of the assessment; however, they believed the teachers knew the change was a step in the right direction. They viewed their negative reactions as unavoidable.

I've had feedback from people and what I am hearing is that finally we can't fake what is really important. I do know that initially, until people get comfortable, there is going to be apprehension, as you would expect . . . the bottom line is, is this the type of information, the type of thing we want our students to do? We had teachers in to score our open-ended field test and I met with a group of superintendents and I said, "Well what did they (the teachers) think?" and they said, "Well, they have come back and created near hysteria in our schools," yet when you sit down with individual teachers and you say, "Should our children be able to do this? Isn't this what our children should be doing?" They say, "Yes."

In sum, officials developed a new set of curriculum guidelines in response to a legislative mandate with the hope that they would promote higher academic standards. They offered training sessions for selected teachers who were expected to share the training with colleagues. The training focused on how to score students' responses, with minimal emphasis placed on any recommended instructional changes. Officials realized the students were not prepared for the new assessment; they did not believe it would be beneficial to offer more training since most teachers would not change their practices until after the first assessment.

Principals & Curriculum Directors

Principals and curriculum directors favored the new reading assessment because it was aligned with the new curriculum guidelines. A major problem with the previous course of study was that it was not aligned with the norm-referenced achievement test that students took at the end of the school year. While they welcomed this alignment, principals and curriculum directors offered only a tenuous endorsement of the new assessment because they did not fully understand its implications. They knew it had longer passages and included open-ended questions, yet they were unsure about other specifics. One director said, "I know it is coming and it is strange. It's a new and different kind of animal. We haven't had any examples of it yet. It's very holistic and broad." A principal confirmed her uncertainties:


We've heard very little, other than what we have heard at various workshops; there has been very little training for principals or teachers, as a matter of fact. They (state officials) were going to change the format and they were going to use some open-ended questions. We were going to move away from the California Achievement Test; they were going to come up with a test that would match the curriculum . . . I'll be in favor of it once they have all the groundwork down and the test is really ready. At this point, I think they are just shooting from the hip.

Their lack of understanding about the state's agenda caused confusion in several areas. Principals and curriculum directors lacked knowledge about how the open-ended items would be reported. They expressed much anxiety about these items and believed officials would compare schools on them. Principals and curriculum directors frequently used the terminology of the state's previously endorsed behavioral, skills-based approach to explain their reactions. For example, they wanted the new assessment to tell them if teachers had covered the curriculum and if students had mastered its content. The following statement reflects their confusion:

I think the CAT test is such a horrible thing because it's not matching the curriculum in our classrooms. I really want some kind of standardization (from the new assessment) to clarify to everyone how our students compare with norms. Where I happen to work right now, we're big on comparisons, and egos suffer greatly if they can't make those comparisons.

Principals and curriculum directors received little training about how teachers should prepare students for the assessment. Those few principals or curriculum directors who attended the training discounted its utility because it offered no specific instructional suggestions. As a result, they believed teachers lacked adequate preparation for the new assessment. One director stated:

I have a few papers here and there . . . but as far as them sending me somewhere and saying, "OK, curriculum director, here is everything you need to know about the new state testing," I have received almost nothing.

A principal agreed:

One of my main concerns is that we're headed at it full steam ahead, but the people at where the rubber meets the road . . . we're getting the trickle-down effect. We're heading just wide-open into this testing process without really the proper preparation. That is kind of scary!


The principals and curriculum directors believed the students needed more opportunities to read and write extended prose. When asked how teachers should design such activities, they offered general recommendations: teachers should emphasize critical thinking skills, engage students in thought-provoking discussions, or use divergent questions. Their suggestions mirrored those offered by state officials: teachers needed to increase the number of opportunities students had to read and write. They did not offer any specific suggestions as to how teachers might actively engage students in these activities. They offered similar advice to parents:

Parents should continue to read with children and listen to them read. Ask them comprehension questions of all kinds, not just literal questions . . . get the kids to solve their own problems; don't do so much for them!

Principals and curriculum directors expressed negative reactions to the sample deer and poetry passages. (See Appendix A.) They viewed the open-ended questions as too difficult, assumed students would have a different interpretation (from adults) of the story's theme, and questioned the state's ability to objectively score students' responses. They thought the poetry passage was easier because it had multiple-choice questions, but questioned whether third-grade students could understand its hidden meanings. Overall, principals and curriculum directors believed the best preparation was to give students frequent opportunities to read similar passages and to answer similar assessment questions.

Principals turned responsibility for teachers' preparation over to curriculum directors. Directors described how they had planned a meeting to discuss how teachers should prepare students for the new assessment. When asked how much time they had to provide assistance to teachers, one curriculum director said that her position was designed initially to meet this need, yet with each year she had to assume more and more administrative duties. As a result, she had little time to work with teachers.

Table 1 lists principals', curriculum directors', and teachers' predictions for students' performances on the open-ended items. Students would receive one of four possible scores (0, 1, 2, or 3, ranked lowest to highest) on these items. Separate one-factor ANOVAs were conducted for each set of predictions by scoring category, followed by Fisher post-hoc tests (p.