Using a Learning Progression Framework to Assess and Evaluate Student Growth

April 2015
CENTER FOR ASSESSMENT DESIGN RESEARCH AND EVALUATION (CADRE) UNIVERSITY OF COLORADO, BOULDER
NATIONAL CENTER FOR THE IMPROVEMENT OF EDUCATIONAL ASSESSMENT (NCIEA) DOVER, NEW HAMPSHIRE
Derek C. Briggs Elena Diaz-Bilello Fred Peck Jessica Alzen Rajendra Chattergoon Raymond Johnson
Foreword

The complex challenges of measuring student growth over the course of a school year have become familiar to anyone working to improve educator evaluation systems. Recent legislation in many states has postponed or scaled back the use of student growth or outcomes within accountability frameworks, suggesting that they are not ready to attach these measures to high-stakes decisions regarding educator effectiveness. Put simply, we don't have it all figured out yet.

While we struggle to measure student growth in a valid and reliable way for the tested subjects, such as math and literacy, we face even more significant challenges in the non-tested subjects, such as the arts and physical education. In the absence of state-mandated tests for the latter, many states and districts have begun to implement Student Learning Objectives (SLOs), a process of developing subject-specific learning goals and measuring the extent to which each student has mastered them. There is no single way to design and implement SLOs, and no sure-fire way for SLOs to solve the problems of measuring student growth—hence the variety of models in use across the country.

The authors of this paper, however, present what might be called an alternative model of SLOs, a Learning Progression Framework (LPF), which aims not simply to fill a gap in accountability systems, but also to offer promising approaches to the improvement of day-to-day classroom practices. As the authors demonstrate, while some SLO models appear more promising than others, their designs share unfortunate limitations: 1) they offer little more than spurious claims about student growth, 2) they can incentivize the setting of different expectations for different students, arguably creating a de facto system of unequal access to grade-level content, and 3) despite their best intentions, they typically serve the needs of accountability for outcomes without offering a means to improve everyday instruction. In light of these limitations, the authors demonstrate how the LPF model acknowledges the reality that intended outcomes cannot, on their own, improve teaching and learning.

In the Denver Public Schools, we are implementing the LPF model based on a commitment to the idea that improvement of teaching and learning will depend on the way teachers meet students' individual needs along the path toward standards mastery. At the risk of indicating the obvious: formative assessment and appropriate adjustments to instruction play an indispensable role in any attempt to improve student learning. However, while standards define end-of-course outcomes for student learning, they haven't helped us much in figuring out what should happen along the way. Without guidance along the way, how can a teacher assess, with confidence, whether students are on track to master the content and skills of a given course? This is where the LPF offers much promise.

In Denver, we have been working over the past two years to develop and implement an SLO model based on learning progressions. At its core, the model is meant to be authentic to everyday cycles of formative assessment and instructional shifts, even as we intend to use it for summative accountability purposes. We know that this approach will take some years to refine, as it relies on long-term professional learning, effective structures and systems for teacher collaboration, distributed leadership, and a careful balance of teacher autonomy and quality assurance for the larger system.
And yet, we have seen some early successes worth celebrating, which include: teachers' deeper knowledge of the standards in their subjects, the development of shared goals and more consistent collaboration among teams of teachers, and more intentional uses of assessment to support instructional planning and student learning.

A deeper understanding of assessment is critical at a time when standardized and summative tests are becoming increasingly prevalent and controversial. The LPF strives to put educational assessment where it belongs: close to the curriculum, the instruction, and the student. When we think in terms of learning progressions, we are reminded that assessment need not consist of—and with few exceptions should not consist of—an isolated event or a traditional test. Rather, assessment is integrated naturally within the fluidity of instruction; as such, it includes not only tests, but also mid-lesson checks for understanding, rigorous tasks assigned to students throughout the school year, and systematic observations of student performances—to name only a few. The key point is that good instruction entails good assessment as one of its necessary conditions.

As we work to refine our SLO model in Denver, we communicate explicitly that our use of SLOs is a choice, and that we would promote their use in classrooms even if there were no legislation requiring student growth measures. This is because SLOs, when implemented within the Learning Progression Framework, capture the essence of high-quality instruction, pushing us to ask fundamental questions like the following: What is most important for my students to know and be able to do? What do my students know right now, and how do I know that? What should I do to meet my students' needs? If we can answer these questions with ever greater confidence, we will position ourselves to develop growth measures that meet the needs of accountability and, most importantly, of students in their day-to-day learning.

Michael I. Cohen, Ed.D.
Assessment Support Manager
Department of Assessment, Research & Evaluation
Denver Public Schools
Denver, CO
I. Overview

The metaphor of growth is central to conversations about student learning. As students advance from grade to grade, one naturally expects that what students know and understand about the world and about themselves is becoming more sophisticated and mature. Of course, many factors can influence this process, such as family resources, peers, community, culture, and a student's school environment. Over the past decade, empirical evidence has reaffirmed what most parents and children have believed intuitively for decades: the quality of teaching a student experiences can have a significant impact on growth (Aaronson, Barrow & Sander, 2007; Chetty, Friedman & Rockoff, 2014; Gordon, Kane & Staiger, 2006; Kane & Staiger, 2008).

But how does one know both what and how much students have learned over a delimited period of time in some content area? Although a well-designed assessment may help to answer the question of what students seem to know and understand, no single test can support inferences about how much this has changed over time. Furthermore, even when students are assessed periodically, inferences about growth may remain elusive if the content on the assessments is also changing. Finally, if these assessments are not well aligned to what is taught as part of a school's curriculum, the picture of student growth being presented can easily be distorted. In other words, although the metaphor of growth is conceptually appealing, finding a way to characterize growth numerically in a manner that leads to valid and reliable inferences is extremely challenging.

This challenge is becoming all the more apparent as a rapidly growing number of states and school districts in the United States seek to incorporate evidence of student growth into formal evaluations of teachers under the auspices of educational accountability (cf. Dougherty & Jacobs, 2013). Although a great deal of research and debate has surrounded the use of statistical models for this purpose, only about one third of classroom teachers teach students for whom state-administered standardized tests are available as inputs into these statistical models (Hall, Gagnon, Schneider, Marion & Thompson, 2014). Hence, for a majority of teachers, other evidence is needed to support inferences about student growth. In more than 30 states, this evidence is being gathered through the development and evaluation of student growth through "Student Learning Objectives" or SLOs (Lacireno-Paquet, Morgan & Mello, 2014). SLOs typically involve a process in which teachers establish measurable achievement goals for their students, assess students at the outset of an instructional period, and then establish targets for student growth over the duration of that period.

A central impetus for this report is the belief that SLOs will only be able to support sound inferences about student growth if they have been designed in a way that gets educators attuned to the right motivating questions:

• What do I want my students to learn?
• What do my students know and understand when they arrive in my classroom?
• How is their knowledge and thinking changing over time?
• What can I, and other teachers at the school, do to help them learn?
• What evidence do I have that my students have demonstrated adequate growth?

Inferences about student growth that account for such questions need not only learning objectives but a framework that structures objectives into a progression of student learning.
In this report we introduce a learning progression framework (LPF) that applies innovative thinking about educational assessment to better support valid and reliable inferences about student growth. A principal benefit of this framework is that it directly anticipates the questions posed above. In addition, it can support a process that is much more encompassing than the de facto use of SLOs for teacher evaluation. Indeed, in this report we show that SLOs can be cast as a special case within an LPF.

Importantly, an LPF has three features that are always present irrespective of the content domain. First, a critical condition for operationalizing the framework is to have teachers work collaboratively to identify a Learning Progression (LP) within and, ideally, across grades or courses. Collaboration in this matter requires that teachers clearly establish what it means to say that a student has shown "adequate" growth in a criterion-referenced sense. Second, an LPF emphasizes growth toward a common target for all students. Students are not differentiated with respect to what they should know and be able to do by the end of the course, but only with respect to differences in their preparedness at the start of the course. Third, the nature of teacher collaboration is explicitly oriented toward the analysis of student work. This work might range from multiple-choice answers on a standardized test, to responses on an open-ended essay, to videos of students carrying out a classroom project. Teachers use this evidence to document the variability in how students respond to assessment activities, make distinctions among students with respect to the sophistication of their responses, and come up with teaching strategies that can best support the students with less sophisticated responses and further challenge the students with more sophisticated responses.

An LPF focuses attention on student reasoning and the sometimes significant space between "not getting it" and "getting it." This means that teachers who think in terms of learning progressions are often just as interested in the process a student uses to solve a task as they are in whether an answer to the task is correct. It also means that there is a focused interest in understanding the space between those two points as a means for locating and understanding student misconceptions and strengths in solving a task.

In this report we will argue that learning progressions are a valuable framework for characterizing and elucidating the objectives, targets, or goals of instruction. The terms objectives, targets, and goals are convenient synonyms educators use—often interchangeably—to help organize classroom activities and assignments. In this sense, there is nothing new about the concept of a student learning objective; such objectives have existed since the first teacher-student relationship was formed. The only thing novel about a (capitalized) "Student Learning Objective" is its formalization and standardization as a process that is intended to meet two different needs: (1) to help teachers monitor student growth for formative purposes, and (2) to help the public monitor this growth for accountability purposes. There is a tension between these two needs, as the high-stakes nature of accountability consequences has the potential to undermine the use of SLOs for teaching and learning (Campbell, 1976). Our theory of action is that if an LPF has been put in place first, using the collaborative processes we describe in what follows, any subsequent SLO generated from this framework will have a greater chance of securing teacher buy-in as something that is authentic to what teachers value in the classroom and something that they can control.

In the next section we provide specific examples of SLOs used as part of teacher evaluation systems in two states. We do so in order to highlight common threats to the validity of SLOs. We then present the LPF as a possible solution to these validity threats.
II. Student Learning Objectives

Due in large part to federal requirements that mandate evidence of growth in student achievement to be included in educational accountability systems, a total of 30 states that received Race to the Top (RTTT) funding and/or submitted an Elementary and Secondary Education Act waiver now use SLOs as a key component of their educator evaluation systems (Lacireno-Paquet et al., 2014). Originally viewed by RTTT states and districts as a solution for measuring student growth in non-tested subjects, because only a minority of teachers teach subjects for which state-administered standardized tests are available, SLOs in many places now apply to both tested and non-tested subjects and grades (Hall, Gagnon, Marion, Schneider & Thompson, 2014). Common elements of the SLO process, represented schematically in the sketch following this list, are:

• Specification of a goal as part of an "objective statement" that defines the content students are expected to learn.
• Specification of the interval of instruction over which the learning is expected to occur (e.g., semester or year-long).
• Identification of assessments to be given to students at the beginning and end of the instructional interval.
• Specification of distinct growth targets for individual students or for groups of students.
• An evaluation of each teacher based on the proportion of the teacher's students who have met their growth targets by the end of the instructional period.
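To make these elements concrete, the sketch below shows one way the pieces of an SLO could be represented as a data structure. This is a minimal illustration under our own assumptions: the class and field names are hypothetical and do not come from any state's SLO documentation.

```python
from dataclasses import dataclass, field

@dataclass
class StudentGrowthTarget:
    """Hypothetical record of one student's growth target (illustrative only)."""
    student_id: str
    pre_score: float     # score on the pre-assessment
    target_score: float  # score the student must reach on the post-assessment

@dataclass
class StudentLearningObjective:
    """Hypothetical container for the common SLO elements listed above."""
    objective_statement: str  # content students are expected to learn
    interval: str             # e.g., "semester" or "year-long"
    pre_assessment: str       # assessment given at the start of the interval
    post_assessment: str      # assessment given at the end of the interval
    growth_targets: list = field(default_factory=list)

    def percent_meeting_targets(self, post_scores: dict) -> float:
        """Percentage of students whose post-assessment score met their target;
        this proportion is the basis for the teacher's evaluation."""
        if not self.growth_targets:
            return 0.0
        met = sum(1 for t in self.growth_targets
                  if post_scores.get(t.student_id, 0.0) >= t.target_score)
        return 100.0 * met / len(self.growth_targets)
```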
The implementation of these common SLO elements varies across states and districts depending upon the degree of centralization and comparability desired in the set of learning goals used across all teachers, the set of data sources used to support the process, and the methodology used to specify student growth targets and to compute teacher ratings (Lachlan-Hache, Cushing & Bivona, 2012a).

To illustrate key differences in the implementation of the common features of the SLO process, we turn to examples from two states, Georgia and Rhode Island. We consider these two states because they used contrasting approaches and policies in designing the SLO process. In Georgia, school districts, also known as local education agencies (LEAs), dictate which assessments should be used by teachers and how student growth targets should be set. The state provides guidelines on how teacher ratings are assigned based on the extent to which growth targets are met by students. In Rhode Island, the state allows teachers to select their own assessments to evaluate students on their learning objective and to set the growth target expectations for their students. Although Rhode Island provides guidance on two approaches for scoring each SLO, school districts have the right to modify either approach or devise something new. The state defines how SLOs are used to form a final teacher rating to fulfill accountability requirements. Although the two approaches differ in terms of the constraints they place upon teacher enactment of SLOs, the SLO process in both states involves all the same elements. After presenting the SLO process for each state separately, we call attention to what we view as problematic aspects of these common elements.
GEORGIA
In Georgia, only teachers in non-tested subject areas are eligible to submit an SLO. For any given SLO, options for learning objective statements, assessments, and growth targets are established by grade and content area by staff and/or working committees formed in each school district. Evidence of student growth is required to come from the administration of a "pre-test" at the beginning of the school year and a "post-test" near the end of the year. In most cases the same test is given twice, but this is not a mandated requirement. The state gives school districts considerable discretion in choosing assessments that would be eligible for use as a pre-test and post-test. These may range from commercially developed tests to locally developed performance tasks.

The Georgia Department of Education's SLO guidance manual (2014) provides two publicly available examples of approaches that could be used to set growth targets: one that establishes individualized student targets, and another that establishes rubric-based student targets. In the first approach, each school district picks expected growth targets in every course and grade. This is illustrated in Figure 1 for an SLO in reading comprehension set for a hypothetical classroom of grade 3 students. Here each student's target is defined relative to their "potential growth," which is the difference between 100 and the student's pre-test score expressed in a percentage-correct metric. In this example, a student will have demonstrated expected growth if the post-test score closes at least 35% of that gap, and high growth if it closes at least 60%. In the second approach, illustrated in Figure 2, a growth target is defined as movement of one performance level along a rubric, with high growth defined as movement across more than one performance level. In principle, a teacher would use the first approach with assessments that lend themselves to a set of multiple-choice or short open-ended test items that can be readily scored as correct or incorrect. Teachers would use the second approach with assessments that consist of performance-based tasks or a portfolio of student work that is scored holistically.

In either of these two approaches, a student's performance from pre- to post-test is subsequently placed into one of three categories: did not meet growth target, met growth target, exceeded growth target. The frequency counts of students in each category are then tabulated for the teacher of record and converted into a percentage of the total. This forms the basis for scoring teachers according to an "SLO Attainment Rubric," an example of which is provided in Figure 3. These rubrics are used to categorize teachers into one of four effectiveness levels on the basis of the percentage of their students who meet or exceed their growth targets. The choice of thresholds that determine teachers' placements in each category is set by the state and remains the same, irrespective of a teacher's school district, grade, or subject.
FIGURE 1: SAMPLE SLO USING INDIVIDUALIZED TARGETS

SLO Statement Example: From August 2014 to April 2015, 100% of third grade reading students will improve their knowledge of vocabulary and comprehension skills as measured by the Mountain County Schools Third Grade Reading SLO Assessment. Students will increase from their pre-assessment scores to their post-assessment scores as follows:

The minimum expectation for individual student growth is based on a formula that requires each student to increase his/her score by 35% of his/her potential growth:

Pre-Assessment Score + [(100 - Pre-Assessment Score) × Expected Growth] = Target

Example using a pre-assessment score of 40:

40 + [(100 - 40) × .35]
= 40 + [60 × .35]
= 40 + 21
= 61

A score of 61 is the expected growth target for the post-assessment. Students increasing their score by at least 60% of their potential growth would be demonstrating high growth; a score of 76 or above is the high growth target.

Source: Georgia Department of Education, 2014 [see 2014-2015 SLO Manual at http://www.gadoe.org/School-Improvement/Teacher-and-Leader-Effectiveness/Pages/SLO-Resources-and-Tools.aspx]
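The arithmetic in Figure 1 translates directly into code. The following is a minimal sketch of the individualized-target approach, assuming percentage-correct scores on a 0-100 scale; the function names are ours, not the Georgia Department of Education's, and we assume that "exceeded growth target" corresponds to reaching the high growth target.

```python
def growth_target(pre_score: float, growth_fraction: float) -> float:
    """Target = pre-test score plus a fraction of the student's 'potential
    growth,' i.e., the gap between the pre-test score and 100."""
    return pre_score + (100 - pre_score) * growth_fraction

# Reproducing the worked example from Figure 1 (pre-assessment score of 40):
expected_target = growth_target(40, 0.35)  # 61.0, the expected growth target
high_target = growth_target(40, 0.60)      # 76.0, the high growth target

def categorize_student(pre_score: float, post_score: float) -> str:
    """Place a student into one of the three growth categories."""
    if post_score >= growth_target(pre_score, 0.60):
        return "exceeded growth target"
    if post_score >= growth_target(pre_score, 0.35):
        return "met growth target"
    return "did not meet growth target"
```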
FIGURE 2: SAMPLE SLO USING RUBRIC-BASED TARGET

Sample SLO for Grade 6 Intermediate Chorus: From August 2014 to April 2015, 100% of sixth grade Intermediate Chorus students will improve their sight reading and notating music skills as measured by the Mountain County Schools Intermediate Chorus Performance Task. Students will increase from their pre-assessment levels to their post-assessment levels as follows:

Level 1 will increase to Level 2
Level 2 will increase to Level 3
Level 3 will increase to Level 4
Level 4 will maintain.

Students who increase one level above their expected growth targets would be demonstrating high growth. Students with a pre-assessment score of Level 4 or an expected growth target of Level 4 may complete a developmentally appropriate project or assignment based on the SLO assessment's content.

Source: Georgia Department of Education, 2014 [see 2014-2015 SLO Manual at http://www.gadoe.org/School-Improvement/Teacher-and-Leader-Effectiveness/Pages/SLO-Resources-and-Tools.aspx]
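A similar sketch is possible for the rubric-based approach in Figure 2, assuming a four-level rubric with integer levels. Per the figure's language, Level 4 students are expected to maintain their level, and movement of more than one level counts as high growth; the function names are again illustrative.

```python
TOP_LEVEL = 4  # four-level performance rubric, as in Figure 2

def rubric_target(pre_level: int) -> int:
    """Expected target: one level above the pre-assessment level; students
    already at the top level are expected to maintain it."""
    return min(pre_level + 1, TOP_LEVEL)

def categorize_by_rubric(pre_level: int, post_level: int) -> str:
    """Place a student into one of the three growth categories."""
    target = rubric_target(pre_level)
    if post_level > target:   # moved more than one level: high growth
        return "exceeded growth target"
    if post_level >= target:
        return "met growth target"
    return "did not meet growth target"
```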
FIGURE 3: SAMPLE "SLO ATTAINMENT RUBRIC" FOR EVALUATING TEACHERS

LEVEL IV (in addition to meeting the requirements for Level III): The work of the teacher results in exceptional student growth. ≥90% of students demonstrated expected/high growth and ≥30% demonstrated high growth on the SLO.

LEVEL III (the expected level of performance): The work of the teacher results in appropriate student growth. 65-89% of students demonstrated expected/high growth on the SLO, OR ≥90% of students demonstrated expected/high growth with fewer than 30% demonstrating high growth.

LEVEL II: The work of the teacher results in minimal student growth. 50-64% of students demonstrated expected/high growth on the SLO.

LEVEL I: The work of the teacher does not result in appropriate student growth. <50% of students demonstrated expected/high growth on the SLO.
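Putting the pieces together, the sketch below maps the percentages from a teacher's class onto the four effectiveness levels in Figure 3. The threshold values come from the figure; the function name and return strings are illustrative assumptions.

```python
def slo_attainment_level(pct_expected_or_high: float, pct_high: float) -> str:
    """Map the percentage of students meeting or exceeding growth targets
    (and the percentage showing high growth) to an effectiveness level."""
    if pct_expected_or_high >= 90 and pct_high >= 30:
        return "Level IV"   # exceptional student growth
    if pct_expected_or_high >= 65:  # includes >=90% with <30% high growth
        return "Level III"  # appropriate student growth (the expected level)
    if pct_expected_or_high >= 50:
        return "Level II"   # minimal student growth
    return "Level I"        # not appropriate student growth

# Example: 28 of 30 students met or exceeded their targets (93.3%),
# but only 8 showed high growth (26.7%), so the teacher lands at Level III.
print(slo_attainment_level(100 * 28 / 30, 100 * 8 / 30))  # -> Level III
```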