Childhood Education
ISSN: 0009-4056 (Print) 2162-0725 (Online) Journal homepage: http://www.tandfonline.com/loi/uced20
An Authentic Approach to Assessing Pre-Kindergarten Programs: Redefining Readiness

Nancy Freeman & Mac Brown

To cite this article: Nancy Freeman & Mac Brown (2008) An Authentic Approach to Assessing Pre-Kindergarten Programs: Redefining Readiness, Childhood Education, 84:5, 267-273, DOI: 10.1080/00094056.2008.10523023

To link to this article: http://dx.doi.org/10.1080/00094056.2008.10523023
Published online: 25 Jul 2012.
Nancy Freeman is Associate Professor and Mac Brown is Professor, Department of Instruction & Teacher Education, College of Education, University of South Carolina, Columbia.
Redefining Readiness
In the wake of No Child Left Behind (NCLB), academic achievement, including school readiness, has come to be redefined as children's ability to earn a passing score on required standardized tests. By relying on test results to tell us if children are ready for school, it is easy to explain achievement gaps and low test scores as being caused by poverty, family circumstances, or other outside factors. In short, families and children may be "blamed" for children not being ready for school. Several years ago, the state of South Carolina turned to experts in early childhood education to address the school readiness issue for its K-2 schools. Rather than asking, "Is this child ready for school?," the state created a program assessment system that reframed the question of school readiness by asking, "Is this school ready for all children?" This approach avoided on-demand tests, and focused instead on a school's ability to meet research-based criteria shown to enhance children's growth, development, and learning (that is, their chances for school success). As a result, the state's kindergarten and 1st-grade performance-based authentic assessment instrument, which is based on the Work Sampling System (Meisels, Jablon, Marsden, Dichtelmiller, & Dorfman, 2001), has been left intact and uncompromised. This article begins with a short history of assessment, focusing particularly on issues related to the assessment of young children and their school readiness. It then describes the Conditions of Learning assessment system and the results from its initial implementation. We conclude with recommendations about how this approach might be replicated to protect children from readiness assessments that expose them to inappropriate tasks that tell us little of value about what children know and are able to do.
Early Childhood Educators' Long History of Resisting Standardized Testing

Early childhood educators have steadfastly objected to the proliferation of high-stakes, on-demand testing, particularly the testing of young children conducted in the name of program accountability. They have buttressed this position with their knowledge of child development, their understanding of the strengths and weaknesses of currently available psychometric instruments, and their commitment to social justice. They know that young children's growth is episodic and uneven; understand that great variability exists among and within typically performing children; and appreciate that tests do a poor job of measuring the complex social, emotional, cognitive, and physical competencies that children from birth to age 8 need to succeed in school. Experts in early childhood assessment also realize that such tests often can be culturally biased. They are highly correlated to mothers' educational level, family income and social/economic status, home language, and other family characteristics. In short, they realize standardized tests do a better job describing what children bring to a program than what they take from it (Kamii, 1990; National Research Council, 2001; Scott-Little & Niemeyer, 2002).

Early childhood educators have expressed these concerns in position statements released by three well-known professional organizations representing early childhood educators in the United States: the Association for Childhood Education International (ACEI), the National Association for the Education of Young Children (NAEYC), and the Southern Early Childhood Association (SECA). ACEI's 1991 and 2007 position papers on assessment trace the history of reliance on standardized tests to evaluate young children, reporting that standardized, on-demand assessments were infrequently used in early childhood programs prior to 1965, despite political pressure to test young children (ACEI, 1991, 2007). That moratorium on testing young children had lifted by the mid-1980s to such an extent that the authors of NAEYC's 1987 Position Statement on Standardized Tests assumed that tests were a necessary part of the accountability equation. NAEYC focused on reminding educators to use assessments only for their designated purposes and only when their use improved services and outcomes for children. NAEYC's Position Statement on School Readiness (NAEYC, 1990/1995) took a stronger and more proactive stance by asserting that young children are, by their very nature, poor test takers; therefore, researchers' attempts to determine an instrument's reliability and validity (i.e., attempts to achieve standardization) are fruitless. This position statement also highlighted the normal variability among children of the same age and cautioned professionals about the dangers of using standardized instruments to make high-stakes decisions, specifically in determining children's eligibility to enter school. SECA's position paper on assessment (1990/1996/2000) goes one step beyond NAEYC's by stating that testing itself is a fruitless exercise. SECA's authors insist that testing be held to the same high standard as other components of the curriculum. It is not enough that testing do no harm; assessment activities must contribute to young children's growth, learning, and development.

NAEYC's most recent position statement on assessment was developed in collaboration with the National Association of Early Childhood Specialists in State Departments of Education (NAECS/SDE) (NAEYC & NAECS/SDE, 2003). It expresses the concern of these organizations that the curriculum, assessment, and program evaluation dimensions of quality programming have been addressed in a "disconnected and piecemeal fashion." Through this position statement, NAEYC and NAECS/SDE remind early childhood educators and policymakers that high-quality early childhood programs assess children's progress and program effectiveness in coordinated, connected, and continuous ways, rather than through on-demand tests that are disconnected and separate from children's experiences in high-quality classrooms (NAEYC & NAECS/SDE, 2003).

The growing threat posed by inappropriate testing is also evidenced in NAEYC's Code of Ethical Conduct and Statement of Commitment. Assessment issues were not mentioned in the first version of the Code or in its 1992 and 1997 revisions (Feeney & Kipnis, 1989, 1992, 1997). The 2005 version of the NAEYC Code, which has been endorsed by ACEI, includes nine items addressing assessment and testing. These items reflect the field's current concerns about the pressure on teachers to assess children in inappropriate ways, and provide early childhood educators guidance as they strive to honor the Code's first principle: "Above all, we shall not harm children" (NAEYC, 2005).
Resisting Testing, Welcoming Accountability: South Carolina's Proactive Response

South Carolina enacted the Educational Accountability Act in 1998, several years before the passage of NCLB. This state law requires all public schools to publish annual report cards to:

- Inform parents and the public about the school's performance
- Assist in addressing the strengths and weaknesses within a particular school
- Recognize schools with high performance
- Evaluate and focus resources on schools with low performance. (EOC, 2006a, Appendix C)

Report cards of elementary, middle, and high schools rely, in large measure, on results from state-mandated achievement tests. Because South Carolina's legislature has supported early childhood educators' position that testing young children is imprecise and unethical, however, mandated standardized tests are not administered until 3rd grade. That means another assessment strategy was needed to evaluate the 20-or-so schools serving only young children between kindergarten and 2nd grade. In 1999, an Early Childhood Study Group, made up of representatives from all stakeholder constituencies, developed an evaluation protocol upon which primary schools' report cards would be based. The yardsticks selected to measure program quality fall into four categories: 1) measures of school climate, such as attendance and class size; 2) measures of teacher quality, such as certification levels and professional development opportunities; 3) measures of the extent and quality of parent involvement; and 4) external evaluation of program quality. These structural and programmatic criteria are being used in various forms and combinations to evaluate primary school programs in other U.S. states as well as in several western European nations (Literature Review, 2005). The proposal to evaluate schools' readiness for children garnered support by relying on study group members' informed professional opinions as well as national experts' teachings and advice (Darling-Hammond, 2004; Shepard, Kagan, & Wurtz, 1998).
South Carolina's Conditions of Learning Primary School Assessment Plan

The Conditions of Learning assessment protocol had to rely on data already being collected by the State Department of Education, because no funding had been allocated for either the collection of additional information or the creation of new assessment instruments.¹ The state settled on these seven criteria that correlate to children's later school success (the two ratio-based calculations are sketched in code after the list):

- Student Attendance: Percentage of children in attendance, based on the number of children enrolled on the 45th day of school
- Pupil/Teacher Ratios: Calculated by dividing the number of students enrolled in the school on the 45th day of school by the total number of teachers assigned to the classroom full time on that date (Biddle & Berliner, 2002; Bredekamp & Copple, 1997; Phillips, 1987)
- Parent Involvement: Calculated by dividing the number of students whose parent(s)/guardian(s) attend at least one individual parent conference during the school year by the total number of students enrolled on the 135th day of school (Eldridge, 2001; Epstein, 1995; Isenberg & Jalongo, 1997; Powell, 1998)
- External Evaluation: Accreditation by the Department of Education, the Southern Association of Colleges and Schools (SACS), NAEYC, and the American Montessori Society is weighted (Bredekamp & Willer, 1996; Cost, Quality & Child Outcomes Study Team, 1995)
- Professional Development: The amount of early childhood-specific professional time devoted for teaching personnel and administrators (Ackerman, 2004)
- Professional Preparation²: The proportion of teachers with degrees and certifications in early childhood education (Snider & Fu, 1990)
- Environmental Measures of Classrooms: All classrooms, except those pursuing identified alternative accreditation, were to be evaluated using the Early Childhood Environment Rating Scale (ECERS) (Bryant, Burchinal, Lau, & Sparling, 1994; Burchinal et al., 2000; Cost, Quality, & Child Outcomes Study Team, 1995; Harms, Clifford, & Cryer, 1998).
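To make the arithmetic behind the two ratio-based criteria concrete, here is a minimal Python sketch following the definitions above. The function names and sample figures are our own illustration, not the State Department of Education's actual software.

```python
# Illustrative only: computing the two ratio-based report card inputs as
# defined in the criteria list above. Names and sample numbers are hypothetical.

def pupil_teacher_ratio(students_on_day_45: int, full_time_teachers: int) -> float:
    """Students enrolled on the 45th day of school divided by the number of
    teachers assigned to classrooms full time on that date."""
    return students_on_day_45 / full_time_teachers

def parent_involvement_pct(students_with_conference: int, students_on_day_135: int) -> float:
    """Percentage of students whose parent(s)/guardian(s) attended at least one
    individual parent conference, out of enrollment on the 135th day of school."""
    return 100.0 * students_with_conference / students_on_day_135

if __name__ == "__main__":
    print(pupil_teacher_ratio(264, 12))        # 22.0 pupils per teacher
    print(parent_involvement_pct(210, 250))    # 84.0 percent
```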
Table 1: Criteria Used To Determine Absolute Ratings on Primary School Report Cards

| Criterion                | 5 points            | 4 points                     | 3 points  | 2 points              | 1 point                    |
|--------------------------|---------------------|------------------------------|-----------|-----------------------|----------------------------|
| Student Attendance       | 98% or greater      | 96-97.99%                    | 94-95.99% | 92-93.99%             | Less than 92%              |
| Pupil-Teacher Ratio      | 21 or less          | 22-25                        | 26-30     | 31-32                 | Greater than 32            |
| Parent Involvement       | 90% or more         | 75-89%                       | 60-74%    | 30-59%                | 29% or less                |
| Accreditation            | NAEYC or Montessori | SDE and SACS early childhood | SDE       | Conducting self-study | Not pursuing accreditation |
| Professional Development | More than 1.5 days  | 1 to 1.5 days                | 1 day     | .5 to .9 day          | Less than .5 day           |

Calculate the Absolute Rating by adding the points assigned to each rating category and dividing the total points by the number of criteria used to calculate the ratings.
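As a rough sketch of that calculation, the following Python fragment assigns points against the Table 1 bands and averages them. The helper names and the example school are assumptions made for illustration, not the Education Oversight Committee's code.

```python
# A minimal sketch of the Absolute Rating arithmetic described above: each
# criterion is scored 1-5 against the Table 1 bands, and the Absolute Rating
# is the mean of the assigned points. Band edges follow Table 1.

def attendance_points(pct: float) -> int:
    if pct >= 98: return 5
    if pct >= 96: return 4
    if pct >= 94: return 3
    if pct >= 92: return 2
    return 1

def ratio_points(ratio: float) -> int:
    if ratio <= 21: return 5
    if ratio <= 25: return 4
    if ratio <= 30: return 3
    if ratio <= 32: return 2
    return 1

def absolute_rating(points: list[int]) -> float:
    # Add the points assigned to each criterion; divide by the number of criteria.
    return sum(points) / len(points)

# Hypothetical school scored on the five criteria actually used since 2001.
scores = [attendance_points(96.5), ratio_points(22), 5, 4, 3]
print(round(absolute_rating(scores), 2))  # (4 + 4 + 5 + 4 + 3) / 5 = 4.0
```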
A Portrait of South Carolina's Primary Schools
Primary School Report Cards have been issued in South Carolina each year since 2001. Schools are evaluated by five criteria: student attendance, pupil/teacher ratios, parental involvement, external accreditation, and professional development devoted to early childhood. The ratings were scheduled to factor in schools' ECERS scores in 2005, but political pressure applied during the 2004 legislative season scuttled those plans. Legislators were responding to concerns that some items, such as the requirement that classrooms have in-room, child-size sinks with hot water, might amount to unfunded mandates.

This strategy was appealing for two important reasons. First, these assessment criteria are strongly linked to positive student outcomes, and, second, it leaves intact and uncompromised the state's commitment to hold off testing until 3rd grade.

Performance on each of the identified criteria is averaged to award an Absolute Rating of Excellent, Good, Average, Below Average, or Unsatisfactory. In addition, report cards indicate each school's Improvement Rating Index, which is determined by applying a mathematical formula that compares the current year's performance with the school's past performance. Adjustments are made for schools maintaining an Excellent Absolute Rating for two consecutive years (EOC, 2006a).
Table 2: Primary School Report Cards Absolute Rating Scale for Years 2004-2014 (only the rows recoverable from the source are shown; each band rises by 0.1 per year, e.g., Excellent requires 4.0 in 2009)

| Year | Excellent     | Good    | Average | Below Average | Unsatisfactory |
|------|---------------|---------|---------|---------------|----------------|
| 2004 | 3.5 and above | 3.1-3.4 | 2.7-3.0 | 2.3-2.6       | Below 2.3      |
| 2005 | 3.6 and above | 3.2-3.5 | 2.8-3.1 | 2.4-2.7       | Below 2.4      |
| 2013 | 4.4 and above | 4.0-4.3 | 3.6-3.9 | 3.2-3.5       | Below 3.2      |
| 2014 | 4.5 and above | 4.1-4.4 | 3.7-4.0 | 3.3-3.6       | Below 3.3      |
It is important to note that the assessment system becomes more rigorous over time. For example, in 2004, a 3.5 average score on a 5-point scale earned an Excellent Absolute Rating. A score of 4.0 or above is required for an Excellent rating in 2009; by 2014, an Excellent rating will require 4.5. In 2014, a score of 3.2 will be considered Unsatisfactory, a far cry from the Good rating it would have earned in 2004 (2005-2006 Accountability Manual). Beginning in 2004, the Absolute Ratings of schools unable to demonstrate their ability to make Adequate Yearly Progress (AYP), as required by NCLB, will be decreased one rating category (2005-2006 Accountability Manual). To date, all schools (the number of K-2 schools has ranged from 18 to 28 during this period) have achieved an Excellent Absolute Rating. Table 3 summarizes ratings earned, each school's Improvement Rating, and their attainment of Adequate Yearly Progress. This table illustrates that with the addition of AYP the system began to show more variability and was better able to distinguish among schools' performance in meaningful ways.
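The published thresholds suggest a simple pattern: every band in Table 2 rises by 0.1 each year, from an Excellent floor of 3.5 in 2004 to 4.5 in 2014, which agrees with the 4.0 figure cited for 2009. The Python sketch below encodes that mapping, along with the one-category AYP demotion; it is our inference from the printed thresholds, not the Education Oversight Committee's actual formula.

```python
# Inferred from Table 2, not official: band floors rise 0.1 per year from 2004,
# and each band below Excellent spans 0.4 points. Schools failing AYP drop one
# rating category (beginning in 2004).

LABELS = ["Unsatisfactory", "Below Average", "Average", "Good", "Excellent"]

def rating_label(score: float, year: int, made_ayp: bool = True) -> str:
    excellent_floor = 3.5 + 0.1 * (year - 2004)   # e.g., 4.0 in 2009, 4.5 in 2014
    idx = 4
    floor = excellent_floor
    # Walk down the bands until the score clears the current floor.
    while idx > 0 and score < floor:
        idx -= 1
        floor -= 0.4
    if not made_ayp and idx > 0:                  # NCLB: decrease one category
        idx -= 1
    return LABELS[idx]

print(rating_label(3.5, 2004))                  # Excellent
print(rating_label(3.2, 2014))                  # Unsatisfactory, as the text notes
print(rating_label(3.5, 2004, made_ayp=False))  # Good, after the AYP demotion
```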
Refining the Assessment System

The Primary Schools Ratings Advisory Group, a subcommittee of the Education Oversight Committee, was created in 2005. It was made up of principals of primary schools, district curriculum coordinators, and representatives from the State Department of Education, the Education Oversight Committee, and higher education. Their charge was to review implementation of the Conditions of Learning assessment system to date. The group recommended three modifications: the inclusion of several additional criteria, the elimination of one criterion, and a change in the operational definition of another. The three additional criteria, which require no additional data collection, were:

- The amount of Prime Instructional Time
- Percent of teachers with advanced degrees
- Percent of teachers returning from the previous year.

Parent Involvement was re-defined, and Student Attendance was deleted because it is a component of Prime Instructional Time. Unfortunately, Prime Instructional Time had not been explicitly defined and was not yet included in schools' report cards (South Carolina Education Oversight Committee, 2006b).

Table 3: Absolute Ratings, Improvement Ratings, and Adequate Yearly Progress of South Carolina's Primary Schools, 2001-2005

| Year | Absolute Rating: Excellent | Absolute Rating: Good | Improvement Rating*: Excellent | Improvement Rating*: Good | AYP: Yes | AYP: No |
|------|----------------------------|-----------------------|--------------------------------|---------------------------|----------|---------|
| 2001 | 18 (100%)                  | 0                     | 5 (30%)                        | 12 (70%)                  | N/A      | N/A     |
| 2002 | 20 (100%)                  | 0                     | 10 (56%)                       | 8 (44%)                   | N/A      | N/A     |
| 2003 | 23 (100%)                  | 0                     | 4 (20%)                        | 16 (80%)                  | 7 (44%)  | 9 (56%) |
| 2004 | 25 (100%)                  | 0                     | 11 (44%)                       | 10 (40%)                  | 13 (52%) | 12 (48%) |
| 2005 | 28 (100%)                  | 0                     | 7 (25%)                        | 17 (61%)                  | 12 (43%) | 16 (57%) |

*Total number of schools with Absolute Ratings does not equal total number of schools with Improvement Ratings, because each year's results include some schools not previously rated and others that no longer have a K-2 configuration. Source: South Carolina Education Oversight Committee. (2006a). 2006-2007 accountability manual. Columbia, SC: Author.

Progress Toward Accomplishing the Study Group's Goals
When the Early Childhood Study Group originally convened in 1999, it identified two goals: to focus on programs' readiness for children rather than children's readiness for school, and to apply the assessment system they would devise to evaluate all of the state's programs serving young children. This Conditions of Learning assessment strategy has effectively accomplished that first goal. Young children continue to be protected from state-mandated, inappropriate high-stakes tests. The second goal is more ambitious and has not yet been reached. Applying the Conditions of Learning protocol to K-2 programs situated within elementary schools would require legislative action modifying school report cards. The Primary School Report Card system has not yet been able to generate enough variability to warrant that step, nor has the state amassed sufficient data linking primary schools' performance to children's 3rd-grade test results to justify making this addition to elementary schools' ratings. We remain encouraged, however, by state policymakers and their commitment to assessing a program's readiness for young children, rather than trying to create an assessment to measure the elusive construct of children's readiness for school.

We asked three members of the original study group to reflect on the success of the Conditions of Learning assessment system to date. Each reiterated his or her support for its approach, but expressed concerns that a number of school districts had bowed to pressures for data on children's performance. These districts are supplementing these measures of program quality with on-demand testing of young children. We realize that challenges remain as we strive to bring educators, the legislature, and the public to the point where they will embrace this new definition of school readiness that honors what we know about young children and how they learn and develop.

Conclusion

There is a growing appreciation that assessing early childhood program quality is a complex task that requires a sophisticated approach. Taking a simplistic, additive approach to program assessment ignores commitments to issues of equity, access, and equal opportunity and acknowledges the bias that cannot be avoided when program evaluation relies upon standardized, on-demand assessments of children. The Conditions of Learning approach to program assessment might serve as a roadmap toward improving opportunities for all young children. It may be most useful in communities struggling to fully fund early childhood programs because it provides incentives to invest in small classes, freedom to schedule sufficient unstructured time for the implementation of developmentally appropriate instruction, specialized training for early childhood teachers, and a school climate that will attract and retain early childhood professionals with advanced degrees and special expertise. Early childhood educators in South Carolina are confident that implementation of the Conditions of Learning approach has been effective in broadening the definition of quality. It takes a holistic view of children and offers a strategy for recording and interpreting quantitative data in a way that captures the diversity and uniqueness of individual children's educational experience. The long-term effectiveness of this evaluation plan will be demonstrated through the use of sophisticated tracking software that will make it possible to document young children's performance throughout their K-12 education. We are hopeful this approach will prove to be a worthy one that serves children, families, and teachers well.
Notes

Acknowledgment: With special thanks to Linda Mims, Assistant Professor of Early Childhood Education at the University of South Carolina Upstate and former Director, Office of Early Childhood Education at the South Carolina Department of Education, who reviewed this manuscript during its development. She added a valuable perspective, offered helpful recommendations, and contributed pertinent information that improved the final product.

¹The one exception was the inclusion of the Early Childhood Environment Rating Scale (ECERS) (Harms, Clifford, & Cryer, 1998). The state provided training for a cadre of early childhood experts to become reliable in using the ECERS in preparation for the inclusion of ECERS data in 2005 and subsequent Primary School Report Cards.

²This criterion was scheduled to be included in the evaluation formula beginning in 2004. It was eliminated because it was made redundant by NCLB regulations.
References

Ackerman, D. J. (2004). What do teachers need? Practitioners' perspectives on early childhood professional development. Journal of Early Childhood Teacher Education, 24(4), 291-301.

Association for Childhood Education International/Perrone, V. (1991). On standardized testing: An ACEI position paper. Retrieved June 27, 2007, from www.acei.org/onstandard.htm

Association for Childhood Education International/Solley, B. A. (2007). On standardized testing: An ACEI position paper. Retrieved March 2, 2008, from www.acei.org/testingpospap.pdf

Biddle, B. J., & Berliner, D. C. (2002). What research says about small classes and their effects. Education Policy Reports Project. Retrieved June 27, 2007, from www.asu.edu/educ/epsl/EPRP/Reports/EPRP-0202-101/EPRP-0202-101.htm

Bredekamp, S., & Copple, C. (Eds.). (1997). Developmentally appropriate practice in early childhood programs (Rev. ed.). Washington, DC: National Association for the Education of Young Children.

Bredekamp, S., & Willer, B. (Eds.). (1996). NAEYC accreditation: A decade of learning and the years ahead. Washington, DC: National Association for the Education of Young Children.

Bryant, D. M., Burchinal, M. R., Lau, L. B., & Sparling, J. J. (1994). Family and classroom correlates of Head Start children's developmental outcomes. Early Childhood Research Quarterly, 9(3/4), 289-309.

Burchinal, M. R., Roberts, J. E., Riggins, R., Zeisel, S. A., Neebe, E., & Bryant, D. (2000). Relating quality of center-based child care to early cognitive and language development longitudinally. Child Development, 71(2), 339-357.

Cost, Quality, & Child Outcomes Study Team. (1995). Cost, quality and child outcomes in child care centers: Public report. Denver, CO: Economics Department, University of Colorado at Denver.

Darling-Hammond, L. (2004). From "Separate but Equal" to "No Child Left Behind": The collision of new standards and old inequalities. In D. Meier & G. Wood (Eds.), Many children left behind: How the No Child Left Behind Act is damaging our children and our schools (pp. 3-32). Boston: Beacon Press.

Eldridge, D. (2001). Parent involvement: It's worth the effort. Young Children, 56(4), 65-69.

Epstein, J. (1995). School/family/community partnerships. San Francisco: Jossey-Bass.

Feeney, S., & Kipnis, K. (1989/1992/1997). Code of ethical conduct and statement of commitment. Washington, DC: National Association for the Education of Young Children.

Harms, T., Clifford, R., & Cryer, D. (1998). Early childhood environment rating scale, revised. New York: Teachers College Press.

Isenberg, J. P., & Jalongo, M. R. (Eds.). (1997). Major trends and issues in early childhood education: Challenges, controversies, and insights. New York: Teachers College Press.

Kamii, C. (Ed.). (1990). Achievement testing in the early grades: The games grown-ups play. Washington, DC: National Association for the Education of Young Children.

Literature Review on the Measurement of Successful Primary Schools. (2005). Prepared for the Division of Accountability, South Carolina Education Oversight Committee.

Meisels, S. J., Jablon, J. R., Marsden, D. B., Dichtelmiller, M. L., & Dorfman, A. B. (2001). The work sampling system. New York: Pearson Early Learning.

National Association for the Education of Young Children. (1990/1995). NAEYC's position statement on school readiness. Retrieved June 27, 2007, from www.naeyc.org/about/positions/PSREDY98.asp

National Association for the Education of Young Children, & National Association of Early Childhood Specialists in State Departments of Education. (2003). Early childhood curriculum, assessment, and program evaluation. Retrieved June 27, 2007, from www.naeyc.org/about/positions/pdf/pscape.pdf

National Association for the Education of Young Children. (2005). Code of ethical conduct and statement of commitment. Retrieved June 27, 2007, from www.naeyc.org/about/positions/PSETH05.asp

National Research Council. (2001). Eager to learn: Educating our preschoolers. Committee on Early Childhood Pedagogy. Washington, DC: National Academy Press.

Phillips, D. (1987). Quality in child care: What does research tell us? Washington, DC: National Association for the Education of Young Children.

Powell, D. R. (1998). Families and early childhood programs. Washington, DC: National Association for the Education of Young Children.

Scott-Little, C., & Niemeyer, J. (2002). Assessing kindergarten children: What school systems need to know. Greensboro, NC: SERVE Center at the University of North Carolina at Greensboro. Retrieved June 27, 2007, from www.serve.org/Syc/parents.htm

Shepard, L., Kagan, S. L., & Wurtz, E. (Eds.). (1998). Principles and recommendations for early childhood assessments. Washington, DC: National Education Goals Panel.

Snider, M. H., & Fu, V. R. (1990). The effects of specialized education and job experience on early childhood teachers' knowledge of developmentally appropriate practice. Early Childhood Research Quarterly, 5(1), 69-78.

South Carolina Education Oversight Committee. (2006a). 2006-2007 accountability manual: The 2006-2007 annual school and district system for South Carolina public schools and school districts. Retrieved June 27, 2007, from www.sceoc.com/resources.htm

South Carolina Education Oversight Committee. (2006b, January 23). Executive summary, review of primary school ratings criteria and recommendations for their revision. Columbia, SC: Author.

Southern Early Childhood Association. (1990/1996/2000). Assessing development and learning in young children. Retrieved June 27, 2007, from www.southernearlychildhood.org/position-assessment.html