Square pegs, round holes
Defending School-Based Assessment
by John McCollow
We are in the midst of a debate about a national curriculum in Australia. As noted by Reid (2005), the quest for national curriculum collaboration, let alone a national curriculum, is beset with a host of issues and concerns that need to be worked through. This article restricts itself to one issue in particular, highlighting the potential of moves to promote greater national consistency in curriculum and assessment to put at risk what has been a key feature of schooling in Queensland: school-based assessment. It will outline the virtues of school-based assessment (while acknowledging some limitations and problems) and describe how the push for national consistency in curriculum and assessment, if not handled appropriately, could have undesirable effects.

There are different possible models of curriculum consistency, not all of which entail curriculum uniformity and greater use of standardised, external testing at the expense of school-based assessment. Standardised testing regimes occur in countries with and without national curricula. Nevertheless, the particular context in which moves to national curriculum consistency in Australia are taking place is one of a federal government which shows little respect for the work of teachers; demonises “black arm band” approaches to history, critical literacy and “post-modernism”; and seeks to supplant them with an emphasis on traditional (white Anglo-Australian) values, reverence for the established canon and old-style teaching methodologies. It has also pressured the states to adopt forms of its simplistic “plain English” A-E report cards. Notwithstanding that the recent Australian Certificate of Education report (Masters et al., 2006) appears to reject this path, in such a context the possible use of standardised external testing to enforce national curriculum uniformity should not be discounted. Meisels et al. (2003, p. 2) note that in the United States “the nation’s embrace of national educational goals” has led to “a heightened emphasis on high-stakes, group administered, decontextualised testing practices”.

In Queensland, notwithstanding the use of external tests such as the Queensland Core Skills Test and the Year 3, 5 and 7 tests, teachers have in the main practised in an environment of school-based assessment since the early 1970s. Queensland teachers may not be aware of the extent to which external, standardised testing dominates the curriculum in other localities, particularly in the United States and United Kingdom. Even within Australia, Queensland is seen as a leader in school-based assessment, having used it longer and more extensively than other jurisdictions. It is perhaps worth revisiting the rationale for school-based assessment and reminding ourselves what it offers in contrast to alternative assessment schemes.

10 • QTU Professional Magazine October 2006

Standardised Testing Regimes

While it is possible to argue that standardised, external tests should not have any role in a school system assessment regime, that is not the argument being put here. Rather, the focus is on the negative effects of a standardised testing regime. As noted above, standardised external tests are used in Queensland. Queensland’s assessment regime, however, is not (at least not yet) a standardised testing regime. A standardised testing regime is one where “high stakes” standardised tests are used as the main tools for assessing student and school achievement or outcomes.

As the old saying goes, “teachers teach to the test”. In standardised testing regimes, the tests drive curriculum and classroom instruction in particular directions. A standardised testing regime pre-empts important debates about what should be taught and learned, and it inevitably narrows the range of knowledge and skills that are seen as important. Kohn (2001) argues that, in the United States, standardised tests “cannibalise the curriculum”, forcing schools to eliminate activities and even whole areas of the curriculum that aren’t covered in the tests. Further, the very nature of external tests influences the ways in which classroom learning is organised. Content is privileged over process, knowledge over critical judgement. Standardised testing regimes encourage rote memory drills, basic comprehension and multiple-choice exercises at the expense of activities that engage students’ higher order thinking skills. The use of group work, oral work or hands-on experimentation in classrooms decreases.

Standardised testing regimes are based on a view that the desirable outcomes of learning are readily identifiable (by policy makers) and that learning can be broken down into component elements which can be detached from their context and assessed. Teachers lose a good deal of their professional decision-making role as test contents are determined elsewhere and the capacity to tailor curriculum and pedagogy to local and individual needs is constrained. In fact, taking teachers out of the assessment picture is seen by standardised testing regime proponents as a positive feature.
They approach learning and assessment as separate matters. Putting aside the question of the epistemological legitimacy of such a stance, an implication that flows from it is that standardised assessment is as much an assessment of teachers and schools as it is of students. Thus it aims, on the one hand, to teacher-proof curriculum and assessment and, on the other, to rate teachers, schools and school systems on how well they are delivering.

As noted by Black and Wiliam (1998, p. 140), standardised testing regimes treat the classroom and what happens in it as a “black box”: no guidance is provided to teachers on how to improve pedagogy or curriculum, and no guidance is provided to students about how to maximise learning. In the worst cases, standardised testing regimes have fostered a “testing mania” in the name of accountability, whereby nearly every educational decision and outcome is evaluated in terms of results on standardised tests and plotted on league tables, which have very serious implications in terms of funding. In the United States this has led to a range of educationally undesirable and, in some cases, unethical practices (see Kohn, 2001, 2002; Amrein and Berliner, 2002; Meisels et al., 2003).

The separation of assessment and learning means, as noted by Skilbeck (1988, p. 30), that standardised tests tell us nothing about the learning process itself:

Testing, to the extent that it is valid and reliable – and that is quite a qualification in itself – tells us only what is the case. In itself, it gives us no clues as to how we might effect improvement … or even what an improvement would consist of.

School-Based Assessment

Whereas in standardised testing regimes there is commonly a disjuncture between assessment and the learning situation – with teachers given little role in determining the former – in school-based assessment teachers have a central role in developing assessment and linking it to learning.

In school-based assessment, teachers have the capacity to adopt a range of formative and summative assessment strategies and instruments to meet the specific needs of students and to enhance learning. It provides the opportunity for assessment activities that involve students in cooperative work, in the preparation of portfolios, in oral presentations, performances, practical demonstrations, and real-life activities such as community work and environmental and vocational projects. Importantly, it provides a far greater possibility than standardised testing for teachers to construct assessments that engage students’ higher-order thinking skills.

School-based assessment takes as a premise that the major focus for improving learning is the work undertaken by teachers and students in the classroom. Because teachers have access to a range and depth of assessment activities and because assessment is directly linked to the learning, it can foster improved pedagogical and curricular practice. For school-based assessment to work well, teachers must focus on the nature of learning and the ways in which assessment, curriculum and pedagogy interact. Unlike standardised testing regimes, which sideline teachers from the important assessment decisions, in school-based assessment teachers are treated as professionals with a central role to play. School-based assessment fosters contextualised judgements about student achievement and allows diagnostic feedback to be provided to students, including information on how they might do better. When done well it encourages students to plan their learning, assess the nature of the assessment tasks to be performed, develop learning strategies, and self-evaluate their performance. An important feature of school-based assessment is the capacity to regularly update information so that, in a phrase familiar to Queensland senior secondary teachers, “the latest and fullest information” is used in assessing students’ levels of performance.
Conditions for Success

Writers have identified a number of conditions that need to be met if the potential benefits of school-based assessment are to be realised. These go to two key concerns. First, concerns have been expressed about the reliability and comparability of teacher judgements made in school-based assessments. (Reliability and comparability are often seen, of course, as the strong suits of standardised, external testing.) Second, the Queensland School Reform Longitudinal Study (SRLS) (Lingard and Ladwig, 2001) and the New Basics Research Report (Education Queensland, 2004) expressed concerns that the potential of school-based assessment to deliver intellectually challenging and relevant learning experiences to all students often goes unrealised.

In terms of reliability, Sadler (1986) has identified three conditions which need to be met to ensure reliable judgements are made in a school-based assessment regime:
• time for teachers to make judgements and reflect on them;
• internal moderation;
• external moderation.

In Queensland, these conditions have been best addressed in relation to Year 11-12 subjects. Schools are seen to be responsible for the first two conditions, and procedures overseen by the Queensland Studies Authority address the last. The New Basics trial and the Queensland Curriculum, Assessment and Reporting (QCAR) Framework (DEA, 2005) are two recent developments that have sought to address these conditions for Years 1-10. A study by Masters and McBride (1994) confirmed that the processes used in Queensland in relation to Years 11-12 produced reliable and comparable assessment results. A key test for the QCAR Framework will be whether it is able to put in place processes that deliver greater reliability and comparability in Years 1-10.

It should be noted that, even in Years 11-12, the degree of system support and resourcing required has been less than optimal, putting great pressure on teachers in terms of time and workload. In order to exercise reflective judgements or to engage in internal moderation of school assessments, for example, teachers are often forced to contribute a good deal of their own time. When the moderation process works well, however, it not only delivers reliable assessment results but serves as a valuable professional development activity, allowing teachers to increase their understanding of assessment and improve their classroom practice.

The SRLS and New Basics research can be said to demonstrate that school-based assessment is not a sufficient condition to ensure a high level of intellectual challenge, relevance and connectedness between assessment and curriculum and classroom practice. There is strong evidence, however, that it is a necessary condition (see, for example, Black and Wiliam, 1998). Indeed, unlike some of the proponents of national curriculum consistency, the SRLS and New Basics research can be seen as stressing the need to support and build on school-based assessment rather than curtail it.

Among the important conditions that could contribute to full realisation of the potential of school-based assessment are:
• a specific, systemic focus on aligning curriculum, assessment and reporting;
• identification of what are considered to be the essential learnings to be taught;
• specification of criteria and standards in relation to these learnings;
• support for the ongoing development of the professional capacities of teachers.

The strength of school-based assessment in Years 11-12 in Queensland is linked to the fact that measures are in place to realise each of these conditions. Participation in the moderation system, for example, besides addressing issues of reliability, also provides teachers with opportunities to build their professional capacities, providing professional development and the chance to engage in professional conversations with colleagues. In relation to Years 1-10, it is pleasing to see that both the New Basics project and the QCAR Framework identify the need to address these conditions. Again, however, it needs to be acknowledged that, even in Years 11-12, the level of system support and resourcing has left much to be desired, in particular in relation to providing opportunities for all teachers to develop their professional capacities.
Conclusion

There is no necessary link between moves to greater curriculum consistency and cooperation and regimes of standardised, external testing. It is not “consistency” per se that poses a threat to school-based assessment and, through it, to a quality school curriculum. But there are key questions that must be asked about the current moves to ensure national curriculum consistency. These include: “consistency in what aspects of the curriculum?”, “consistency to what purposes?” and “how will consistency be ensured?”.

As noted by Reid (ACSA, 2006, pp. 3-4), some of the arguments advanced so far for national curriculum consistency have been extremely superficial (relying excessively on the railway gauge analogy):

… [W]e have till now … failed to develop a rigorous rationale for national curriculum collaboration … we have failed to articulate a coherent view of curriculum … we need to consider whether some of our developing policies and strategies are based on research and, if so, on the quality of that research … Finally, we have failed to develop or articulate a view of curriculum change.

It is clear that some advocates for a national curriculum are using railway gauge arguments as a Trojan horse to secure even greater levels of standardised, external testing – with all the implications that has for making the curriculum shallower and narrower. But a descent into a curriculum that is driven by standardised tests could occur even without the efforts of these advocates – almost by default – if the issues raised by Reid are not taken seriously.

Finally, a question about priorities is in order. Should the pursuit of consistency be put ahead of the pursuit of better educational outcomes for students? If quality issues should be given precedence over consistency issues, then the focus in Queensland should be on facilitating the conditions that maximise the potential of school-based assessment.
Dr John McCollow is currently Acting Federal Research Officer with the Australian Education Union. His substantive position is Research Officer with the Queensland Teachers’ Union. Previously he has taught English in secondary schools, lectured on education policy at university, and worked with the former Board of Senior Secondary School Studies.
References

Amrein, A.L. and Berliner, D.C. (2002) “High-Stakes Testing, Uncertainty, and Student Learning”, Education Policy Analysis Archives, 10 (18), March 28, http://epaa.asu.edu/epaa/v10n18/, accessed 29 May 2006.

Australian Curriculum Studies Association (ACSA) (2006) “National Approaches to Curriculum Forum: Forum Report”, http://www.acsainc.com.au/content/forum_report_final.pdf, accessed 2 June 2006.

Black, P. and Wiliam, D. (1998) “Inside the Black Box: Raising Standards Through Classroom Assessment”, Phi Delta Kappan, October, pp. 139-148.

Department of Education and the Arts (DEA) (2005) Queensland Curriculum, Assessment and Reporting Framework, Queensland Government, http://education.qld.gov.au/qcar/pdfs/qcar_white_paper.pdf, accessed 2 June 2006.

Education Queensland (2004) “The New Basics Research Report: Synthesis of the Research”, http://education.qld.gov.au/corporate/newbasics/pdfs/2_synths.pdf, accessed 2 June 2006.

Freebody, P. (2005) Background, Rationale and Specifications: Queensland Curriculum, Assessment and Reporting Framework, Department of Education and the Arts, Queensland Government, http://education.qld.gov.au/qcar/pdfs/expert_paper.pdf, accessed 2 June 2006.

Kohn, A. (2000) “Standardised Testing and its Victims”, Education Week, September 27, http://www.alfiekohn.org/teaching/edweek/staiv.htm, accessed 2 June 2006.

Kohn, A. (2001) “Fighting the Tests: A Practical Guide to Rescuing Our Schools”, Phi Delta Kappan, January, http://www.alfiekohn.org/teaching/ftt.htm, accessed 2 June 2006.

Kohn, A. (2002) “The Worst Kind of Cheating”, Streamlined Seminar, Winter, 21 (1), http://www.alfiekohn.org/teaching/cheating.htm, accessed 2 June 2006.

Lingard, B. and Ladwig, J. (2001) Queensland School Reform Longitudinal Study: Final Report, Brisbane: Education Queensland.

Masters, G., Forster, M., Matters, G. and Tognolini, J. (2006) Australian Certificate of Education: Exploring a Way Forward, Canberra: Australian Council for Educational Research, Commonwealth Department of Education, Science and Training.

Masters, G.N. and McBride, B. (1994) An Investigation of the Comparability of Teachers’ Assessment of Student Folios, Brisbane: Queensland Tertiary Entrance Procedures Authority.

McCollow, J. and McFarlane, L. (1993) “Student Assessment for System Accountability: Towards an Appropriate Model”, Queensland Teachers’ Union Professional Magazine, November, 11 (2), pp. 1-8.

Meisels, S.J., Atkins-Burnett, S., Xue, Y., Bickel, D.D. and Son, H. (2003) “Creating a System of Accountability: The Impact of Instructional Assessment in Elementary Children’s Achievement Test Scores”, Education Policy Analysis Archives, 11 (9), February 28, http://epaa.asu.edu/epaa/v11n9/, accessed 29 May 2006.

Pitman, J., O’Brien, J. and McCollow, J. (1999) “High-Quality Assessment: We Are What We Believe and Do”, Brisbane: Queensland Board of Senior Secondary School Studies.

Reid, A. (2005) “Rethinking National Curriculum Collaboration: Towards an Australian Curriculum”, Department of Education, Science and Training, http://www.dest.gov.au/sectors/school_education/publications_resources/profiles/rethinking_national_curriculum_collaboration.htm, accessed 30 May 2006.

Sadler, D.R. (1986) Subjectivity, Objectivity and Teachers’ Qualitative Judgements, Brisbane: Queensland Board of Senior Secondary School Studies.

Skilbeck, M. (1988) “Broaden Basic Understanding”, The Age, 26 April, p. 30.

Stiggins, R. and Chappuis, J. (2006) “What a Difference a Word Makes: Assessment FOR Learning Rather than Assessment OF Learning Helps Students Succeed”, JSD, 27 (1), http://www.nsdc.org/library/publications/jsd/stiggins271.pdf, accessed 2 June 2006.