Assessment in Different Dimensions A conference on teaching and learning in tertiary education 19—20 November 2009 at RMIT University, Melbourne
Conference Papers
ATN Assessment Conference 2009
ATN Assessment Conference 2009: Assessment in Different Dimensions
Conference Proceedings
ISBN 978-0-646-52421-4
Editors: John Milton, Cathy Hall, Josephine Lang, Garry Allan and Milton Nomikoudis
Published by: Learning and Teaching Unit, RMIT University, November 2009

The ATN Assessment Conference 2009 is a conference on assessment in tertiary education hosted by RMIT University for the Australian Technology Network of universities with the support of the Australian Learning and Teaching Council. The conference is being held at Storey Hall, RMIT University, Melbourne on 19th and 20th November 2009.

Information about this publication: http://emedia.rmit.edu.au/atnassessment09/

The theme, Assessment in Different Dimensions, encompasses:
Assessing with technologies (AwT)
Assessing authentically (AA)
Feedback, moderation and quality (FMQ)
Assessing in the disciplines (AiD)
Refereed Papers

Assessing online collaboratories: a peer review of teaching and learning. Theresa Dirndorfer Anderson, Nicola Parker, Jo McKenzie (AwT), p. 7
Improving student satisfaction with feedback by engaging them in self-assessment and reflection. Iouri Belski (FMQ), p. 17
“Measuring up”? Students, disability and assessment in the university. Judith Bessant (FMQ), p. 28
The affective domain: beyond simply knowing. David Birbeck, Kate Andre (AA), p. 40
Feedback across the disciplines: observations and ideas for improving student learning. Julian Bondy, Neil McCallum (FMQ), p. 48
A generic assessment framework for unit consistency in agricultural science. Tina Botwright Acuña (AiD), p. 57
Assessment of interprofessional competencies for health professional students in fieldwork education placements. Margo Brewer, Nigel Gribble, Peter Robinson, Amanda Lloyd, Sue White (AiD), p. 66
Feedback: working from the student perspective. Kylie Budge, Sathiyavani Gopal (FMQ), p. 74
Authentic voices: collaborating with students in refining assessment practices. Sue Burkill, Liz Dunne, Tom Filer, Roos Zandstra (AA), p. 84
Does the summative assessment of real world learning using criterion-referenced assessment need to be discipline specific? Kelley Burton (AiD), p. 94
Using a distributive leadership strategy to improve the quality of assessment across a university: initial results of the project. Moira Cordiner, Natalie Brown (FMQ), p. 104
Are confidence and willingness the keys to the assessment of graduate attributes? Barbara de la Harpe, Christina David, Helen Dalton, Jan Thomas (AiD), p. 111
Integrating digital technologies into student assessment and feedback: how easy is it? Barbara de la Harpe, Thembi Mason, Ian Wong, Fiona Harrisson, Denise Sprynskyj, Craig Douglas (AiD), p. 119
Online role-plays as authentic assessment: five models to teach professional interventions. Kathy Douglas, Belinda Johnson (AA), p. 128
An approach to student-lecturer collaboration in the design of assessment criteria and standards schemes. Vincent Geiger, Rachael Jacobs, Janeen Lamb, Judith Mulholland (FMQ), p. 137
Refining assessment practice in the social sciences. Jennifer Gore, Wendy Amosa, Tom Griffiths, Robert Parkes, Hywel Ellis (AA), p. 146
The good, the bad, the ugly: students’ evaluation of the introduction of allocating individual marks to group work assessment. Jan Grajczonek (FMQ), p. 156
Perceptions of technologies in the assessment of foreign languages. Paul Gruba, Laura Cherubin, Kathryn Eastcourt, Lay-Chenchabi, Henry Mera, Monica Claros (AiD), p. 168
Improving feedback in large classes: application of task evaluation and reflection instrument for student self-assessment (TERISSA) in a unit on business statistics. Jennifer Harlim, Ashton de Silva, Iouri Belski (FMQ), p. 179
Developing assessment standards: a distributed leadership approach. Sandra Jones, Josephine Lang (FMQ), p. 194
Harnessing assessment and feedback in the first year to support learning success, engagement and retention. Sally Kift, Kim Moody (FMQ), p. 204
Bimodality: using assessment tasks to identify and monitor key troublesome concepts. Peter Kipka (FMQ), p. 216
E-learning and role-plays online: assessment options. Siew Fang Law, Sandra Jones, Kathy Douglas, Clare Coburn (AwT), p. 225
The development of moderation across the institution: a comparison of two approaches. Kathryn Lawson, Jon Yorke (FMQ), p. 236
Exploring the use of digital textual, visual and audio feedback in design studio. Scott Mayson, Barbara de la Harpe, Thembi Mason (AiD), p. 244
Poster presentations: authentic assessment of work integrated learning. Judith McNamara, Ingrid Larkin, Amanda Beatson (AA), p. 253
Integrating e-portfolio into an undergraduate nursing course: an evolving story. Robyn Nash, Sandy Sacre (AA), p. 263
Supporting the learning of self and peer assessment in groupwork. Ryszard Raban, Andrew Litchfield (AwT), p. 271
The role of industry supervisors in providing feedback to students as part of the assessment process in work integrated learning (WIL). Joan Richardson, Beverley Jackling, Friederika Kaider, Kathy Henschke, Mary Paulette Kelly, Irene Tempone (FMQ), p. 282
Improving the feedback mechanism and student learning through a self-assessment activity. Paul Sendziuk (FMQ), p. 293
A review of the status of online, semi-automated marking and feedback systems. Mark Shortis, Steven Burrows (AwT), p. 302
Predictors of the groupwork experience: generic skill development, peer appraisals, and country of residence. Stephen Teo, Adam Morgan, Peter Kandlbinder, Karen Wang, Anurag Hingorani (FMQ), p. 313
Embedding generic skills means assessing generic skills. Theda Thomas, Peter Petocz, Brendan Rigby, Marilyn Clark-Murphy, Anne Daly, Peter Dixon, Marie Kavanagh, Nicole Lees, Lynne Leveson, Leigh Wood (AiD), p. 321
Creating change in traditional assessment strategies in building and construction using point of vision e-technology. Elise Toomey, Patricia McLaughlin, Anthony Mills (AwT), p. 331
Facilitating formative feedback: an undervalued dimension of assessing doctoral students’ learning. Henriette van Rensburg, Patrick Danaher (FMQ), p. 341
Assessment for learning: using minor assessment to promote major learning. Keith Willey, Anne Gardner (AwT), p. 352
Validating attributes based curriculum: giving voice to our students to enhance assessment and learning. Dallas Wingrove, Anthony Mills (AiD), p. 363
A scaffolded approach to developing students’ skills and confidence to participate in self and peer assessment. Denise Wood (AwT), p. 374
Acknowledgements

Conference Convenor: John Milton, Senior Advisor, Policy and Program, Learning and Teaching Unit, RMIT University

Conference Committee: John Milton (Conference Convenor), Sarah Lausberg (Project Manager), Margaret Blackburn, Cathy Hall, Sally Jones, Josephine Lang, Amgad Louka, Gregory Plumb, Felicity Prentice, Diana Quinn

Other key roles: Shiralee Saul, Darren Smith, Josie Ryan, Andrew Buntine, Garry Allan, Milton Nomikoudis

Other contributors: Jody Fenn, Cassy Roberts, Anne Lennox, Jacinth Nolan, Louise Handran, Lara Morcombe, Kate Ebbott

ATN Teaching and Learning Group: Beverley Oliver (Curtin University), Margaret Hicks (University of South Australia), Amgad Louka (RMIT University), Jo McKenzie (University of Technology Sydney), Deborah Southwell (Queensland University of Technology)

Special thanks

Dr Diana Quinn, University of South Australia. A conference of this kind cannot be a success without drawing on others’ excellent practices and experiences. The Conference Committee particularly acknowledges the generosity and expertise of Dr Diana Quinn of the University of South Australia, Convenor of the 2008 ATN Assessment Conference. Dr Quinn and her ATNA08 Conference Committee are acknowledged for their permission to customise and use materials developed for the 2008 ATN Assessment Conference, including guidelines for authors and reviewers.

Associate Professor Peter Hutchings and the ALTC are thanked for their kind support, both actively with the organisation of particular aspects of the conference and through ALTC sponsorship of our international keynote.
The review process

Full papers accepted for publishing in the Conference Proceedings have undergone a double-blind peer review process, with de-identified feedback and suggestions for revisions provided to authors. The Conference Committee gratefully acknowledges the generous work of the reviewers, who all provided constructive and invaluable feedback to ensure the high standard of published papers.

Reviewers: Lynne Badger, Lynne Barnes, Stephanie Beames, Lorraine Bennett, David Birbeck, Julian Bondy, Natalie Brown, Ric Canale, Helen Carter, Andrea Chester, Catherine Clarke, Moira Cordiner, Caroline Cottman, Keith Cowlishaw, Brenton Dansie, Anne Darroch, Melissa Davis, Martin Dick, Peter Donnan, Eveline Fallshaw, Heather Fehring, Sonia Ferns, Helen Flavell, Melanie Fleming, Anne Gardner, Philippa Gerbic, Sara Hammer, Andrew Higgins, Simon Housego, Katie Hughes, Henk Huijser, Kerry Hunter, Sandra Jones, Sue Jones, Martyn Jones, Peter Kandlbinder, Megan Kek, Lila Kemlo, Sharron King, Gloria Latham, Romy Lawson, Linda Leach, Betty Leask, Theresa Lyford, Judith Lyons, Alasdair McAndrew, Coralie McCormack, Julie Mills, Karen Nelson, Matthew Oates, Beverley Oliver, Phoebe Palmieri, Kate Patrick, Deborah Peach, Amanda Pearce, Rob Phillips, Felicity Prentice, Malcolm Pumpa, Marilyn Richardson-Tench, Gayani Samarawickrema, Michael Sankey, Geoff Shacklock, Helen Smith, Heather Sparrow, Gordon Suddaby, Darrall Thompson, Hans Tilstra, Anne Venables, Dale Wache, Alexandra Wake, Kate Westberg, Keith Willey, Denise Wood, Carolyn Woodley, Jon Yorke, Nick Zepke
Disclaimer

The papers published in these Proceedings have been reviewed, edited and proof-read to the best of our ability within the timelines permitted. We acknowledge that there may be outstanding proofing errors.

© Individual authors of the ATN Assessment Conference 2009: Assessment in Different Dimensions, 19th - 20th November 2009
Assessing online collaboratories: a peer review of teaching & learning

Theresa Dirndorfer Anderson
Creative Practices, Faculty of Arts and Social Sciences, University of Technology, Sydney,
[email protected]
Nicola Parker Institute for Interactive Media and Learning, University of Technology, Sydney,
[email protected]
Jo McKenzie Institute for Interactive Media and Learning, University of Technology, Sydney,
[email protected]
This paper presents action research informed by Peer Reviews of innovative assessment in a ‘fully blended’ undergraduate Communications subject. The assessments, the teachers’ intentions for student learning, and the process and outcomes of two rounds of review are discussed. Assessment is the crux of a subject for students and teachers, and the paper shows how ‘conversations about teaching’ as part of a Peer Review process can enhance assessment. The assessment that was the focus of the review involves collaboratories in which students use wikis for collaborative knowledge production about emerging technologies. Peer Reviews focused on the strategies used to encourage greater student-directed and managed participation in the construction of the wikis and associated student-moderated online discussions. The first round identified ways that the assessment criteria could be more specific and distinct in relation to the subject’s themes and practices. The second round specifically focused on the assignments that flowed from the collaboratories. One motivation for this teacher to engage in the project was the need to make the assessment more sustainable. This issue was confirmed, and ways of improving the sustainability of the assessment process were explored as part of the second round of review. The Peer Reviews are part of an Australian Learning and Teaching Council project across the five ATN universities. The paper discusses the Peer Review process and ways in which its outcomes are being applied to shape meaningful assessment and engage students more explicitly in self- and peer-assessment of their collaboration and online activity. It also demonstrates the importance of conversations with colleagues about assessment, particularly in blended learning environments, and invites discussion about the assessment of learning within the structured space of a Peer Review.

Keywords: blended learning; online assessment; peer review of teaching; wikis.
Introduction

This paper examines innovative assessment through the lens of Peer Review (PR) using a case study of a ‘blended’ subject at an Australian university. The case study highlights the value of PR of teaching for developing assessment in blended learning environments (BLE). Assessment is a fulcrum for student engagement and dominates students’ experiences of learning, as well as consuming a large proportion of teaching time. It is therefore a key element in the improvement of learning and teaching practices. This case study shows how assessment quality can be improved in a BLE by providing a framework for the process of formative and summative feedback for teachers. It provides an example of how articulating what is unique about teaching and assessment in a BLE through PR can improve the work of even innovative teachers. The specific context for the case study is a subject where both the students’ collaborative knowledge production activities and two of the three assignments involve extensive online work. The paper first looks at the background of PR and assessment, and then briefly outlines the ALTC project that this case study was a small part of. The case study subject is introduced, the process of the PR of assessment is described from the reviewee’s and then the reviewer’s perspectives, and this is followed by discussion of some implications and conclusions.
Peer review of teaching and assessment: background

Peer Review is a process of making scholarly judgements about the quality of learning and teaching, and of focusing on scholarly professional learning. Many teachers are seeking more formative feedback to improve their practices, and are developing PR evidence to improve individual teaching practice in a scholarly way; PR is a useful complement to the information that can be provided by students (Alexander & Golja, 2007). Because PR can inform our teaching as well as provide evidence about our teaching to others, it can ultimately be used for both formative and summative purposes. Peer observation (Bell, 2005) has been widely used for face-to-face teaching, and many resources have been developed for this and for review of teaching, or course, portfolios (Bernstein, Burnett, Goodburn, & Savory, 2006). In the online environment, PR has been used for learning objects (see Taylor & Richardson, 2001) and for online courses and course materials (e.g. Wood & George, 2003). Less has been developed for online and blended learning environments, where PR of teaching presents particular opportunities and challenges (Bennett & Santy, 2009; Wood & Friedel, 2009).

Many subjects in Australian universities are now delivered in ‘blended mode’ (Swinglehurst, Russell, & Greenhalgh, 2007). Bennett & Barp (2008) have argued the need for online PR, which they also call ‘peer observation’. They have noted the ‘dearth’ of literature in this area and outlined the challenges this presents: “much remains to be explored, researched and documented as to how, and how far, ‘onlineness’ impacts on the peer observation process, the experience and the benefits for participants. The evidence is that distinct strategies, processes and models are probably needed to provide guidance for transferring peer observation online” (p. 564). In contrast to peer observation in a face-to-face environment, Swinglehurst et al. (2007) point out two aspects of online learning and teaching of interest for PR: the record and nature of the interactions taking place. Bennett & Barp (2008) note that teachers and students are repositioned online, in terms of both time and place, and this can be a challenge for reviewers: “Even with clear guidance on where to look and what to focus on, online-ness affects what you can ‘see’, how easily you can understand what is going on, and potentially presents ‘more’ for you to observe” (p. 567). They found disagreement about whether the online learning environment meant that more was captured and observable online, or less, but point out that the scope of the review is changed considerably. This can increase the reviewer’s expectations about what they should consider, so that the review becomes overwhelming (Bennett & Barp, 2008).

Assessment is well recognised as the driving force in most students’ learning, and for many defines their entire learning experience (Biggs, 2003; Ramsden, 2003). In fact, for most students the assessment is the subject: “Assessment defines what students regard as important, how they spend their time, and how they come to see themselves as students and then as graduates” (Brown & Knight, 1994, p. 12). Although Polifroni (2008) includes the “quality of assessment measures used” along with the “achievement of defined learning outcomes” (p. 96), and Kell & Annetts (2009) refer in passing to “formative feedback on assignments” (p. 68), assessment has been surprisingly absent from the PR literature, perhaps reflecting an emphasis on the face-to-face aspect of reviews.
As Bennett & Barp (2008) have noted, “both the implementation and exploration of online peer observation are still in their infancy and a wide range of aspects remain to be investigated” (p. 568), including the key aspect of assessment in learning and teaching. This case study is an example of how useful PR of assessment can be. The next section briefly outlines the project that provides the context for the case study reported here.
Embedding peer review in blended learning environments: an ATN teams project

The PR process that is described here has been developed as part of a collaborative Australian Learning and Teaching Council project across the five Australian Technology Network universities (McKenzie et al., 2008). The project is a two-year initiative which has aimed to:
1. Create, trial & evaluate processes and resources to support scholarly PR of teaching and learning in blended learning environments
2. Enable the use of PR both for formative feedback and improvement, and for recognition and reward.

The project has used a co-productive, action-research approach (Kember, 2000) involving teams of six academics at each partner university in the development and trialling of PR frameworks, protocols and resources. Institutional team members were sought across a range of disciplines and blended learning contexts, from entirely online through to mostly face-to-face with some online support. The action-research cycles have involved team members engaging in reciprocal PR of aspects of teaching in BLEs, to develop, trial and refine a common framework and protocols.

The project’s framework and the structure of the PR process presented here were built by integrating information from:
- the qualities of scholarly work (Glassick, Huber & Maeroff, 1997)
- promotions criteria and related teaching descriptions in the five partner universities
- the literature on good teaching (Biggs & Tang, 2007; Ramsden, 2003).
They were also informed by:
- the literature on learning in electronic or blended learning environments (Boud, 2002; Laurillard, 2002)
- the PR and peer observation literatures (e.g. Bernstein et al., 2006; Van Note Chism & Chism, 2007; Bell, 2005), and
- more recent work on PR in BLEs (Bennett & Barp, 2008; Bennett & Santy, 2009; Swinglehurst et al., 2007).
These starting points were combined with the iterative feedback from the PR teams to modify the framework and protocols. The Framework Categories for reviewing learning and teaching that were developed in this process are:
1. Clear Goals: for students’ learning and the design of the learning environment
2. Current & Relevant Preparation: includes consideration of content, processes and student needs that are informed by scholarship
3. Appropriate Methods and Implementation: thoughtfully chosen, applied effectively and modified in response to students’ feedback
4. Effective Communication: with students, teaching team and other colleagues
5. Important Outcomes: student learning and engagement, other intended and unintended outcomes, possible scholarly presentations or publication
6. Reflective Critique: including use of feedback and reflection for improvement (modified from Glassick et al., 1997).

As initial resources were refined, further action research cycles have been used to develop guidelines, briefings and resources for staff involved in recognition and reward processes, including academic promotion (see also Wood & Friedel, 2009). Developing a process that accommodates diversity by using core qualities of good teaching that apply across all contexts has been important and, as far as practicable, we have sought to include the preparation, processes and outcomes of teaching and learning (see Biggs’s 2003 3P model) in the reviews.
Peer review of assessment in a blended approach to teaching: a case study

This section introduces the case study of assessment that is the focus of the paper. The subject (Social Informatics) is part of the core in an undergraduate degree in information and media within a B.A. Communications program. It is also an elective for students in other parts of the university and is a moderate-sized subject with a mixed cohort of undergraduate and postgraduate students. The aim of this subject is to introduce students to the principles of knowledge construction in various socio-technical contexts. Emerging technologies are both the subject matter and the teaching tools. A hybrid learning environment (i.e. a combination of online and face-to-face activities) provides individual and collaborative opportunities for experiencing and analysing the interplay between people and technology. Students use alternative ways of
working with technologies to engage in interactive, constructive learning and collaborative activities, integrating creative and analytical skills with academic and personal experiences. There have been two layers of PR completed and documented to date in this case study subject. The first focused on the strategies and assessment criteria used to encourage greater student-directed and managed participation in the online wikis that were part of an assignment. The second round of review of the same subject, by the same reviewer, specifically focused on the marking of the individual assignments that flowed from the online group work and class discussions. All three assessments in the subject are designed to interlink, with two of them being specifically connected, and both involving critical examination of the socio-technical challenges associated with six different emerging technologies (for this cohort the technologies included humanoid robots, intelligent agents, immersive environments, wearable computing, mobisodes, and mashups). Both assignments examine the evolution of these technologies, the complexities associated with their adoption by various sectors of society and the interplay between people and a technology within various social contexts. One is group-based, the other an individual essay. The group assignment ran for 10 weeks and involved working as part of a collaboratory to construct a wiki about one of six emerging technologies. Each team was also responsible for leading a two-week class-wide discussion in an online forum, and the PR in round one focused on these discussion activities. Teams were given flexibility to craft their wiki according to their collective talents, though basic guidelines and scaffolding provided a starting point. Successful wiki development called for skill sharing and utilisation of the special talents (both technical and inter-personal) of individual members of each collaboratory. The content of each wiki and discussion forum became the starting point for the final individual assignment: a critical evaluation of one of the six emerging technologies. Accompanying their wiki, each student submitted a reflective report about their individual experience with collaboration and lessons learned about the conditions for effective collaboration and communication in both environments. This was the focus of the second round of the PR.
The review of this subject

The PRs were conducted according to the four-step process developed in the project (see above). The importance of each step in the process has been emphasised by the teacher in this case study and by participants in the larger project. The teacher of this subject wanted to engage in PR to continue improving online and face-to-face experiences and to empower her students. She also wanted to make the subject sustainable by reducing students’ dependency on her as the teacher. The type of PR outlined here provided a process and methodology for the teacher to develop external documentation and validation about the value of the teaching and learning in a subject that had been the focus of her learning and teaching action research for five years. The reviewer was a disciplinary colleague with a research background in student learning, who had been part of the teaching team in prior iterations of the subject. The review in this case involved ‘visiting’ the online Learning Management System site for the subject, and evaluating the group work wiki content and discussion, as well as looking at a range of written assignments and grades linked to these tasks. The subject documentation, lecture materials and tutorial handouts for students were also reviewed. The steps in this PR process are described below using the case study. The examples provided below are from the second round of PR, but this was both inspired and informed by the first round.

1. Pre-review briefing
Establishing this important foundation for the review was done at an hour-long face-to-face meeting based on the Briefing Template. This provided a structure for:
- discussion of the reviewee’s desired goals, aspects of teaching to be reviewed and areas of focus for the review
- developing the reviewer’s understanding of the context.
The teacher in this case study stated that her goals were: I am interested in learning how well the design of the second assignment is suited to my stated goals for the assignments – and the subject overall…how effective the current
assessment criteria for the 2nd assignment (especially those related to participation and group activity) are for assessing and rewarding “useful” presence in the activities being assessed...to effectively communicate to students what is being valued in the assignments…

The teacher went on to note, when outlining the focus for the review, that:

…one of the concerns I have is privileging the quantifiable evidence available for assessing online activity over qualitative elements which are often more challenging to examine for ‘evidence’ when assessing an assignment against the established criteria.

An additional short meeting took place once the reviewer had initially looked at what was going to be reviewed (Discussion Board, wikis and a sample range of graded assignments) in light of the Briefing. Because completing the Template and considering the Framework Criteria requires the teacher to analyse in some detail what they actually want to find out as a result of the review, this has been found to be an invaluable part of this PR process.

2. The ‘review’
The review was carried out using the framework criteria (see above). The comments in this case were extensive and were discussed and added to in an iterative process of the reviewer asking questions of the reviewee and then completing further aspects of the framework. This took place over the course of a few weeks. One example of comments by the reviewer for the ‘Clear Goals’ criterion illustrates how positive aspects and critical elements are intertwined in the reviewer’s comments:

…There is an impressive cohesion of the goals for student learning, rationale for a blended learning environment and the way that the ‘collaboratories’, Assignment 2 and the tutorial activities have been designed…

This is contrasted with:

…I’m not sure about how clear the students would be about the goals (in their full depth) or if they actually need to be? Perhaps knowing what the objectives and assessment criteria are for assignment two is enough?...

3. Debriefing
The debriefing meeting is an important formative opportunity which:
- enables the reviewee to reflect
- provides a space for the reviewer to offer supportive and constructive feedback
- allows the reviewer and reviewee to discuss suggestions collaboratively.
In this case study there was an hour-long face-to-face Debriefing Meeting, and this was a valuable opportunity to discuss what the reviewer had discovered and for the teacher to further clarify aspects under review. The optional documenting of the debriefing ‘conversation’ allowed the reviewer to fill in questions she had had during the review. Further ‘conversations’ between the reviewer and reviewee followed as required, via email or phone, to clarify additional points prior to reporting.

4. Reporting
The reporting options that have been developed in the project include a Full Report (Briefing Template; Teacher’s response to Framework Criteria; Reviewer’s full responses for Framework Criteria) and a two-page Summary Report. In this case study the Summary Report (based on the two full reviews) illustrates how even an extensive review process can be condensed into a short written summary. It could be presented as summative evidence for future applications for performance reviews, promotion, teaching awards, etc. This teacher has already used her Summary Report for performance development meetings with her academic supervisor and intends to use it for an application for a teaching award. Figure 1 shows an extract from a Summary Report, with summarised comments for three of the criteria.
1. Clear Goals: For students’ learning and for the design of the learning environment
Peer Reviewer’s Comments: Clear intentions and extremely thoughtful design of the face-to-face and online learning environments creates innovative and exemplary learning activities for students. However, this is not always reflected in the documentation and written messages the students are getting about what path to follow and what to prioritise? (NB Changes to faculty procedures for subject documentation...being addressed for 2009).

3. Appropriate Methods and Implementation: Thoughtfully chosen, considering the students, subject, context and available resources; also applied effectively, modified in response to students' ideas and understandings, to feedback and to changing situations
Peer Reviewer’s Comments: Methods employed have been honed over several years of reflective teaching practice, modified in response to students' ideas and are of international interest…Some of the blended nature of the subject has evolved from teaching while travelling to international conferences into a mode of teaching that is a successful and ‘fully blended’ experience for students. It is the way the collaboratories specifically target students’ creativity that makes them a highly successful and innovative teaching and learning process for students.

5. Important Outcomes: Strongly focused on student learning, and then achievement of additional intentions. Further outcomes may include scholarly communication of teaching.
Peer Reviewer’s Comments: Powerful student learning occurs in this subject through student engagement in the teamwork of their collaboratories and the content. This is evident through the complex and rich learning environments the students create for their peers and the discussions they facilitate and engage with about emerging technologies, resulting in a creative learning resource for the whole group including the teacher! These resources become a platform for the whole class’s final essay assignments.

Figure 1. Summary review extract of assessment in a blended learning environment (criteria for good teaching and peer reviewer’s comments)
An interesting outcome of this particular review is that the innovative nature of the teaching of this subject, which has had to be defended by this teacher at times, can be highlighted in a detailed way that links teaching to scholarly processes through PR. For example, the reviewer commented that:

The broader disciplinary and professional context is not singular or unified and some of the subject matter and teaching methods might be ‘beyond’ the mainstream profession/discipline – this is a compliment not a criticism!

The result was that the teacher engaged in a reflective and evaluative process which has been documented. This documentation (which details precisely what she is doing really well and how) has empowered her in a range of different situations. She can choose to use the evidence gathered in the process of reviewing her teaching not only for the enhancement of teaching, but also to support recognition and reward for this. It is also an example of using the same review for both formative and summative purposes: that is, combining the initially formative purpose and process with ultimately summative objectives. Experience in this case suggests that this is possible by undertaking an iterative, formative review process of an aspect of teaching in a subject; as a result, evidence can be gathered for a future application for promotion or award. This case study is also an example of a quite extensive review that involved several iterations of communication over a couple of weeks between reviewer and reviewee to achieve both of these ends. Within the project there have been examples of reviewers and reviewees choosing to use only parts of the PR process presented here, or to complete purely formative reviews, and these have been completed in half a day.

How the peer review has informed the development of the assessment: discussion

Some of the observations that have emerged from this teacher’s experience of being a reviewee include an appreciation of the PR framework as a useful visualisation tool for review and reflection, and of the value of conversation with self and colleague (critical friend) provided in the ‘structured place’ of the PR. In this case study this has enabled the teacher to articulate what is most valuable in her subject, and what needs further refinement. The review has also provided a valuable way to triangulate results of Student Feedback Surveys about assessment and has offered a more “multi-dimensional evaluation of teaching” (Schultz & Latif, 2006, p. 4). The next steps the teacher plans to take as a result of the two completed rounds of PR reported here are to further refine the subject for future semesters by:
1. Aligning the improvements made to the weekly delivery of the program and to the assignments with the documentation associated with the subject
2. Preparing a request for the Courses Committee to amend the description and criteria for Assignment 2 to reflect outcomes from this review (with a view to making the delivery of the subject more sustainable for the new tutorial leader)
3. Conducting a third round of cross-disciplinary review with further reflection, discussion and the possible creation of a new Action Plan based on those findings.

From the perspective of the reviewer there was also great value in this PR process (Schultz & Latif, 2006). This confirms Bennett and Barp’s (2008) finding that their online ‘observers’ felt they learnt more in this role. However, reviewing this ‘fully blended’ subject was challenging, as this reflection indicates:

It was hard to know what to pay attention to - given that there was not enough scope to pay lots of attention to everything! Should I focus on the documentation or the online environments equally? I also had a dilemma about what to print out, because I have difficulty thinking things through in depth and in a sustained way when they are onscreen.

This experience of PR has highlighted the difficulty of keeping a balance between consideration of the subject as a whole and the aspect under review (whole versus parts), especially in a BLE. Using a framework to structure the review was important to support and guide consideration of the aspects of learning and teaching that may be ‘hidden’. It nonetheless resulted in the reviewer feeling obliged to complete all framework prompts, or alternatively trying to decide which parts of the framework to choose. This issue could be related to a lack of prior experience of reviewing, because although this reviewer had a learning and teaching research background she was not an academic developer.

One of the motivations for the PR of this case study subject was concern about the sustainability of the assessments from the point of view of the teacher. Related to this was a concern with succession planning in this subject. A suggestion that emerged in conversation about the outcomes of the review, which may help make the assessment more sustainable and help students to communicate their achievement against the criteria, was to engage the students in the assessment of their collaboration and online activity. For example, the assignment could be revised to ask students to use their written reports as ways to showcase their own learning. Students’ use of extracts from their collaboratory postings and teamwork tasks as “quotes” in response to the criteria is being trialled this semester, with the new tutorial leader.

Some other issues that arise from this review and are worth mentioning include:
- the pros and cons of a close disciplinary colleague as reviewer: this enables a deep appreciation of context, but at the same time makes it challenging to articulate ‘evidence’; however, it underlined the value of having a ‘critical friend’ (Melrose, as cited in Lomas & Nicholls, 2005)
- the value of a process of review which encourages discussion, rich reflection and useful documentation
- the ability of each step of the PR to reveal deeper layers for analysis and evaluation, and the continuation of this process.

In summary, this type of PR of teaching focuses on how and what students are learning in blended learning environments.
It evaluates the connections between this evidence of learning and the teachers’ intentions and practices, by using a set of criteria as a scholarly framework. Teachers may choose to engage in a review process that is formative and at the same time has a summative final goal. The review process may also be ongoing, continuing beyond the trajectory described here.

Implications

Assessment of learning is not only the crux of a subject for students but also where teachers spend a lot of their time (Boud, 1995). The value of ‘conversations’ with colleagues about assessment, particularly in BLE, can be enhanced through the use of a structured PR process. In effect, a PR serves as a guided inquiry that combines the benefits of a collegial conversation about some aspect of one’s teaching with a template that can guide future action. For example, the importance of the assessment criteria for learning and teaching is
widely recognised. In this case study the teacher had done extensive prior work to develop the assignment criteria, but it was as a result of PR by a disciplinary colleague that she was able to map out a clear way forward. In the project a structure has been developed to focus on the ‘process’ and ‘product’ phases of learning, that is, how teachers and students engage with each other and the subject matter in blended learning environments in different disciplines and contexts, and what students learn as a result. Looking at assessment within the context of this review process provides a significant focus, but nonetheless requires an examination of the context as well as the content of the course as a whole. There is a need to develop a fairly comprehensive appreciation of the review context for the review outcomes to be really valuable (for example, how the aspect that is reviewed fits into the subject). In order to discern the relationship between the ‘parts’ and ‘whole’ of the subject more than one ‘visit’ is required, and this is particularly relevant in BLE.

There is an added challenge associated with BLE, where the very nature of this type of environment further complicates any PR of assessment. Teaching and learning activities are distributed across both online and offline ’sites’ of classroom and learning activity: “the ‘archived’ nature of online learning opens up possibilities for online tutors to work together in ways (relating to time and place) that have not been possible in the past. This flexibility presents new challenges.” (Bennett & Santy, 2009, p. 404). Consequently, as the PR of this case study subject has shown, the traces of aspects impacting assessment are not always immediately discernible, and in the face-to-face components not as easy to ‘catch’ as they are online. For example, the reviewer commented:

…the participation of the whole class in discussion [this] is numerically easy to see as being quite impressive. To evaluate student learning specifically against the Assignment OR Subject criteria is tricky… for a reviewer to get an idea of each collaboratory member, (therefore fulfilment of roles); issues that should be discussed as part of the topic; capture discussion about collaboration; etc seems overwhelming even though I have tutored this subject before.

Baker, Redfield and Tonkin (2006) point to the possibility of reviewing a prior course in an online environment, and repeated reviews of the same course (Cobb, Billings, Mays, & Canty-Mitchell, 2001) highlight the possibility of ongoing engagement: “unlike the peer review of a classroom visit which tends to represent a ‘moment’ in the course, the review of Web-courses can reveal teaching and learning over a longer period of time.” (p. 277). While our experience confirmed this benefit, it also underlines the need for the boundaries of a review in BLE to be very clearly defined so as not to become unmanageable.

The case study (and broader project) has also found that PRs were most successful between ‘real peers’. ‘Peer observation’ has been used in the literature to describe visits by those who are anything but peers, for example, more senior academic managers, quality auditing teams or academic developers (Hammersley-Fletcher & Orsmond, 2004; McMahon, Barrett & O’Neill, 2007). Here, it was agreed that peers should preferably be “an equal with respect to teaching and learning activity…[and] functions of teaching they performed” (Kell & Annetts, 2009, p. 67), although they may well differ in terms of administrative seniority (as they did in the case study). McMahon et al. (2007) also make the point that the reviewee needs to be able to control the whole process, from whether they participate or not to what is done as a result of the review; this was strongly emphasised in this case study. This case study highlights some important considerations for PR of assessment. The provision of a thoughtfully developed framework for the review process is important in order to support a broader perspective that goes beyond observation of teaching ‘performance’. The importance of clarifying this PR process for the reviewers and reviewees was highlighted. Evaluating assessment by using a PR process suggests there are benefits for curriculum and assessment development more generally.
Conclusion

This paper has explored innovative assessment within the context of the discipline of Social Informatics using the lens of PR of teaching. The experience of the teacher and reviewer in this case study, and of the teams in the larger study, has been that the benefits of the PR process were reciprocal, providing “mutual support in the often isolated process of teaching online” (Bennett & Santy, 2009, p. 404). This collaboration is invaluable not just for novice online teachers but equally for early-adopting or pioneering teachers, offering an opportunity to “establish connections through which to gain a window into the practice of fellow innovators” (Bennett & Santy, 2009, p. 405). As well as contributing to the teachers’ professional development, a PR process such as the one described here leads to continuing conversations about the scholarship of teaching and ultimately to course improvement and quality (Cobb et al., 2001). In this case study, for example, there has been the unexpected use of PR as a form of ‘succession planning’ for subject sustainability, and ultimately the portability of ‘what makes it great’. Finally, the value of PR for ‘empowerment’ of the teaching that has been experienced by the teacher in this case study and others echoes Swinglehurst et al.’s (2007) recommendation that we need to ensure “sanctioned protected time” (p. 391) for academics in all disciplines to reflect upon what counts as ‘good teaching and assessment’.
References

Alexander, S., & Golja, T. (2007). Using Students’ Experiences to Derive Quality in an e-Learning System: An Institution’s Perspective. Educational Technology and Society, 10 (2), 17-33.
Baker, J.D., Redfield, K.L., & Tonkin, S. (2006). Collaborative Coaching and Networking for Online Instructors [Electronic Version]. Online Journal of Distance Learning Administration, IX, 7. Retrieved July 25, 2009.
Bell, M. (2005). Peer Observation Partnerships. Milperra, NSW: Higher Education Research and Development Society of Australasia Inc (HERDSA).
Bennett, S., & Barp, D. (2008). Peer observation – a case for doing it online. Teaching in Higher Education, 13 (5), 559-570.
Bennett, S., & Santy, J. (2009). A window on our teaching practice: Enhancing individual online teaching quality though online peer observation and support. A UK case study. Nurse Education in Practice, 9, 403-406.
Bernstein, D.J., Burnett, A.N., Goodburn, A., & Savory, P. (2006). Making Teaching and Learning Visible: Course Portfolios and the Peer Review of Teaching. Bolton, MA, USA: Anker.
Biggs, J. (2003). Teaching for quality learning at university: what the student does (2nd ed.). Buckingham, UK: SRHE & OU Press.
Biggs, J., & Tang, C. (2007). Teaching for quality learning at university: What the student does (3rd ed.). Maidenhead, UK: Open University Press.
Boud, D. (1995). Assessment and learning: contradictory or complementary? In P. Knight (Ed.), Assessment for Learning in Higher Education (pp. 35-48). London: Kogan Page.
Boud, D.P.M. (2002). Appraising New Technologies for Learning: A Framework for Development. Education Media International, 39 (3), 237-245.
Brown, S., & Knight, P. (1994). Assessing learners in higher education. London; Philadelphia: Kogan Page.
Cobb, K.L., Billings, D.M., Mays, R.M., & Canty-Mitchell, J. (2001). Peer review of teaching in web-based courses in Nursing. Nurse Educator, 26 (6), 274-279.
Glassick, C.E., Huber, M.T., & Maeroff, G.I. (1997). Scholarship Assessed: Evaluation of the Professoriate (A Special Report). San Francisco, USA: The Carnegie Foundation for the Advancement of Teaching.
Hammersley-Fletcher, L., & Orsmond, P. (2004). Evaluating our peers: is peer observation a meaningful process? Studies in Higher Education, 29 (4), 489-503.
Kell, C., & Annetts, S. (2009). Peer review of teaching: embedded practice or policy-holding complacency? Innovations in Education and Teaching International, 46 (1), 61-70.
Kember, D. (2000). Action learning and action research: improving the quality of teaching and learning. London: Kogan Page.
Laurillard, D. (2002). Rethinking university teaching: a conversational framework for the effective use of learning technologies. London: RoutledgeFalmer.
Lomas, L., & Nicholls, G. (2005). Enhancing Teaching Quality through Peer review of Teaching. Quality in Higher Education, 11 (2), 137-149.
McKenzie, J., Pelliccione, L., & Parker, N. (2008). Developing peer review of teaching in blended learning environments: Frameworks and challenges. In Hello! Where are you in the landscape of educational technology? Proceedings ascilite Melbourne 2008. http://www.ascilite.org.au/conferences/melbourne08/procs/mckenzie-j.pdf.
McMahon, T., Barrett, T., & O’Neill, G. (2007). Using observation of teaching to improve quality: finding your way through the muddle of competing conceptions, confusion of practice and mutually exclusive intentions. Teaching in Higher Education, 12 (4), 499-511.
Polifroni, E.C. (2008). Evaluating Teaching Strategies: A Blended Perspective. Journal of Nursing Education, 47 (3), 95-97.
Ramsden, P. (2003). Learning to teach in higher education. London, UK: Routledge.
Schultz, K.K., & Latif, D. (2006). The Planning and Implementation of a Faculty Peer Review Teaching Project. American Journal of Pharmaceutical Education, 70 (2), Article 32.
Swinglehurst, D., Russell, J., & Greenhalgh, T. (2007). Peer observation of teaching in the online environment: an action research approach. Journal of Computer Assisted Learning, 24 (4), 383-393.
Taylor, P.G., & Richardson, A.S. (2001). Validating Scholarship in University Teaching: Constructing a National Scheme for External Peer Review of ICT-Based Teaching and Learning Resources (Technical Report). Canberra, Australia: Department of Education Training and Youth Affairs.
Van Note Chism, N., & Chism, G.W. (2007). Peer review of teaching: a sourcebook (2nd ed.). Bolton, Mass.: Anker.
Wood, D., & Friedel, M. (2009). Peer review of online learning and teaching: Harnessing collective intelligence to address emerging challenges. Australasian Journal of Educational Technology, 25 (1), 60-79.
Improving student satisfaction with feedback by engaging them in self-assessment and reflection

Iouri Belski
Royal Melbourne Institute of Technology,
[email protected]
Student satisfaction with educational feedback is usually the lowest of all quality indicators. A novel procedure, the Task Evaluation and Reflection Instrument for Student Self-Assessment (TERISSA), has been under development at the Royal Melbourne Institute of Technology (RMIT) for the last eight years. This study explores the opinions of students and academics in relation to their perceptions and experiences of TERISSA while using it in their course activities. It considers the outcomes of the application of TERISSA by six RMIT educators in semester 2 of 2007, which was funded by a Learning and Teaching Investment Fund (LTIF) grant. The study found that TERISSA helped RMIT educators in engaging their students in self-assessment and reflection and in achieving significant improvements in student satisfaction with educational feedback. It was also found that a considerable number of students applying TERISSA were able to generate valuable educational feedback on their own learning by themselves. Nearly half of the surveyed students were determined to continue using TERISSA in their individual study. Moreover, educators involved in the study evaluated TERISSA as capable and effective in providing them with valid and timely information on student progress and misconceptions.

Keywords: self-assessment; peer-assessment; educational feedback; reflection; student satisfaction.
Introduction

A need to improve assessment and educational feedback

Devising an assessment which is capable of offering every individual student quality educational feedback requires a significant amount of time and effort. The recent Report from the Department of Education, Science and Training (DEST) has shown that assessment and feedback are still the greatest challenge for educators (Scott, 2005). The Report presented an analysis of student responses from all Australian universities to the compulsory Course Experience Questionnaire (CEQ). Of top concern in the Report was the assessment of study progress as well as educational feedback. The Course Experience Survey (CES) has been conducted at the Royal Melbourne Institute of Technology (RMIT) on a compulsory basis every semester since early 2006. The data collected has identified that student satisfaction with assessment and educational feedback has at all times been the lowest among all the evaluated areas of teaching quality. Similar student opinions were reported for courses at the University of Melbourne (Sondergaard & Thomas, 2004).

Self-assessment and educational feedback

Although it is usually anticipated that educational feedback needs to be provided to students by their lecturers, it is well known that students can produce valid educational feedback both for themselves and for their peers (Heron, as cited in Boud, 1981). This can be achieved by engaging students in self-assessment and/or peer-assessment, and has been demonstrated as a valid approach to self-improvement in engineering education (Boud & Holmes, 1981). Well-implemented self- and peer-assessment can notably improve educational feedback and simultaneously save significant time and effort for educators. There are numerous practices of self- and peer-assessment to choose from (Boud, 1995; Falchikov, 2005). Nevertheless, implementation of these strategies in engineering classes is often challenging. Engineering students usually consider reflective journals, learning essays and learning contracts, as well as other self-assessment activities, to be unproductive. They do not always see the relevance of these activities to their learning. Such student
attitude to self-assessment and reflection is not an exception. Boud (1995) noted that the introduction of self-assessment often faces skepticism not only from students, but also from other educators. Engineering educators have reported three successful strategies for engaging students in self- and peer-assessment. The first strategy advocates introducing different ways of course organization instead of the traditional teaching and learning methods. Problem-based learning (PBL) (Barrows & Tamblyn, 1980), self-directed learning (Hammond & Collins, 1991), experiential learning (Kolb, 1995) and learning contracts (Anderson, Boud, & Sampson, 1994) are just a few examples of this strategy. The second strategy unites approaches that exploit new computer and web technologies to make the existing techniques of self- and peer-assessment easy and quick to utilize. The following are just a few recent examples. Turns (1997) reported on the effectiveness of the Reflective Learner web-based environment in enhancing the traditional practice of self-assessment through writing learning essays on learning experiences. Smith and Kampf (2004) achieved effective peer-assessment by supporting informal cooperative student learning groups using WebCT as a peer-review system. McGourty (2000) reported on the effectiveness of the Team Developer computer-based survey system in engaging students in generating multisource feedback. The third strategy relates to developing novel ways of self-assessment and peer-assessment that are appealing to students. O’Shea and Bigdan (2008), for example, devised an academic version of the Biggest Loser competition. Kay, Li and Fekete (2007) proposed an exciting two-stage process of self-assessment and reflection that engaged students in reflection-on-action, and in reflection-in-action as suggested by Schön (1987). Indeed, the strategies of self-assessment from different groups can be combined for better results. For example, to further engage students in the two-stage process, Kay, Li and Fekete (2007) developed and implemented the web-based Reflect system that their students, who were learning programming in C, successfully used. Although new teaching strategies and the application of novel educational resources often result in better outcomes in student self- and peer-assessment, they usually require considerable effort and time from educators to implement. Replacing a traditional course with a PBL course can be very time-consuming. Implementing a web-based system to support student self-assessment usually requires significant time and financial resources. Moreover, the decision to utilize an existing computer-based tool in a course may demand a lot of the coordinator’s time and even result in a major redevelopment of the course. The author has devised a novel approach to engage students in self-assessment and reflection that minimizes the expenditure of an educator’s time and effort. The Task Evaluation and Reflection Instrument for Student Self-Assessment (TERISSA) is an easy-to-learn procedure which a student needs to follow while resolving problems, conducting project work, preparing assignments, etc. TERISSA can be added to an existing course without any course reconfiguration, simply as a ‘plug-and-play’ module. Hundreds of students used it under the author’s supervision from 2004 to 2007. They evaluated TERISSA as easy to use and helpful in getting valuable feedback on their learning (Belski, 2007). This paper analyses the effectiveness of TERISSA for other educators.
It presents the outcomes of a trial of TERISSA by six educators that took place at RMIT in semester 2 of 2007. Section II of the paper describes the general TERISSA procedure. Section III presents some results of the application of TERISSA by the author in 2004-2007 (Belski, 2002, 2007). Section IV is devoted to the TERISSA trial of 2007. Section V discusses reasons for the effectiveness of TERISSA in engaging students in self-assessment and reflection. Section VI provides a discussion and conclusion.
The TERISSA procedure
TERISSA is simply a procedure which a student follows while resolving problems, conducting project work, preparing assignments, etc. TERISSA requires a student to conduct two task evaluations, when the task is first presented and after the task has been resolved, and to reflect on each of these evaluations and on the reasons for any discrepancy between them. It also requires a student to think of and to plan the actions needed to improve learning outcomes. This engages students in regular and frequent self-assessment, provides them with valuable feedback on their knowledge and skills at the time when the feedback is required and, further, involves students in reflecting upon their learning. TERISSA can be used
by students while involved in most course activities (both individual and group). Over the past eight years, TERISSA has been successfully used in tutorial classes, home and class activities, individual and group exercises, various home assignments and practical laboratory work. The following is the recommended general procedure for using TERISSA, which a student is expected to follow while involved in active learning:
Step 1. (To be conducted before you start work) Evaluate and record the complexity of the question, problem, assignment, etc. using the following scale: 1 - very simple; 2 - simple; 3 - so-so; 4 - difficult; 5 - very difficult. Give reasons (in writing) why you have not evaluated it as one level less difficult.
Step 2. (To be conducted after the work has been concluded) Evaluate and record the complexity of the question, problem, assignment, etc. once again using the scale from Step 1. Reflect (in writing) on why you have not evaluated the question as one level more difficult this time. Also reflect on the reasons for any discrepancy between the original (Step 1) and the final (Step 2) evaluations, and on the actions you need to undertake in order to become more confident with a similar task next time.
Once perfected, using TERISSA normally requires around five minutes for every problem, project or assignment considered.
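To make the two-step procedure concrete, the sketch below shows one way a single TERISSA entry could be captured if the pro-forma were kept electronically. It is an illustration only, written in Python; the class name, fields and sample task are hypothetical and are not part of TERISSA itself.

```python
from dataclasses import dataclass

@dataclass
class TerissaEntry:
    """One TERISSA record for a single task (all names here are illustrative)."""
    task: str
    step1_score: int            # complexity before starting (1 = very simple ... 5 = very difficult)
    step1_reflection: str       # why the task was not rated one level less difficult
    step2_score: int = 0        # complexity after the task has been resolved (same 1-5 scale)
    step2_reflection: str = ""  # why not one level more difficult, and actions planned for next time

    def discrepancy(self):
        """Difference between the original (Step 1) and final (Step 2) evaluations."""
        return self.step1_score - self.step2_score

# A student fills in Step 1 before starting the task and Step 2 after finishing it.
entry = TerissaEntry("Tutorial problem 3", 4, "Looks unfamiliar; several concepts combined")
entry.step2_score = 3
entry.step2_reflection = "Reduced to a familiar pattern; revise last week's worked examples."
print(entry.discrepancy())  # 1: the task was over-evaluated by one level
```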
Effectiveness of TERISSA: student surveys 2004 – 2007
Since 2004, students enrolled in electronic engineering courses coordinated by the author have been asked to evaluate the efficiency of TERISSA in providing them with valuable feedback on their learning. The data collected over four years clearly show that TERISSA was very helpful for most students (Belski, 2007). Table 1 presents the results from student surveys conducted in electronic engineering classes between 2004 and 2007 (229 students participated in these surveys). It shows student answers to the following three questions: (1) Has TERISSA provided you with immediate feedback on your knowledge of the course at any given time? (2) Has TERISSA helped you to identify the learning area which required your immediate consideration? (3) Do you think that you will continue using TERISSA individually while resolving problems?

Table 1. Student opinions of TERISSA, 2004 - 2007

Question / Answer                                                      Fully   Pretty much   To some extent   Not at all
Has TERISSA provided you with immediate feedback on your
knowledge of the course at any given time?                              17%        48%             31%            4%
Has TERISSA helped you to identify the learning area which
required your immediate consideration?                                  22%        46%             29%            3%
Do you think that you will continue using TERISSA individually
while resolving problems?                                               16%        37%             40%            7%
The student opinions presented in Table 1 support the effectiveness of TERISSA in helping to generate educational feedback. Over 65% of surveyed students chose ‘Fully’ and ‘Pretty much’ as their answer to the first question. Over 68% of them were certain that TERISSA helped them in pinpointing the areas of study that they needed to urgently focus on to improve their learning. The most interesting result, however, relates to the students’ answer to the last question. More than half of the students (53%) revealed that they plan to use the TERISSA procedure by themselves – without any request from their teachers!
The above student opinions, as well as numerous student comments on TERISSA presented in Belski (2007), clearly demonstrate that TERISSA worked well in the author’s hands. Would TERISSA be effective when used by other educators?
The TERISSA trial in semester 2 of 2007
General information
In 2007, RMIT supported, with a Learning and Teaching Investment Fund (LTIF) grant, a project intended to evaluate TERISSA in various courses from different study programs. In semester 2 of 2007 (13 teaching weeks from July to November), six RMIT lecturers agreed to trial TERISSA in their courses and joined the TERISSA Activity Group (TAG). Over 500 RMIT students utilized TERISSA in semester 2 of 2007. They represented different degrees and were from all year levels. About a month before the semester began, TAG members were briefed on the TERISSA process and the outcomes achieved during 2004-2007 (Belski, 2007). Because TERISSA can be used effectively in various ways, lecturers made their own decisions on the best way to deploy it in their courses. Table 2 summarizes the various uses of TERISSA in semester 2 of 2007.
Table 2. TERISSA usage by TAG course coordinators during the trial in semester 2 of 2007
Course      Year       TERISSA Use
Course 1    1          Tutorials, Class Assignments, Homework (all weekly)
Course 2    1          Home Assignments (weekly)
Course 3    3          Tutorials (weekly)
Course 4    4, PG      Home Assignments (4)
Course 5    PG         Home Assignments (2)
Course 6    2, 3, 4    Home Assignments (4), Team project
Course 1 taught engineering design. Course 2 taught statistics to health sciences students. Courses 3 and 4 were part of the electrical engineering degree. Course 5 was presented to postgraduate (PG) students enrolled in a degree in building and construction. Course 6 was a university-wide elective devoted to problem solving methods.
TERISSA usage
Tutorials
Students enrolled in Courses 1 and 3 followed the TERISSA procedure during weekly face-to-face tutorials under the supervision of TAG academics. Every time that students in the tutorial group were presented with a problem to resolve, they were asked to evaluate the complexity of the problem (as outlined in the general TERISSA procedure presented in Section II) and to record the complexity score together with their reflections on the reasons behind this score. In addition, they were asked to raise their hands to indicate the complexity score they had recorded when the tutor named that score. One or two students were involved in counting the number of raised hands, recording the results and calculating the average complexity score for the problem. This average complexity score was recorded on a whiteboard. After this first evaluation, students were involved in resolving the problem. Once the problem was solved, they were asked to re-evaluate its complexity using the five-level scale, to record the score, and to indicate the new complexity score by raising their hand. Students were then asked to reflect in writing on the reasons behind this final evaluation and on the discrepancy between the final and the original scores. These final reflections were usually followed by a short discussion of the reasons for individual discrepancies, and students considered what actions they should undertake to improve their individual study outcomes. TAG lecturers initiated and actively participated in these discussions.
The TERISSA procedure was not enforced in tutorials as compulsory. Nevertheless, the TAG members coordinating Courses 1 and 3 noticed that almost all students were recording the complexity scores and indicating them to the rest of the class and to the tutor by raising their hands. Around two thirds of the students were also writing down reflections on their evaluations in their notebooks. After following the TERISSA procedure during two tutorials, most students realized that they could quickly rank their study progress against that of their peers if everyone ‘voted’ truthfully. The average complexity score reflected the level of difficulty of the task as perceived by all tutorial attendees. Thus, as mentioned by many students privately, they always ‘voted’ for their real complexity score, as this ensured that their understanding of their own progress relative to others was correct. The author experienced a similar pattern of student participation in TERISSA while conducting tutorials in electronic engineering in 2004-2007. This similarity was considered by TAG educators as an early sign favoring the effectiveness of TERISSA in their courses.
Home and class assignments
In order to involve students in using TERISSA outside of face-to-face activities, students were provided with TERISSA pro-formas, which they could utilize while learning on their own. Various pro-formas – ready-to-use templates – were prepared by the lecturers involved in the trial. They simply adjusted the general TERISSA procedure to suit the needs of their individual courses. Pro-formas were used in all courses except Course 3, whose coordinator used TERISSA only in tutorials. Usually a pro-forma appeared on the first page of a homework/assignment paper. Some of the home assignments were submitted over the web (Courses 2, 5 and 6); other assignments were paper submissions (Courses 1, 2, 3 and 4). All class assignments were paper based. Figure 1 presents an example of the first page of a home assignment, developed by the TAG academic coordinating Course 4.
Figure 1. An example of the TERISSA pro-forma for a home assignment (Course 4)
The TERISSA pro-forma in this example contains three steps of evaluation. In addition to the general two-step TERISSA procedure, students enrolled in Course 4 were expected to go one step further and consider the usefulness of the reflections they had made during previous assignments. Using TERISSA during class and in home assignments was not compulsory, and students who followed the TERISSA procedure were not obliged to return their scores and reflections with their completed assignments. Nevertheless, 40 to 60 per cent of students returned their assignments with complexity scores and reflections entered into the pro-formas. Course coordinators analyzed this data and shared individual student reflections and evaluations of task complexity with all students during lectures and tutorials. Usually these ‘group reflections’ occupied five to ten minutes of class time and deeply interested most of the students. Such ‘group reflections’ typically took place one to two weeks after the assignment’s submission deadline, as soon as all assignments were graded and returned to students.
Outcomes of the TERISSA trial
The efficiency of TERISSA was evaluated in two ways: (1) by utilizing student responses to the compulsory RMIT Course Experience Survey (CES); and (2) by analyzing student answers to the TAG survey that was developed by TAG lecturers. The TAG survey was conducted in all six courses of the TERISSA trial, during the lecture classes in the final week of semester. The survey was administered by the TAG project officer, who was employed to support TAG educators during the semester. The CES was conducted at the end of the semester by RMIT administrative officers. Both the TAG survey and the CES were paper-based.
Results of the TAG Survey
Students from all six courses evaluated TERISSA as helpful in providing them with valuable educational feedback. Table 3 depicts the overall opinions of students (205 respondents) on three statements of the TAG survey, which are similar to the questions used by the author in the 2004-2007 surveys (see Table 1).

Table 3. Student opinions of TERISSA after the trial in semester 2 of 2007: TAG survey

Question / Answer                                            Strongly Agree 5     4      3      2    Strongly Disagree 1
TERISSA provided me with immediate feedback on my
knowledge of the course.                                           13%           42%    27%    16%           2%
TERISSA has helped me to identify the learning area
which required my immediate consideration.                         16%           48%    25%     9%           2%
I will continue using TERISSA while resolving problems.             9%           33%    31%    18%           9%
To minimize student confusion in responding to different surveys, TAG members changed the four-level scale used by the author in 2004-2007 (Table 1) to a five-level Likert-type scale for the TAG survey. Due to this change in the evaluation scale, a direct comparison of the data in Table 1 and Table 3 was not entirely justifiable; some scale-matching to reduce differences between the four- and the five-level scales was necessary. To achieve this, the judgments ‘Fully’ and ‘Pretty much’ used in the 2004-2007 surveys were treated as practically equivalent to the judgments ‘Strongly agree’ and ‘Agree’ used in the TAG survey of 2007. The judgment ‘To some extent’, used in Table 1, is only mildly positive; accordingly, it was treated as ‘neutral’ and equated to the middle score of ‘3’ in Table 3. These scale adjustments meant that comparisons of the results from the different surveys could be made by weighing against each other the proportions of students who ‘agreed’ and ‘strongly agreed’ with the three statements in the two surveys. Sixty-five percent of the students enrolled in the electronic engineering classes who used TERISSA in 2004-2007 (see Table 1) and were taught by the author reacted positively to the immediacy of feedback received using TERISSA. Fifty-five percent of participants who undertook the TAG survey shared this opinion.
Student opinions on the ability of TERISSA to pinpoint their weak study areas matched even better: 68% of the students in the author’s classes of 2004-2007 and 64% of students surveyed during the TERISSA trial were positive. The number of students planning to use TERISSA in their own study matched a little less – 53% of them from the author’s classes were positive; just over 42% of the students involved in the 2007 trial thought the same way. These similarities in the perceptions of students from different degrees, who used TERISSA in different years of study and under different coordinators, can be considered as strongly supporting the efficiency of TERISSA in engaging students to self-assess their progress and to reflect on their learning. The opinions presented in Table 3 are further supported by students’ written responses. The following are some student answers (selected from all six courses involved in the trial) to the TAG survey question “Which aspect of TERISSA do you find the most helpful?”:
“Identify the learning areas that I am not good at”.
“Analysing the problem and realising how far you have understood the subject”.
“It makes me understand the areas I need to study harder”.
“Helps me to see weak areas in studies”.
“Which tasks are difficult and whether I need to review them to learn the concept”.
“Helped me understand areas I need help with”.
“Recognising the knowledge area I do not have expertise on; allowing me to observe what I need to accomplish”.
“Self evaluation and getting back the feedback”.
“The reflection on initial evaluation of problems is good because it adds to my confidence in tackling unfamiliar problems. (looking back and saying ‘it wasn't that hard’)”.
“Thinking about the problems and getting immediate feedback”.
“The immediate feedback allows me to focus on areas which I am having difficulty with”.
“Immediate reflection upon the problem given”.
“Reflection. You physically have to write down how you feel”.
“Gives you an understanding of what other students are at”.
“Offers an indication of my progress in comparison to the rest of the class”.
“Finding out if others have the same problem”.
These student statements compare well with those collected in the 2004-2007 surveys (Belski, 2007). Once again, this demonstrates that students who used TERISSA under different course supervisors thought alike – they all judged TERISSA as effective for self-generation of timely and valuable educational feedback.
RMIT CES Results
All the courses involved in the TERISSA trial were of one semester duration and, for a number of years, have been conducted on an annual basis. In other words, the previous run of these six courses had occurred in semester 2 of 2006. Therefore, to ascertain the impact of TERISSA on student perceptions of educational feedback, the results of the RMIT CES from the two consecutive years were compared. In order to obtain a valid comparison of these CES results and to make a meaningful judgment on the impact of TERISSA, the following two requirements were established for a course to qualify for inclusion in this evaluation:
(a) A course must be coordinated by the same academic in both 2006 and 2007;
(b) TERISSA must not have been used in the course during 2006.
Only Courses 1 to 4 satisfied both of the above requirements. Significant improvements in student opinions on educational feedback from 2006 to 2007 were also recorded in Courses 5 and 6, but these courses were excluded from the formal comparison. In responding to the CES, students had five choices to make for every statement (Likert-type scale from 1 to 5): they could choose only one response, from ‘strongly agree’ (identified as ‘5’ in the CES) to ‘strongly disagree’ (identified as ‘1’). The RMIT Course Experience Surveys in both years were identical and consisted of 21 statements (questions). Only the following two of these 21 statements were closely related to educational feedback: (1) Question 5 (Q5): The teaching staff normally give me helpful feedback on how I am going in this course; (2) Question 20 (Q20): The staff put a lot of time into commenting on my work. Table 4 depicts the outcomes of the Mann-Whitney test on student responses to the above two statements, received in semester 2 of 2006 and semester 2 of 2007, for the four courses involved in the TERISSA trial that satisfied the abovementioned criteria. It is apparent from Table 4 that the changes in student opinions for the statements related to educational feedback were statistically significant (p < .05) in six cases out of eight.
Table 4. Change in student opinions for Q5 and Q20 from 2006 to 2007
Course      Q5 2006 M (SD)   Q5 2007 M (SD)   Q5 p [r] U            Q20 2006 M (SD)   Q20 2007 M (SD)   Q20 p [r] U
Course 1    3.62 (.985)      4.02 (.772)      .016 [-0.21] 1088     3.21 (1.230)      3.62 (.965)       .043 [-0.17] 1143
Course 2    3.97 (1.013)     4.27 (.804)      .117 [-0.14] 581      3.51 (.837)       3.92 (1.064)      .012 [-0.26] 487
Course 3    3.11 (1.487)     3.81 (.849)      .032 [-0.23] 375      3.16 (1.385)      3.50 (1.030)      .122 [-0.11] 406
Course 4    3.30 (.951)      3.76 (.902)      .023 [-0.25] 396      3.21 (.992)       3.64 (.962)       .020 [-0.25] 392
Changes for both questions Q5 and Q20 for Courses 1 and 4, as well as Q5 for Course 3 and Q20 for Course 2 were statistically significant. Such significant statistical differences imply that the students enrolled in these four courses in 2007 experienced different educational feedback to the students learning the same courses in 2006. As already stated, coordinators of all four courses had added TERISSA to the structure of their already existing courses – they simply used TERISSA as ‘plug-and-play’ modules. No other significant changes to
these courses were made for the 2007 run. Therefore, only the application of TERISSA can explain the notable progress in student satisfaction with educational feedback that occurred in semester 2 of 2007.
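The paper does not reproduce the underlying CES responses or the analysis code. For readers who wish to run a similar comparison, the sketch below (Python with SciPy; the scores are invented placeholders, not the trial data) shows one plausible way a Mann-Whitney U test and an approximate effect size r of the kind reported in Table 4 could be computed for two cohorts of Likert responses.

```python
import numpy as np
from scipy.stats import mannwhitneyu, norm

# Invented example data: CES Likert responses (1-5) for one question in one course,
# from the 2006 cohort and the 2007 cohort.
scores_2006 = np.array([3, 4, 2, 5, 3, 3, 4, 2, 3, 4, 5, 3])
scores_2007 = np.array([4, 4, 5, 3, 4, 5, 4, 4, 3, 5, 4, 5])

# Two-sided Mann-Whitney U test comparing the two independent cohorts.
u_stat, p_value = mannwhitneyu(scores_2006, scores_2007, alternative="two-sided")

# Approximate effect size r = Z / sqrt(N), recovering |Z| from the two-sided p-value
# (one common convention; Table 4 reports r as negative where 2007 scores exceed 2006).
n_total = len(scores_2006) + len(scores_2007)
z_abs = norm.isf(p_value / 2)
r = -z_abs / np.sqrt(n_total)

print(f"U = {u_stat:.0f}, p = {p_value:.3f}, r = {r:.2f}")
```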
TERISSA: why does it work?
The above discussion has verified the efficiency of TERISSA in engaging students in self-assessment and reflection. It also demonstrated that courses utilizing TERISSA were capable of significantly improving student satisfaction with educational feedback. Why have all of these successes occurred? An analysis of the data collected during the TERISSA trial in semester 2 of 2007 suggests that the most likely reason for TERISSA’s success relates to the fact that the majority of students recorded a significant discrepancy between the difficulty levels in their pre- and post-solution problem evaluations. Table 5 presents the discrepancies between the original and final evaluations of task difficulty levels recorded for three tasks (denoted as T1, T2 and T3) in student home assignments. Only a minority of students solving these three problems were correct in their judgment of a problem’s complexity before it had been resolved. In the case of task 1 (T1 in Table 5), for example, only 5.6% of students did not change their opinion of the complexity score, whilst over 94% recorded discrepancies. Moreover, 22.3% of the students made large errors in their judgment of complexity by over- or underestimating its difficulty level by 2 units.
Table 5. Discrepancies in student evaluation of task complexity: home assignments
Discrepancy (original – final)    T1       T2       T3
Underevaluated [-2]               5.6%     0.0%     20.0%
-1                                38.9%    14.3%    13.3%
No difference [0]                 5.6%     28.6%    33.3%
1                                 33.3%    42.9%    33.3%
Overevaluated [2]                 16.7%    14.3%    0.0%
As demonstrated by Table 5, different tasks exhibited different patterns of discrepancy. The complexity of task 2 (T2), for example, was over-evaluated by most of the students, with over 14% of them recording a discrepancy of 2. The difficulty level of task 3 (T3) was rather underestimated – 20% of students recorded a discrepancy of minus 2. An analysis of the data from over 30 student assignments showed that, on average, over 70% of students recorded discrepancies between the original and final evaluations. The fact that students’ pre- and post-solution evaluations of task complexity differ is fundamentally important for the success of TERISSA in engaging students in reflecting on their learning. Students usually do not expect any discrepancy in their judgments. When they apply TERISSA and discover that they were unable to evaluate the degree of difficulty accurately, they experience what Dewey referred to as “a state of doubt, hesitation, perplexity, mental difficulty, in which thinking originates” (Dewey, 1933). They naturally want to explain to themselves the reasons for the inaccuracy and become involved in “an act of searching, hunting, inquiring, to find material that will resolve the doubt, settle and dispose of the perplexity” (Dewey, 1933). In other words, they become involved in reflection and, as a result, provide valuable feedback on their learning to themselves. Moreover, because TERISSA is introduced to them by their teachers, and some of their individual opinions are discussed by course coordinators during classes, students perceive their teachers as the source of this valuable educational feedback. Therefore, in formal course quality surveys, they evaluate the role of teaching staff in providing them with educational feedback more favorably.
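As an illustration of how the figures in Table 5 can be derived from a class’s recorded scores, the short sketch below tallies the difference between original and final evaluations and reports the share of students at each level from -2 to +2. The scores are invented; only the calculation mirrors the definition of discrepancy used in Table 5 (original minus final).

```python
from collections import Counter

# Invented pre- and post-solution complexity scores (1-5) for one task in a class.
original = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3, 4, 2]
final    = [3, 3, 4, 2, 3, 3, 3, 4, 3, 4, 3, 3]

# Discrepancy as defined for Table 5: original evaluation minus final evaluation.
discrepancies = [o - f for o, f in zip(original, final)]
counts = Counter(discrepancies)

# Share of students at each discrepancy level, from -2 (under-evaluated) to +2 (over-evaluated).
for level in range(-2, 3):
    share = 100 * counts.get(level, 0) / len(discrepancies)
    print(f"discrepancy {level:+d}: {share:.1f}%")
```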
Conclusion
The results of the TERISSA trial support the effectiveness of TERISSA in engaging students in self-assessment and reflection. Students found TERISSA easy to follow and helpful in their learning. Nearly two thirds of those surveyed discovered that TERISSA helped them to identify the learning areas that required their immediate attention. What is more important, however, is that these areas of weakness were identified by students individually, i.e. with minimal or no help from lecturers. Therefore, it is not surprising that forty-two percent of all the students surveyed in semester 2 of 2007 stated that they would use the TERISSA procedure in their individual learning, without any additional requests from educators. The 2007 trial has revealed that TERISSA can be used as a ‘plug-and-play’ module. Therefore, it can be deployed by educators in their courses with minimal effort and can result in substantial improvements in student satisfaction with educational feedback. Following the trial, TAG academics created numerous recommendations for other educators that are available at the TERISSA website: www.terissa.com. This website also contains recommendations for students and provides them with the opportunity to experience and quickly learn the TERISSA procedure by resolving web-based puzzles.
Acknowledgement
The author wishes to thank RMIT for supporting the project with the LTIF grant; Chi C Wong, Roy Ferguson, Peter Burton, Anthony Bedford, Kourosh Kalantar-zadeh, Guillermo Aranda-Mena and Jennifer Harlim for their great efforts, input and cooperation during the TERISSA trial; Aaron Blicblau, Peter O’Shea and W. A. M. Alwis for their interest and helpful suggestions.
References
Anderson, G., Boud, D., & Sampson, J. (1994). Expectations of quality in the use of learning contracts. Capability: The International Journal of Capability in Higher Education, 1, 22-31.
Barrows, H.S., & Tamblyn, R.M. (1980). Problem-Based Learning: An Approach to Medical Education. New York: Springer.
Belski, I. (2002). Seven Steps to Systems Thinking. In Proceedings of the 13th Annual Conference and Convention of Australian Association of Engineering Educators (AaeE) (pp. 33-39). Canberra, Australia.
Belski, I. (2007). Using Task Evaluation and Reflection Instrument for Student Self-Assessment (TERISSA) to Improve Educational Assessment and Feedback. In H. Søndergaard & R. Hadgraft (Eds.), Proceedings of the 2007 AaeE Conference. Melbourne, Australia: University of Melbourne.
Boud, D. (1995). Enhancing Learning through Self Assessment. London: Kogan Page.
Boud, D., & Holmes, W.H. (1981). Self and peer marking in an undergraduate engineering course. IEEE Transactions on Education, E-24, 267-274.
Dewey, J. (1933). How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process. Lexington, MA: D.C. Heath and Company.
Falchikov, N. (2005). Improving Assessment through Student Involvement. London: RoutledgeFalmer.
Hammond, M., & Collins, R. (1991). Self-Directed Learning: Critical Practice. London: Kogan Page.
Heron, J. (1981). Assessment revisited. In D. Boud (Ed.), Developing student autonomy in learning (pp. 55-70). London: Kogan Page.
Kay, J., Li, L., & Fekete, A. (2007). Learner Reflection in Student Self-assessment. In Ninth Australasian Computing Education Conference (ACE2007) (pp. 81-95). Ballarat, Australia: Australian Computer Society, Inc.
Kolb, D. (1984). Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.
McGourty, J. (2000). Using multisource feedback in the classroom: a computer-based approach. IEEE Transactions on Education, 43, 120-124.
O’Shea, P., & Bigdan, V. (2008). The Biggest Loser Competition. IEEE Transactions on Education, 51, 123-130.
Schön, D.A. (1987). Educating the Reflective Practitioner. San Francisco: Jossey-Bass Publishers.
Scott, G. (2005). Accessing the Student Voice: Using CEQuery to identify what retains students and promotes engagement in productive learning in Australian higher education. Canberra: Department of Education, Science and Training (DEST).
Smith, K., & Kampf, C. (2004). Developing writing assignments and feedback strategies for maximum effectiveness in large classroom environments. In Proceedings of the International Professional Communication Conference (IPCC) (pp. 77-82).
Søndergaard, H., & Thomas, D. (2004). Effective feedback to small and large classes. In Proceedings of the 34th Annual Frontiers in Education Conference (Vol. 2, pp. F1E-9-14).
Turns, J. (1997). Learning essays and the reflective learner: supporting assessment in engineering design education. In Proceedings of the 27th Annual Frontiers in Education Conference (Vol. 2, pp. 681-688).
“Measuring up”? Students, disability and assessment in the university
Judith Bessant
RMIT University
In this paper I ask how university students with disabilities negotiate with staff to arrange alternative assessment practices. I draw on three case studies, using a personal pronoun perspective to challenge the conventional view that educational policy and teaching practice are forms of rational action. I demonstrate how the lives of students and staff are typically characterised by unexpected events, disorder, emotion and prejudice. The analytic perspective offered here establishes how meanings, intentions and different viewpoints and alliances emerge as social actors work to create specific faculty and institution cultures. The case studies also reveal what does and what does not work, some of the obstacles, and what needs to be done if we are serious about equity and inclusive education. What is needed includes practical assistance in recognising the specific requirements of students with disabilities, and guidance on how to design alternative assessment for students with specific ‘conditions’. I argue that professional development and specific techniques in curriculum design are needed. Some staff also require help in recognising their policy and legal obligations. Cultural change which identifies and challenges prejudice is a larger task if universities are to become places in which equal opportunity principles and inclusive education are present and actively practiced.
Keywords: disability, higher education, reasonable adjustments and equitable assessment arrangements, education policy
Modern universities are complex organizations. As Clark (2006) observed, pre-modern and early modern universities operated within a relatively simple and modest set of predominantly self-defined understandings about their educational and social roles. Conversely, the contemporary university is defined by a range of social, educational and cultural expectations as well as a large number of statutory and policy obligations and contingent accountabilities. This circumstance reflects how modern universities have been integrated into the regulatory operations of modern nation-states, while also being required to acknowledge the requirements of industry and non-government organizations, as well as the needs of professional associations. This is to say nothing of the expectations students and their families have about their right to access a relevant, engaging and quality university education. Is it reasonable and realistic to expect these interests to result in a coherent set of expectations, practices and demands? It is within such a ‘contested terrain’ that university teaching staff, administrators and students respectively negotiate and enact their various roles and identities within a variety of discursive and organisational imperatives which are not always easy to reconcile. These introductory remarks help situate my intention in this paper, which is to establish how university educators and managers address the problem of promoting quality learning experiences for students with disabilities. How do they do that in ways that are aligned with capability-based curriculum standards, various professional accreditation requirements and an array of government and institutional legislative and policy frameworks oriented to ideas about human rights?
Amongst other things, modern universities are required to attend to equal opportunity policies as well as other national and international legislative frameworks that obligate universities, teachers and administrators to deal in specific ways with students with disabilities. This description of the problem suggests the difficulties the various players face. In the case of disability and assessment, students, teachers and administrators confront a university culture that embraces traditional conceptions of hierarchy, various ideas and practices oriented to conceptions of academic rigour, and the requirements of disciplinary knowledge, as well as more contemporary understandings of how students learn and how a curriculum informed by capabilities should look. To this can be added external factors such as government Charters of Human Rights and Responsibilities and conceptions of what is owed to people with disabilities found in instruments like the 2006 UN Convention on the Rights of Persons with Disabilities.
I use a case study approach to establish how our modern universities are ‘measuring up’ with regard to students with disabilities and the assessment of their learning. As Gould (1996) observed, the value of the case study lies in its capacity to understand generalities by examining particulars: It is no use writing a book on ‘the meaning of life … an essay on … the meaning of 0.400 hitting baseball can reach genuine conclusions with surprising extensive relevance to such topics as the nature of trends, the meaning of excellence, and even… the constitution of natural reality. You have to sneak up on generalities, not assault them head-on (p. 20). Strategically chosen case studies like those offered here provide insight into the hermeneutic question of human learning, not only for the student but also for teaching staff, many of whom continue to struggle with the question of learning, assessment and ‘the need’ to measure educational achievement. The case studies provide ethnographic descriptive accounts and analyses of perceptions, attitudes and interactions between key players engaged in the process of determining whether students with disabilities are granted alternative assessments. The paper offers a reflexive case study approach which draws on Elias’ (1978) social theoretic ‘personal pronouns as figurational model’ to explore the range of different perspectives and interests at play. I conclude by saying how we might best interpret and learn from these accounts by taking notice of how the problems are perceived or constituted (Bacchi, 2009). I also say what this implies for interventions designed to promote principles of equity and the learning experiences of students with disabilities in universities. The case studies provide context-dependent knowledge which can work to help us reflect on our own practice in ways that encourage our own learning as educators (Dreyfus & Dreyfus, 1986; Flyvberg, 2001).
Theoretical frame
The popular premise that modern universities are organisations which to some degree conform to the Weberian bureaucratic ideal type is not a helpful starting point. Weber’s approach emphasised the very formal, rule-bound, value-neutral, authoritarian and impersonal qualities which have long been assumed to define the character of modern bureaucracies and those who work in them. Weber's account of the displacement of ‘value rationality’ by ‘instrumental rationality’ was fundamental to his claim that human action is best understood as ‘rational action’. Rationality was said to define the emergence of modern bureaucratic organisations and create what he described as an 'Iron Cage of Rationality'. Bureaucracy was and is understood to require the functional division of labour, impersonality, and a technical rationality that enables functionaries to act only as rule-followers. Bureaucracies, according to Bauman (1991, p. 101), produce patterns of conduct where “moral standards” become irrelevant because the technical characteristics and objectives of the bureaucratic operation are privileged. According to this account it is apparently enough for a law to be drafted, or a policy promulgated, for all those working inside a large bureaucratically designed organisation to simply, instantly and obediently give effect to the intent of that law or policy. Of course many people may object to the idea that universities actually resemble the ideal type of bureaucracy, and call such claims ludicrous. What matters is that many people seem to imagine that administrators and academics inside universities act as compliant bureaucrats ought to act, that is, they comprehend and seek to give effect to laws, policies and procedures set forth by the modern nation state. Yet even the most ‘bureaucratic’ organisations rarely function as an ideal type. There is value in acknowledging how people who are employed in both lowly and high-ranking positions in large-scale organisations can either habitually and energetically advance or frustrate the implementation of legal and policy objectives. This is a normal and common feature of modern large-scale organisations. The politicking within large organisations is on full display. Likewise, while some people will provide active support for policies, others subvert them. We can see people in large organisations display their emotions, express values and indeed refuse to support or endorse policies of which they disapprove, and conversely embrace those they support enthusiastically. The case studies presented here have been strategically selected to highlight different rationalities, [il]logics, emotions and prejudices which inform both judgments and actions regarding students with disabilities. To do this I draw on the work of the sociologist Elias (1978) and his personal pronoun model.
Elias (1978) recognised better than Weber (1978) or Bauman (1991) the role of human characteristics like emotion, desire and values in our conduct and relationships. Elias argued that if we are serious about understanding how and why people act as they do, then we need to extend our inquiry beyond the Weberian idea that human action is always and only ever rational (see also Flyvberg, 2001). I argue here that the conventional view that education policy and teaching practices are based on rational action is misguided, because the lives of staff and students in modern institutions like the university are occasionally orderly, systematic and rational, but largely they are disorderly, variable, and characterised by a range of unexpected qualities that draw on our creative energies, emotions and prejudices. Elias’ (1978) ‘personal pronoun model’ approach can be used to highlight the ‘perspectival nature’ of education and Equal Opportunity policies and their implementation. It identifies who the ‘I’, the ‘we’ and the ‘they’ (as ‘Others’) are, and their viewpoints. It offers the chance to show how players participating in negotiations about alternative forms of assessment are represented from different viewpoints: from the perspective of the ‘I’ (the student or teaching staff), the ‘we’ (the student and her advocates, or the academics) and the ‘they’ (those who oppose ‘us’). This approach allows us to see how players are represented and positioned during negotiations and how those constructions change. It provides insight from the academics’ perspective – the ‘we’ – about, for example, how the student may be represented as a ‘good student trying to overcome barriers caused by their disability’, or conversely as a ‘confidence trickster’ trying to ‘scam the system’ and get an unfair advantage over other students. This matters because it can help in understanding why and how negotiations about learning and assessment succeed or fail. It is a model that provides insights into the complex nature of the interdependencies, emotions, irrationality, prejudice and misconceptions that characterise negotiations around assessment. This includes particular constructions of ‘disabilities’, and assumptions that assessment scores are synonymous with what a student has learnt. This analytic perspective entails taking into account the first and third person perspectives and, in doing so, establishing how meanings, intentions and orientations develop and change as they are realigned according to shifting relations. It reveals how the positions, determination and desires of players intermesh and create ‘games’ which in turn create institutional or faculty cultures. I apply this perspective to three case studies, each involving a student with disabilities with whom, for different reasons, I have had some contact in past years as they negotiated to establish an ‘alternative form of assessment’. In each case all identifying features have been changed or removed. In one case a resolution was effected easily, in a second the matter took more than a semester to resolve, and in the third case it was a very lengthy and inordinately complicated process.
Case study one: Sandy
Sandy is a full-time mature age student in an undergraduate degree program. She has a major physical disability: she is visually impaired (legally blind). She also has a number of related health issues that impact significantly on her ability to carry out tasks required in the subjects she is enrolled in.
In her first year of studies for a professional undergraduate degree, Sandy registered with the university Disability Liaison Unit (DLU) and subsequently received advice that she was eligible to apply for an alternative form of assessment. She subsequently applied for ‘Reasonable Adjustments and Equitable Assessment’ in all her first year courses. The subjects in which Sandy was enrolled were not all located in the one Department or School, but were spread across a number of different departments. Within two weeks of applying for reasonable adjustments and equitable assessment, staff in her main program area proactively initiated a series of meetings. Those meetings involved Sandy, relevant teaching staff and academic administrators in each of the different areas. Alternatives to written exams and written assessments were developed and agreed on within the first three weeks of her commencing the degree program. Each semester those agreements were renewed and refined to reflect the particular learning objectives of the courses. The process was efficient, it involved the student, and she knew with certainty early in the semester what her assessment tasks would be. Sandy also received additional support from teaching staff in the preparation of her assessment. Indeed, when Sandy complained of difficulties she was experiencing with the software she was using to enlarge text and of
other problems like access to lockers and library support, teaching staff and administrators advocated on her behalf.
Case study two: Nathan
Nathan is a young man in his early 20s enrolled in an undergraduate X university degree program. He suffers from chronic fatigue and another physical disability that impairs his capacity to study or concentrate for extended periods of time, to write, or to sit for extended periods of time. In the first year of his program Nathan registered with the university Disability Liaison Unit (DLU) and received advice that he was eligible to apply for alternative forms of assessment. He went through the standard process of providing medical documentation and completed and submitted the required forms to apply for an alternative form of assessment. His application was initially rejected by the School. No clear reasons for that decision were provided. Nathan accepted the decision and subsequently sat the exams. He failed, and then successfully appealed the failed results. He subsequently sought legal advice. On the basis of that advice he applied for an alternative form of assessment once more. This time he sought the assistance of an advocate who made a representation to the School and explained its legal and policy obligations to students with disabilities of the kind he experienced. In the second semester of his first year Nathan was granted an alternative form of assessment to written exams. He performed those successfully and completed his degree.
Case study three: Trevor
Trevor is a mature age part-time student in the X undergraduate degree program. He has a debilitating chronic neurological and psychological condition which makes certain assessment activities very difficult. In his initial years as a part-time student Trevor struggled with his assessment work, which involved practical and written examinations. He reports doing well in understanding or ‘learning the material’ and in non-exam-based assessment, but doing very poorly in written exams due to his disability. When he performed non-examination assessment Trevor did well. He did not receive information about ‘Reasonable Adjustments and Equitable Assessment’, or advice that he might be eligible to apply, until his third year at university. During those first years Trevor repeatedly sought ‘special consideration’ when he became ill and formally appealed when he failed exams. Late in 2008 Trevor and staff did manage to arrange an alternative form of assessment to a written exam for one of the second year subjects. Trevor did that over the summer break and performed exceptionally well. When Trevor decided to make use of the provision enabling him to negotiate for ‘Reasonable Adjustments and Equitable Assessment’ on an ongoing basis in early 2009, he worked with the DLU. In February 2009 the DLU worked with him to develop a ‘Reasonable Adjustment and Equitable Assessment’ request recommending to the School in which he was enrolled that an alternative form of assessment to written exams be offered. Trevor provided supporting medical documentation that explained how his disability prevented him from sitting written examinations, and how sitting them also induced a debilitating medical condition. By early March the academic unit had received that recommendation to develop an alternative to written examinations. Half way through the semester, in week six, Trevor had still not received a response from that department. He then initiated a meeting with staff.
A meeting then took place and he attempted to negotiate an alternative form of assessment. The discussion covered possible alternative assessments to written exams, which included home-based electronic competency tests, assignment tasks, essays, etc. In the last week of the semester, Trevor received notice that his request had been denied. The DLU then intervened at the request of the student, met with teaching staff and required a written explanation. Despite that intervention, staff insisted that formal written examinations were the only form of assessment that could be considered, but relented and said the exam could be taken at the student’s home provided that the student organised the administration of the exam and paid for an invigilator whom the Department deemed to be ‘independent’ and ‘impartial’. Months of negotiations and argument followed, with correspondence and negotiations between dozens of people and organisations (from DEEWR, Pro-Vice-Chancellors, various DLU staff, students’ rights officers, teaching staff, administrators and managers, medical practitioners, and a professional association).
Eventually a very senior manager directed the relevant Head of School and Department to negotiate an acceptable alternative form of assessment. The process took over seven months, during which time Trevor could not sit the exams for semester one, received fail grades, was placed ‘at risk’ and was required to attend ‘at risk’ interviews. He experienced considerable stress and distress.
Discussion
How was it possible that some students and staff were able to negotiate an alternative form of assessment with relative ease while others could not? What factors were at play? Applying the perspectival model to the above cases will help answer this question. In saying this, it is acknowledged that applying the perspectival ‘I’, ‘we’, ‘they’ model to such cases is complex because the perspectives changed according to time and context, especially given the way that the ‘we’ and the ‘they’ changed as the processes unfolded. What follows is a ‘light’ application of this model that suggests how we can better understand and learn from our own practices.
Sandy and the ‘I’ perspective
For Sandy the ‘I’ perspective was relatively straightforward. Her disability was long-standing, well documented and physically obvious, in the sense that on meeting her you could see she had a physical disability and had difficulty with her mobility. Accordingly her self-identity as a person with disabilities was also clear. She knew how organizations worked and didn’t work in respect to ‘managing people with disabilities like her’. From her perspective, her rights and legal entitlements were clear. Not surprisingly, Sandy was happy with both the process and the outcome. Teaching staff were uniformly respectful, and she seems always to have believed that any hurdles related to her disability and assessment were being accommodated, so she did not doubt that she would be given every opportunity to complete her degree. Even so, Sandy sees herself as struggling. She experienced the alternative forms of assessment as successful in allowing her to demonstrate her understanding of core learning outcomes in similarly rigorous but different ways to her peers.
Sandy and the ‘we’ perspective
In Sandy’s case the ‘we’ included both the staff and Sandy. This is because the tasks and objectives were seen as common and everyone was committed to securing a common end. In this way the student and staff were allies. The student had a disability that placed a duty of care on staff to develop forms of assessment that allowed Sandy to complete her studies. The ‘we’ here shared a common view and judgment that while compliance with the legislative and policy requirements mattered, considerations of equity and justice mattered even more. There was an acceptance that this put additional work on staff because they had to develop alternative assessment tasks and ensure that Sandy was being assessed for the same ‘things’ as other students. Even so the ‘we’ perspective enabled everyone to adopt the shared view that Sandy was not being privileged, but being provided with an equitable or fair opportunity.
Sandy and the ‘they’ perspective
In Sandy’s case there was no occasion for a ‘they’ to emerge. There was no resistance or opposition to the provision of alternative assessment because the perspectives of ‘I’ and ‘we’ were consistent.
Nathan and the ‘I’ perspective
Nathan was a young male with a disability. This, however, was not a strong or secure source of identity for Nathan, partly because his disability was not visible, which meant some people thought that it was ‘not real’. Nathan was also reluctant to talk about his health because to do so made him feel ashamed, weak and less than a ‘real man’. His disability clearly offended his sense of masculinity and confidence. Nathan said that he was reluctant to initiate the application for equitable adjustment of his assessment for these reasons. Unfortunately, the initial rejection of his application confirmed his sense that he was being a ‘weakie’ and his sense that others thought he was out to secure an unfair advantage. It was not until he sought counseling and other advice that Nathan said he felt confident and ‘angry’ enough to get legal advice and apply again.
Nathan and the ‘we’ perspective
Looking at this case from the ‘we’ perspective, i.e. from the point of view of his teachers, Nathan’s disability was not real. All they could see was that Nathan looked and acted ‘normal’ and that therefore there was really nothing seriously wrong with him. Indeed, some staff were annoyed with him for ‘trying it on’ and, as they put it, for ‘wasting their time’. For some staff the value of equity was relevant because they believed that ‘it wasn’t fair on other students who had to do exams when he didn’t’. In this way Nathan also became ‘the Other’. This was particularly so in the initial stages of his second application and when he sought and got
representation from his advocate. Equally, it seems that teaching staff had become so entrenched in their commitment to examinations that the prospect of deviating from that tradition generated considerable fear, anxiety and resentment amongst staff. Over time, however, the ‘we’ perspective changed. The ‘we’ view altered as staff came to appreciate the nature of Nathan’s condition. As this change in perspective took place, so too did their capacity to identify their obligations, enabling them to better grasp the need for alternative forms of assessment. This change was possible partly because key members of the staff group changed. (Among the key changes, the role of Head of Department was taken up by a new member of staff.) It also helped that a staff development initiative was undertaken to help teaching staff appreciate the issues posed by students with disabilities and what that meant in respect to their associated responsibilities.
Trevor and the ‘I’ perspective
The ‘I’ perspective in Trevor’s case was always shaped by a basic lack of trust and confidence and by a lack of clarity about how to make sense of his disability. From the start Trevor believed that teaching staff treated him differently and communicated the message that he was trying to ‘scam’ them, that he was ‘causing bother’ and simply wanted an ‘easy’ option. This impacted on his self-confidence and how he viewed himself. He was aware that some staff described what he was requesting as ‘a farce’ and that he simply needed to ‘get over it’. While such views ‘shook him’, he also saw them as a standard prejudice that he had encountered throughout his life. He was nonetheless disappointed to see it operating among university academics. When Trevor was granted a one-off alternative form of assessment to exams in late 2008 and did exceptionally well, he was elated. That experience boosted his confidence and motivation. This confidence, however, was soon deflated when he attempted to negotiate an ongoing arrangement in early 2009 and was told that his good performance was a key reason why the exams had to be retained. The message was that either the alternative compromised academic standards because it was ‘softer’, or he had ‘cheated’. Either way he was devastated and insulted by the suggestion and what it meant in terms of trust and the prospects of negotiating future arrangements. Trevor was also concerned about not having been informed about the option of applying for an adjustment in his assessment in the first year of his enrolment, but decided to let ‘bygones be bygones’ and make the most of what lay ahead. He was optimistic about the prospect of having a form of assessment that meant he did not have to face a written examination. This, he thought, would mean he could complete his studies successfully and without further damaging his health. Indeed, he thought the prospect of alternative assessment meant he might enjoy his studies. Trevor was determined that he would continue with the negotiations and that with time his teachers and other university staff would realize that he was a very able and committed student. With the disclosure that both the Department and the School would not be granting an alternative assessment, Trevor began losing faith in the process. This impacted on his motivation to study and on his enthusiasm more generally. His health deteriorated as the process became increasingly protracted and he became uncertain about whether he would be able to complete his assessment. As time passed and the exam period came and went, he reported that he was ‘resigned’ and was going to simply put faith in his advocates and hope ‘the university’ would ‘do the right thing’. When he received the fail grades because he could not sit the exam, the department having refused to negotiate an alternative, his faith and confidence waned. When his Department and School were finally instructed by senior managers to negotiate an alternative form of assessment, Trevor felt like a ‘change agent’, something that was both empowering and terrifying because he knew he still had relatively little power in future negotiations and continued to rely on the goodwill of staff who were now being forced to do something they had fought tooth and nail to avoid.
Trevor and the ‘we’ perspective
One ‘we’ in Trevor’s case study was his advocate and family. Throughout the process the shared view was that what was being sought was legitimate and reasonable and that the School for various reasons was having trouble appreciating its obligations.
A second 'we' included teaching and administrative staff. Their identity as educators in a program in which examinations were integral carried considerable weight. From this 'we' perspective they already offered a fair and objective assessment in the form of examination, which was critical for maintaining academic standards. The teaching staff also believed that it would be unfair to other students if Trevor was given a different form of assessment. Staff in the School also positioned themselves as having little choice, arguing that they were constrained by the law to act as protectors of standards and public safety, and to abide by what they said were the requirements for continued accreditation of the program. The subsequent revelation that this was not the case, that the relevant registration body did not in fact require exams, 'was not appreciated' by teaching staff and managers. Nor was reference to the correspondence from the accrediting body in which it emphasised the need for the university to comply with the relevant government legislation and university policy in respect to EO matters. The 'we' altered as key players entered the negotiations. The eventual intervention by a senior manager, and their direction to the head of School and teaching unit to negotiate an alternative assessment and to provide remission and RPL, changed the rules of the game considerably.
Making sense of what happened
How can we make sense of all this, and what can be learned from it? There seem to be four issues at stake here: what were the intentions of those involved? whose interests were being served? was what happened good or desirable? and what should we as educators do about it? My discussion of these issues draws on the idea found in the western philosophical tradition often referred to as 'virtue ethics' and the special role played by phronesis (or 'good judgment'), understood as judgment that is oriented towards providing experiences that enable people to flourish (Flyvbjerg, 2001).
(i) What were the intentions of those involved?
Establishing intent is typically seen as central to any educational evaluation process. In the prescriptive literature on evaluation, the effectiveness of an educational experience is determined by establishing the degree to which intent or learning objectives match outcomes (Kellaghan & Stufflebeam, 2003; Popham, 1993). Identifying intent or objectives becomes further complicated when we are talking about groups of people such as teaching staff, administrators, or the various interests associated with professional associations. Can we, for example, assume intent is something 'the individual', like a student or teacher, possesses, and if so what are the implications of that for a group? Can the intention of an influential player such as a Head of Faculty be made observable by means of interpretative techniques that expose the causal flow of social rules, meanings and norms that shape how we act, emanating from an intent like the 'need to maintain standards' or to promote equal opportunity principles? Yet identifying the role of human intention in any process of social action is never a straightforward task. Understanding why this is so requires us to consider the character of 'social action', what it is and what forms it can take. There is a large and rich literature on the logic of social action (Anscombe, 1963; Chisholm, 1971; Davidson, 1980; Parsons & Shils, 1951; Parsons, 1954; Simon, 1945; Weber, 1978; Wilson, 1989). However, while this is relevant here, it cannot be fully explored without losing the central focus of this paper - to better understand how we can achieve quality learning experiences for students with disabilities. Can we best understand social action in Weberian terms, as something that can be categorised as instrumental, rational or affective (Weber, 1978)? Can we reasonably assume that people are fully self-conscious, reflective decision-makers, and that we can gain a reasonable interpretative understanding of how we act by looking for causal explanations (ibid.)?
Can we believe people typically know all the relevant facts before they act, that they are aware of their preferences and know the best ways of matching opportunities with those preferences? Perspectives influenced by the tradition of rational action theory privilege cognitive practices and rational motives for why we do or do not act in particular ways, by emphasising how intentions, beliefs, values and rules of interpretation shape our social interaction. While this is a popular way of understanding social action, I argue that social relations do not seem to work this way. Indeed, the idea that our actions are rational and driven by intentions overlooks the contingent, disorderly, variable and unexpected qualities of the social, political or educational experiences that characterise large institutions (Barbalet, 1998). The question of intent or values raises questions about how we interpret behaviour that is said to 'demonstrate' intent, character or 'personality'. In Nathan's case, for example, how can we interpret the actions of teaching staff when he initially attempted to negotiate an agreement about Equitable Assessment Arrangements? Can we interpret the refusal on the part of staff to provide an alternative assessment as a lack of knowledge or understanding about the nature or status of Nathan's disability - that it was 'not real'? Or might we understand such action in terms of prejudice? Prejudice plays an important role in what we claim to know about the world: it entails holding a belief, or claiming to know something, that is not based on evidence, and typically involves making generalisations about complex issues. Moreover, we often rely on and construct prejudices about groups of people, and people with disabilities have long been subject to prejudicial treatment. Prejudices are typically unhelpful and damaging, especially when the generalisations we make about a group are negative. As staff explained in this case study, when they 'look at' or 'talk' with Nathan, he seems 'normal', implying that 'there is nothing wrong with him'. How should we interpret the 'annoyance' felt by some staff and the view that he was 'skiving-off' and 'wasting their time'? Is such a response due to a lack of knowledge about 'chronic fatigue'? Is it due to a lack of clarity about how disability categories are defined, about ideas of who is 'legitimate', about who does and who does not have a disability? Did claims by staff about the 'unfairness to other students' arise from a lack of understanding about the purpose of Equitable Assessment Arrangements? Did academics and managers understand that the purpose or intention of EAA was to compensate for Nathan's functional impediments, which hindered his capacity to perform in exams? Did they see EAA as a bid to improve the validity of assessment 'tools'? In Nathan's case his chronic fatigue prevented him from demonstrating specific skills and knowledge, so an appropriate accommodation would have improved the reliability of the assessment. I suggest that prejudice, as well as a failure to appreciate the intention of 'accommodations' due in part to a lack of clarity about their purpose, is a common obstacle to negotiations about alternative forms of assessment. This is particularly so when the student's disability is not physically obvious or is not generally regarded as a 'legitimate disability'. This observation is apparent if we consider Sandy's case. Sandy's disability is one of the first things you notice when you meet her.
It is clear 'just by looking at her' that she cannot sit formal written exams because of her physical disability. Her performance in a 'normal examination', or even in assessments like written assignments, will be much lower than her actual ability would warrant. In Sandy's case teaching staff had no problem in accepting and accommodating her request. Staff understood that the purpose of the 'accommodations' was to offset her disability and to achieve a clear and accurate view of her capacities. Can we interpret the different treatment that Sandy received - compared to Trevor and Nathan - as a matter related to ambiguities and a lack of knowledge about different kinds of disabilities? Can we best understand it in terms of prejudice, or confusion, or a lack of knowledge about the purpose of EAA? Trevor's and Nathan's disabilities were not physically apparent, and the precise status or categorisation of their disabilities seemed elusive to staff. This seemed to feed into the idea that 'accommodations' unduly helped the student get better results rather than helping them demonstrate their actual level of proficiency (Koretz, 2008). Can we interpret the reluctance to develop alternative forms of assessment as simply not knowing how to do it in a way that was reasonable and effective?
How do we know, for example, which accommodations can offset a bias in assessment caused by acute anxiety, chronic fatigue or visual impediments in ways that give us a realistic account of a student's learning? Added to this is the very practical issue of resources. The additional work involved in designing and administering alternative assessments can be significant. Doing the job well entails research on the particular disability and on assessment options. In some cases what is needed may be obvious, such as enlarged font, more time or oral assessment, but that is not always the case. While most DLUs and specialist educational support units in the university provide some guidelines, these are typically not enough for the task at hand if we are serious about educating teaching staff about the nature of the disabilities, their legal obligations and the practicalities involved in designing and administering various alternative assessments. How can we interpret the 'change of heart' with Nathan's second application? Clearly the 'we' perspective changed as key members of the 'we' group changed (specifically, with the arrival of a new Head of Department). The Department was so committed to examinations that the prospect of deviating from that tradition generated considerable umbrage amongst staff. Did the professional development initiatives help teaching staff appreciate the issues and their associated responsibilities? Somehow 'we' members came to appreciate the nature of Nathan's condition, their obligations and the alternative assessment options.
(ii) Who wins and who loses?
The question here is whose interests are served, and whose are not, in negotiating alternative equitable assessments. In Sandy's case the needs of the student seemed to have been served, along with the interests of teaching staff and the School. Sandy was given the opportunity to participate in education. She was not only granted an alternative equitable assessment, but also received additional support, such as one-on-one support from teaching, library and DLU staff. The interests of teaching staff were also served in the sense that they believed they 'did the right thing' by supporting a student with disabilities. Their compliance with university policy and the broader legislative frameworks meant the interests of teachers, the institution and the community were served. The development of assessment that allowed Sandy to demonstrate what she had learnt also meant that staff had a reliable indication of her learning - something that was in their interest as professional educators. From the 'I' perspective of the individual staff concerned, the additional work entailed in supporting Sandy was considerable. It meant the development of extra teaching material each week, and the design and administration of other forms of assessment. In the context of already heavy teaching workloads this was significant. Here the 'we' perspective (teaching staff as a collective) was that while we were 'doing the right thing', 'they' (i.e., the university managers) were not, because 'they' were not adequately acknowledging the needs of students like Sandy and the demands that supporting 'them' entails. 'We' teaching staff were placed in a position where they had no choice; they had to do the additional work, but did so largely in the context of an institution that did not offer sufficient support by way of time, practical and material support, or opportunities for professional development. In Sandy's case the interests of other students were also served because they were part of a department and institution committed to equity. Moreover, it revealed that staff were also committed to professional teaching practices. It needs to be said that this viewpoint was not shared by all students. From the perspective of some students without disabilities, resentment about what is seen as the advantageous treatment that students with disabilities receive is not uncommon. Prejudices and a lack of understanding about the purpose of alternative forms of assessment often mean those options can be interpreted by some other students as an unfair 'leg up', which can in turn generate resentment and impact negatively on relations between students. In Nathan's case his needs were finally met, but he bore the 'weight of justice' in unfortunate ways: his original application was rejected, and he then had to invest considerable time, energy and resources in demonstrating the legitimacy of his case a second time.
In that way he lost in terms of the negative messages communicated to him about the attitudes of staff and the university, a message that harmed his self-confidence. It also entailed the diversion of considerable resources that could otherwise have been invested in his studies - it was a significant 'distraction' given the disadvantages he was already experiencing. The turn-about and success with Nathan's second request meant his interests were eventually served in ways similar to Sandy's. The same can be said for the interests of his peers, teaching staff and the university more generally. For Trevor, however, the question of who won is not so clear. The fact that it took over seven months of negotiations and an enormous effort on Trevor's part meant he carried the 'burden of justice'. While the final result saw 'accommodations' eventually agreed to, it took a very long time and an enormous amount of energy, determination and resources that would have been better spent on study. Trevor also lost because the 'agreement about assessment' was not reached in time for him to complete the assessment for the first semester subjects in which he was enrolled. Amongst other things this meant he had to apply for 'remission' and repeat the subjects at a later date. Moreover, the prolonged nature of the process consumed a substantial amount of time and energy on the part of student advocates, university teaching staff and managers. Clearly this was a loss that could have been avoided had the process been better managed and the obstacles to accommodating Trevor's disability been recognised and overcome earlier.
(iii) Is what happened good or desirable?
The answer to this question depends on the key values being used. For those with a commitment to equity, participation and diversity, what happened in the end in each of the case studies was good. Clearly Sandy's case was more desirable than what Trevor endured. The prejudices and obstacles that Trevor experienced were neither good nor desirable. Moreover, the protracted way in which agreement was finally reached advantaged no one. The fact that a 'resolution' was reached only after staff were instructed 'from above' to make accommodations for Trevor suggests there is considerable work to be done in making cultural changes of a kind that would see the impediments to recognising and accommodating disabilities identified and remedied. Such a task is not a simple one, not only in terms of the material resources it would require, but also because it entails recognising prejudice and a lack of information and understanding, and making significant changes in attitude and in the Departmental culture more generally.
(iv) What should 'we' do about it?
The case studies above suggest a number of 'action items'. If we begin from the premise that supporting students with disabilities to learn in the university is a good, then it follows that there is much work to be done - particularly in developing alternative forms of assessment. The case studies suggest that some parts of our universities do this well. It would be helpful to discover in greater detail what practices are working and why. The attitudes of teaching staff and managers towards people with disabilities seem critical. So too does their knowledge of the relevant legal obligations. Where there is 'good will' and a recognition of the legitimacy of the student's needs, we seem to do well. Where there is not, and where there is evidence of prejudicial assumptions, it becomes clearer what needs to be done in respect to remedial action. When students with disabilities are labelled 'failures' or 'untrustworthy' because they are 'putting it on', or when students are advised they simply 'just need to get over it', then the nature of the problem that universities face becomes clear. The task of changing mindsets and deeply ingrained ideas, noticeable in claims like 'there isn't really anything wrong with him' or that he is after 'an easy option' and wants an 'unfair advantage', points to the nature of the problem as well as to the solutions. Programs designed to change culture are needed. Ongoing professional development and educational programs, similar to successful health promotion campaigns that have changed attitudes and behaviour in respect to driver safety or cigarette smoking, may point the way. This, in conjunction with the provision of better information about matters like the procedures students go through within the university to verify the legitimacy of their claims, will also assist. From the various 'we' perspectives it can be seen how some staff framed the problem as an individual one about the student and their intention to cheat or their desire to 'use the system' to 'unfairly improve their grades', rather than as one of a disability that prevented them from being assessed in the traditional form. In this way it can be seen how insiders' accounts of how the problems are variously framed provide insights that can inform the kind of action that is needed if we are serious about inclusive education in the university.
The claim that an alternative form of assessment runs the risk of 'compromising academic standards' and the reputation of a program is one that students with disabilities frequently hear. It is an assumption that is usually stated at the beginning of negotiations and is liberally cited across relevant institutional documentation. It is an assumption that is both prejudicial and revealing of the distrustful attitudes that obstruct attempts to accommodate students with disabilities. Why assume that developing different assessments - especially when they are an alternative to exams - somehow 'compromises standards'? What does that reveal about attitudes towards 'examinations'? Do exams in fact provide the best way of assessing all students? While these questions raise issues that are the focus of a separate paper, I suggest that a three to four hour written examination at the end of a course of study provides a very limited way of 'measuring' student learning. Can answers to a sample of questions, said to be representative of the knowledge and skills inherent in the 'learning outcomes', really give a comprehensive 'measure' of learning?
Conclusion
The case studies in this article, which offer personal pronoun perspectives, help reveal how teaching staff would benefit from practical assistance in recognising the specific needs of students with disabilities, and in understanding what is required in terms of the design and administration of alternative forms of assessment. How, for example, can we accommodate the assessment needs of a student with a specific chronic medical condition? Clearly additional resources in the form of professional development and techniques in curriculum design are needed. Some teaching staff also require support in recognising and appreciating their policy and legal obligations. Moreover, the 'burden of justice' continues to rest heavily on the student, which suggests that the provision of greater support for students as they negotiate university processes is warranted. The larger task of professional development oriented towards cultural change that identifies and challenges prejudice is nonetheless an important one if universities are to become places in which equal opportunity principles are respected and in which students with disabilities can enjoy the learning experiences available to their peers.
References
Anscombe, G.E.M. (1963). Intention. Oxford: Blackwell.
Bacchi, C. (1999). Women, Policy and Politics: The Construction of Policy Problems. London: Sage.
Barbalet, J. (1998). Emotion, Social Theory and Social Structure. Cambridge: Cambridge University Press.
Bauman, Z. (1991). Modernity and the Holocaust. Ithaca: Cornell University Press.
Chisholm, R. (1971). On the Logic of Intentional Action. In R. Blinkley, R. Bronough, & A. Marras (Eds.), Agent, Action and Reason (pp. 39-80). Oxford: Basil Blackwell.
Clark, W. (2007). Academic Charisma and the Origins of the Research University. Chicago: University of Chicago Press.
Davidson, D. (1980). Actions, Reasons and Causes. In D. Davidson, Essays on Actions and Events. London: Duckworth.
Dreyfus, H., & Dreyfus, S. (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: The Free Press.
Elias, N. (1978). What is Sociology? (Trans. S. Mennell & G. Morrissey). London: Hutchinson.
Flyvbjerg, B. (2001). Making Social Science Matter: Why Social Inquiry Fails and How It Can Succeed Again. New York: Cambridge University Press.
Gould, S.J. (1996). The Mismeasure of Man. New York: W.W. Norton.
Kellaghan, T., & Stufflebeam, D.L. (Eds.) (2003). International Handbook of Educational Evaluation. Sage.
Koretz, D. (2008). Measuring Up: What Educational Tests Really Tell Us. Harvard University Press.
Parsons, T., & Shils, E. (1951). Towards a General Theory of Action. Cambridge: Harvard University Press.
Parsons, T. (1954). Psychology and sociology. In J. Gillin (Ed.), For a Science of Social Man. New York.
Popham, W.J. (1993). Educational Evaluation. MA: Allyn and Bacon.
Simon, H.A. (1945). Administrative Behavior (1st ed.). Illinois: Free Press.
Weber, M. (1978). Economy and Society: An Outline of Interpretive Sociology (2 Vols.). Berkeley: University of California Press.
Wilson, G. (1989). The Intentionality of Human Action. Stanford, CA: Stanford University Press.
The affective domain: beyond simply knowing
David Birbeck
Learning and Teaching Unit, University of South Australia,
[email protected]
Kate Andre
School of Nursing and Midwifery, University of South Australia,
[email protected]
The affective domain is a vague concept that could relate to at least three different aspects of teaching and learning. Firstly, the affective domain could be about the teacher’s approach to teaching in terms of philosophy and what this communicates to the student. In this perspective the affective domain relates to the way in which the teacher interacts with students to build a relationship. Secondly, the affective domain could be about appealing to the affective attributes of students as a deliberate form of engagement. Such an approach might seek to make students annoyed or angry at an injustice and in this way some students may be motivated to take a greater level of involvement. In both these cases there is a profound reliance on the teacher to establish the learning environment. Students may choose to respond positively, or otherwise, but they do not initiate. The third perspective to affective teaching and learning is one where students are asked to engage with the development and understanding of their own motivations, attitudes, values and feelings in terms of their behaviour and actions as a professional and as a citizen. This paper seeks to explore this third perspective. There is a need to think of ways to move beyond simply embedding affective teaching and learning strategies in curricula while assessing cognitive outcomes. We need to ensure that the Graduate Qualities / Attributes we seek to develop are constructively aligned and assessed as outcomes in their own right, not as adjuncts to cognition and skills. This is not an argument that asks one to choose between the affective and cognitive domains, but supports a fusion of the two. To enable students to recognise the value of affective attributes they should be overtly developed, taught and assessed; explicit, rather than embedded in cognitive tasks. Keywords: assessment, affective, cognitive, professional identity, ethics
Introduction and importance
At the 2008 Australian Technology Network (ATN) of Universities conference on assessment in higher education, the keynote speaker, Professor John Biggs, asked this question: "As a teacher, what do you want to achieve in teaching?" While there were a number of responses, overwhelmingly they were located within the affective domain; that is, the domain in teaching and learning that addresses concepts such as attitudes, values, feelings and motivations (Krathwohl, Bloom, & Masia, 1964). For example, "I want my students to have the same love for my discipline as I do". This struck us as odd, as the question was posed within the context of Biggs' discussion of his Structure of the Observed Learning Outcome (SOLO) taxonomy (Biggs, 1999), which is predominantly, if not completely, cognitive. Further, this discussion incorporated that of constructive alignment (Biggs, 1999), which is based on the notion that desired education outcomes, the assessment and the learning environment must all be congruent. Given these two contexts, one might have thought the audience primed to respond in cognitive terms, with something about understanding or the skills to solve problems. The discussion around constructive alignment prompted me to think about how one would constructively align the audience member's response. That is, how would one align an outcome based on the statement "I want my students to have the same love for my discipline as I do"? With this as the desired outcome we would need to ask some fairly difficult questions, such as how would one teach "love of discipline" and how would one assess it? This led to the even more difficult question: how would one assess any aspect of learning that could be defined in terms of the affective domain of learning?
The responses of a number of academics at a conference might be dismissed as merely provocative; however, these comments resonated with those of other teachers we were working with. These academics talked about their aspirations for their students in terms of wanting to develop "passionate" students; they saw their role as "motivators" or to "enthuse"; they aspired for their students to see the "elegance of the learning". In this same vein of the affective domain, but less positive, were comments of frustration about students who did not "take responsibility" for their own learning. Each of these aspirations, both positive and negative, resides in the domain of affective learning. Dansie et al. (2005) concur and claim that outcomes in tertiary education are overwhelmingly based in the cognitive domain, yet teachers often have aspirations that their students develop affective domain outcomes. Dansie et al. (2005) go on to question this imbalance and to suggest that teaching and learning outcomes in the affective domain should be considered and explored. There are good reasons to take heed of Dansie et al.'s (2005) advice that go far beyond the ambitions of some teachers for their students. Sumsion and Goodfellow (2004), in their work mapping generic skills across a number of curriculums, articulate their concerns with what they describe as "unproblematised accounts of the development of generic skills and qualities" (p. 330). They claim that the skills that one might develop in an environment such as a higher education setting might not automatically transfer to other settings, citing "…the lack of attention to the context in which skills are developed, and the paucity of evidence to suggest that they are, in fact, transferrable across contexts" (p. 330). Further, they assert there is a difference between capability and competence such that "capability extends beyond competence; it involves an ability and a willingness to apply understandings, knowledge and skills to unfamiliar contexts and unfamiliar problems" (p. 332). In short, what is claimed is that while cognitive skills may be developed well enough at university, unless the student has certain affective capabilities they are less likely to be able to use their cognitive skills and understandings across a range of environments (Boud & Falchikov, 2006). Consequently, there must be an explicit relationship between cognitive learning, assessment and "capability" (Sumsion & Goodfellow, 2004). Crebert, Bates, Bell, Patrick and Cragnolini (2004) claim that a student's ability to integrate and demonstrate generic skills across contexts "Requires ethics, judgement and self confidence to take risks and a commitment to learn from the experience" (p. 148). The idea of skills, even generic skills is a cul-de-sac. In contrast, the way forward lies in construing and enacting a pedagogy for human being. In other words, learning for an unknown future has to be understood neither in terms of knowledge or skills but of human qualities and dispositions. (Barnett, 2004, p. 247) In 'Learning for an unknown future', Barnett (2004) states that a being capable of thriving with uncertainty needs certain dispositions: "Among such dispositions are carefulness, thoughtfulness, humility, criticality, receptiveness, resilience, courage and stillness" (p. 258). The importance of the affective domain, then, is recognised by our graduates, our teachers and the professions in which our graduates seek employment.
The transition of graduates into industry and professional workplaces, and the skills needed by graduates who will seek employment in an uncertain world that is dynamic and changing at an exponential rate, require that higher education institutions address the imbalance between the cognitive and affective domains. However, the question from the conference remains: how would one teach and assess affective domain outcomes?
Cognitive domain and the affective domain
The affective domain in teaching and learning is a vague concept and could refer to at least three different aspects of teaching and learning. Firstly, the affective domain could be about the teacher's approach to teaching in terms of philosophy and what this communicates to the student. In this perspective the affective domain relates to the way in which the teacher interacts with students to build a relationship (Crossman, 2007; Huyton, 2009). Secondly, affective teaching and learning can be about appealing to the affective attributes of students as a deliberate form of engagement (Beard, Clegg, & Smith, 2007; Crossman, 2007). Such an approach might seek to make students annoyed or angry at an injustice, and in this way some students may be motivated to take a greater level of involvement.
In both these cases there is a profound reliance on the teacher to establish the learning environment. Students may choose to respond positively or otherwise, but they do not initiate, nor are they responsible for the environment. There is a good deal of literature pertaining to both of these approaches. The third perspective on affective teaching and learning is one where students are asked to engage with both the development and understanding of their own motivations, attitudes, values and feelings in terms of their behaviour and actions as a professional and as a citizen. It is this third perspective that is relevant to this discussion. Arguably the most influential works in this area are those of Bloom, Englehart, Furst, Hill and Krathwohl (1956) and Krathwohl et al. (1964) in the development of the cognitive and affective taxonomies. While the cognitive taxonomy articulated by Bloom et al. (1956) has been modified over the decades (Krathwohl, 2002), it remains significantly intact in higher education, where curriculums are dominated by cognitive outcomes such as the development of critical and creative thinking skills and large content loads (Atherton, 2005; Krathwohl et al., 1964). However, the affective domain has received sporadic attention and, while it is often acknowledged as important, remains significantly underdeveloped in terms of explicit outcomes and assessment strategies. Typically, two arguments have been asserted that seek to explain the lack of attention given to the affective domain. Firstly, the cognitive domain is intuitive: it seems to make sense at university to concentrate on the body of knowledge, it makes sense for students to develop problem-solving skills and to critically question science and society, and it makes sense to have graduates who have the capacity to develop creative responses to difficult and complex problems (Krathwohl et al., 1964; Pierre & Oughton, 2007). Secondly, the cognitive domain is relatively easy to assess, and sound assessment practices like moderation can be applied to ensure some level of objectivity and fairness (Pierre & Oughton, 2007). Conversely, the affective domain is contentious, raising all manner of fundamental challenges and questions that go to the very heart of the purpose of education at a tertiary level and asking hard questions about social and cultural power in education, such as: How does one judge intrinsic qualities such as values, motivations, feelings and attitudes? Is higher education an appropriate place to develop qualities such as hard work or having a go? If so, how should they be assessed? What will be used as a standard upon which one judges? How does one ensure any sense of validity and transparency? How can one tell if students are authentically displaying these intrinsic traits and not just "playing the game"?
The challenge presented by working with, and in, the affective domain must not be ignored simply because it presents difficulties and is more contentious than the cognitive. Further, affective and cognitive domain teaching should not be seen as a dualism. Rather, by using both the affective and cognitive domains, teaching and learning can be a holistic undertaking where both are given their due value, or, in the terms put forward by Sumsion and Goodfellow (2004), we must think in terms of developing students to be not simply knowledgeable and competent, but capable as well. The importance of both domains, working together, can be seen through assessment and Graduate Qualities.
Affective domain and graduate qualities / attributes
In common with all Australian universities, the University of South Australia makes certain claims about its graduates (University of South Australia, 2009). At the University of South Australia these claims are articulated through a non-hierarchical list of Graduate Qualities which are said to be at the core of our teaching and learning (Lee, 2007). These Graduate Qualities, like many others at universities across Australia, contain claims that graduates will have certain intrinsic affective qualities as well as explicit cognitive qualities. In order to evidence the attainment of the Graduate Qualities, programs and courses are required to align their outcomes via an agreed set of Graduate Quality indicators. These indicators contain both cognitive domain outcomes and affective domain outcomes. If we are to evidence the attainment of the Graduate Qualities, then course and program outcomes need to articulate how these Graduate Qualities are to be achieved and how they are to be measured. If affective domain outcomes are claimed then they should be assessed.
At present, what predominantly appears to happen is that affective domain skills are taught by embedding them in the learning environment, but it is overwhelmingly the cognitive skills that are actually assessed. In terms of what John Biggs calls "constructive alignment" (Biggs & Tang, 2007), what one should expect to see is the outcomes, the teaching method, and the assessment all intrinsically aligned. That is, one would expect there to be an affective domain assessment to go with the affective domain outcome. When assessing an outcome located in the cognitive domain, most academics, I suspect, would be concerned with awarding marks or grades solely for cognitive domain engagement or critical thinking, and most would consider the level of academic rigour relatively straightforward to judge by using taxonomies like Bloom et al.'s (1956) or Biggs' SOLO (Biggs, 1999). However, the same cannot be said for the affective domain. Commonly, what is accepted as assessment of intrinsic, affective domain outcomes described in Graduate Qualities, such as, for example, valuing ethical perspectives, is that students are asked to "critically reflect" or write an evaluative or analytical discourse about the importance of valuing ethical perspectives; but this does not demonstrate the act of valuing, nor does it demonstrate the attainment or development of an affective outcome. Indeed, at a fundamental level, writing a critique about an affective outcome posits the outcome squarely in the cognitive domain. Krathwohl et al. (1964) note this discrepancy and explain it in terms of the apparent similarity that exists between the affective outcome "receiving phenomena" and the cognitive outcome "knowledge" (p. 50). There is a distinction in that "receiving phenomena" incorporates an openness of mind and discrimination, whereas "knowledge" is more about recall and memory. Further, they note that it is probably possible to restate an affective objective at the lowest level of "receiving phenomena" in cognitive terms by utilising a combination of knowledge, comprehension, analysis and evaluation. That is, using ethical perspectives as an example, it is possible to make an argument that the cognitive skills of knowledge, comprehension, analysis and evaluation might be used together to demonstrate that one is open minded and willing to consider ethical perspectives. In terms of affective engagement this would represent an outcome at the level of "receiving phenomena", the very lowest level of the affective domain taxonomy. Moreover, if one were to accept knowledge, comprehension, analysis and evaluation, all cognitive skills, and make the argument that an affective domain outcome was achieved, it may, in effect, defeat the whole purpose of aspiring to achieve an affective learning outcome, and also of using the taxonomy to guide learning and develop higher order outcomes. That is, the purpose of the lower levels of a taxonomy is to allow the student to build on them and to attain higher levels of learning. The cognitive equivalent approach may arguably be useful for attaining low level affective outcomes but does not allow access to higher order attainment. There are examples of some universities that have sought to incorporate the affective domain in their teaching and learning (University of North Carolina at Charlotte, n.d.).
A full investigation remains to be undertaken, but these typically describe measurable outcomes with fairly nebulous statements such as "acting professionally", through to very confusing claims such as this one for the highest level of the affective taxonomy, "Characterisation by value or value complex": "I've decided to take my family on a vacation to visit some of the places I learned about in my class" (University of North Carolina at Charlotte, n.d.). While we expect that the author could articulate how this is measurable, and how going on vacation evidences "pervasive controlling tendencies" and "the integration of beliefs, ideas and attitudes into a total philosophical or world view" (Krathwohl et al., 1964, p. 184), we are severely challenged to do so. Krathwohl et al. (1964) state that it is unlikely that the level of behaviour required by "Characterisation by value or value complex" (p. 184) could be demonstrated in a higher education environment, as it relates to the real world where one makes judgments and decisions that need, over time, to reflect an identity of affective consistency.
Challenges rethought
In their seminal work, Krathwohl et al. (1964) describe the affective domain by contrasting it with the cognitive domain in this way: "In the cognitive domain we are concerned that the student shall be able to do the task when requested. In the affective domain we are more concerned that he does do it when it is appropriate after he has learned he can do it" (p. 60). Krathwohl et al.'s (1964) characterisation is compelling for a number of reasons. Most importantly, it repositions the affective domain away from subjective judgements made from privileged positions towards the question of "did you" or "didn't you" when you knew how.
The distinction here can be illustrated by an example. I once taught ethics to fourth year Education students. The final assessment asked the students to discuss their understanding of ethics, and they were encouraged to use examples from their experiences on preceding practicum placements. One student wrote about how he came to believe that a student in his year two class had been sexually abused. He reported the matter to his mentor teacher, and his ethical discussion in his essay centred on the fact that, to his knowledge, the teacher did not comply with South Australian law in terms of mandatory notification. What was not covered in the essay was that the student had completed his mandatory notification training and was under an equally compelling obligation to notify. Arguably, it could be claimed he had a higher obligation, as it was his conviction of the abuse that raised the issue. There is no room for discretion in this matter; it is mandatory notification, not discretionary notification. The student articulated his discussion well. He presented a thorough critical argument of the need for and importance of mandatory notification and, by the assessment criteria based on the cognitive domain, scored highly (Birbeck, 2009). However, the discussion viewed through the lens provided by Krathwohl et al. (1964) in the affective domain is not as meritorious. That is, he knew it was appropriate to report, but did not. There is no way to definitively know why he did not report. We don't know his motivation, nor are we aware of his values. We have some sense through the essay about what he feels about the issue now, but we have no sense at all of his attitude in respect to his role as a teacher in this situation. All we know is that he could have reported and did not. If the student were a first year education student one would not be as concerned (about the student teacher at least). University is about learning, and mistakes are often powerful learning events. However, this student had proceeded through to the last semester of his last year. In fact, he would be teaching as a professional in-service teacher in less than three months; that is, on his own, with his own class, and yet he had not demonstrated the capability to protect his students, an expectation placed on him by society, accepted by him personally, and expected of his profession. The application of Krathwohl et al.'s (1964) characterisation allows the opportunity to judge an outcome in the affective domain without placing oneself in the untenable position of judging another's attitudes, values, feelings or motivations. The judgement is made possible by aligning the student's actions with those expected by the profession in which one is engaged. The evidence suggests that affective behaviors develop when appropriate learning experiences are provided for students much the same as cognitive behaviors develop from appropriate learning experiences. The authors of this work hold the view that under some conditions the development of cognitive behaviors may actually destroy certain desired affective behaviors and that, instead of a positive relation between growth in cognitive and affective behavior, it is conceivable that there may be an inverse relation between growth in the two domains (Krathwohl, Bloom, & Masia, 1971, p. 20). Boud (2000) concurs: "If assessment tasks within courses at any level act to undermine lifelong learning they cannot be regarded as making a contribution to sustainable assessment" (p. 151).
The issue then, in developing authentic assessment in the affective domain, is how to constructively align the desired learning: in this case, developing the student's capacity to move beyond simply knowing what is ethical to having the confidence, the resilience and the courage to act ethically. Assessment criteria, therefore, must be developed in terms of student actions, not student knowledge. While there are various simulations and role plays involving the affective domain that can be embedded into teaching situations, these generally require students to practise activities rather than execute 'real' behaviours with the resulting interpersonal implications. The assessment of ethics within group / team work is a useful example of how the development of ethical actions might be embedded in situations that require students to truly enact their ethical beliefs. Students sometimes have issues with group work. Predominantly what concerns them is the unfairness of having to carry a non-contributing participant or "social loafer" (Kavanagh & Crosthwaite, 2007). They believe that an individualised moderation of marks within a team is fair, and one often finds that, if this is done well, students are more supportive (Kavanagh & Crosthwaite, 2007). Typically, though, what is assessed is cognitive and is stated in terms of the problem or project in conjunction with the process and skills learned.
Affective domain thinking would be assessed differently. Once basic team / group work roles and expectations had been established and understood, assessment in the affective domain would be possible. When you work through the layers of what team / group work really is, you find that it may be framed as an exercise in ethics and trust. How one behaves within a group is about ethics and one's sense of responsibility to self and group. The focus of peer assessment and self reflection takes on a completely different perspective. In this type of assessment one is not assessing a student's ability to critically reflect on or analyse an ethical problem. Nor is it about the group work process, skills or the problem being worked on. It is about actual ethical behaviour: what was done and why. For example, an affective domain aligned assessment of the Graduate Quality "acting ethically" might require an evaluation of the following questions: Did the student meet their agreed commitments? Did the student fulfil their agreed role within the group/team structure? Did the student defend their ideas? Was the student willing to confront team members who were not meeting their obligations? Was the student willing to take risks? When decisions were reached by consensus, did the student commit?
Bond University's School of Business has a course titled "Organisational Behaviour" which incorporates the essence of these ideas within its assessment structure (Murray & Chao, 2009). In respect to a group based project, the students are required to reflect on their behaviours and the behaviours of others, such that: Sometimes company projects raise ethical issues among participants. Issues may involve the choice of project, the distribution of proceeds, and whether or not to give accurate feedback to problematic team members. You will need to present yourselves very professionally as you work to obtain cooperation from the other organisations necessary to implement your project. Your team is also likely to be made up of students from a variety of countries, requiring you to learn to work with those from different cultures. Class discussions will enable you to benefit from the insights other companies have experienced with regard to cultural issues (p. 4). It is the nuanced assessment that differentiates this project from many others. That is, students are explicitly required to consider their own behaviour, that is, their 'actions', and those of others. For example, the paragraph explicitly cites issues of choice, distribution, accurate feedback and, last but not least, international perspectives. This is distinct from addressing the issues of ethical behaviour from a cognitive perspective, where the assessment might call for a discussion on the ethical issues inherent in the business case project.
Future research intentions
The paper thus far has sought to raise awareness of the way universities, both here and internationally, value cognitive learning above affective learning. This is despite statements by higher education institutions, typically in terms of "graduate attributes" or "graduate qualities", that articulate a desire for their graduates to possess certain affective capabilities. As a consequence we intend to address this issue by trialling small-scale, targeted interventions. Through a collaboration between the Learning and Teaching Unit and staff from the School of Nursing and Midwifery, we have identified courses where nuanced modifications to both learning activities and aligned assessment have the potential to develop students' ability 'to do', not just 'to think'. For instance, in 2010 we plan to trial an intervention whereby students engage with clients/actors depicting specified mental states in the classroom setting. The study will evaluate how students who have practised their behaviours in this simulated environment approach and engage with clients while on placement. Evidence suggests that students and staff who have merely observed, rather than participated in, encounters with clients with altered mental health states interact superficially with psychiatrically ill clients and provide a lesser level of care. At this point it is critical to reiterate that these interventions are not intended to be wholly about the affective domain. Rather, they are intended to address an imbalance, and the cognitive ability of students will remain an important aspect of the learning and the assessment.
Conclusion
It is acknowledged that the affective domain is commonly used by teachers and embedded into their teaching methods. However, there is a need to think of ways to move beyond simply embedding affective teaching and learning strategies while assessing cognitive outcomes. We need to ensure that the Graduate Qualities we seek to develop by using affective strategies are constructively aligned and assessed as outcomes in their own right, not as adjuncts to cognition and skills. This is not an argument for choosing between the affective and cognitive domains but for a fusion of the two. The application of Krathwohl et al.'s (1964) characterisation allows the opportunity to judge an outcome in the affective domain without placing oneself in the untenable position of judging another's attitudes, values, feelings or motivations. The judgement is made possible by aligning the student's actions with those expected by the profession in which one is engaged. There are a number of drivers that demand a closer examination of this issue. Industry and the professions have identified the importance of affective attributes in our graduates. Further, affective attributes enable the transference of cognitive skills between contexts, such as between university and the workplace. To enable our students to recognise the value of affective attributes, they should be overtly developed, taught and assessed; explicit, rather than embedded in cognitive tasks.
References
Atherton, J.S. (2005). Learning and Teaching: Bloom's Taxonomy. Retrieved September 16, 2008, from http://www.learningandteaching.info/learning/bloomtax.htm.
Barnett, R. (2004). Learning for an unknown future. Higher Education Research and Development, 23 (3), 247-260.
Beard, C., Clegg, S., & Smith, K. (2007). Acknowledging the affective in higher education. British Educational Research Journal, 33 (2), 235-252.
Biggs, J. (1999). Teaching for Quality Learning at University. Buckingham: SRHE & Open University Press.
Biggs, J., & Tang, C. (2007). Teaching for Quality Learning at University. Buckingham: Society for Research into Higher Education and Open University Press.
Birbeck, D. (2009). Graduate qualities and the affective domain: New horizons to explore. Adelaide: University of South Australia.
Bloom, B.S., Englehart, M.D., Furst, E.J., Hill, W.H., & Krathwohl, D.R. (Eds.) (1956). Taxonomy of Educational Objectives: Handbook 1: Cognitive Domain. London: Longmans Green and Co Ltd.
Boud, D. (2000). Sustainable Assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22 (2), 151-167.
Boud, D., & Falchikov, N. (2006). Aligning assessment with long term learning. Assessment and Evaluation in Higher Education, 31 (4), 399-413.
Crebert, G., Bates, M., Bell, B., Patrick, C.-J., & Cragnolini, V. (2004). Developing generic skills at university, during work placement and in employment: graduates' perceptions. Higher Education Research and Development, 23 (2), 147-165.
Crossman, J. (2007). The role of relationships and emotions in student perceptions of learning and assessment. Higher Education Research & Development, 26 (3), 313-326.
Dansie, B., Fursenko, F., Gelade, S., Itzstein, G.S., Li, K.W., & Wahlstrom, K. (2005). Are intrinsic student qualities assessable? Learning from the mapping of trans-national assessment practices in IT degree programs. University of South Australia.
Huyton, J. (2009). Significant personal disclosure: exploring the support and development needs of HE tutors engaged in the emotion work associated with supporting students. Journal of Learning Development in Higher Education, (1), 1-18.
Kavanagh, L., & Crosthwaite, C. (2007). Triple-Objective Team Mentoring: Achieving Learning Objectives with Chemical Engineering Students. Education for Chemical Engineers, 2, 68-79.
Krathwohl, D.R. (2002). A Revision of Bloom's Taxonomy: An Overview. Theory Into Practice, 41 (4), 212-218.
Krathwohl, D.R., Bloom, B.S., & Masia, B.B. (1964). Taxonomy of Educational Objectives: Handbook 2: The Affective Domain. London: Longmans, Green and Co Ltd.
Krathwohl, D.R., Bloom, B.S., & Masia, B.B. (1971). Taxonomy of Educational Objectives: Handbook 2: The Affective Domain. London: Longmans, Green and Co Ltd.
Lee, P. (2007). The teaching and learning framework. University of South Australia.
Murray, J., & Chao, G. (2009). Organisational Behaviour: Subject Packet [MGMT:11-101]. Gold Coast, Australia: Bond University.
Pierre, E., & Oughton, J. (2007). The Affective Domain: Undiscovered Country. College Quarterly, 10 (4), 17.
Sumsion, J., & Goodfellow, J. (2004). Identifying generic skills through curriculum mapping: a critical evaluation. Higher Education Research and Development, 23 (3), 329-346.
University of North Carolina at Charlotte. (n.d.). Bloom's taxonomy of educational objectives. Retrieved January 20, 2009, from http://teaching.uncc.edu/resources/best-practice-articles/goalsobjectives/blooms-taxonomy.
University of South Australia (2009). Indicators of Graduate Qualities. Retrieved May 20, 2009, from http://www.unisa.edu.au/gradquals/staff/indicators.asp.
Feedback across the disciplines: observations and ideas for improving student learning
Julian Bondy
School of Global Studies, Social Science and Planning, RMIT University,
[email protected]
Neil McCallum
School of Global Studies, Social Science and Planning, RMIT University,
[email protected]
This paper summarises a cross-disciplinary project that explored student and staff understandings and perspectives of what kinds of feedback they found most valuable. The project, undertaken in 2008, involved a broad array of disciplinary areas within the social sciences, humanities and engineering, and sought to reveal how staff can use feedback and assessment more effectively to promote student engagement and learning. Students' understandings and experiences of feedback, and their observations and recommendations regarding what works and what doesn't, were compared with the literature and with the perspectives of recognised leading teaching practitioners. It is argued that while each disciplinary area has distinct practices and approaches, there are principles and methods of good assessment practice that span these disciplinary distinctions. Central to these is the role of feedback in student engagement and the significance of transparent assessment practices in empowering students in their learning processes. The paper shares the results of this project and in doing so seeks to contribute to a greater understanding of how feedback can be at once localised as well as an overarching organising principle of good practice in teaching and learning.
Keywords: feedback, assessment, pedagogy, university, Australia
Introduction
While not without controversy and debate, teaching quality in Australian universities is indicated through the Good Teaching Scale (GTS) index, which is published annually and widely disseminated throughout the community. The GTS is derived from six questions asked of all graduating students in Australia: (i) the teaching staff motivate me to do my best work; (ii) the staff put a lot of time into commenting on my work; (iii) the teaching staff make a real effort to understand the difficulties I might be having with my work; (iv) the teaching staff are extremely good at explaining things; (v) the teaching staff work hard to make this course interesting; and (vi) the teaching staff normally gave me helpful feedback on how I was doing. RMIT University, like many other universities in Australia, regards the GTS score as a critical measure of the quality of its teaching and is committed to raising this score across the University. Within this context, it was felt that if the RMIT University community is to improve its GTS scores, individual courses and programs will need a coordinated approach both to improving how they provide feedback to students and to integrating that feedback into the learning process. In order to assist the University in achieving this strategic goal, a cross-disciplinary (Social Sciences & Humanities and Engineering), cross-School project was established to develop, pilot and evaluate a range of approaches and strategies designed to provide meaningful and useful feedback to students. The project used data collected from a number of sources in semester 1, 2008 to inform the development of feedback approaches and strategies designed to actively support students' learning. This paper outlines some of the relevant literature associated with feedback and the emergent themes from the student focus groups and staff
interviews, in order to reveal the underpinning principles and practices associated with feedback and good teaching that transcend disciplinary boundaries.
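By way of illustration only, the sketch below shows one way an index of this kind could be collated from responses to the six items listed above. The abbreviated item wordings and the scoring rule (the percentage of respondents answering 'agree' or 'strongly agree' to each item, averaged across the six items) are assumptions made for the example, not a description of the official GTS calculation.

# Illustrative sketch only: collating a GTS-style index from six survey items.
# Assumes five-point responses (1 = strongly disagree ... 5 = strongly agree) and
# that the index is the percentage of respondents answering 4 or 5 to each item,
# averaged across the six items. This scoring rule is an assumption for the
# example, not the official GTS method.

GTS_ITEMS = [
    "staff motivated me to do my best work",
    "staff put a lot of time into commenting on my work",
    "staff made a real effort to understand my difficulties",
    "staff were extremely good at explaining things",
    "staff worked hard to make the course interesting",
    "staff gave me helpful feedback on how I was going",
]

def agreement_rate(responses):
    """Percentage of responses that are 4 (agree) or 5 (strongly agree)."""
    return 100.0 * sum(1 for r in responses if r >= 4) / len(responses)

def gts_style_index(survey):
    """Average agreement rate across the six items; `survey` maps item -> list of responses."""
    return sum(agreement_rate(survey[item]) for item in GTS_ITEMS) / len(GTS_ITEMS)

# Example with invented responses from six students per item:
example = {item: [5, 4, 3, 4, 2, 5] for item in GTS_ITEMS}
print(f"GTS-style index: {gts_style_index(example):.1f}%")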
Overview of the literature
The issue of student feedback (both formal and informal) is one that many universities have struggled with. This struggle has been manifest in two dimensions: the limited literature based on substantive evidence, and the absence of successful strategies that have been broadly replicated. As Mutch (2003) observes, much of the literature on assessment is based on small samples and is exploratory in nature. Similarly, Rae and Cochrane (2008) identify a gap in the literature on written assessment feedback, despite its prevalence as a mode of assessment and despite the continued expression of "the need for meaningful and constructive feedback" (p. 217). Not only is there confusion about what is meant by feedback, with academics defining it broadly to include informal feedback while students primarily understand it narrowly as a codicil of assessment; it is also clear that no one university, or for that matter no one feedback technique, has successfully addressed the quandary that is student feedback. One only needs to look at the amount of time and resources universities put into feedback to illustrate this point (for example RMIT University, 2007; QUT, 2009; Macquarie University, 2008; Flinders University, 2008). It is also an area where the literature clearly indicates the importance of providing feedback to students. Feedback is not only beneficial to the student learning process, but students also want to receive it: "Of the whole assessment process, the research literature is clear that feedback is arguably the most important part in its potential to affect future learning and student achievement. There is evidence that students appreciate this and want good feedback" (Rust, O'Donovan & Price, 2005). ACER (2008), the Australian Council for Educational Research, puts forth that engagement is one way in which educators can be effective. The more students are brought into the learning process, the more they will respond favourably; programs that do not take advantage of students' initiative are thereby limited to what staff bring into the classroom. One of the ways students can be brought into the learning process is through the use of feedback. One problem revealed in the literature on feedback is that complex strategies to rectify problems are likely to be ineffective. To address this, it has been suggested that staff must first address the most obvious needs within the content of the course before looking to ways to improve feedback. This stresses that the course itself must be firmly established before any attention is paid to how students act in response to it; there is no point addressing feedback issues if the course itself is 'broken' (Scott, 2006). Student workload (as identified by Lizzio, Wilson, & Simons, 2002; RMIT, 2007) also has an effect on student feedback: the problem may not be just one class but the way in which the program is structured. A common emerging theme of keeping things simple with regard to student feedback is that it does not take drastic measures to introduce feedback techniques into the classroom. The simpler the method to be trialled, the more likely it is to succeed, and simple methods of integrating students into the learning direction of the class can produce successful results (Gross, 1993; Isaacs, 2001; QUT, 2009). Another emergent theme was the under-researched nature of assessment from the perspective of students and the deleterious consequences of this.
Many researchers advocate paying closer attention to the student voice (Carless, 2006; Rae & Cochrane, 2008; Bondy, Jollands, & McCallum, 2009) to overcome misunderstandings between lecturers' intentions regarding feedback and students' receipt of this feedback. As Nicol and MacFarlane-Dick (2006) point out, in order for students to use feedback effectively they must manage their own learning, and lecturers play a significant role in motivating and facilitating this independent learning. With these observations regarding the importance of systematically investigating students' perceptions, and of not over-complicating assessment strategies, this project sought to reveal the student voice, including students' perceptions and experiences of assessment, and to contrast this with the views of recognised leading practitioners.
Methodology
An iterative process was undertaken: shortly after the literature review was commenced, a draft set of questions was developed and circulated to the project steering group for comments and additions. Once this process was finished and ethics approval granted, seven student focus groups of approximately fifteen students each were drawn from a range of disciplines within the two Schools participating in this project, and ten staff from across the programs in the two Schools were invited to be interviewed. Critical in the development of these focus groups was ensuring a spread of representation from each of the programs taking part in the study and an appropriate mix of year levels within these groups. Students were asked what they understood by feedback, to provide concrete examples of feedback that they had found assisted their learning, and to discuss why. They also discussed the timing of feedback, the relationship between feedback and assessment, and how (or if) the learning associated with feedback in one course might be applied in another course. A discourse analysis of the transcripts of these focus groups was undertaken and the project steering group discussed the emergent themes. The purpose of the staff interviews was for recognised leading practitioners to share their insights, techniques and understanding of feedback (both formal and informal) and to compare these with the data collected from the focus groups and with the literature. Potential staff interviewees were identified by reviewing individual GTS results from the previous two years and selecting those who consistently had scores of 90 or over, or those who had improved their scores by at least 25 points during this period. From this smaller pool we then reviewed potential candidates' disciplinary backgrounds to ensure that we had at least one representative from each disciplinary group. These staff were then invited to participate in a one-hour interview. All staff who were invited agreed to participate.
Discussion of results
One of the most intriguing results from the study was the narrow definition of feedback that the majority of students used in describing their understanding of the term. While they understood that feedback could be both informal and formal, for most, feedback was a synonym for assessment. Eight themes were repeated through the student focus groups. In order of prevalence these were: (i) marking consistency; (ii) accessibility to lecturers and tutors; (iii) marks with legible comments; (iv) seeing their own feedback used; (v) assignments and tests handed back in time for future work; (vi) receiving verbal feedback along with written feedback; (vii) online learning; and (viii) peer and self assessment. The interviews with staff did not result in the same narrow definition of feedback that the majority of students had. Without prompting, all staff described feedback multi-dimensionally, not only in terms of formal and informal but also in terms of the formative and summative forms of feedback. These staff interviews also yielded a different set of prevailing themes. In order of frequency these were: (i) the observation that engaged students are more likely to use feedback; (ii) the importance in their personal professional practice of being as approachable as possible; (iii) the need to make all course material available to students online; (iv) the primacy of dialogue compared to formal assessment; and (v) the importance of empowering students in the learning process, including in the assessment process. While these themes appear at first to be different, closer inspection revealed that student and staff responses had significant overlap. In the sections detailing these thematic areas below, the students' themes are used to organise the material, and where there was overlap, student and staff responses have been combined.
Marking consistency
The foremost student concern was the issue of marking consistency. From our focus groups with students it became apparent that the number one want of students was for their courses to use criteria-based marking sheets: sheets that would be attached to an essay, for example, indicating how the assignment was graded and where the marks were gained or lost, with comments detailing why the student received that grade. It was clear across the disciplinary groups involved in this project that students were concerned about the consistency of marking. Students' views on marking showed that they were concerned about the way staff arrived at the marks they gave. Some tutors were seen as easier markers while others were seen as unnecessarily hard. Staff expectations regarding essay structure and how they wanted assignments completed were
also a contributor to the inconsistency of marks. Differing expectations of their students by staff led to student confusion, and this was regarded as a major problem regarding feedback. Interestingly, while the importance of explicit marking criteria was raised during the staff interviews, the issue of inconsistency of assessment approaches within the same courses did not arise. This points to an important disconnection between students' experiences and lecturing staff's understanding of their teaching and assessment responsibilities. What follows is a sample of student responses to the theme of marking consistency:
CS6 – In one paper first term I got full marks for my referencing section, then on another paper later in another class I did the exact same referencing format and did not get full marks. Very confusing, it's like different in every class or with every marker and I did it APA like we are supposed to.
CS1/2/6 – When doing essays it's not consistent among classes and staff, some tell you to lay out the paper differently, using headings, don't use headings, have a conclusion, put the conclusion in the end of another section, do this don't do that and it gets really, really, confusing. All these different essay styles between class's makes it easy to mess up and the staff will mark you down for a style that they don't like, it's not exactly fair.
Accessibility to lecturers and tutors
Throughout the focus groups students signalled that accessibility to staff, lecturers and tutors, was a significant problem. This was manifest particularly in the area of trying to get further feedback from staff regarding work already graded. Interestingly, the staff interviews also revealed a desire for more and better out-of-class communication and a frustration at it being difficult to achieve. The following quotes from students and staff are illustrative of the issues raised:
CS2 – It took me five days of really trying to get in touch with this person (staff member) and I felt, I know you guys are busy, but at least tell us your busy because it would save us time tracking staff down, we have assignments, classes and are busy too, we don't need the extra stress of trying to track down staff just so they can explain their markings to us. It would be more efficient to use a detailed marking system so we don't have to track down staff and they don't have to be bothered by us (students).
CS2/3/1 – You can send of emails or go wait outside the staff's office for them hoping to catch them, but it's not a good way to build a relationship with the staff. Don't want to have to stalk staff that is not a nice feeling or position to be in as a student.
Lecturer 1 - All this talk about student feedback is fine and its true, students deserve feedback, but I want feedback too. I want students to come and talk to me, knock on my door or email me, whatever method they choose, but it is hard to get students to come see you.
Lecturer 2 - Students are very busy kids, I mean they have a lot to do, but it's very hard for them to come and see you, it is like they have a very hard time admitting they are having trouble or do not understand something.
Legible comments with marks
A major annoyance for students is receiving work back with only a grade, or receiving feedback that is illegible or unintelligible. Regardless of the mark, students want to receive feedback on their work.
The view of the students participating in these focus groups was that whatever the grade allotted to an assignment, it should be possible to receive full marks; therefore, even when they do well, they would still like feedback from staff on what it takes to get full marks. The legibility of the feedback provided by staff was also an issue of concern: trying to decipher staff handwriting can be quite a challenge for students and can be as frustrating as not receiving any feedback at all. Students' difficulties in deciphering feedback were also acknowledged by at least one of the leading practitioners, who then used the opportunity to provide verbal feedback. What follows are some selected quotes from students and staff giving their views on receiving feedback with marks:
EMP3 – We had a draft assignment handed back to us and I received mine with no comments just crossed out section in red pen, no explanation as to why it was removed or anything. It's devastating to have work handed back to you covered in crosses, but no comments makes it worse cause you have no idea why they were crossed out or what you did wrong. it's like sticking a knife into someone when they do that.
CS3 – If we are getting feedback on an essay, we need to know what we did, like we need to see it brought to our attention so we can understand what we wrote and why we got the mark.
Teacher 10 - Best for of providing feedback is to go through their work in detail with a red pen point out grammatical, formatting errors and giving detailed comments on their work in the margins, the only problem I have with that is my hand writing is not very good, so I have to take students through what I wrote.
Seeing their own feedback used
What was striking to the researchers on this project was the overwhelming lack of student interest in the topic of feedback. It became apparent from the student focus groups that the students themselves saw providing feedback to staff as effectively useless. The overall student view was that staff never really made use of the feedback provided by students, in large part because those students were not around the following semester when any changes to the course were made, or were not aware of which suggestions from students in previous years had been implemented. This belief that feedback was not being used was also raised by staff, who noted how many end-of-semester assignments end up being pulped because they are never collected. Typical comments from staff and students included:
CS3 – Regardless who is giving the feedback it needs to be transparent and it needs to be justified.
CS3 – Those end of year tick the box how's the course going sheets are not transparent and hard to see where it is going. It's very tokenistic, here's your chance to give your say, meaningless, hard to take it seriously.
Teacher 1 - Students don't collect their essays at the end of the year so they don't get that formal assessment feedback.
Teacher 6 - At the end of the semester students don't pick up their stuff, so it's quite disheartening to see all the attention you placed on to giving student feedback getting tossed in the bin.
CS1 – Feedback is a waste of time. We do these feedback things after every class every year and I never see anything from it.
Assignments and tests handed back in time for future work
The students taking part in the focus groups indicated that too often they had not received back assessments that were needed for future coursework. Their overall opinion was that, in order to be effective, graded assessments should be handed back to students in a timely manner so that they might apply that feedback to their next graded assessment. Some staff felt, however, that the most useful feedback for student learning was formative, and that there was an under-utilisation of informal feedback and an over-reliance on graded assessment. Typical comments included:
CS all – We had one course where we were doing a field assignment and the first one was supposed to help us with the second one and to this day we still have not got our first one back and the second was already handed in and the course is ending soon. That is frustrating.
We were told staff is still marking the first one and we need those field assignments for the exam and we haven’t gotten the first back yet.
CS2 – When we do get our stuff back in a timely matter its great. It is so helpful, because we get the comments and staff has told us what we need to do to improve and it really helps for future work in the course.
Teacher 5 - I don't see formal assessment as the main game, if a person in third year or master's main focus is being assessed than they are in the wrong place they should be at TAFE or out working.
Teacher 6 - The non-assessed feedback is real important because it is a real chance for them to improve on their work. When we are talking about assessed feedback it's like the train is already passed and a lot of students are quite dissatisfied with the result they get.
Receiving verbal feedback along with written feedback
Some members of the student focus groups indicated they would like to receive verbal feedback along with the written feedback. Some students felt that they learn better when staff talk to them and explain personally what they did right and wrong in their graded assessment. It should also be mentioned that the students who participated in the focus groups are well aware, and appreciate, that staff are extremely busy, especially those with large classes, and realise that expecting staff to take time out and explain their feedback is unrealistic in certain circumstances; however, they still feel that receiving verbal feedback in conjunction with written feedback is extremely beneficial for their learning. The importance of having more than one mode of providing feedback was also picked up in the staff interviews; however, because of the difficulties associated with setting aside time to meet individually, staff employed strategies where the verbal feedback could be collective. Typical comments included:
CS3 – I have found that I need to see the staff member who marked my assignment so they can go through it with me and say this is right this is right this is wrong and why and so on and I've gotten very constructive feedback from them.
CS1 – Getting verbal feedback is very helpful and very constructive and helped me understand how I got the mark I did and how I could improve on it.
Teacher 3 - Discussion in class is very important I think to add to the direct individual comments. We regularly discuss the assessment in tutorials and we do it in a way that is not just between the teach and student but between all so students can hear how other students are going and if they are doing similar material they could be put in touch with each other.
DLS busy work or online learning
Online learning activities were a strong theme for both students and staff. However, the focus and tenor of these discussions were markedly different. On the one hand, students focused very strongly on online postings, to which many took strong exception. On the other hand, the majority of staff spoke about how useful online learning facilities were in communicating with and supporting their students in their learning activities. Interestingly, however, while staff described the online learning tools positively, when discussion forums were mentioned they were sites with little teacher direction or involvement. Students typically described the requirement to participate in online forums as 'DLS busy work'. The feeling from students was that these postings were not an effective use of their time as, most importantly, they did not receive any type of feedback on their internet contributions.
The lack of staff feedback on their work gave students the impression that their contributions were not taken seriously and were essentially 'busy work' with little learning value. Below are some quotes from students and staff illustrating the contrast in their focus and in the value they placed on online learning:
CS2 – We had a class where we continued discussion through online forums and that was good and there was real participation, but that is not common amongst online classes.
CS all – It's just tokenistic, it's just busy work, no value (online discussion boards).
Teacher 2 - I put all my material on the DLS so the students can download it, PowerPoint lectures, tutorial answers, course material all of it.
Teacher 4 - We have a few students who are really opinionated and the discussion board allows them to have their say, this saves them taking up time in the lectures by diverting attention from that weeks topic.
Teacher 7 - I use the DLS, I post lecture notes every week and I do find it effective, I do try and use the discussion board but later year students don't tend to make use of it and this year it was taken down because students were harassing other students on it.
Peer and self-assessment
The last large-scale theme to emerge from the student focus groups was the issue of peer and self assessment. The majority of students who were part of the focus groups had not experienced many instances of self or peer assessment, and this was also reflected in the fact that none of the staff who were interviewed as part of this project raised either self or peer assessment for comment. Those students who had undertaken self assessment felt unsupported in what they were supposed to be doing and, in light of the absence of support or direction, saw it as an opportunity to give themselves generous marks. With peer assessment, students indicated mixed feelings about its effectiveness. For some, their only involvement with peer assessment came from group work assessments in which the feedback the group received did not reflect the division of work and the effort by group members. In other situations the students did individually receive peer feedback, but it came at the end of semester after all the assessment tasks had been completed; by this time it had lost its utility for the students receiving it:
CS all – What's bad about getting peer feedback is that we get it too late. The best time for it is right after the presentation, but we like don't get it until the end of course and that's too late because I've forgotten everything by then. It doesn't help if it is too late like that.
INT6 – There is no monitoring with group work so you can have groups that have only one or two people doing all the work for the whole group.
INT6 – The marking criteria for group work like presentations should be like how the group worked together and how you spread the work between you and not just the overall final product because there are a lot of people who cheat and don't do anything.
CS2 – When I've had self-assessment I've given myself an A.
Conclusion
Perhaps the most striking observation to emerge from this project was just how internally homogeneous the responses from students and staff were, but how different they were from each other. In many ways, it appears that students and staff occupy parallel universes with little connection between them. This could be an artefact of methodology, in that the students' focus was on their own experiences, many of which have not been good, whereas the in-depth interviews were with leading teaching practitioners who do not engage in the practices that many of the students found problematic. That observation aside, there was also, at a fundamental level that spans disciplines, a predominant and consistent message from the staff that the most effective tool to engage students is formative, and often informal, feedback. This observation by staff is also supported by recent research which concludes that feedback, more than being just the 'bit on the end' of the process of assessment (or of courses of study), has the strongest influence; that is, it is the variable with the largest effect size on student learning (E.S. = 1.13) (Hattie, 2003). It is clear from the student focus groups that many students, as well as their teaching staff, conceive of feedback very narrowly and that the consequence of this has been to limit students' capacity to engage in their learning. While the results of this study should be seen as limited and suggestive, there are, particularly when linked to other research (Carless, 2006; Nicol & MacFarlane-Dick, 2006; Rae & Cochrane, 2008; Bondy et al., 2009), important implications for practice. It became clear through this project that what leading
practitioners regarded as feedback was very different from what students (and, as students reported, many staff) regarded as feedback. If there is such variance in the 'what', then the risks of misunderstandings between staff and students regarding the 'why' are compounded. In order to provide students with a more enriching and engaging learning experience, the role and importance of feedback in the learning process needs to be revitalised, and additional efforts need to be made in communicating what feedback is and what purposes it serves. It is also important to recognise that, notwithstanding these conceptual and definitional differences between students and staff, this project revealed a number of common principles and methods that both students and staff recognised as good practice. These included consistency in assessment processes, transparency of assessment criteria, feedback to students regarding teaching quality surveys, and accessibility of lecturers to students. The experiences of students in these focus groups, the insights from our leading teachers and the evidence from the literature each point to the need for the underlying purposes and principles of good practice in assessment generally, and feedback specifically, to become more widely embedded and better understood by all stakeholders.
References
Angelo, T.A., & Cross, K.P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco: Jossey-Bass.
Australian Council for Educational Research (ACER) (2007). Australasian Survey of Student Engagement (AUSSE 2007). Retrieved December 12, 2008, from http://www.acer.edu.au/ausse/findings.html.
Australian Council for Educational Research (ACER) (2008). Australasian Survey of Student Engagement (AUSSE). Retrieved December 12, 2008, from http://www.acer.edu.au/ausse/index.html.
Bondy, J., Jollands, M., & McCallum, N. (2009). Re-Visioning Feedback for Learning. Journal of the World Universities Forum, 2 (4), 69-83.
Carless, D. (2006). Differing Perceptions in the Feedback Process. Studies in Higher Education, 31 (2), 219-233.
Flinders University (2008). Giving Feedback. Retrieved December 12, 2008, from http://www.flinders.edu.au/teach/t4l/assess/feedback.php.
Gross, D.B. (1993). Tools for Teaching. University of California, Berkeley. Retrieved December 12, 2008, from http://teaching.berkeley.edu/bgd/feedback.html.
Hattie, J. (2003). Teachers make a difference: What is the research evidence? Paper presented at the ACER Annual Conference, Melbourne.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81-112.
Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27 (1), 53-64.
Isaacs, G. (2001). Assessment for Learning. Teaching & Learning in Higher Education Series. Brisbane, Australia: Teaching and Educational Development Institute, University of Queensland. Retrieved August 9, 2008, from http://www.tedi.uq.edu.au/downloads/Assessment_for_Learning.pdf.
Jackson, M., Watty, K., Yu, L., & Lowe, L. (2006). Inclusive assessment. Improving learning for all. A manual for improving assessment in accounting education. Strawberry Hills: The Carrick Institute for Learning and Teaching in Higher Education. Retrieved December 12, 2008, from http://www.altc.edu.au/system/files/resources/grants_2005project_accounting_finalreport_2006.pdf.
Krause, K., Harris, K., Garnett, R., Gleeson, D., Peat, M., & Taylor, C. (2007). Enhancing the assessment of learning in Australian higher education: Biological sciences. Strawberry Hills: The Carrick Institute for Learning and Teaching in Higher Education. Retrieved December 11, 2008, from http://www.griffith.edu.au/__data/assets/pdf_file/0003/37479/BioAssess.pdf.
Krause, K., Hartley, R., James, R., & McInnis, C. (2005). The First Year Experience in Australian Universities: Findings from a Decade of National Studies. Centre for the Study of Higher Education, University of Melbourne. Retrieved January 20, 2009, from http://www.dest.gov.au/sectors/higher_education/publications_resources/profiles/first_year_experience.htm#authors.
Laurillard, D. (1993). Rethinking University Teaching. A Framework for the Effective Use of Educational Technology. London: Routledge.
Lizzio, A., Wilson, K., & Simons, R. (2002). University Students' Perceptions of the Learning Environment and Academic Outcomes: implications for theory and practice. Studies in Higher Education, 27 (1).
Macquarie University (2008). Assessment and Feedback. Retrieved December 12, 2008, from http://www.mq.edu.au/learningandteachingcentre/about_lt/assessment.htm.
Mutch, A. (2003). Exploring the Practice of Feedback to Students. Active Learning in Higher Education, 4 (1), 24-38. Retrieved October 7, 2009, from http://alh.sagepub.com.ezproxy.lib.rmit.edu.au/cgi/reprint/4/1/24.
Nicol, D., & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199-218. Retrieved October 7, 2009, from http://www.reap.ac.uk/public/Papers/DN_SHE_Final.pdf.
O'Donovan, B., Price, M., & Rust, C. (2001). The Student Experience of Criterion-Referenced Assessment (Through the Introduction of a Common Criteria Assessment Grid). Innovations in Education and Teaching International, 38 (1), 74-85. Retrieved May 6, 2009, from http://www.informaworld.com/10.1080/147032901300002873.
Price, M., O'Donovan, B., & Rust, C. (2007). Putting a social-constructivist assessment process model into practice: building the feedback loop into the assessment process through peer review. Innovations in Education and Teaching International, 44 (2), 143-152. Retrieved May 6, 2009, from http://www.informaworld.com/10.1080/14703290701241059.
QUT (2009). Learning and Teaching Plan 2009-2013. Brisbane, Australia: Queensland University of Technology. Retrieved October 20, 2009, from http://www.frp.qut.edu.au/services/planning/corpplan/documents/20092013LearningandTeachingPlan.pdf.
Rae, A.M., & Cochrane, D.K. (2008). Listening to students: How to make written assessment useful. Active Learning in Higher Education, 9 (3), 217-230. Retrieved October 7, 2009, from http://alh.sagepub.com/cgi/content/abstract/9/3/217.
Richardson, J. (2005). Instruments for obtaining student feedback: a review of the literature. Assessment & Evaluation in Higher Education, 30 (4), 387-415. Retrieved October 6, 2009, from http://134.250.13.201/faculty/robertsw/Evaluation%20Task%20Force/Instruments%20for%20obtaining%20student%20feedback.pdf.
Rust, C. (2002). The Impact of Assessment on Student Learning: How Can the Research Literature Practically Help to Inform the Development of Departmental Assessment Strategies and Learner-Centred Assessment Practices? Active Learning in Higher Education, 3, 145-158. Retrieved January 6, 2009, from http://alh.sagepub.com/cgi/content/abstract/3/2/145.
Rust, C., O'Donovan, B., & Price, M. (2005). A social constructivist assessment process model: how the research literature shows us this could be best practice. Assessment and Evaluation in Higher Education, 30 (3), 233-241.
RMIT University (2007). Student Feedback Persistent Themes & Responses. Melbourne: RMIT University.
Scott, G. (2006). Accessing the student voice: using CEQuery to identify what retains students and promotes engagement in productive learning in Australian higher education: final report. Canberra, A.C.T.: Department of Education, Science and Training. Retrieved May 6, 2009, from http://nla.gov.au/nla.arc68139.
Young, M. (1996). Learning To Learn From Assessment. Innovations in Education and Teaching International, 33 (3), 162-170. Retrieved May 6, 2009, from http://www.informaworld.com/10.1080/1355800960330303.
A generic assessment framework for unit consistency in agricultural science
Tina Botwright Acuña
University of Tasmania, School of Agricultural Science,
[email protected]
Criterion-referenced assessment (CRA) is considered, when used appropriately, to have the capacity to improve student learning outcomes. This paper describes the development of a generic assessment framework for the School of Agricultural Science, using the process of peer-to-peer professional learning, in response to an external imperative to implement CRA across the University of Tasmania. An innovative visual diagram was used to present the generic assessment framework. Four key criteria (knowledge, analysis, practical skills and communication) were divided into various sub-criteria, with the level of proficiency expected in each of the four years of the degree course represented diagrammatically. The generic assessment framework was then applied to an assessment rubric for a 3rd/4th year laboratory report. The project was evaluated using mixed method approaches, with a quantitative survey of staff on their use of CRA in teaching and qualitative feedback from staff at a workshop on use of the generic assessment framework. Around 60% of the 15 staff who responded (of 17 teaching staff) currently use CRA, and all considered the generic assessment framework to be of use in developing future assessment rubrics. The workshop identified further issues regarding assessment for discussion within the school, or for clarification at the faculty level. The generic assessment framework, although developed to meet the assessment requirements for agricultural science, could be adapted for use in other disciplines within the University of Tasmania, or at other universities.
Keywords: criterion-referenced assessment, agricultural science
Introduction
There is a significant body of scholarly work on assessment and what constitutes good assessment practice in relation to student learning. Recently, Joughin (2009) offered a revised definition of assessment as "…to make judgements about students' work, inferring from this what they have the capacity to do in the assessed domain, and thus what they know, value, or are capable of doing". This definition concerns the process of summative assessment in recording student achievement, rather than student learning, to which assessment is inextricably linked. Appropriate assessment and effective formative feedback (Gibbs & Simpson, 2004) are strategies cited to improve student learning outcomes when undertaken in tandem with teaching practices that encourage deep learning by students (Biggs & Tang, 2007). Together these approaches may lead to self-regulation by students in learning and assessment (Nicol & Macfarlane-Dick, 2006). Various grading models have been used to judge students' work and these broadly fall into one of two categories: norm-referenced or criterion-referenced assessment. Norm-referenced assessment determines student performance on the basis of grade distributions and is now widely considered to result in poor student learning outcomes through the over-emphasis on grades (Biggs & Tang, 2007). In contrast, criterion-referenced assessment (CRA) uses preset criteria and performance standards to determine student achievement and is either criteria- or competency-based (Sadler, 2008). When used appropriately, CRA is regarded as resulting in improved learning outcomes for students compared with norm-referenced assessment, as it is integral to student learning and teaching and is transparent (Allen et al., 2007; Carlson, MacDonald, Gorely, Hanrahan, & Burgess-Limerick, 2000; Neil, Wadley, & Phinn, 1999). The majority of Australian universities have opted to use the CRA approach to student assessment. At the University of Tasmania, the University Senate approved the recommendations of a CRA working party (Allen et al., 2007) to implement CRA across all faculties (7) and schools (26) by 2010. Educational change with regard to assessment in a complex, large institution such as a university (Macdonald & Joughin, 2009)
typically meets with limited success when driven from senior management. Instead, the University of Tasmania is promoting change by engaging one member of the teaching staff to 'champion' CRA in their school, with support from a CRA implementation team. Similar models for distributive leadership exist (e.g. LeFoe, Smigiel, & Parrish, 2007), but here the champions are more ubiquitous (one in every school) and focussed on a single initiative, with an underpinning principle of developing shared values through a bottom-up approach (Brown, 2008). This approach, through peer-to-peer professional learning (Brookfield, 1995), gave each school champion scope to devise with staff an agreed implementation strategy that was best suited to meet their collective needs. The School of Agricultural Science (SAS) at the University of Tasmania offers two undergraduate courses: a 3-year Bachelor of Applied Science (Agriculture) and a 4-year Bachelor of Agricultural Science. A survey conducted in the SAS in late 2008 on the use of CRA revealed that teaching staff solely used norm-referenced assessment to assign grades, which was consistent with the prevailing rules for academic assessment within the Faculty of Science, Engineering and Technology. Even though there was an external imperative to implement CRA in the SAS, potential barriers to adoption were numerous and included not only the attitude of the teaching staff to change, but also practical considerations such as the time required to write rubrics (Neil et al., 1999; Sadler, 2008) and the quality and consistency of the rubrics. Generic assessment rubrics or frameworks have been reported in the literature (Hughes & Cappa, 2007; Neil et al., 1999; QUT, 2008) to assist staff in preparing assessment rubrics. None of these generic assessment rubrics, however, was appropriate for direct application in the agricultural science discipline, nor did they address issues of continuity and progression in assessment criteria both within and across years. The aim of this project was to develop a generic assessment framework to ensure a degree of consistency and logical progression between units offered in the SAS. Exemplar assessment rubrics were developed for two types of commonly used assessment tasks in the SAS for 1st, 2nd and 3rd/4th year units. Research was conducted using quantitative and qualitative approaches on the use of CRA by teaching staff, consistency in assessment across units, and the potential application of the generic assessment framework in teaching. The project was evaluated by conducting a survey of staff use of CRA in their teaching and by recording staff attendance and feedback at a workshop about the generic assessment framework.
Methodology
The project
The research approach adopted a pragmatic method (Creswell, 2003) to address a real world issue in the SAS. The project used a very context-specific approach to address an external imperative at the University of Tasmania, which aims to implement criterion-referenced assessment across all faculties and schools by 2010 (Allen et al., 2007). The project incorporated (i) ascertaining the number and type of assessment tasks used in the SAS degree program; (ii) development of a generic assessment framework and assessment rubrics, with associated peer-to-peer professional learning; and (iii) evaluation of the generic assessment rubrics. The aim was to develop staff tools to assist in the implementation of CRA within the school and to ensure a degree of consistency and logical progression in assessment between units across the teaching program.
Number and type of assessment tasks
Ascertaining the number and type of assessment tasks used in the SAS was integral to the development of two types of assessment rubrics that showed progression between units, yet had contrasting criteria and standards. To do this, SAS staff were asked to classify their assessment tasks for each year group into one of the nine categories shown in Table 1. Selection of the type of assessment tasks for development of the assessment rubrics was then guided by the frequency of tasks reported in each category across year groups. One obvious choice was the written report, which was an assessment format used across all year groups. Selection of a second assessment task was problematic, given that none of the remaining formats (excluding the quiz) was represented in each year group. A compromise was made to develop assessment rubrics for a written report for all years and a laboratory report for years 2 and 2/3/4.
Table 1. Number and type of assessment tasks examined in the school. Assessment rubrics were developed with staff for the assessment task/year combinations highlighted in bold (the written report for all year groups, and the laboratory report for years 2 and 2/3/4).

Year group | Units | Essay | Review | Written report | Field | Lab. | Poster | Seminar | Specimen collection | Quiz
1          | 3     | 2     | 0      | 3              | 2     | 0    | 0      | 1       | 0                   | 1
2          | 6     | 1     | 1      | 6              | 2     | 6    | 2      | 0       | 1                   | 6
2/3/4      | 19    | 8     | 4      | 16             | 5     | 9    | 3      | 8       | 1                   | 26
4          | 2     | 0     | 1      | 4              | 0     | 0    | 1      | 1       | 0                   | 0
Development of a generic assessment framework and assessment rubrics
A draft generic assessment framework was subsequently developed as a tool to assist staff in writing the assessment rubrics. The framework included the four broad criteria of A) knowledge of information; B) analysis of information and data; C) practical skills; and D) communication. Selection of these four criteria was driven by the particular requirements of the agricultural science discipline and, while these can be shown to be consistent with accepted teaching practice (see discussion), the criteria were not based on any one particular published generic assessment framework or teaching pedagogy. Each criterion was divided into sub-criteria, and the relative weighting between year groups was represented by the width of various shapes, intended to allow the lecturer to easily conceptualise the progression in assessment between year groups. The draft generic assessment framework was presented to staff at a workshop, structured within a SAS Teaching Interest Group meeting, for feedback and comment. The revised generic assessment framework, plus explanatory notes, is presented in Appendix 1. The generic assessment framework was then applied to the development of the assessment rubrics for a report, across all year groups, and a laboratory/project report for a 2nd and a 2/3/4th year unit. The assessment rubric for a project report in a 3rd/4th year Agronomy unit is an example of the application of the generic assessment framework. Briefly, it was appropriate for this assessment task to include all four criteria of knowledge, analysis, practical skills and communication. The level of proficiency of students in relevant sub-criteria followed that of the third and fourth year groups in the generic assessment framework (Appendix 1). The knowledge criterion emphasised, particularly at the HD standard, student depth and integration of knowledge of information about their research topic, which was to be sourced primarily from refereed journal articles. The analysis criterion focused on student analysis and critical evaluation of information, including both data and knowledge. The sub-criterion of constructing new knowledge was included only in the HD standard of the analysis criterion. The skills criterion was included in the assessment rubric as the student research project had a strong practical component. Standards of the sub-criteria ranged from students demonstrating proficiency in laboratory and/or field experiments and experimental design, without major errors, to an increasing level of achieved standard in statistical analysis. Only two of the four sub-criteria of communication were assessed in the rubric for the written assignment, namely adherence to English conventions and use of scientific terminology.
Project evaluation
Mixed method research using both quantitative and qualitative data was used in the project evaluation (Creswell, 2003). Quantitative data from four Likert-scale (Uebersax, 2006) questions were collected through an on-line staff survey using Survey Monkey (SurveyMonkey, 2009) on the structure and content of assessment rubrics, assignment of grades and the perceived value of a generic assessment framework. Survey data were collated and the means and range reported. Qualitative data on the draft generic assessment framework were collected from staff in the form of post-workshop written feedback (Kirkpatrick, 1994).
Staff were requested to annotate the draft generic assessment framework and to provide written feedback to three questions: Q1) What aspects of the meeting today did you find useful?; Q2) What questions do you have for future meetings?; and Q3) Any general comments? The numbers of staff who attended the workshop and completed the feedback form were recorded. Approval was gained from the University of Tasmania's Human Research Ethics Committee to undertake the project (reference: H10526).
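The collation step for the quantitative data is straightforward; the sketch below shows one way the mean and range for each Likert question could be computed. The question labels and response values are invented placeholders for illustration, not the survey data collected in this project.

# Illustrative sketch only: collating Likert-scale survey responses and reporting
# the mean and range for each question. The labels and values below are invented
# placeholders, not the data collected in this project.

responses_by_question = {
    "Question 1": [4, 5, 3, 4, 4],
    "Question 2": [3, 4, 4, 5, 2],
    "Question 3": [2, 4, 3, 5, 4],
    "Question 4": [5, 5, 4, 4, 5],
}

for question, responses in responses_by_question.items():
    mean = sum(responses) / len(responses)
    print(f"{question}: mean = {mean:.2f}, range = {min(responses)}-{max(responses)}")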
Results
Staff survey on use of assessment rubrics
A total of 15 out of 17 teaching staff in the SAS (excluding the author) completed the on-line survey. The majority of staff used 3-4 criteria in their assessment rubrics and awarded numeric grades either based on a grading rule or as an overall grade (Table 2). Only around 20% of staff used a grading rule to award an alphanumeric grade based on standards. Less than half the staff who completed the survey considered that their assessment rubrics were consistent and progressive between years (Table 2). Around one-third of staff surveyed did not use assessment rubrics in their teaching. All staff either strongly agreed or agreed that a generic assessment rubric would assist them in developing their own assessment rubrics.
Table 2. Staff survey on use of assessment rubrics
My assessment rubrics contain: | 2-3 criteria | 3-4 criteria | 4-5 criteria | 5 or more criteria | I don't use assessment rubrics
Staff (n=15) | 14% (3) | 36% (5) | 14% (2) | - | 36% (5)

I assign grades to assessment tasks by: | Using grading rules based on standards to award an alphanumeric grade | Using grading rules based on standards to award a numeric grade | Awarding an overall numeric grade | Awarding an overall alphanumeric grade
Staff (n=15) | 20% (3) | 47% (7) | 33% (5) | -

My assessment rubrics are consistent and progressive between year groups in the units that I teach | Strongly agree | Agree | Neutral | Disagree | I have not used assessment rubrics
Staff (n=15) | - | 40% (6) | 13% (2) | 20% (3) | 27% (4)

Generic assessment rubrics will assist me in developing assessment rubrics for the units that I teach | Strongly agree | Agree | Neutral | Disagree | Strongly disagree
Staff (n=15) | 54% (8) | 46% (7) | - | - | -
Workshop on the generic assessment framework
The draft generic assessment framework was presented to SAS teaching staff at a workshop. Twelve out of seventeen teaching staff attended the workshop, five of whom provided qualitative data through post-workshop written feedback. All five staff had positive comments about the draft generic assessment framework in response to Q1 (what aspects of the meeting they found useful). Staff responses ranged from general positive comments on the discussion, such as:
“The discussion on the breadth and depth of knowledge, communication and critical analysis skills” (SAS1)
In contrast, one member of staff provided specific feedback on the usefulness of the draft generic assessment framework in their teaching:
“(The generic assessment framework)…provides a template for choosing appropriate criteria for the (unit) I teach into” (SAS2)
Three staff provided written feedback to Q2 (what questions do you have for future meetings?). Feedback to
Q2 was diverse and included comments for further discussion regarding the assessment of creativity in agricultural science:
“What is more important in assessing students: Creativity or new presentation style?” (SAS3)
Other comments raised discussion points on how to relate the generic assessment rubric to standards within a year group:
“How can we use the generic (assessment framework) to develop assessment criteria in a specific year, where there would be different levels of performance (standards)” (SAS4)
Only three staff responded to the third question, Q3 (any general comments?). One person, in particular, could see that an on-going discussion of assessment was warranted in the SAS monthly teaching meetings:
“Very valuable discussion. Probably integrate it with monthly teaching interest group meetings” (SAS5)
Several staff annotated the draft generic framework with suggested changes to the shapes of the diagrams for the sub-criteria. For example, staff recommended separating breadth and integration of knowledge into two sub-criteria, changing the shape of data handling and manipulation, adding a new sub-criterion of data acquisition skills within the practical skills criterion, and revising the sub-criterion of 'creativity in approach to assessment' to 'creativity in presentation'. The revised generic assessment framework plus guidelines for use is shown in Appendix 1.
Discussion
Teaching staff in the SAS were overwhelmingly in favour of the development of a generic assessment framework as a tool to assist them in writing assessment rubrics. This likely reflects recognition by teaching staff of the requirement to implement CRA within the SAS (Allen et al., 2007), coupled with a need for practical assistance to adopt change within their already hectic schedules. The generic assessment framework developed in this project is innovative in that it employs a visual representation of assessment sub-criteria across year groups, supported by exemplar assessment rubrics. Together these provide a degree of consistency and progression in the standard of assessment of criteria and sub-criteria relevant to the task, within and across year groups respectively. This approach contrasts with the textual frameworks currently in use by the Queensland University of Technology (QUT, 2008) and others (Hughes & Cappa, 2007; Neil et al., 1999) in the teaching and learning literature. The generic assessment framework, although developed specifically for assessment in agricultural science, could be adapted to meet the assessment needs of other disciplines, either within or external to the University of Tasmania. No one source of information was used in developing the generic assessment framework, but key features of the four criteria can be shown to be similar to the published literature, such as the SOLO taxonomy proposed by Biggs and Collis (1982). In particular, the criteria of knowledge, analysis, practical skills and communication are very similar to the four dimensions of understanding of knowledge, purposes, methods and forms, respectively, as proposed by Boix Mansilla and Gardner (1997) through Harvard's Project Zero research into the development of learning processes in children and adults. Furthermore, the level of proficiency across year groups within the generic assessment framework described here has parallels with Boix Mansilla and Gardner's (1997) four levels of understanding within each dimension: naïve, novice, apprentice and master. The similarities between the generic assessment framework and the four dimensions of understanding provide increased confidence that it has a basis in accepted theories of teaching and assessment practice. Peer-to-peer professional learning was used to encourage staff participation in the project at workshops and in developing assessment rubrics, and this was strongly supported by the Head of School. Peer-to-peer learning has been linked to positive outcomes in solving problems collaboratively, as described by Brookfield (1995), as was the case here, where teaching staff who attended the workshop had a positive response to the generic assessment framework and offered constructive comments and feedback. Development of the generic assessment framework will be ongoing, as staff continue to debate the inclusion of current or new sub-criteria. The survey indicated that the SAS teaching staff at the start of the project
appeared to vary in their understanding of the role of assessment in student learning and in their ability to develop assessment rubrics. Poor-quality assessment rubrics have been identified in the literature as one issue weighing against the use of CRA (Sadler, 2008). However, comments from staff at the workshop on the generic assessment framework indicate that the project has helped to clarify the relationship between criteria and learning outcomes for students, which is integral to good teaching practice (Biggs & Tang, 2007). Several issues were identified during the project that will require further discussion at the school level or clarification at the faculty level. There is a need to consider redeveloping some assessment tasks so that the various types are represented in all year groups within the SAS. For example, oral seminars were not assessed in second year, which highlights a gap in assessment that may constrain the progressive development of student skills. Within the SAS, there will be a need for continued support to implement CRA, which can be further evaluated in a follow-up survey. At the faculty level, clarification is needed on the current requirement for norm-referenced assessment, which is at odds with standards-based, criterion-referenced assessment (Sadler, 2008).
Conclusion
The SAS has shown significant advances since 2008 in the implementation of CRA, which is now used by two-thirds of the teaching staff. However, only 40% of staff agreed that their assessment was consistent across units and year groups. The generic assessment framework appears to be a successful tool for assisting staff in developing and refining their assessment rubrics, ensuring a degree of consistency among units within the school. The research reported here has also highlighted a need for the University of Tasmania to review the award of student final grades, which is currently inconsistent with standards-based CRA. Future research is planned to evaluate how the generic assessment framework can be adapted to suit other schools in the university, to relate the framework explicitly to generic graduate attributes, and to assess the potential benefits of the generic assessment framework to student learning.
Acknowledgements I would like to acknowledge the participation and support of teaching staff from the School of Agricultural Science at the University of Tasmania throughout this project.
References Allen, P., Brown, N., Butler, L., Hannan, G., Meyers, N., Monkhouse, H., & Osborne, J. (2007). Guidelines for good assessment practice. University of Tasmania. Biggs, J., & Collis, K. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York: Academic Press. Biggs, J., & Tang, C. (2007). Teaching for Quality Learning at University (3rd ed.). Maidenhead, England: Open University Press McGraw Hill. Boix Mansilla, V., & Gardner, H. (1997). What are the qualities of understanding. In M. Wiske (Ed.), Teaching for Understanding: Linking Research with Practice. Jossey-Bass. Brookfield, S. (1995). Becoming a critically reflective teacher. San Francisco: John Wiley & Sons, Inc. Brown, N. (2008). Implementation of Criterion Referenced Assessment at UTAS. Hobart, Tasmania: University of Tasmania. Carlson, T., MacDonald, D., Gorely, T., Hanrahan, S., & Burgess-Limerick, R. (2000). Implementing criterion-referenced assessment within a multi-disciplinary university department. Higher Education Research and Development, 19, 103-116. Creswell, J. (2003). Research design: qualitative, quantitative, and mixed methods approaches (2nd ed.). London: Thousand Oaks. Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3-31. Hughes, C., & Cappa, C. (2007). Developing generic criteria and standards for assessment in law: process and (by)products. Assessment and Evaluation in Higher Education, 32, 417-432.
Joughin, G. (2009). Assessment, learning and judgement in higher education: A critical review. In G. Joughin (Ed.), Assessment, learning and judgement in higher education (pp. 13-27). Springer. Kirkpatrick, D. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler. LeFoe, G., Smigiel, H., & Parrish, D. (2007). Enhancing higher education through leadership capacity development: Progressing the faculty scholars model. In paper submitted for the Enhancing Higher Education, Theory and Scholarship, Proceedings of the 30th HERDSA Annual Conference. Adelaide, SA. Macdonald, R., & Joughin, G. (2009). Changing assessment in higher education: A model in support of institution-wide improvement. In G. Joughin (Ed.), Assessment, Learning and Judgement in Higher Education (pp. 193-213). Springer. Neil, D., Wadley, D., & Phinn, S. (1999). A generic framework for criterion-referenced assessment of undergraduate essays. Journal of Geography in Higher Education, 23, 303-325. Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31, 199-218. QUT (2008). QUT Guidelines for academic staff on the implementation of criterion referenced assessment in units. Retrieved October 1, 2008, from http://www.appu.qut.edu.au/coursedev/units/guidelines/2_assess_cra_guidelines.pdf. Sadler, D. (2008). Indeterminacy in the use of preset criteria for assessment and grading. Assessment and Evaluation in Higher Education. SurveyMonkey. (2009). SurveyMonkey. Portland, Oregon. From http://www.surveymonkey.com/. Uebersax, J. (2006). Likert scales: dispelling the confusion. Retrieved March 16, 2009, from http://ourworld.compuserve.com/homepages/jsuebersax/likert2.htm.
Appendix 1
A generic assessment framework for Agricultural Science
The table can be used as a tool to guide teaching staff in writing assessment criteria for their units to ensure a degree of uniformity and progression in assessment within and across year groups in the School of Agricultural Science (SAS). The four generic assessment criteria of (i) knowledge; (ii) analysis of information; (iii) practical skills; and (iv) communication are divided into sub-criteria. The suggested standard or weighting of assessment of sub-criteria for each year group is represented by the width (narrow, basic; broad, advanced) of the shape in the right-hand column of the table. In preparing an assessment rubric, include only those criteria and sub-criteria that are appropriate to the task. Students will have been assessed against all criteria by the end of the course.
1. Knowledge criteria
Knowledge includes factual information and may be in written, tabulated or graphical formats. In a first year unit, breadth of knowledge across disciplines is assessed. As depth of knowledge is not required, it is appropriate that content be sourced from general information (e.g. web pages, books). In subsequent years, knowledge is assessed in one or more disciplines within a unit at greater depth. By fourth year, assessment tasks could be designed to assess students' ability to integrate deep understanding (knowledge) across disciplines, sourced predominantly from specialist sources (e.g. journal articles).
2. Analysis criteria
The second criterion assesses students' ability to analyse information, including literature and/or data. The standard of assessment of student ability to analyse information to inform decisions and solve problems is relatively consistent across year groups. In comparison, higher-level analysis through critical evaluation and integration of information is assessed at increasingly advanced levels in subsequent years. By fourth year, student ability to construct new hypotheses and understandings from new information and existing knowledge may also be assessed.
3. Practical skills criteria
The standard of assessment for data handling and manipulation (e.g. Excel) increases from moderate to advanced between first and fourth year. Experimental design and statistical analysis are not assessed until second year, where appropriate, and standards would increase from basic proficiency (e.g. use of simple univariate statistics) to advanced (e.g. use of statistical software) by fourth year. The standard of assessment of student proficiency in laboratory and field techniques and in the use of databases (e.g. CAB Abstracts) and referencing software (e.g. EndNote) would be expected to increase from first to fourth year.
4. Communication criteria
The standard of assessment for English and referencing conventions is the same across year groups. In contrast, standards of assessment of oral communication skills (e.g. in seminars), use of discipline-specific scientific terminology and creativity (artistic flair and artistry) in presentation would increase from first to fourth year.
Contact: Dr Tina Acuña Ph. 6226 7507 Email.
[email protected]
Generic assessment framework for the School of Agricultural Science
(In the original table, each sub-criterion is shown as a shape spanning columns for 1st year, 2nd year and 3rd/4th year, with the width of the shape indicating the suggested standard of assessment in that year group. The criteria and sub-criteria are as follows.)
A. Knowledge: breadth of knowledge across disciplines; breadth and depth of knowledge of information; depth of knowledge within disciplines; integration of knowledge across disciplines; source of information (use of general sources of scientific information, progressing to journal articles as the source of scientific information).
B. Analysis and evaluation of information: analysis of information to inform decisions and solve problems; critical evaluation of information to draw conclusions; construction of new hypotheses or creation of new understanding.
C. Practical skills: data handling & manipulation skills; skill in experimental design & statistical analysis; skills and proficiency in laboratory & field techniques; proficiency in database searches & use of referencing software.
D. Communication: adherence to written English conventions (structure, punctuation, spelling, grammar) and referencing conventions; use of discipline-specific scientific terminology; oral communication skills; creativity in presentation.
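The framework above lends itself to a simple machine-readable form, which staff could use to generate draft rubric headings for a unit. The sketch below is illustrative only and is not part of the published framework: the criterion and sub-criterion names are taken from the table, while the per-year standards are hypothetical placeholders standing in for the shape widths used in the original figure.

# Illustrative sketch: the generic assessment framework as a data structure,
# used to produce draft rubric headings for one year group.
# Sub-criterion names follow the table above; the standards per year group
# ("basic", "moderate", "advanced", or None for not assessed) are hypothetical
# stand-ins for the shape widths shown in the original figure.
FRAMEWORK = {
    "A. Knowledge": {
        "Breadth of knowledge across disciplines": {"1st": "advanced", "2nd": "moderate", "3/4th": "basic"},
        "Depth of knowledge within disciplines": {"1st": "basic", "2nd": "moderate", "3/4th": "advanced"},
    },
    "B. Analysis and evaluation of information": {
        "Analysis of information to inform decisions and solve problems": {"1st": "moderate", "2nd": "moderate", "3/4th": "moderate"},
        "Critical evaluation of information to draw conclusions": {"1st": "basic", "2nd": "moderate", "3/4th": "advanced"},
    },
    "C. Practical skills": {
        "Data handling & manipulation": {"1st": "moderate", "2nd": "moderate", "3/4th": "advanced"},
        "Experimental design & statistical analysis": {"1st": None, "2nd": "basic", "3/4th": "advanced"},
    },
    "D. Communication": {
        "Adherence to English and referencing conventions": {"1st": "moderate", "2nd": "moderate", "3/4th": "moderate"},
        "Oral communication skills": {"1st": "basic", "2nd": "moderate", "3/4th": "advanced"},
    },
}

def rubric_headings(year):
    """Return draft rubric headings for one year group, skipping sub-criteria
    that are not assessed in that year (standard of None)."""
    headings = []
    for criterion, sub_criteria in FRAMEWORK.items():
        for sub, standards in sub_criteria.items():
            if standards.get(year) is not None:
                headings.append("%s - %s (expected standard: %s)" % (criterion, sub, standards[year]))
    return headings

if __name__ == "__main__":
    for line in rubric_headings("2nd"):
        print(line)

For example, rubric_headings("1st") would omit experimental design and statistical analysis, mirroring the guideline above that this sub-criterion is not assessed until second year.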
Assessment of interprofessional competencies for health professional students in fieldwork education placements Margo Brewer Faculty of Health Sciences, Curtin University, Curtin Health Innovations Research Institute,
[email protected]
Nigel Gribble School of Occupational Therapy and Social Work, Curtin University,
[email protected]
Peter Robinson School of Physiotherapy, Curtin University,
[email protected]
Amanda Lloyd School of Psychology and Speech Pathology, Curtin University,
[email protected]
Sue White School of Pharmacy, Curtin University,
[email protected]
The purpose of health professional education is to prepare students for the challenges of clinical practice. These challenges have changed significantly, with client care becoming more complex due to advances in knowledge and technology, and clients themselves being more informed and wishing to be involved in their health care planning. Service providers are therefore required to work closely in interprofessional teams to provide collaborative, client-centred care. As a result, universities must change the way they prepare their health professional students to ensure that they are both willing and able to work in a range of interprofessional teams. This paper outlines the development of a tool to assess students’ interprofessional competencies whilst on fieldwork education placements. The tool was developed collaboratively by an interprofessional group of staff, and the challenges faced will be described. 25 items are organised within four scales: communication, professionalism, collaborative practice and service delivery. Grade-related descriptors for four levels for each of the items were developed: unsatisfactory, satisfactory, developing and outstanding. Piloting of the tool was conducted in two international fieldwork placements. Initial feedback from students and clinical educators indicates that some refinements need to be made to ensure it is an authentic assessment tool. Finally, the wider implementation of the interprofessional competencies tool will be described along with limitations of this research. Keywords: interprofessional, assessment, communication, collaborative practice
Introduction Fieldwork or clinical education is a vital component of the curriculum in most health science courses providing students with the opportunity to develop their competence in the application of theory to practice. Competency-based models of professional education are widely recognised as a useful way to define the outcomes expected of the learner (Barr, 1998; Curran et al., 2008). As with all other aspects of the curriculum, rigorous evaluation of competency outcomes is critical. In order to assess these outcomes it is essential to establish what is being measured. That is, what is professional competence? Different researchers have identified different competencies.
Professional competence
Epstein and Hundert (2002) describe professional competence as the "habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values and reflection in daily
practice for the benefit of the individual and community served" (p. 226). They describe a number of professional competencies categorised as cognitive, technical, integrative, context, relationship, affective/moral, and habits of mind. Verma, Paterson and Medves (2006), in their examination of the competencies of four key health professions, defined competency as a set of behaviours that describes excellent performance in a particular work context. In the health professions, this notion of competence is used to define the standards or expectations of that profession. These researchers were describing competencies which, although related, were uni-professional. Over the last decade researchers have suggested that the case for competency-based assessment rests on a number of factors, including: the need for students to integrate both professional and interprofessional aspects of their course; to equip health professionals for multi-dimensional collaboration; to reposition interprofessional learning in mainstream professional education; and to respond to calls from government and other key organisations, such as the World Health Organisation, for increased collaboration between health professionals (Barr, 1998). The World Health Organisation, in its 2006 report "Working together for health", indicated that the health science curriculum could be an important catalyst for change and innovation in the health system and as such must be responsive to the needs and demands of this system, including consumers' expectations. As new paradigms of care drive a shift from acute tertiary hospital care to patient-centred and team-driven care, new skills and interprofessional collaboration are required. The need to change the way we prepare health professionals is also recognised by a number of powerful organisations within Australia, including the National Health Workforce Taskforce's Strategic Framework (2004) and the National Health and Hospitals Reform Commission's report "A Healthier Future for all Australians" (2009). One of the key reforms is the development of strategies and incentives to improve the performance of health professionals in interprofessional teams. These changes in practice and ways of working require universities to meet the challenge and prepare our graduates to practise collaboratively and more effectively with a wide range of professionals and non-professionals. This collaborative ability requires an understanding of, and respect for, the contribution of others as well as good interpersonal and communication skills. Evidence is mounting to suggest that interprofessional education at both the undergraduate and postgraduate levels can engender the knowledge, skills and attitudes required for effective collaborative practice (Mackay, 2001).
Interprofessional education
What is meant by 'interprofessional education'? The most widely accepted definition is "those occasions when members (or students) of two or more professions learn with, from and about one another to improve collaboration and the quality of care" (Centre for the Advancement of Interprofessional Education, 2002). Whilst the evidence for interprofessional education being beneficial to the quality of care is growing, until recently systematic reviews such as those conducted by Cooper, Carlisle, Gibbs and Watkins (2001), Hammick, Freeth, Koppel, Reeves and Barr (2007), and Reeves et al. (2008) revealed a number of shortfalls in the body of evidence for the outcomes of interprofessional education.
There is therefore a need to ensure a rigorous evaluation of interprofessional education as the demand for evidence continues. Much of the evidence in this field thus far is at a relatively low level on Kirkpatrick's 1967 (cited in Belfied, Thomas, Bullock, Eynon, & Wall, 2001) model of education outcomes, with an emphasis on participants' satisfaction with the experience rather than the more difficult to measure aspects of attitudinal and behavioural change (see Table 1).
Table 1. Kirkpatrick's four-point typology of educational outcomes
Level 1: Reaction
Level 2a: Modification of attitudes/perceptions
Level 2b: Acquisition of knowledge/skills
Level 3: Behaviour change
Level 4a: Change in organisational practice
Level 4b: Benefits to patients/clients
A number of instruments are used in studies related to interprofessional learning. Some of these instruments are described as methods; for example, Bales' Interaction Process Analysis Tool (Atwal & Caldwell, 2006) and the Contact Hypothesis (Hean & Dickinson, 2005) are described as methods. In contrast, some instruments are described as tools, for example the Readiness for Interprofessional Learning Scale (McFadyen, Webster, & MacLaren, 2006) and the Attitudes to Health Professionals Questionnaire (Lindqvist, Duncan, Stepstone, Watts, & Pearce, 2005). Yet other instruments are a combination of both a method and a tool, such as the Patchwork Text (Crow, Smith, & Jones, 2005) and the System for the Multiple Level Observation of Groups (Cashman, Reidy, Cody, & Lemay, 2004). This paper describes the development of a tool to assess interprofessional competencies in fieldwork education placements.
Method
The development of the interprofessional competencies assessment tool
Health science students at Curtin University undertake a number of practice-based learning experiences during their studies. The majority of these take place in the final year of their course. One of the key interprofessional placements involves students who are studying different professional programmes travelling, living and undertaking clinical experience together in China, India, South Africa and the Ukraine under an international service learning program called 'Go Global'. In order to appropriately assess this learning experience, the authors developed the Interprofessional Assessment Form, based on a combination of the shared competencies between the professional groups involved and those competencies deemed to be most critical in these international, service-learning settings. As such, the tool was designed to be applicable to health professionals in general; it is a product or outcome measurement tool which focuses on student behaviour change, at level 3 of Kirkpatrick's model.
The University of British Columbia's College of Health Disciplines developed an interprofessional competency framework by comparing and contrasting the consistencies, inconsistencies, overlap, discrepancies, and language used in 15 different existing competency frameworks (Wood, Flavell, Vanstolk, Bainbridge, & Nasmith, 2009). Their framework is organised into three domains: interpersonal and communication skills, patient-centred and family-focused care, and collaborative practice. Likewise, the Combined Universities Interprofessional Learning Unit, a collaborative project between the University of Sheffield and Sheffield Hallam University, developed its Interprofessional Capability Framework as an interprofessional assessment tool in 2005, which was revised in 2009 (Walsh, Gordon, Marshall, Wilson, & Hunt, 2005). The competency areas included in this tool fall under four categories: collaborative working, reflection, cultural awareness and ethical practice, and organisation competence.
Phase 1 – Instrument development
In order to develop this tool, a comparison of the clinical competencies in the assessment tools used by Curtin University for students who would be participating in the 2009 Go Global international fieldwork placements (i.e. physiotherapy, occupational therapy, speech pathology and pharmacy) was conducted. This was then compared with the British Columbia Competency Framework for Inter-professional Collaboration and Sheffield Hallam's Interprofessional Capability Framework. It was also compared with the competencies of a profession not involved in these placements in 2009 and with fieldwork placements less similar to those of the four professions: medical science. The final list of competencies to be included was selected by an interprofessional academic team consisting of one staff member involved in clinical education from each of the disciplines of physiotherapy, occupational therapy, speech pathology and pharmacy, as well as the staff member responsible for interprofessional clinical education. The initial instrument contained 28 items organised into four dimensions of competence: communication, professionalism, collaborative practice and service delivery. The items were circulated to others, including staff involved in clinical education from dietetics and nursing and the director of the Go Global fieldwork programme, to determine their face validity.
The team then examined each item, along with the additional items recommended by the wider group of academics, to ascertain their relevance to the Go Global learning experience. Examples of the additional items included students' ability to use technology such as email, other online communication tools (e.g. Skype) and mobile phones, which are essential for these students, who would be working at a vast distance from the campus. An examination of the breadth of the items was also conducted. For example, under the dimension of professionalism the behavioural descriptor Respects values, belief and culture of service users was expanded to Respects values, belief and culture of all relevant parties to ensure that students understand the importance of engaging with everyone involved in the practice
setting. In addition, an attempt was made to reduce any obvious duplication of these behaviours; an example of this was merging the items Manages workload as required and Completes task by agreed deadlines. The final tool comprised 25 items grouped into four subscales. Table 2 provides an illustration of this.
Table 2. Subscales and sample items
1. Communication: Verbal and written communication is clear, comprehensive and culturally appropriate; Actively listens to and respects service users' needs/concerns and encourages self-management of health.
2. Professionalism: Maintains flexibility and adaptability when working with others; Accepts feedback and constructive criticism appropriately, modifying practice as required.
3. Collaborative practice: Recognises and respects the roles, responsibilities and competence of other team members; Works in effective collaboration with team members to ensure optimal services.
4. Service delivery: Critically evaluates service outcomes; Demonstrates adherence to industry best practice.
A four-point scale was developed with each of the 25 items rated as Outstanding (4), Developing (3), Satisfactory (2) and Unsatisfactory (1). A comprehensive rubric describing the rating for each of the items was created resulting in 100 performance descriptors across the 25 items. See Table 3 for an example: Table 3. Sample grading rubric 1.0 Communication skills
1.1 Verbal & non-verbal communication is clear, comprehensive & culturally appropriate
Score = 1 (Unsatisfactory): Fails to recognise & understand the impact of verbal, non-verbal, cultural & situational components of communication
Score = 2 (Developing): Requires support to plan and address the verbal, non-verbal, cultural or situational components to facilitate effective communication
Score = 3 (Satisfactory): Successfully plans & addresses the verbal, non-verbal, cultural or situational components to facilitate effective communication
Score = 4 (Outstanding): Successfully manages complex communication situations, including verbal, non-verbal, cultural or situational components, to facilitate effective communication
The initial design team then met with other staff on the Go Global Steering Group to determine which items were essential aspects of the fieldwork placement and should therefore be labelled as core items. Examples of core items include Maintains professional behaviour at all times and Actively participates in interprofessional team meetings. Each core item must be graded as satisfactory or above on the four-point scale in order for the student to pass the overall placement. A score of unsatisfactory on any one core item would result in a fail grade for the placement. The 1 to 4 scoring system was included so that a grade out of 100 could be calculated for those professions, such as pharmacy, whose accrediting bodies required a mark rather than a Pass/Fail grade. In relation to calculating a mark, the scoring system places equal emphasis on each item (see the sketch after Figure 1). Assessment is conducted both formatively and summatively. Feedback was provided verbally to each student and the interprofessional team by the supervisor/s on a daily basis. Students received feedback after each session with clients, and group feedback was provided after the evening debrief sessions. Team members were encouraged to reflect on their performance and areas for improvement prior to and during the feedback from the supervisor/s. Students received written formative feedback using the tool after eight to ten days of the international placement (initial evaluation). A tripartite feedback system was formulated whereby each student received
written feedback from the supervisor. Each student was allocated another student to give feedback to, and in turn the peer assessor's written feedback was also provided to the supervisor. Lastly, the student self-evaluated using the same tool. Verbal formative feedback occurred between each key tripartite member. See Figure 1.
Figure 1. Feedback relationships within the fieldwork placement model (tripartite feedback links between the supervisor, the student and a peer assessor, a fellow student)
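To make the grading rules described above concrete, the following sketch shows one way the 25-item, four-point scale could be turned into a mark and a pass/fail decision. It is a minimal illustration under stated assumptions, not the authors' implementation: the item identifiers and example ratings are hypothetical, and the assumption that the raw ratings simply sum to a mark out of 100 (25 items multiplied by a maximum score of 4) is one plausible reading of the equal-emphasis scoring described above.

# Minimal sketch of the scoring rules described above, not the authors' own code.
# Assumptions: 25 items rated 1-4 with equal weight; the raw ratings sum to a
# mark out of 100; any core item rated unsatisfactory (1) fails the placement
# regardless of the mark. Item identifiers and example ratings are hypothetical.
def placement_result(scores, core_items):
    """Return (mark, grade) for one student; scores maps item id -> 1-4 rating."""
    mark = sum(scores.values())  # equal emphasis on each of the 25 items
    failed_core = any(scores[item] == 1 for item in core_items)
    return mark, ("Fail" if failed_core else "Pass")

if __name__ == "__main__":
    ratings = {"item_%d" % i: 3 for i in range(1, 26)}  # hypothetical ratings for 25 items
    ratings["item_12"] = 4
    core = {"item_2", "item_12"}  # e.g. 'Maintains professional behaviour at all times'
    print(placement_result(ratings, core))  # -> (76, 'Pass')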
As the supervisor is not onsite at all times, feedback was also garnered from significant stakeholders, including the site manager, director and other staff, via email, phone and Skype sessions. Summative written and verbal feedback was completed in the final days of the placement (final evaluation) using the same tripartite system as in Figure 1.
Phase 2 – Instrument trial
The tool was piloted in 2009 on two Go Global placements for final year students: one to the Ukraine involving two physiotherapy and six occupational therapy students, and one to South Africa involving six occupational therapy students only. At the conclusion of these placements a collective interview with the fieldwork supervisors involved was recorded, transcribed and analysed, and key themes were identified (Cooper et al., 2001). Their overall reaction was that the tool was a vast improvement on the tools they had used on previous placements to these Go Global settings, namely the traditional occupational therapy and physiotherapy assessment tools. They felt that it assessed the interprofessional competencies that were most relevant to the learning experience. Qualitative statements were made, such as that the items were an "incredibly good prompt" to discuss issues that arose in the placement with the students. The move away from the assessment of profession-specific skills to the more interprofessional skills required in a placement such as this was seen as very positive. Each subscale was then discussed in more detail, and constructive critical comments were offered.
The communication subscale (Subscale 1) was felt to be critical to the placement; however, the lack of an item on the student's ability to use interpreters, an essential requirement in such intercultural settings, needed to be addressed. The item related to the appropriate use of technology also generated some discussion, with the students rating this based on their technical skill, whereas the supervisors felt that the most important aspect was their culturally relevant use of the technology. It was agreed that both of these issues could be overcome by including this detail in the comments section.
Professionalism (Subscale 2) was seen as a critical subscale that led to some interesting discussions between the fieldwork supervisors and the students on the scope of professional behaviour and where this starts and ends. For example, some students, in a social situation, were observed to criticise staff within earshot of their colleagues. Item 2.2, which relates to ethical practice in accordance with legal and regulatory guidelines, raised some concerns in these settings, where what is considered legal in Australia is not deemed to be legal in another country. Supervisors felt that this item requires clarification with the students prior to the use of the tool.
Collaborative practice (Subscale 3) was also seen to be a very important inclusion. The supervisors interpreted the word 'team' to be inclusive of others that the students were required to work with on a regular basis, such as the interpreters. Item 3.7, which includes active participation in interprofessional team meetings as a core item, caused concern on the uni-professional placement, where the supervisors felt there was no opportunity for interprofessional meetings. They suggested that this item be retained, but as a non-core item.
Supervisors reported that on previous placements they often had to remind students that service delivery (Subscale 4) is a key element of the Go Global placements, and therefore the final subscale, which focuses on this, was a very useful inclusion. Three of the five items in this subscale, however, required further clarification. Item 4.2 requires the students to critically evaluate the service outcomes. Supervisors felt that this wording reinforced the students' focus on clinical models of intervention rather than the more programme-oriented interventions utilised in these settings. They also commented that the structure of the placement required students to develop an action plan at the outset, which they then alter as required. The issue of changing this item to the evaluation of the student action plan and/or project outcomes was raised. Item 4.3, which assesses the student's ability to advocate for recommended interventions to be implemented and sustained, encouraged students to take a more clinical and client-centred approach rather than a programme- and community-centred approach. Supervisors also felt the use of the word 'advocacy' implied that the students should be directive in their approach with the staff in the settings they were working in. A final consideration with this particular item was the implication that all interventions should be sustained, whereas in fact many are designed to be short-term and one-off. It was felt that a change of wording to 'facilitates community-centred interventions' would be more appropriate. Item 4.4 assesses the student's adherence to industry best practice. This raised two areas for discussion: firstly, that what is best practice in Australia is not necessarily best practice in another cultural context; and secondly, that students interpreted this as involving traditional clinical interventions. Once again a change of wording was suggested so that the focus is on culturally appropriate interventions.
The tool appears to have some face validity, as the key supervisors reported that they discussed how they felt each of the different competency items applied to their placement setting and were easily able to reach consensus on this. The issue of a struggling student arose on one of the trips, and there were concerns raised about the sensitivity of the form in identifying this student's reduced performance. The majority of the students rated themselves as 2 on the rating scale, but the struggling student rated herself as 1 on some items, suggesting that it had some level of sensitivity.
The role of peer assessment was also discussed. On the trip that involved only a single discipline (occupational therapy) the students all rated each other with what the supervisor described as "very high marks". Supervisors felt that the peer assessment exercise was very worthwhile; however, it needs to be trialled on more diverse student groups. Supervisors and students wanted more space for making comments in the electronic version of the tool and suggested that a five-point rating scale might be more useful so that there was an obvious mid-point. This feedback, along with the reflections of staff from the remainder of the Go Global fieldwork placements conducted this year, will form the basis for any further modifications to the tool to ensure it is appropriate for future student groups. The tool will also be reviewed so that it can be used with other placements and, indeed, as a tool to measure interprofessional competence throughout the programme.
It is important to note that the research presented in this paper is preliminary. The tool has been trialled on a small number of students from a limited range of disciplines and in limited contexts to date. The validity and reliability of the tool also need to be examined in detail.
Conclusion
The aim of this paper was to describe the development and field-testing of a tool designed to assess students' interprofessional competencies. We believe that this provides a useful framework and language to evaluate students' interprofessional competencies in health care teams. Crucial to the success of the tool was the involvement at all stages of an interprofessional team of staff experienced in clinical education and in the evaluation of the tool in practice. The process, as recommended by Barr (1998), was consultative, collaborative and consensual. To date, a key factor in the acceptance of the tool's integration into interprofessional placement settings has been the emphasis on global or generic skills rather than discipline-specific skills. This has enabled assessment of students in one profession by fieldwork supervisors from a different profession. The cross-disciplinary assessment has challenged conventional thinking regarding assessment of competence in some disciplines, and barriers to further development are still evident in the accreditation bodies of some professions. It is hoped that removal of assessment of discipline-specific skills will enable a greater integration of cross-discipline assessment than has traditionally been undertaken. Further analysis of the trends in this aspect will enable appropriate adjustments to be made to the tools that encourage this cross-discipline engagement in the assessment process. Future work will continue to refine the instrument, compare its usage with different student groups and in different settings, and ensure that it is generic to a wide range of fieldwork programs at Curtin and other universities. The sensitivity of the tool in identifying students with a lower than expected level of competence also requires further examination.
Acknowledgements We would like to thank the students and staff from Curtin University and Go Global for their input into the development and piloting of this tool.
References Atwal, A., & Caldwell, K. (2006). Nurses’ perceptions of multidisciplinary team work in acute health-care. International Journal of Nursing Practice, 12 (6), 359-365. Barr, H. (1998). Competent to collaborate: Towards a competency-based model for interprofessional education. Journal of Interprofessional Care, 12 (2), 181-187. Belfied, C., Thomas, H., Bullock, A., Eynon, R., & Wall, D. (2001). Measuring effectiveness for best evidence medical education: a discussion. Medical Teacher, 23 (2), 164-170. Cashman, S.B., Reidy, P., Cody, K., & Lemay, C. (2004). Developing and measuring progress towards collaborative, integrated, interdisciplinary health care teams. Journal of Interprofessional Care, 18, 183-196. Centre for the Advancement of Interprofessional Education (2002). Retrieved August 31, 2008, from http://www.caipe.org.uk/about-us/defining-ipe/?keywords=definition. Cooper, H., Carlisle, C., Gibbs, T., & Watkins, C. (2001). Developing an evidenced base for interdisciplinary learning: a systematic review. Journal of Advanced Nursing, 35 (2), 228-237. Curran, V., Casimiro, L., Banfield, V., Hall, P., Lackie, K., Simmons, B. et al. (2008). Research for interprofessional competency-based evaluation (RICE). Journal of Interprofessional Care, 23 (3), 297300. Crow, J., Smith, L., & Jones, S. (2005). Using the Patchwork Text as a vehicle for promoting interprofessional health and social care collaboration in Higher Education. Learning in Health and Social Care, 4 (3), 117-128. Epstein, R.M., & Hundert, E.M. (2002). Defining and assessing professional competence. Journal of American Medical Association, 287 (2), 226-235. Hammick, M., Freeth, D., Koppel, I., Reeves, S., & Barr, H. (2007). A best evidence systematic review of interprofessional education: BEME Guide no. 9. Medical Teacher, 29, 735-751. Hean, S., & Dickinson, C. (2005). The Contact Hypothesis: an exploration of its further potential in interprofessional education. Journal of Interprofessional Care, 19 (5), 480-491. Lindqvist, S., Duncan, A., Stepstone, L., Watts, F., & Pearce, S. (2005). Development of the ‘Attitudes to Health Professionals Questionnaire’ (AHPQ): A measure to assess interprofessional attitudes. Journal of Interprofessional Care, 19 (3), 269-279. Mackay, S. (2001). The role perception questionnaire (RPQ): A tool for assessing undergraduate students’ perceptions of the role of other professions. Journal of Interprofessional Care, 18 (3), 289-302. McFadyen, A.K., Webster, V.S., & MacLaren, W.M. (2006). The test-retest reliability of a revised version of the Readiness for Interprofessional Learning Scale (RIPLS). Journal of Interprofessional Care, 6, 633639.
National Health Workforce Taskforce: Strategic Framework (2004). Retrieved August 9, 2009, from http://www.nhwt.gov.au/. National Health and Hospitals Reform Commission (NHHRC) (2009). A Healthier Future for all Australians. Retrieved August 9, 2009, from http://www.nhhrc.org.au/internet/nhhrc/publishing.nsf/Content/nhhrc-report. Reeves, S., Zwarenstein, M., Goldman, J., Barr, H., Freeth, D., Hammick, M., et al. (2008). Interprofessional education: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews, Issue 1. Art. No.: CD002213. DOI:10.1002/14651858.CD002213.pub2. Verma, S., Paterson, M., & Medves, J. (2006). Core competencies for health care professionals: what medicine, nursing, occupational therapy, and physiotherapy share. Journal of Allied Health, 35 (2), 109-115. Walsh, C.L., Gordon, M.F., Marshall, M., Wilson, F., & Hunt, T. (2005). Interprofessional capability: A developing framework for interprofessional education. Nurse Education in Practice, 5 (4), 230-237. Wood, V., Flavell, A., Vanstolk, D., Bainbridge, L., & Nasmith, L. (2009). The road to collaboration: Developing an interprofessional competency framework. Journal of Interprofessional Care, 1-9. Working Together for Health (2006). Retrieved September 2, 2008, from http://www.who.int/whr/2006/en/.
Feedback: working from the student perspective Kylie Budge College of Design & Social Context, RMIT University,
[email protected]
Sathiyavani Gopal College of Business, RMIT University,
[email protected]
Feedback is a key element of quality teaching and assessment and is a powerful influencer of student achievement. Recent years have seen an increasing interest in the provision of feedback by a range of stakeholders. A student perspective on the provision of feedback has been acknowledged as an under-researched area (Rowe & Wood, 2008). Students both in Australia and internationally have reported dissatisfaction with feedback they receive on their work, including assessment. Consistent with this pattern, RMIT student survey results also reveal low levels of satisfaction with aspects of the feedback they receive. This paper presents the details of a study initiated to explore students’ perceptions of feedback and the form in which they prefer to receive it and is unique in terms of capturing students’ perceptions of feedback in a dual sector institution. Students from one dual sector discipline were surveyed in 2008 and both quantitative and qualitative data was collated and analysed to identify patterns and relationships of interest. By contextualizing the study for a specific discipline the authors developed a detailed understanding regarding the provision of feedback from the student perspective. The key findings include issues with the timing, frequency, quantity and quality of feedback, the feedback form, and peer feedback and self-assessment. Contrary to popular opinion that suggests students do not value or use feedback to improve their work, the authors found that 95% of respondents indicated they use feedback to improve their results in future assignments and projects. In addition, new understandings regarding the possibilities and potential for providing feedback and the need for a multifaceted feedback strategy are presented in the paper. The findings are offered as a contribution to the development of a deeper understanding of feedback, particularly from a student perspective. Keywords: feedback, student perceptions, assessment, learning
Introduction
In recent years university students, both within Australia and internationally, have reported dissatisfaction with the feedback they receive on their work via a range of national student surveys such as the Australian Course Experience Questionnaire (CEQ) and the Student Outcomes Survey (SOS), as well as internal university surveys and studies (Atwood, 2009; BBC News, 2007; Chen, 2007; Mahoney & Poulos, 2004; Potter & Lynch, 2008; Price & O'Donovan, 2007; Rowe & Wood, 2008). In 2007 the UK National Student Survey indicated high levels of student dissatisfaction with the quality, quantity and timing of feedback on their work (HEFCE, 2007). In Australia, "the recent DEST Report on student responses to the Course Experience Questionnaire (CEQ) has put assessment and feedback at the top of the list for the three main areas of student concern at Australian universities" (Scott, as cited in Belski, 2007, p. 1). Educational research reveals that the feedback students receive about their work plays an important part in supporting student learning (Black & William, 1998; Hattie & Timperley, 2007; Sadler, 1989). Vygotsky's (1978) socio-cultural theory and his concept of the zone of proximal development (ZPD) advocate scaffolding of instruction as a teaching strategy, defining such "instruction as the role of teachers and others in supporting the learner's development and providing support structures to get to that next stage or level" (Vygotsky, as cited in Van Der Stuyf, 2002, p. 2). Facilitated learning involving scaffolding creates the environment for students to build upon their prior knowledge and increase the depth of their learning. Within tertiary education environments "a socio-cultural model of learning exists between students and their peers
and between students and tutors, and within this dynamic model of learning the language of feedback [as an element of scaffolding] enables students to achieve goals to a greater extent than they would without peers or tutors” (Merry & Orsmond, 2008). Feedback is a form of effective scaffolding in learning and a number of studies have shown that feedback does enhance and deepen learning. In a review of 87 studies on feedback Hattie (1987) found that feedback was the most powerful influencer of student achievement. Black and William (1998) also emphasize the widespread and consistent positive effects of feedback on learning when compared to other aspects of teaching. Hyland found that “feedback serves a variety of purposes including the grading of achievements, the development of students’ understanding and skills, and in motivating students” (as cited in Rowe & Wood, 2008, p. 1). However, Gibbs and Simpson along with Nicol and Macfarlane-Dick contend that “for feedback to be effective it needs to be detailed, understood and used by the student to self-assess their learning” (as cited in Merry & Orsmond, 2008). In many contemporary tertiary education contexts, providing detailed feedback is a challenge due to an increase in class sizes. Rowe and Wood (2008) state that feedback constitutes a central aspect of learning, yet has been largely neglected in research to date, particularly from the student’s point of view. This study attempts to develop an understanding of feedback from the student perspective anticipating that this new understanding about student perceptions and values could assist teachers in providing effective and timely feedback for learning.
Research question
The authors suspected that student dissatisfaction with feedback was a combination of differing student and teacher understandings of 'feedback' and, perhaps, a lack of understanding of the kind of feedback that students value. Rather than presume to know the kind of feedback students value or need, the aim of this study was to explore students' perceptions of feedback, including its meaning, and the form in which they would prefer to receive it, by surveying one dual sector discipline as a case study: students enrolled in the RMIT School of Fashion and Textiles. The research study set out to address the following questions:
What feedback is provided to students?
What perceptions do students have of feedback?
How much do students value feedback?
What preferences do students have for feedback?
What suggestions do students have for improving feedback?
Methodology The research was initiated as a learning and teaching project by the authors, two academic development advisers for the School of Fashion and Textiles, as part of their school learning and teaching liaison role within the university. The case study was employed as the research framework (Yin, 2002; Stake, 1995) using quantitative and qualitative techniques to investigate what feedback students want and how they prefer to get it. A dual sector School (delivering both higher education and TAFE – Technical and Further Education - programs) was chosen for this study because the teaching staff expressed an interest in gaining a deeper understanding of the student perspective on feedback and it was seen as important to understand this within both the TAFE and higher education contexts. The study involved 83 student participants from the School (completing a voluntary, online survey) representing 7% of the total number of students enrolled in the school during semester two, 2008. While the response rate might appear low, others have also commented on the relatively low response rate experienced by research studies using online surveys (Winter & Dye, 2003/2004). The participants were enrolled in both higher education and TAFE programs with 27 enrolled in higher education and 56 enrolled in TAFE programs. The authors reviewed an existing survey instrument called the ‘Student Feedback Questionnaire’ developed by Rowe and Wood (2008) from Macquarie University in NSW. The survey tool was contextualized by involving students and staff in the development of it to ensure that it could be easily understood and that it used language that was relevant to both the discipline and the dual sector context. Student representatives
were consulted with a draft version of the survey and assisted in contextualizing its content. Survey questions were also sent to teachers for their feedback. As a result, a new instrument, the School of Fashion and Textiles Student Feedback Survey, was developed by adapting the original instrument, informed by the feedback gathered from students and staff and with permission from the original survey's authors. An example of contextualisation that occurred with the survey tool includes using the term 'teachers/lecturers/tutors' to describe teaching staff in the survey, because different terms are commonly used in higher education and TAFE contexts. However, for the purpose of this paper the term 'teachers' will be used. The instrument was developed into an online survey using a five-point Likert scale to look at student feedback perceptions, values, preferences, and the frequency of current feedback. The survey also included one qualitative question to elicit suggestions about feedback from students. The link to the online survey was sent to students via email for them to complete, and a time frame of one month was allocated for data collection. Descriptive quantitative data analysis was undertaken to evaluate feedback provided to students, and to gain an understanding of their perceptions, values and preferences regarding feedback. Agreement or disagreement with each question was calculated by adding together percentage responses to the 'agree' and 'strongly agree' categories, or to the 'disagree' and 'strongly disagree' categories. Analysis of the qualitative data was undertaken by thematically coding the data collected via the open-ended survey question and identifying patterns and relationships of interest.
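As a concrete illustration of the agreement calculation described above, the short sketch below sums the 'agree' and 'strongly agree' percentages for a survey item (and, analogously, 'disagree' and 'strongly disagree' for disagreement). The example figures come from Table 2 later in the paper; the function itself is only a sketch of the analysis step, not the authors' actual procedure or code.

# Sketch of the agreement/disagreement calculation described above.
# Agreement with an item is the sum of the 'agree' and 'strongly agree'
# percentages; disagreement is the sum of 'disagree' and 'strongly disagree'.
def agreement(responses):
    """Percentage of respondents who agreed or strongly agreed with an item."""
    return responses["agree"] + responses["strongly_agree"]

def disagreement(responses):
    """Percentage of respondents who disagreed or strongly disagreed with an item."""
    return responses["disagree"] + responses["strongly_disagree"]

if __name__ == "__main__":
    # "I use feedback to try and improve my results in future assignments/projects" (Table 2)
    use_feedback = {"strongly_disagree": 0.0, "disagree": 0.0, "neutral": 4.8,
                    "agree": 32.3, "strongly_agree": 62.9}
    print(round(agreement(use_feedback), 1))  # -> 95.2, reported in the paper as 95%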
Findings and analysis
The survey data provided a rich source of information about students' perceptions of feedback and the preferences they have. In terms of addressing the specific research questions, the data provided the following information.
What feedback is provided by the School to students?
Students at the School of Fashion and Textiles receive a range of feedback on their work including: individual written feedback, group written feedback, individual verbal feedback, group verbal feedback, peer feedback and feedback from themselves via self-assessment. The majority of their feedback comes in the form of individual verbal and written feedback and group verbal feedback from teachers. However, it should be noted that respondents also indicated that they receive some feedback via other forms such as group written feedback from teachers, peer feedback and self-assessment.
What is the student's perception of feedback?
Students who participated in this study perceive feedback to be important. They believe receiving feedback indicates that staff care about their work, is a justification of their grade, and indicates what they need to do to improve their performance. They feel encouraged when they receive feedback and feel that they deserve it, with 95% of students agreeing that they deserved feedback when they put so much effort into class work and projects. Feedback is understood to be an important way for staff to communicate expectations, and it also motivates students to study. 87% of respondents agreed that feedback tells them what the expectations of the teacher are. Students have a personal connection to the feedback they receive, indicating they perceive it as an evaluation of their strengths and weaknesses. In fact, 91% of respondents agreed that feedback is an evaluation of their strengths and weaknesses. Respondents understand that communication via email constitutes feedback but, interestingly, place less value on feedback given by the teacher via the Blackboard online discussion board (i.e. through the Distributed Learning System), blogs and wikis. However, the reasons for this are not clear from the data gathered during this study. Our speculation is that online feedback might be perceived by students as less personal than face-to-face feedback or handwritten feedback. However, this is clearly an area which requires further exploration at a deeper level within future research studies.
Table 1. Percentage breakdown from questions about students' perceptions of feedback
(Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree)
I deserve feedback when I put so much effort into class and projects: 0% / 1.6% / 3.2% / 31.7% / 63.5%
Feedback tells me what the expectations of the teacher/lecturer/tutor are: 0% / 4.8% / 7.9% / 42.9% / 44.4%
Feedback is an evaluation of my strengths and weaknesses: 0% / 1.6% / 7.9% / 50.8% / 39.7%
How much do students value feedback?
Feedback is extremely important to the students who participated in this study. An indication of this is that the majority of students claim they always collect their assignments and projects (95%), read the accompanying feedback (98%) and then use it to try to improve their results in the future (95%). In addition, students value feedback regardless of whether they receive a low or high grade, with only 10% agreeing that feedback is useful only when they receive a low grade. The high value students place on feedback was clearly conveyed through the data, with more agreement being shown in this area than in other areas of the survey.
Table 2. Percentage breakdown from questions about students' perceptions of feedback
(Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree)
I always collect my assignments/projects: 0.0% / 0.0% / 4.8% / 25.8% / 69.4%
I always read the feedback on my assignments/projects: 0.0% / 0.0% / 1.6% / 27.4% / 71.0%
I use feedback to try and improve my results in future assignments/projects: 0.0% / 0.0% / 4.8% / 32.3% / 62.9%
Feedback is only useful when I receive a low grade: 43.5% / 33.9% / 12.9% / 4.8% / 4.8%
What preference do students have for feedback?
Students in this study clearly expressed their preferences for feedback. They have a strong preference for individual feedback from the teacher, preferring specific information about what they did right or wrong on assignments and other submitted work, and they have a desire for feedback to be given progressively throughout the semester (93%). They prefer written feedback because they can refer to it later, but there is still a strong interest in receiving verbal feedback. Interestingly, respondents do not have a preference for typewritten feedback. They enjoy receiving general feedback in class because it helps them to learn independently (80%). They enjoy receiving feedback from their peers (64%) and when teachers prepare them by posting example answers/projects on Blackboard (75%). They do not view the grade as more important to their learning than the feedback, and they prefer not to be simply given the answers.
Table 3. Percentage breakdown from questions about students' perceptions of feedback
(Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree)
I like to receive feedback on all my projects progressively during semester: 0.0% / 0.0% / 6.8% / 37.3% / 55.9%
General feedback provided in class helps me learn independently: 1.7% / 6.8% / 11.9% / 57.6% / 22.0%
I like it when I receive feedback on my work from my peers (classmates): 1.7% / 11.9% / 22.0% / 44.1% / 20.3%
I like it when teachers/lecturers/tutors prepare us by posting example answers/projects on Blackboard: 3.4% / 6.8% / 15.3% / 42.4% / 32.2%
In terms of addressing the final research question, regarding ideas students have for improving feedback, respondents offered a range of constructive suggestions. However, it is important to note that the students in this study are not overly dissatisfied with the feedback they currently receive on their work. More than half of respondents agreed that they receive enough feedback from their teachers, but indicated that there are key areas needing attention. The key issues identified through both the qualitative and quantitative data include concerns with the timing, frequency, quality and quantity of feedback. In addition, the data revealed significant insights into student perceptions about the feedback form, peer feedback and self-assessment.
Timing and frequency
Students in this study indicate that they use feedback for developmental learning purposes, and therefore the timing of feedback is critical. 86% of respondents agreed that feedback tells them what they need to do to improve their performance in a course. As mentioned previously, 93% indicated that they would like to
receive feedback on their projects progressively during the semester. In the open-response section of the survey one student commented on the importance of timing: “I think every subject should be graded throughout the semester, allowing plenty of feedback and therefore the opportunity to achieve a HD. No student should be shocked or surprised at the end of a semester when the grade is significantly lower (or ‘Failed’) than what they expected.” Sadler (1989) supports this emphasis on timing, claiming that feedback on formative assessment rather than summative assessment assists students in identifying the gap between their goals and their current knowledge and skill level.
Table 4. Percentage breakdown from questions about students' perceptions of feedback
Statement | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
Feedback tells me what I need to do to improve my performance in a course | 0.0% | 3.2% | 11.1% | 33.3% | 52.4%
I like to receive feedback on all my projects progressively during semester | 0.0% | 1.6% | 3.2% | 31.7% | 63.5%
Quantity and quality
Feedback quantity and quality seem deeply connected for the students who participated in this study. For example, while just over half of respondents agreed that they receive enough feedback from their teachers, a slightly smaller number indicated they receive enough information to make the feedback useful. This suggests that the type of feedback they currently receive may not be adequate in fulfilling their needs. The survey data indicate that high-quality feedback is valued by students and is considered important to learning, and this is consistent with the literature on feedback and learning (Rowntree, 1987). Contrary to the popular opinion that students do not value or use feedback to improve their work (see for example Fritz, Morris, & Bjork, 2000), this study found that 95% of respondents indicated they use feedback to improve their results in future assignments and projects. Furthermore, 75% of respondents indicated that feedback motivates them to study. The participants had a great deal to communicate about the type of feedback they valued. In particular, they requested constructive feedback that they can apply to their learning: 93% of respondents agreed that specific feedback is better because it helps them to understand what they did right or wrong. Students expressed a desire for clear, constructive feedback about their work rather than “just a grade”. In particular, they want feedback about their strengths and weaknesses so that they can apply it to their learning and incorporate it into future assessment. As one student succinctly put it: “More written words, not just ticks.” And another: “A lot of the time feedback could be more thorough and specific. Sometimes feedback seems to be quite formulaic with a number of students receiving the same sorts of comments. I also think that more constructive and thorough feedback is necessary even when the mark is good, so that the student can still improve and work on their weaknesses.”
Table 5. Percentage breakdown from questions about students' perceptions of feedback
Statement | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
I always read the feedback on my assignments/projects | 0.0% | 0.0% | 1.6% | 27.4% | 71.0%
Feedback motivates me to study | 1.6% | 6.3% | 17.5% | 34.9% | 39.7%
Specific feedback is better because it helps me to understand what I did right and wrong in an assignment | 0.0% | 0.0% | 6.8% | 37.3% | 55.9%
The feedback form
Significantly, analysis of the quantitative data collected via the survey revealed that respondents do not regard online feedback from their lecturer as highly as handwritten feedback or one-to-one verbal feedback. 81% of students agreed that written feedback is better because they can refer to it later (although they have mixed feelings about typewritten feedback). However, students also rated verbal feedback quite highly because they enjoyed the direct communication with their teachers (68% of respondents in agreement), including teacher-to-class verbal feedback. Respondents said that a negative aspect of verbal feedback is that they may not always remember it. Another interesting aspect of the format used to communicate feedback to students is online feedback. As mentioned previously, respondents expressed mixed feelings, placing less value on the use of online feedback, including comments by the teacher on the Blackboard discussion board, blogs and wikis. However, they do value email communication from the teacher as a form of feedback.
Table 6. Percentage breakdown from questions about students' perceptions of feedback
Statement | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
Written feedback is better because I can refer to it later | 0.0% | 3.4% | 15.3% | 44.1% | 37.3%
I prefer verbal feedback because I can communicate with the teacher/lecturer/tutor and clarify information | 3.4% | 6.8% | 22.0% | 44.1% | 23.7%
Peer and self-assessment
Respondents are surprisingly open to peer assessment and self-assessment. While 92% of students agreed that individual feedback is better because they can clarify issues with the teacher, the data also suggest that the students who were surveyed have a realistic understanding of how often one-to-one teacher-to-student feedback can occur given time constraints and workload issues. In relation to this aspect, the authors were interested in capturing student perceptions and experiences of peer feedback and self-assessment through the survey. The students who were surveyed have not had extensive experience with peer feedback or self-assessment. However, they responded openly to the idea of participating in both. In fact, the majority of students who participated in the survey (64%) agreed that they liked receiving feedback on their work from their peers.
Table 7. Percentage breakdown from questions about students' perceptions of feedback
Statement | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
Individual feedback is better because I can clarify any issues with the teacher/lecturer/tutor | 0.0% | 0.0% | 8.5% | 35.6% | 55.9%
I like it when I receive feedback on my work from my peers (classmates) | 1.7% | 11.9% | 22.0% | 44.1% | 20.3%
Discussion
The results of this study suggest that the discipline is a possible influencing factor in how students perceive and value feedback. The study was conducted within one creative/arts/design discipline, the School of Fashion and Textiles (albeit containing a number of sub-disciplines), and a distinct level of openness and maturity was noticeable, particularly in the responses to the open-ended question section of the survey. Moreover, respondents appeared open to participating in peer and self-assessment and do not expect all their feedback to be delivered on an individual level via their teachers. The authors, who have worked on other feedback projects in discipline areas distinctly different from this one, believe this level of openness to alternative forms of feedback could be related to the nature of the discipline itself. This observation is consistent with the literature on differences between the disciplines contributing to specific disciplinary ways of knowing and being (Becher, 1996; Huntley-Moore & Panter, 2003). Teachers in the School of Fashion and Textiles can capitalise on this apparent openness to alternative feedback forms by adopting a multi-faceted feedback strategy. Developing and implementing such a strategy would be a good use of time and resources in the highly pressurised, time-poor environment characteristic of
most current tertiary institutions, and presents a real opportunity to build self-aware and self-regulated learners (Boud, Cohen, & Sampson, 1999). Students who participated in this study are open to a range of feedback forms, including individual, group, peer and self-assessment. A multi-faceted feedback strategy incorporating all of these aspects, instead of a heavy reliance on time-consuming individual feedback between the teacher and student, would be of immense benefit to both teachers and students. Further support for a varied strategy can be found in two other recent feedback studies conducted by McCallum, Bondy and Jollands (2008), and Price and O’Donovan (2007). McCallum et al. (2008) found that “interviews with highly rated staff showed staff use many and varied feedback methods” (p. 4). Developing a feedback strategy that is feasible and realistic to implement, incorporates student perceptions, values and preferences about feedback, and is sensitive both to time and resource demands on teachers would be a means of providing students with the kind of detailed, constructive feedback they value and require for effective learning. Successful implementation of such a strategy also needs to incorporate the management of student expectations about feedback. That is, receiving useful feedback that can contribute to learning is not only about receiving one-to-one feedback from the teacher. Valuable feedback that contributes to learning comes in a variety of forms, including from peers (Boud et al., 1999).
The results of this study also indicate that there is a real need to improve students’ perceptions of the value of online feedback in the development of a multi-faceted feedback strategy. As stated previously, respondents reported mixed feelings about the use of online feedback. However, work could be done to assist in changing this perception. As teachers develop a multi-faceted feedback strategy and educate students about the merit of various forms of feedback, discussion about the role and value of online feedback in contributing to student learning could also be included. Our observation, both during this study and through our work as academic developers, is that students need to be educated about what constitutes feedback on their work. Additionally, it needs to be explicitly named as ‘feedback’ so that students become accustomed to the term itself and the meaning attached to it. This is important because the use of online feedback tools can save valuable time for teachers and provide students with specific feedback about their work’s weaknesses, strengths and progress.
Students who participated in this study identified a strong personal connection to the feedback they receive on their work, believing feedback is an indication of their strengths and weaknesses. While it is encouraging that they value feedback and connect with it so deeply, there is also a need to encourage students to see feedback as being about the work they have submitted or are creating rather than as a personal assessment. Within the framework of developing a multi-faceted feedback strategy, teachers can help to create this change in perception by ensuring the feedback they give relates to the task rather than the person (Hattie & Timperley, 2007). Additionally, in developing such a feedback strategy, Hattie and Timperley’s (2007) three-question feedback model is a useful guide to frame the way in which feedback is given to students.
In describing this framework they claim that feedback is most useful to student learning when the following three questions are used to guide its dissemination: “Where am I going? How am I going? And Where to next? ... The answers to these questions enhance learning when there is a discrepancy between what is understood and what is aimed to be understood.” (p. 102). The feedback literature recommends linking assessment rubrics and criteria to feedback, both to ensure that students understand the feedback being communicated and because both are important for learning. The significance of assessment rubrics/criteria is supported by “two experimental research studies [that] have shown that students who understand the learning objectives and assessment criteria and have opportunities to reflect on their work show greater improvement than those who do not” (Fontana & Fernandes and Frederikson & White, as cited in Boston, 2002). However, in recommending the use of assessment rubrics to teachers as a way of providing more effective feedback, Sadler (2009) cautions about the limitations of using criteria-based marking (via rubrics) in disciplines involving creative and complex subject matter. This caution is worth careful consideration in disciplines such as the one in which our study was based.
Conclusion
Although some of the concepts explored in this study have been tested in another study (Rowe & Wood, 2008), the results of this research validate and extend the basic findings by showing that students in a creative/arts/design dual-sector discipline perceive and value feedback that is timely and provides specific detail about the strengths and weaknesses of the work. In particular, students want feedback that is timely so that they can apply it to their work with the intent of improvement. Nicol and Macfarlane-Dick (2006) support the value of this type of feedback by claiming that effective feedback via formative assessment assists greatly in encouraging self-regulated learning. Significantly, this study highlighted that the broader disciplinary context in which students are studying influences how students value and perceive the importance of feedback on their work. Comparative studies have not yet been conducted to further test this idea but, nevertheless, the data gathered through this study strongly support this notion. In a creative/arts/design discipline where students are often required to produce a tangible product, and where a culture of critical feedback already exists due to the nature and practice of the discipline itself, students perceive feedback to be valuable and are eager to apply it to improve their learning. Further studies in other creative disciplines may provide validation of this idea. In addition, this study revealed a level of willingness amongst participants to engage in self-assessment and peer-feedback activities. Students, particularly in the cohort that was the subject of this research, are conceivably more open to the use of peer feedback and self-assessment than most current teaching, learning and assessment practices allow for. The study revealed that students are not necessarily unhappy with the amount of feedback they receive on their work but want a focus on quality for developmental learning purposes. This interest in feedback quality and detail aligns with the findings of another recent RMIT feedback study conducted by McCallum, Bondy and Jollands (2008). Furthermore, despite the popular opinion that students do not value or use feedback to improve their work, the results of this study revealed that respondents do use feedback to improve their learning. This finding supports that of another study conducted by Mahoney and Poulos (2004) at the University of Sydney, which found through focus group discussions with students that “feedback directed [student] learning and allowed them to make necessary changes.” (p. 4). Somewhat surprisingly, this study revealed that students do not value feedback provided by online forms as highly as might be expected. This is particularly intriguing given that Generation Y is known for embracing online technology. The key is not to abandon online forms of feedback because of this perception but to develop students’ understanding that this form of feedback is genuine, can often provide valuable detail, and can contribute to improving their learning just as other forms of feedback are able to. Developing a positive perception of online feedback is one component of establishing a comprehensive, multi-faceted feedback strategy. Such a strategy involves cultivating and implementing a range of feedback methods, not just the repeated use of one method, such as teacher-to-student written feedback.
The findings of this study have a wider application outside the one discipline that is its focus because they can assist teachers, lecturers, and tutors in understanding student perceptions, values and needs in relation to feedback. Further studies, including interdisciplinary research, could explore some of the issues which surfaced during this study, including the perception of feedback on student work in creative disciplines and the reasons why students perceive online feedback as being of less value than other kinds of feedback. The conclusions of this study further validate the findings of other recent feedback studies that endeavour to understand students’ perceptions of feedback (McCallum et al., 2008; Rowe & Wood, 2008; Rowe, Wood, & Petocz, 2008). Moreover, our research extends the findings of these studies by exploring students’ perceptions, preferences and values about feedback in the context of one dual-sector discipline area. These findings are offered as a contribution to the development of a deeper understanding of feedback, particularly from a student perspective, with implications for both policy and practice in tertiary education.
Acknowledgment
We would like to acknowledge the students and staff of the School of Fashion and Textiles, RMIT, for allowing us to explore student perceptions of feedback as a means of deepening our understanding of this area in the interest of continuing to improve practice.
References
Atwood, R. (2009, February 5). ‘HE in FE’ holds its own in National Student Survey. The Times Higher Education. Retrieved April 17, 2009, from http://www.timeshighereducation.co.uk/story.asp?storyCode=405247&sectioncode=26.
Becher, T. (1996, first published 1989). Academic tribes and territories: Intellectual enquiry and the cultures of disciplines. Buckingham: SRHE & Open University Press.
Belski, I. (2007). Using Task Evaluation and Reflection Instrument for Student Self-Assessment (TERISSA) to improve educational assessment and feedback. In Proceedings AaeE Conference. Melbourne, Australia.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5 (1), 7-75.
Boston, C. (2002). The concept of formative assessment. Practical Assessment, Research & Evaluation, 8 (9). Retrieved June 11, 2009, from http://PAREonline.net/getvn.asp?v=8&n=9.
Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24 (4).
BBC News (2007, September 12). Students bemoan lack of feedback. BBC News. Retrieved April 17, 2009, from http://news.bbc.co.uk/2/hi/uk_news/education/6990022.stm.
Chen, L. (2007). UCTL Report on the 2007 AUSSE. Survey & Testing Unit, UCTL.
Fritz, C., Morris, P., & Bjork, R. (2000). When further learning fails: Stability and change following repeated presentation of text. British Journal of Psychology, 91, 493-511.
Hattie, J.A. (1987). Identifying the salient facets of a model of student learning: A synthesis of meta-analyses. International Journal of Educational Research, 11, 187-212.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81-112.
Higher Education Funding Council for England (HEFCE) (2007, September 12). Higher education survey reveals continued student satisfaction. Retrieved April 17, 2009, from http://www.hefce.ac.uk/news/hefce/2007/nss.htm.
Huntley-Moore, S., & Panter, J. (2003). Does discipline matter? Issues in the design and implementation of management development programmes for heads of academic departments. In Proceedings HERDSA Conference. Christchurch, NZ.
Mahoney, M.J., & Poulos, A. (2004). Strengthening the nexus between teaching and learning through increased attention to feedback to students: a research-led teaching approach. In Proceedings Australian Association for Research in Education Conference. Melbourne, Australia.
McCallum, N., Bondy, J., & Jollands, M. (2008). Hearing each other – how can we give feedback that students really value. In Proceedings AaeE Conference. Yeppoon, QLD.
Merry, S., & Orsmond, P. (2008). Students’ attitudes to and usage of academic feedback provided via audio files. Bioscience Education, 11. Retrieved June 20, 2009, from http://www.bioscience.heacademy.ac.uk/journal/vol11/beej-11-3.aspx.
Nicol, D.J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199-218.
Potter, A., & Lynch, K. (2008). Quality feedback on assessment: apple for the teacher? How first year student perceptions of assessment feedback affect their engagement with study. In Proceedings 11th Pacific Rim First Year in Higher Education Conference. Hobart, Tasmania.
Price, M., & O’Donovan, B. (2007). Making meaning out of assessment feedback – getting more than the message. In Proceedings HERDSA Conference. Adelaide, Australia.
Rowe, A.D., & Wood, L.N. (2008). Student perceptions and preferences for feedback. Asian Social Science, 4 (3), 78-88.
Rowe, A.D., Wood, L.N., & Petocz, P. (2008). Engaging students: Student preferences for feedback. In Proceedings HERDSA Conference. Rotorua, New Zealand.
Rowntree, D. (1987). Assessing students: How shall we know them? London: Harper Row.
Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18 (2), 119-144.
Sadler, D.R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment and Evaluation in Higher Education, 34 (2), 159-179.
Stake, R.E. (1995). The art of case study research. Thousand Oaks: Sage.
Van Der Stuyf, R. (2002). Scaffolding as a teaching strategy. Retrieved July 27, 2009, from http://condor.admin.ccny.cuny.edu/~group4.
Vygotsky, L.S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
Winter, C., & Dye, V.L. (2003/2004). An investigation into the reasons why students do not collect marked assignments and the accompanying feedback. Centre for Learning and Teaching, University of Wolverhampton.
Yin, R.K. (2002). Case study research: Design and methods (3rd ed.). Applied Social Research Method Series (5). California: Sage Publications.
Authentic voices: collaborating with students in refining assessment practices
Sue Burkill Education Enhancement, University of Exeter,
[email protected]
Liz Dunne Education Enhancement, University of Exeter,
[email protected]
Tom Filer Institute of Arab and Islamic Studies, University of Exeter,
[email protected]
Roos Zandstra Education Enhancement, University of Exeter,
[email protected]
The research outlined in this paper draws on the concept of ‘the cultivated community of practice’ (O’Donovan et al., 2006) to develop some principles of student engagement with respect to authentic assessment. The term authentic assessment is commonly used to refer to modes of assessment, but this paper takes a different approach to authenticity, describing how, when students take the lead in researching and refining assessment processes and practices, they become ‘authentic voices’. We present a case study of some of these principles as enacted by students involved in The University of Exeter’s Students as Agents for Change Project and show that they can become leaders in the design of effective assessment environments. An explicit critique is made of ‘tokenism’ which can occur when students are asked to participate in assessment reform and we argue for ‘deeper empowerment’. We acknowledge that this can create tension between staff and students and discuss whether the student voice can become compromised in these contexts. In a final ‘twist’ to the argument, we describe how involving students in the project can present ideal opportunities for the participants to undertake authentic assignments. One of the authors of this paper is an undergraduate and another a recent postgraduate student; both have been integrally involved in leading the research for this paper supported by two academic developers. Keywords: authentic voices; authentic assessment; empowerment; engaged collaboration.
Introduction
“There is growing recognition that students have a major role to play in the enhancement of teaching and assessment. Universities and colleges are increasingly positioning students as engaged collaborators rather than inferior partners in assessment, teaching, course planning and the improvement of quality, and are using student representatives as central contributors to the business of enhancing the student experience”. (Ramsden, 2008, p. 5) Ramsden’s point is reinforced in a recent influential Parliamentary Report which suggests: “Students are in an excellent position to judge the quality of teaching and to identify the…action required…”. (House of Commons, 2009, p. 84)
The purpose of this paper is to describe, evaluate and exemplify a research methodology developed at the University of Exeter which involves student representatives acting as leaders and engaged collaborators (McCulloch, 2009) in shaping student experiences. It focuses in particular on students’ views about the quality of assessment and feedback – processes which students consistently tell us are the most disappointing aspects of their University experience (only 65% of students were happy with assessment and feedback experiences in the National Student Survey, 2009). The research reported here was designed to track how one student representative working with a postgraduate researcher was involved in research and making judgements which shaped changes in assessment practices. We are also able to demonstrate that this process, in itself, presents future opportunities for authentic assessment.
Authentic assessment: the received view
The concept of authentic assessment is well established in the education literature (Guba & Lincoln, 1989; Herrington & Herrington, 1998; Stiggins, 1987; Wiggins, 1993). The term has come to mean a range of different things over the years (Cumming & Maxwell, 1999) but is typically defined as the selection of particular modes of assessment which ‘authentically allow a student to demonstrate (the) ability to perform tasks, solve problems or express knowledge in ways which simulate situations which are found in real life’ (Hymes, Chafin, & Gondor, 1991). The principles of authentic assessment have been taken up by two major assessment projects in the UK: the Assessment for Learning (AfL) and the Assessment Standards Knowledge Exchange (ASKe) Centres for Excellence in Teaching and Learning (CETLs); as a result the positive benefits have become more widely disseminated. Underlying the increased popularity of authentic assessment is the considerable body of research which argues that in order to learn effectively students have to construct meaning from what they are doing (Biggs & Tang, 2007); authentic tasks serve as vehicles for such learning. Additional benefits are associated with explicit preparation for employment and the relevance of the tasks which enhance student motivation (Herrington & Herrington, 1998). Most significantly for this paper, authentic assessment increases student involvement in the design of their own learning: “Authentic assessments allow more student choice… in determining what is presented as evidence of proficiency. Even when students cannot choose their own topics or formats, there are usually multiple acceptable routes towards constructing a product or performance” (Mueller, 1999). One of the six core conditions of the AfL project is to ‘develop students’ abilities to direct their own learning, evaluate their own progress and attainments and support the learning of others’. Students may also be involved in the process of constructing an understanding of assessment criteria and developing a genuine and shared understanding of standards as exemplified in the ‘three step process’ designed by the ASKe CETL.
Authentic voices: involving students in shaping and leading on assessment and feedback processes
Our research has led us to believe that there is a deeper way of conceptualising authenticity. Students provide a particular viewpoint or ‘gaze’ which is complementary to the staff viewpoint and is valuable in improving assessment processes. We have described this as the ‘authentic voice’. This thinking draws on work by Healey et al. (in press) who describe three interconnected levels of engagement through which students have a voice in learning experiences:
- Engagement in their own learning experiences
- Engagement in quality assurance and enhancement
- Engagement in strategy development
As we have illustrated above, in relation to assessment, the first level is well developed in the literature and in practice. Student engagement in assessment has also been taken to the second level, for example by the ASKe project (Price, O’Donovan, Rust, & Carroll, 2008) which has developed a ‘manifesto for change’ in which students are seen as members of a ‘cultivated community of practice’ (O’Donovan et al., 2006) actively collaborating in a dialogue about the quality of assessment environments. One of the tenets of the manifesto states:
“Assessment standards are socially constructed so there must be a greater emphasis on assessment and feedback processes that actively engage both staff and students in dialogue about standards” (Price et al., 2008).
Increasingly, students are engaged in quality assurance and enhancement in the UK through bodies such as the Higher Education Academy (HEA), the Quality Assurance Agency (QAA), the National Union of Students (NUS) and Student Participation in Quality Scotland (SPARQS). For the HEA, Ramsden (2008) argues that: “Student involvement in quality processes should start from the idea of building learning communities. Practically speaking, this involves shaping student expectations of their role as responsible partners who are able to take ownership of quality enhancement with staff and engage with them in dialogue about improving assessment, curriculum and teaching…” (p. 16). However, this type of engagement does not necessarily result in students taking a fully developed role as partners in assessment design: “While institutions’ rationales for student engagement processes stem from a central concern to enhance the student experience, for many … institutions a ‘listening and being responsive’ rationale seemed to take precedence over a rationale that emphasised student engagement as being central to creating a cohesive learning community (and hence staff and students being viewed as partners in enhancing learning experiences)” (Little, Locke, Scesa, & Williams, 2009).
In summary, there is plenty of evidence that, up to a point, students are engaged and are being empowered to take more responsibility for assessment. However, we would argue that most of these initiatives aim primarily to ensure that individual students can understand and achieve (better) in their own assignments. It is the third level of student engagement, in strategic processes, which is less well developed: “Research …suggests that when learners are engaged in shaping and leading their own learning and education this can result in benefits for all learners, educators, the institution and the education system as a whole” (Walker & Logan, 2008, p. 2; our italics). Without this deeper engagement students may continue to be ‘estranged’ from the environment in which they work (Mann, 2001). It is in these contexts that we believe students who become involved can genuinely be described as ‘authentic voices’. Healey, Mason O’Connor, & Broadfoot (in press) report on a situation where students work alongside academics in creating a university policy for learning, teaching and assessment. However, there is very little other evidence that students are involved in research into, and the strategic design of, assessment processes and practices and few published examples of how this can be implemented (the ASKe CETL’s ‘advice for students’ is an exception).
A move to deeper empowerment: the students as agents for change project
Over the last year student representatives at The University of Exeter have been involved in influencing strategic change through a ‘Students as Agents for Change’ project in which they have acted as unpaid and independent researchers addressing learning and teaching issues. At The University of Exeter we engage students in a variety of ways in educational change. A diagram outlining the principles for student engagement (Figure 1) illustrates how we have conceptualised four approaches, ranging from the more typical practices of engaging students as evaluators, partners/co-producers and decision makers to the more radical change agent role (the bottom right sector of the diagram). In the first three roles described there is a danger that empowerment could be seen as tokenistic (Burkill, 1997), providing a forum for listening but not really hearing (Lines, 2009) or by apparently responding to, but not really implementing, students’ ideas. Students who have these experiences may initially engage enthusiastically but ultimately wonder whether it has been worth the effort. They may ask whether they are making a real difference. The deeper empowerment (Burkill, 1997) associated with the change agent role involves students in taking ownership of the research process which, as we will show through a case study, potentially leads to practical and strategic changes in assessment processes and practices. McCulloch (2009)
emphasises the importance of finding ways of reducing the distance between students and the ‘institutional education process’ and of addressing passivity amongst students by the use of active approaches such as those engaged in during this project.
The framework (“integrating students into educational change”) maps four approaches against two dimensions: emphasis on the student voice versus emphasis on student action, and emphasis on the university as driver versus emphasis on the student as driver.
Students as evaluators of their experience: students offer feedback, views and opinions and are listened to on an institutional basis, in order to build an evidence-base for enhancement and change. Decisions for action tend to be taken at subject and/or institutional level. Examples include using monitoring devices such as questionnaire surveys or focus groups (external: NSS; internal: cross-institutional/subject or service-based), including formal procedures for complaints, and making use of informal evaluative feedback on a smaller scale.
Students as participants in decision-making processes: students engage in institutional decision-making, in order to influence enhancement and change. Decisions for action tend to be taken collaboratively with staff and students. Examples include voting through Guild representation systems (via SSLCs and other forms of representation: cross-institutional/School/Guild practice); listening to ‘faint’ voices and minority students; showing commitment to change through student/staff dialogue and offering solutions; involvement by students in committee structures through the whole institutional system, from representation on Council to cross-institutional working groups to School Learning and Teaching committees; and supporting the writing of codes of practice (behaviour code in the Business School).
Students as partners, co-creators and experts: students are collaborative partners in curriculum provision and professional development, in order to enhance staff and student learning. Decisions for action tend to be taken at subject and/or institutional level. Examples include reversing roles, with students training staff in new skills, such as with new technologies, wherein students have the greater expertise (WebCT to Moodle; JISC Integrate project), and designing curricula and resources: negotiating/producing examination questions/question banks; setting assignments (PCMD, History); redesigning module provision and delivery (Engineering); producing induction material (Business School).
Students as agents for change: students are collaborative partners in pedagogic knowledge acquisition and professional development, with the purpose of bringing about change. Decisions for action tend to be promoted by students and engaged with at subject and/or institutional level. Examples include students setting their own agendas for research on learning and teaching, in collaboration with SSLCs and other such fora; students engaging with research processes (data collection, collation, analysis, formal presentation) with support from experts; and students implementing their solutions, supported as appropriate by individual staff, subject areas or the institution.
Figure 1. Principles of student engagement: a framework for student engagement in educational design
Methodology: engagement by design
Engagement by Design (Figure 2) is a research process adapted from an approach used in the UK health sector (PenCLAHRC, 2008) which emphasises the importance of engaging patients in generating research questions. This was adapted to involve students in defining educational research agendas (1). A rapid review process (2) then allowed students to consult pedagogic experts (academics and academic developers) to find out whether or not research already existed which could help answer their questions. Students were subsequently engaged in doing the research (3), suggesting implementation strategies (4) and disseminating results (5). There are two reasons why this approach is thought to be particularly valuable. First, the key difference between this methodology and most others used to investigate learning experiences is that the students themselves are the drivers behind the research questions chosen. They are not simply responding or even collaborating (Ramsden, 2008), they are essentially leading the research process. Secondly, evidence from medical research (Peninsula CLAHRC, 2008) suggests that when patients are put in the position where they
lead on generating research questions, unanticipated issues emerge and the outcomes can often be unexpected for the researchers they are working with; this has led us to further refine our definition of ‘authentic student voices’ to emphasise the significance of unexpected outcomes which can result when students take the lead on pedagogic research design. One student representative from each of ten subject areas selected an aspect of teaching and learning that was of concern to their consultative bodies (staff-student liaison committees or SSLCs), developed research questions, and planned their own methods of data collection. In addition to assessment and feedback (which is the focus of this paper), topics included seminar practices, employability, use of learning spaces, and engagement in lectures. A student-led conference was used to share findings with staff and students from across the University. An outcome of this process has been the development of jointly held goals and objectives which are to be taken forward in departmental and school education action plans in the next academic year.
[Figure 2 is a flow diagram: students generate and structure the question (1); a rapid review group of academic staff and pedagogic experts establishes whether existing research already answers it (2); where the answer is not clear, primary research and synthesis are undertaken (3); an implementation group develops an implementation strategy (4); and the results are disseminated and the implementation evaluated (5).]
Figure 2. Engagement by design (based on a concept used by Peninsula CLAHRC, 2008)
Case study: an investigation into assessment and feedback practices in one department
The University of Exeter was in the top ten institutions in the UK National Student Survey (NSS) in 2008/2009. The department concerned, a small language and area studies centre, did well in the NSS, scoring 84% positive overall feedback. However, like other departments in universities across the UK, the assessment and feedback scores are significantly lower than scores in other areas of the survey. In addition, over the last few years assessment and feedback issues have consistently been raised by students through the departmental SSLC. Aspects of these lower assessment and feedback scores are being addressed through a Task & Finish Group to consider how to improve assessment processes. However, the SSLC and the student representative who took part in the change agent project felt there was scope for additional research, which led them to choose assessment and feedback questions for their ‘engagement by design’ project (Zandstra & Filer, 2009). Data was collected in three different ways: an online questionnaire to all students resulted in 47 responses from across all year groups, two focus groups (each containing five students) were filmed/recorded and three staff took part in one-hour interviews. Focus groups aimed to establish the main issues students had with assessment and feedback in order to focus the research; interviews were held with departmental staff and educational developers and were used to explore and validate suggested interventions. The Department concerned is structurally located in the School of Humanities and Social Sciences and the School’s Head of Education was interviewed to find out whether the issues students were raising in this project were already being addressed in a new School assessment and feedback strategy. The University’s standard research
ethics procedures were adopted. The data from these surveys will be published elsewhere and therefore we only draw on outcomes which contribute to our argument for this paper.
Students surveyed in this study indicated that they were pleased with many aspects of assessment and feedback, including the amount, level and modes of assessment. However, there were areas where students felt that improvements could be made. The vast majority thought that first year students should be given earlier assignments and that these should be returned before the next was submitted. Language students currently get their work thoroughly corrected but thought that more comments indicating how they could improve would be beneficial. Some students were concerned about the consistency of marking and found that legibility of feedback was sometimes an issue. Students in the survey and in focus groups thought peer assessment would be a useful tool which could be used formatively. In the interview with the Head of Education it became clear that the School had already become aware of many of these issues through a range of feedback mechanisms. For example, it was already planning to introduce an early assignment for first years which would be returned within the first six weeks. The consistency of marking procedures is already being addressed and pilot projects to introduce peer and self assessment to support students in their understanding of marking criteria were in preparation.
However, this research approach (as hoped) did surface some unanticipated outcomes. These were priorities for students which staff had not identified from internal or external (NSS) quality evaluation processes. First, there were issues around formative assessment. Students were sometimes unclear about whether assessment was formative or summative; a recurrent theme was the lack of clarity in student handbooks about this. Many also felt there was insufficient formative assessment, which they thought would be especially beneficial in the first year. Summative examinations held earlier in the year were also requested. These were counterintuitive outcomes for staff who, as is the case with many language programmes, felt that students were already potentially overburdened with formative assessment and unprepared for examinations early in the year. Secondly, personal tutors were identified as needing to play a greater role in interpreting feedback to allow students to get a better overview of how they are progressing in all modules. Students wanted personal tutors to support them more effectively in their ongoing academic development. Whilst the School has been planning to introduce feedback surgeries for individual assignments, it became apparent that students were more interested in engaging with their own tutor in a more holistic way about the range of feedback they have been given across several assignments. Finally, this study found that these students are ambivalent about an extended role for technology in assessment; focus group participants saw it as supplementary to traditional modes of assessment – and they were not particularly interested in new and innovative modes of assessment. This outcome presents challenges in a School where large numbers have led staff to develop plans to adopt more efficient (and often technology-driven) modes of assessment. The student researcher on this project highlighted the recommendations for the departmental assessment and feedback practices in his presentation at the student-led conference (June 2009).
He recommended that:
- First year students have their initial assignment and formative feedback in the first six weeks of their studies on each module. This should be similar to the first summative assignment they would receive. For example, in the case of language modules a short version of the first exam would be useful; for non-language students feedback on an essay-style assessment would be most beneficial.
- All subjects should produce more transparent information in module handbooks to ensure that students are clear about how they will be assessed and what assessment counts towards their degrees.
- January exams are introduced, especially for language modules, so that students are able to receive feedback on their exam performance early enough to make a difference to their final module scores.
- It should be made easy (using a technological solution which is currently under development at The University of Exeter) for personal tutors to have a full overview of the feedback students receive; it is hoped that personal tutors will be allocated time to ensure that this information is used effectively in meetings with students.
It might be suggested that these outcomes are relatively minor. However, the point that we are making is that these are the outcomes students were hoping for and which they did not feel had been addressed in previous evaluative engagements with staff. The efficacy of the engagement by design research methodology is exemplified well by these recommendations. It is anticipated that the next stage in the strategic process will be enacted in the coming Academic Year. The outcomes of this student-led research need to be implemented by students and staff collaborating to take forward these recommendations, not necessarily in their entirety but at least through an agreed consensus. We anticipate that success will be dependent on trust between teachers and students (Furedi, 2009).
In a recent discussion between two of the authors of this paper (Tom Filer & Sue Burkill, August 2009), it became clear that Tom felt the response of students to the research project had been positive; they felt empowered by the process, and those who took part (he explained that some were apathetic and did not) were strategic students who felt they might gain from being involved. They did not feel threatened by the openness of the discussions (repercussions were not an issue), although this could have been a problem in a different context. Staff had also been positive, helpful and engaged towards the research and Tom felt that the recommendations were likely to be implemented. The extent to which this will happen depends on whether staff feel they are working in a performative environment (where students’ needs are seen as paramount) or a professional environment (where there is shared responsibility in a ‘cultivated community of practice’) (Lines, 2009). There is also an issue about the extent to which staff in general recognise the authenticity of the ‘student voice’ in the design of assessment practices and the extent to which there can be a balance between staff voices and student voices in an area, such as assessment, where academic staff have little experience of sharing leadership of strategic change with students. This is one of the keys to the future success of the Students as Agents for Change project. In the widest sense we need to help staff to accept that their professional role involves not just collaborative engagement with students but also acceptance that students can lead on assessment design.
The staff-student relationship based on trust can be very fragile. We should not be overconfident about the outcomes of the project; in his role as Chair of the SSLC, Tom is currently engaged in a wider discussion about the (perceived) inadequacy of some departmental quality processes. This is causing tension which could undermine the emerging community of practice (O’Donovan et al., 2006) that has developed in the last year. It is clear that the fostering of collaborative practices is a long-term process and that a one-year project does not necessarily build sustainable cultures for joint strategic planning. To gain an insight into how this type of research can change the way students interact with institutions, we discussed whether Tom now understood more about the way Universities assess students. He felt that his understanding had increased but, more significantly, he had come to understand how practices are intricately associated with a network of quality processes in the University.
He expressed some frustration about this as it had undermined his early expectation that his involvement would lead to rapid change. As time went on he became more aware of the complexities involved and more tolerant of the slow pace of change. Interestingly, although the independence and autonomy of student representatives had been carefully preserved throughout the year, it had become part of Tom’s role to explain the lack of rapid progress to other students and he found himself in a position where he was almost ‘defending’ the system. He had gained a much deeper understanding of ‘the languages, culture and practices’ (Mann, 2001) of the institution, to such an extent that we discussed whether his was still an ‘authentic voice’ at the end of the process. It is intriguing to reflect on whether involvement in this project had somehow compromised the authenticity of his views. As the project moves into its second year (2009-10) we shall be tracking and researching the possibility that in some way the methodology adopted leads to ‘compromised authenticity’. With reference to the characteristics of the ‘engagement by design’ methodology there is no doubt that the leadership role Tom has adopted was a vital success factor in this project. On a personal note, Tom feels committed to taking this work forward, possibly as a sabbatical education officer, after he finishes his degree. He is particularly interested in investigating the deeper role of personal tutors in supporting students to understand their feedback –a theme that was an unanticipated outcome of his research.
A twist to the argument: authentic voices and authentic assessment
The ten students who were involved in the Students as Agents for Change project have gained a range of knowledge and skills. They have undertaken a piece of focused research in which they identified the research question, designed the research methodology, undertook the primary data collection and presented the outcomes at a major institutional event. Working with others, including University senior managers (often outside their comfort zones), has developed their collaborative working and communication skills. Researching alongside an employed research assistant has developed leadership, management and negotiation skills. Finally, projects were undertaken outside the curriculum and during the busiest part of the academic year – hence, time management and work prioritisation skills were paramount. All the students were given the opportunity to complete the University of Exeter’s Award or Leadership Award¹. In addition, in this case study, the research process had provided personal insights into the operation of assessment processes which are invaluable. Tom had, for example, become aware of the paramount significance of assessment criteria and felt that this knowledge would allow him to be more strategic and successful in his final year modules.
As this project proceeded, the team started to consider whether students could potentially achieve formal credit for the work they had done. We reflected on the characteristics of authentic assessment and felt that we should pursue this idea. In a useful summary provided by Park University (no date) the essential goals of authentic assessment are said to be to:
- Enhance the development of real-world skills
- Encourage higher order cognitive skills (analysis, synthesis, evaluation)
- Promote active construction of creative, novel ideas and responses
- Encourage emphasis on both the process and product of learning
- Promote the integration of a variety of related skills into a holistic project
- Enhance students’ ability to self-assess their own work and performance
Typical modes of authentic assessment are problem-based and inquiry-based learning reports, portfolios, projects, demonstrations of mastery and performances. There is no doubt that the students involved in this project would have met these goals and could have achieved credit for their work using one of these modes of assessment. In discussion, Tom made it clear that he had been happy to undertake the project without any formal credit and that the intrinsic motivation and employability potential (through his CV) of this work had been sufficient drivers for him. However, if there had been a work experience module available this would have been a distinct additional incentive. The outcome of this discussion is that we are planning to offer accreditation for students who take part in similar projects in the future. This raises a question about whether the opportunities for acting as co-researchers can be extended to a larger cohort of students. All students who have given feedback through the project and the wider student population will have benefited indirectly from this research; their experiences should have been improved as a result of the implementation of the recommendations. We are investigating whether a bigger group might be involved in the research itself and have plans for a larger number of projects involving more individuals in 2009-10.
We have experience of involving bigger groups in our Medical School where students can elect to take a module in their third year through which they research medical educational issues and make a presentation to a small peer group. This approach could potentially be made more widely available using a generic work experience module which would be accredited through all programmes. In 2009-10 we shall be discussing this proposal with students and staff.
Conclusions
There have been several important outcomes of this project. We have presented evidence that the approach adopted in the case study has resulted in the identification of issues which, when addressed, will improve the student assessment experience in one department in one University. We have also illustrated that the research methodology used provided a way of capturing the ‘authentic voices’ of students and gave unanticipated
insights which make it worth pursuing more widely. The project has created an ‘emerging community of practice’ where staff and students are engaging in planning strategic developments at the departmental level. Students have gained skills and a deeper understanding of the nature of departmental pedagogies. In the process they have become more politically literate. More broadly, similar outcomes have been repeated in nine other departments. However, there are challenges ahead if the project is to become more widely accepted and more deeply embedded. There is, of course, an element of risk in this approach; institutions and their teaching staff have to be ready to critically evaluate existing processes, admit that there are problems and be prepared to rethink some practices; this is time consuming and may be threatening. Students risk diverting time from academic study as they develop their pedagogic literacy and potentially place themselves in vulnerable positions if they do not get the communication with staff right. Finally, institutions have to decide whether the resources and the incentives are there to ‘scale up’ this approach for more departments in the same university and beyond. On balance the gains from the processes are likely to far outweigh the problems and as Ramsden (2008) suggests there are ‘real prizes’ in the engaged collaborative approach. This project has illustrated that students can take the lead in these collaborations and that the value they bring results from the authenticity of student voices.
Notes
¹ Two University extra-curricular awards in which students record and reflect on their skill development beyond the curriculum.
References
Assessment for Learning (AfL) CETL. What is assessment for learning? Retrieved from http://www.northumbria.ac.uk/cetl_afl/research/toolkit/whatisafl.
Assessment Standards Knowledge Exchange (ASKE) CETL. 1,2,3 leaflets and advice for students leaflets. Retrieved from http://www.brookes.ac.uk/aske/resources.html.
Biggs, J., & Tang, C. (2007). Teaching for quality learning at university (3rd ed.). Oxford, England: Oxford University Press (Society for Research into Higher Education).
Burkill, S. (1997). Student empowerment through groupwork: A case study. Journal of Geography in Higher Education, 21, 89-94.
Cumming, J. J., & Maxwell, G. S. (1999). Contextualising authentic assessment. Assessment in Education: Principles, Policy and Practice, 6, 177-194.
Furedi, F. (2009). Now is the age of the discontented. Retrieved from http://www.timeshighereducation.co.uk/story.asp
Guba, E., & Lincoln, Y. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.
Healey, M., Mason O’Connor, K., & Broadfoot, P. (2009, submitted). Engaging students in the process and product of strategy development for learning, teaching and assessment: An institutional example. International Journal for Academic Development (forthcoming).
Herrington, J., & Herrington, A. (1998). Authentic assessment and multimedia: How university students respond to a model of authentic assessment. Higher Education Research & Development, 17, 305-322.
House of Commons (2009). Innovation, Universities, Science and Skills Committee report on students and universities [HC 170-1]. Retrieved from www.parliament.uk/ius.
Hymes, D., Chafin, A., & Gondor, R. (1991). The changing face of testing and assessment: Problems and solutions. Arlington, VA: American Association of School Administrators.
Lines, A. (2009). From performativity to professionalism: Lecturers’ responses to student feedback. Teaching in Higher Education, 14, 441-454.
Little, B., Locke, W., Scesa, A., & Williams, R. (2009). Report to HEFCE on student engagement. Retrieved from http://www.hefce.ac.uk/pubs/rdreports/2009/rd03_09.
Mann, S. (2001). Alternative perspectives on the student experience: Alienation and engagement. Studies in Higher Education, 26, 7-16.
McCulloch, A. (2009). The student as co-producer: Learning from public administration about the student-university relationship. Studies in Higher Education, 34, 171-183.
Merikel, M. L. (n.d.). Assessing student performance and understanding. Retrieved from http://oregonstate.edu/instruction/ed555/zone5/zone5hom.htm.
Mueller, J. (1999). Retrieved from http://jonathan.mueller.faculty.noctrl.edu/toolbox/whatisit.htm.
National Foundation for Educational Research (2006, November). The voice of young people: An engine for improvement? Scoping the evidence. Northern Office. Halsey, K., Murfield, J., Harland, J., & Lord, P.: Authors.
National Student Survey (NSS) (2005-09). Retrieved from http://www.thestudentsurvey.com.
O’Donovan, B., Rust, C., & Carroll, J. (2006). Staying the distance: The unfolding story of discovery and development through long-term collaborative research into assessment [Electronic version]. Brookes e-Journal of Learning and Teaching, 1(4). Retrieved from http://bejlt.brookes.ac.uk/vol1/volume1issue4/academic/odonovon_etal.pdf.
Park University (n.d.). Incorporating authentic assessment. Retrieved from http://www.park.edu/cetl/quicktips/authassess.html.
Peninsula Collaboration for Leadership in Applied Health Care and Research (PenCLAHRC) (2008). Engagement by design. Retrieved from http://clahrc-peninsula.nihr.ac.uk.
Price, M., O’Donovan, B., Rust, C., & Carroll, J. (2008). Assessment standards: A manifesto for change [Electronic version]. Brookes eJournal of Learning and Teaching, 2. Retrieved from http://bejlt.brookes.ac.uk/article/assessment_standards_a_manifesto_for_change.
Ramsden, P. (2008). The future of higher education teaching and the student experience. Retrieved from http://www.heacademy.ac.uk/resources/detail/ourwork/policy/paulramsden_teaching_and_student_experience.
Rust, C., O’Donovan, B., & Price, M. (2005). A social constructivist assessment process model: How the research literature shows us this could be best practice. Assessment and Evaluation in Higher Education, 30, 231-240.
Stiggins, R. J. (1987). The design and development of performance assessments. Educational Measurement: Issues and Practice, 6, 33-42.
Walker, L., & Logan, A. (2008). Learner engagement: A review of learner voice initiatives across the UK’s education sectors. Bristol: Futurelab.
Wiggins, G. P. (1993). Assessing student performance. San Francisco: Jossey-Bass.
Zandstra, R., & Filer, T. (2009). Case study 6 – Institute of Arabic and Islamic Studies: Report on assessment and feedback. Retrieved from http://as.exeter.ac.uk/support/educationenhancementprojects/change/projects2008-09.
Does the summative assessment of real world learning using criterion-referenced assessment need to be discipline specific?
Kelley Burton
School of Law, Queensland University of Technology
[email protected]
This paper synthesises the existing literature on the contemporary conception of ‘real world’ and compares it with similar notions such as ‘authentic’ and ‘work integrated learning’. While the term ‘real world’ may be partly dependent on the discipline, it does not necessarily follow that the criterion-referenced assessment of ‘real world’ tasks must involve criteria and performance descriptors that are discipline specific. Two examples of summative assessment (a court report and a trial process exercise) from a final year core subject at the Queensland University of Technology, LWB432 Evidence, emphasise real world learning, are authentic and innovative, and better prepare students for the transition into the workplace than more generic forms of assessment such as tutorial participation or oral presentations. The court report requires students to attend a criminal trial in a Queensland court and complete a two-page report comparing what they saw in practice with what they learned in the classroom. The trial process exercise is a 50-minute written, closed book activity conducted in tutorials, in which students plan questions that they would ask their witness in examination-in-chief, plan questions that they would ask their opponent’s witness in cross-examination, plan questions that they would ask in re-examination given what their opponent asked in cross-examination, and prepare written objections to their opponent’s questions. The trial process exercise simulates the real world, whereas the court report involves observing the real world, and both assessment items are important to the role of counsel. The design of the criterion-referenced assessment rubrics for the court report and trial process exercise is justified by the literature. Notably, the criteria and performance descriptors are not necessarily law specific, and this paper highlights the parts that may be easily transferred to other disciplines.
Keywords: real world, authentic, criterion-referenced assessment, work integrated learning, assessment
Introduction
This paper tackles the challenge of conceptualising ‘real world’, ‘authentic’ and ‘work integrated learning’, and of applying such constructs to two contemporary examples of summative assessment (a court report and a trial process exercise) in the discipline of law. It considers whether both assessment tasks exhibit the same degree of authenticity and relevance to the real world, and questions whether criterion-referenced assessment rubrics for real world tasks can be designed in such a way as to enable cross-fertilisation amongst disciplines.
Real world
Assessment is a “powerful influence on how students learn”, and linking assessment to future work motivates student learning (Bryan & Clegg, 2006, p. 44; Gulikers, Bastiaens, & Kirschner, 2006, p. 340) and is “best suited for meeting the educational needs of students with diverse learning styles” (DeCastro-Ambrosetti & Cho, 2005, p. 58). Innovative assessment tasks in the 21st century are commonly labelled as real world, authentic and work integrated learning, but what do these concepts really mean, and do they overlap?
The notion of real world has been used synonymously with “in-the-wild tasks”, which suggests that the assessment is messy, unplanned or unstructured, which is quite common in life (Boud & Falchikov, 2007, pp. 75 & 83). Where assessment is structured and cannot be described as ‘in-the-wild’, it cannot automatically be assumed that the assessment is not promoting real world learning. In fact, it very well could be, and is more appropriately called a ‘tame’ task (Boud & Falchikov, 2007). The notion of real world is situated within the labels of authentic and work integrated learning, which will be explored in turn below.
Authentic
Boud and Falchikov (2007), two leading commentators on assessment in higher education, state that authentic assessment involves “assessment practices that are closely aligned with activities that take place in real work settings, as distinct from the often artificial constructs of university courses” (p. 23). They explain authentic assessment with reference to an apprentice, who needs the opportunity to conduct tasks that are significant to such a craftsperson outside of school. In doing so, they emphasise the importance of what happens in the real world, which may depend partly on the discipline and what a professional in a particular field is expected to exhibit. Gulikers et al. (2006) state that assessment can only be authentic if it resembles what it is supposed to, and this makes it a relative notion. They suggest that authentic assessment should be based on the “students’ current or future professional practice” (p. 340). Correspondingly, Keyser and Howell (2008) state that authenticity reflects the real world and not the “staid classroom environment” (p. 4). The Queensland University of Technology (2008), which markets itself as the university for the real world, defines authentic in its Manual of Policies and Procedures as “simulates as closely as practicable professional or workplace practice” (4.4.2).
Three indicia for authenticity have been identified in the literature: development of knowledge, inquiry and value beyond university (Gulikers et al., 2006). These features are not discipline-specific; they provide guidance on how to judge authentic assessment across all disciplines and highlight the importance of how an assessment task helps a student in the real world. Keyser and Howell (2008) track four generic features of authentic assessment that also apply across disciplines, that is, “1) involve real-world problems that mimic the work of professionals; 2) include open-ended inquiry, thinking skills, and metacognition; 3) engage students in discourse and social learning; and 4) empower students through choice to direct their own learning” (p. 5). The Australian Centre for the Study of Higher Education suggests that students value authentic assessment because it is ‘real’ and reflects what they need to demonstrate in the workplace (James, McInnis, & Devlin, 2002). Similarly, American scholars Frey and Schmitt (2007) describe authentic assessment tasks as ones “that specifically address real-world applications” (p. 406), and Newman, Brandt and Wiggins (1998) state that authentic assessment involves problems that are relevant to the real world and have a sense of realism.
The idea of a sense of realism is a theme that emerges in Gronlund’s work (2003), which refers to the “appropriate degree of realism” (p. 124), and can be mapped onto a continuum. For example, at one end of the spectrum could be real world and authentic assessment tasks, and at the other end could be inauthentic or academic assessment tasks that are not relevant to a profession or work outside university. This continuum overlooks the fact that some people choose to work in academia after their university studies and undervalues the importance of academic assessment tasks. Possibly the notion of academic assessment tasks is a myth, because most (if not all) assessment tasks can be linked to one or more generic skills or discipline-specific skills.
In any event, Frey and Schmitt (2007) offer a non-exhaustive list of criteria for judging the authenticity, and thus the sense of realism, associated with an assessment task, including the “nature of the stimuli, the complexity of the task, conditions, resources, consequences, and whether the specific tasks or activities are determined by the student or the assessor” (p. 410). These criteria are not discipline-specific and may be applied across all subject areas. Applying the criteria to a range of assessment tasks will mean that some items of assessment are more authentic or real world than others. Traditional types of assessment, such as final exams and tutorial participation, have been described as inauthentic, but Biggs (2003) claims that this labelling is inappropriate and that, as a result, performance assessment is a more acceptable term than authentic assessment (p. 156). Frey and Schmitt (2007) suggest that traditional assessments are “not inauthentic, [but] ... simply less direct and, probably, less meaningful to students” (p. 410). Academics can put a fresh face on traditional assessment by, for example, setting exams based on problems in the real world and putting the student in the role of a professional; in the context of law schools this may include solicitors, prosecutors, defence counsel and judges. Boud and Falchikov (2007) provide some useful examples of authentic assessment including “‘real-life’ tasks, exhibitions, interviews, journals, observations, oral presentations, performances, portfolios, patchwork texts and simulations” (p. 184).
Keyser and Howell (2008) note that assessment tasks reflect the real world in varying degrees and that this has contributed to blurred boundaries of authentic assessment. The complexity of an assessment task may, for example, range along a continuum in the following order: solving real world problems using principles, secondary sources and lecture notes; observing what happens in practice and recording this in a journal; writing about what the student would do in practice; actively mirroring what happens in practice in a classroom environment; or engaging in practice in the real world. The importance of authentic assessment having consequences in the real world has also been propounded by Boud and Falchikov.
Similarly to Frey and Schmitt’s (2007) criteria for authenticity, Gulikers et al. (2006) canvass the degree of authenticity using a “five-dimensional framework” comprising “(a) the assessment task(s); (b) the physical context in which the assessment takes place; (c) the social context of the assessment; (d) the result or form that defines the output of the assessment; and (e) the assessment criteria” (p. 341). These dimensions largely coincide with Frey and Schmitt’s criteria for authenticity raised above, but one dimension that is new is the assessment criteria, which should reflect what a professional is expected to do. The criterion-referenced assessment rubrics for two examples of real world assessment tasks will be presented and discussed below. Gulikers et al. (2006) also tackle the notion of authenticity using a continuum approach, with “artificial and decontextualised” at one end and “authentic and situated” at the other (p. 337). They argue that “[b]ridging the gap between learning and working is a salient issue in the 21st century” and recognise the importance of preparing students for dynamic workplaces (p. 338). They hinge authenticity on what is relevant to the workplace or real world, and suggest that this perception may change from person to person, and thus is subjective rather than objective. Further, what is relevant to the real world today may not be important tomorrow, and thus the notion of authenticity needs to be fluid over time to adapt to changing workplace needs. Once again, the authenticity of assessment is directly linked to the real world.
Sometimes the notion of authentic is framed with reference to performance assessment, and one view is that all performance assessment is authentic (DeCastro-Ambrosetti & Cho, 2005). According to Biggs (2003), authentic assessment requires “active demonstration” (p. 156). Similarly, Mueller (2008) states that the “measurement of skills is particularly well suited to authentic assessment because meaningful demonstration of skill acquisition or development requires a performance of some kind” (p. 18).
A contradictory view is that not all performance assessment is authentic (Frey & Schmitt, 2007). If this latter perspective is taken, the assessment is only authentic if it has impact beyond university. Once again, this brings the discussion back to the overlap between the constructions of authentic and real world. Performance assessment has been defined as assessment designed “to measure a skill or ability”, and authentic assessment as assessment designed “to measure ability on tasks which represent real-world problems or tasks” (Frey & Schmitt, 2007, p. 417). Notably, the construction of authentic is directly underpinned by the real world, and performance assessment measures skills which, irrespective of whether they are generic or discipline-specific, are most definitely integral to the real world. As a result, this paper will treat authentic and performance assessment as synonymous.
Bryan and Clegg (2006) suggest that authentic assessment occurs when the assessment is aligned with the learning outcomes. This alignment is commonly known as “constructive alignment” or “intrinsic validity” (Bloxham & Boyd, 2007, pp. 27 & 34). Bryan and Clegg’s (2006) definition assumes that the learning outcomes reflect the real world. In order for assessment to be truly authentic, the assessment should be aligned with learning outcomes, which must reflect the needs of the professional in the real world (Gulikers et al., 2006). Consequently, when designing authentic assessment, the task should be linked to skills (both generic and discipline-specific) and graduate capabilities that are expected to be demonstrated in the real world.
Work Integrated Learning (WIL)
Murphy and Calway (2008) define work integrated learning (WIL) as including “hands-on work experience and instructional learning in a real-world setting that assumes a level of explicit knowledge/skill on the part of the learner and the exchange of tacit knowledge/skill from the real-world to the learner” (p. 433). They argue that WIL improves “professionals’ engagement and motivation, knowledge and understanding, performance and action, reflection and critique, judgment and design, commitment and identity” (p. 439). It follows that tasks assessing WIL embrace real world, authentic and situated learning, rather than inauthentic, academic, artificial and decontextualised learning. Once again, it is evident that the real world has an obvious role to play in authentic assessment and WIL.
The Queensland University of Technology (2008), which is advertised as the university for the real world, encapsulates WIL in its Manual of Policies and Procedures as “exposing students to the complexity and context of professional practice and can occur:
- On campus through structured authentic activities and assessment derived from specific learning objectives in units;
- In simulated workplace setting on campus;
- As work experience in the industry/professional workplace; or
- As a community-based learning activity which will normally involve some work off campus” (4.4.3).
The Queensland University of Technology’s (2008) definition of WIL is much broader than Murphy and Calway’s (2008) construction and permits WIL to occur on campus through merely authentic tasks or simulated workplaces. In fact, the degree of authenticity attached to these four bullet-pointed examples may be mapped onto a continuum in this order (from inauthentic to authentic), reflecting their relevance to the real world. Only the last two examples fall within Murphy and Calway’s (2008) conception, which will be adopted here. Overall, real world learning is situated within both authentic assessment and WIL. The criterion-referenced assessment of two real world assessment tasks in the discipline of law will now be discussed.
Criterion-referenced assessment of real world assessment tasks
LWB432 Evidence is a final year core unit of the law degree at the Queensland University of Technology (QUT). It interrelates some of the principles and legal authorities from criminal law and civil law, but is not strictly speaking a capstone subject because it has its own body of law (Boud & Falchikov, 2007). Where a unit has discipline-specific learning outcomes, skills and graduate capabilities, academics can better justify the need for different and innovative authentic assessment tasks than would be offered by traditional assessment practices (Bryan & Clegg, 2006). Some of the skills and graduate capabilities developed in LWB432 Evidence are reasonably generic and would be relevant to a range of disciplines, for example, problem solving; critical thinking; information technology literacy; effective communication; life-long learning; working independently and collaboratively; and professional, social and ethical responsibility. However, the learning outcomes for LWB432 Evidence place great emphasis on discipline knowledge such as evidentiary principles and the trial process, and the subject area of evidence lends itself to innovative and authentic assessment tasks, for example, a court report and trial process exercise, which have been integrated into this unit at QUT. As a result, academics should steer away from using solely generic forms of assessment such as exams, tutorial participation and oral presentations.
The court report and trial process exercise are important to the role of counsel. The court report requires the students to observe what happens in the real world and prepare a report on that experience, whereas the trial process exercise requires the students to simulate what happens in the real world, albeit in a written format in a classroom rather than in an oral format in a court room. As discussed above, according to Gulikers et al. (2006), the physical location of the assessment task is an important dimension of authentic assessment.
From this perspective, the court report is a better example of authentic assessment because it is physically undertaken in the court room (real world), whereas the trial process exercise occurs in the classroom. However, location is not the only factor taken into account when determining authenticity, and the nature of the task, particularly mimicking a professional in the real world, is prioritised by the literature (Frey & Schmitt, 2007; Gulikers et al., 2006). Thus, arguably the trial process exercise, which requires students to actively demonstrate or mimic what counsel does in practice, is more authentic than the court report, which simply requires the students to observe rather than mimic a professional (Biggs, 2003; Boud & Falchikov, 2007; Frey & Schmitt, 2007; Gulikers et al., 2006; James et al., 2002; Keyser & Howell, 2008). These examples of assessment do not involve hands-on work experience and thus do not fall within Murphy and Calway’s (2008) conception of WIL discussed above.
The purpose of the court report is to encourage students to become familiar with attending court, and to make connections between what they learn in the classroom and what actually happens in the real world (court room). Students are advised in advance of court etiquette, that is, how to act appropriately as a member of the public watching a court case. While the students are in the court room they are required to complete, in handwriting or typed, a one-page (double-sided) court report template, which directs their attention to the role of the judge and counsel, competence and compellability of witnesses, special measures put in place to make the process less stressful for certain classes of witness, order of proceedings and different types of evidence. This assessment task is intended to develop the following skills and graduate capabilities: critical thinking, written communication, discipline knowledge, working independently and professional responsibility.
The purpose of the trial process exercise is to enhance the students’ understanding of the trial process and better prepare them for the real world. It involves a 50-minute written activity completed in the classroom where students plan questions that are appropriate for their witness in examination-in-chief, their opponent’s (fellow student’s) witness in cross-examination, and their own witness in re-examination based on what was argued in cross-examination. It also assesses whether the students can identify the proper grounds for making objections. This assessment task develops written communication, discipline knowledge, working independently and professional responsibility.
After receiving instruction through lectures and prescribed readings on the trial process, the students engage in formative assessment, for example, face-to-face tutorials in weeks 5 and 6 that enable students to observe the trial process, learn how to prepare questions for the different stages in the trial process, and object to an opponent’s questions. The tutorials are supported by self-directed learning exercises, which are released online at the end of weeks 5 and 6, and these reinforce what students learn about the trial process exercise in the face-to-face tutorials. The tutorials are further followed up with an online mock trial program, which is completed in week 7 by students on their own computer or a computer on campus. This program shows students video clips of a mock trial; quizzes students on their knowledge and skills; offers links to the relevant legislation and case law; and provides students with detailed feedback on all correct and incorrect answers.
The number of times the students access the online mock trial program is not tracked and their attempts are not supervised by the teaching staff. Thus, the formative assessment consists of face-to-face tutorial questions, self-directed learning exercises and an online mock trial program. The formative assessment of the trial process is followed by summative assessment. As noted by Dunn, Morgan, O’Reilly and Parry (2004), summative assessment “comes at the end of a systematic and incremental series of learning activities that have formative assessment tasks” (p. 19). The trial process exercise is integrated in week 8 and the court report is embedded in week 11; both are summative assessment items, each worth 20% of the unit.
The court report and trial process exercise are summatively assessed by criterion-referenced assessment. Frey and Schmitt (2007) note that the use of criterion-referenced rather than norm-referenced assessment does not necessarily mean that the assessment is performance-based, but this argument takes a very narrow view of performance because generally assessment requires students to do some sort of task. Certainly the court report, and especially the trial process exercise, may be described as performance-based, and as stated above they are both authentic assessment tasks. Criterion-referenced assessment may be equally applied to inauthentic or authentic assessment.
The “[a]uthentic assessment of skills does not require a rubric, but the use of rubrics can increase the consistency of application of the criteria” (Mueller, 2008, p. 19). As justified by Burton (2006), criterion-referenced assessment, which judges an assessment task against prescribed criteria, increases the validity, reliability and transparency of assessment, compared to norm-referenced assessment, which compares an attempt at an assessment task against other attempts by the same cohort and distributes the marks on a pre-determined bell curve (Burton, 2006; Dunn et al., 2004). The themes of validity, reliability and transparency informed the content of the rubrics developed in this paper.
The court report criterion-referenced assessment grid is provided in Rubric 1. It does not simply list criteria that are marked holistically, merely indicate how the marks will be allocated to the criteria, or simply offer a continuum or Likert scale for grading the criteria; rather, it prescribes performance descriptors for each criterion (Dunn et al., 2004). This detailed framework has been chosen because it results in many benefits for student learning, including explicitly advising students what is required in advance; facilitating worthwhile feedback; ensuring greater consistency in marking by a team of markers; demonstrating the alignment between the criteria for the assessment task and the learning outcomes, skills and graduate capabilities; and requiring students to focus on the learning outcomes, skills and graduate capabilities (Burton, 2006; Burton & Cuffe, 2005). Consistent with an argument made by Burton and Cuffe (2005), and Macdonald (2003), criteria should stem from learning outcomes, including skills.
Marks are not specifically shown on the rubrics below, but generally the performance standards are attributed the following percentages in the Law School at the Queensland University of Technology: excellent = 85-100%; good to very good = 65-84%; satisfactory = 50-64% and poor =