Design of a Computer-assisted Assessment System for Classroom Formative Assessment

Peter Reimann
Organizational Development & Educational Management
MTO Psychologische Forschung und Beratung GmbH
Tübingen, Germany
[email protected]

Susan Bull
Electronic, Electrical and Computer Engineering
University of Birmingham
Birmingham, UK
[email protected]

Wolfgang Halb
DIGITAL
JOANNEUM RESEARCH Forschungsgesellschaft mbH
Graz, Austria
[email protected]

Matthew Johnson
Electronic, Electrical and Computer Engineering
University of Birmingham
Birmingham, UK
[email protected]

Abstract— We describe a number of high-level design decisions that we found essential for a computer-assisted assessment system to be deployed in school classrooms to support formative assessment by teachers and self-assessment by students. In addition, the system needs to provide information to parents. Our design decisions comprise the use of the Open Learner Model approach to make diagnostic information available to the various stakeholders, the use of a modelling methodology to describe assessment methods declaratively (glass-box), and the decision to embed assessment in a flexible manner into current and emerging learning environments. Implications for the system architecture are also described.

Keywords—computer-assisted assessment; open learner models; formative classroom assessment

I. INTRODUCTION

The notion of a computer-assisted assessment system (CaAS), and the introduction of such approaches into educational systems such as schools, requires rethinking some of the core assumptions behind assessment systems. Since a CaAS assists somebody in assessment, the focus needs to be on (i) the assessment goals that actor has, (ii) the audience or stakeholders of that assessment, and (iii) the context within which that actor performs the assessment. We see the main difference between a CaAS and a CbAS (computer-based assessment system, including e-assessment systems) in the fact that, while the use context of a CbAS is known at design time, that of a CaAS is not, which requires a design approach that takes into account the variability of the use situation. In particular, the following differences between CaAS and CbAS need to be taken into account:

1. Purpose of the assessment: While the purposes of formative and summative assessment are often pitched against each other, a CaAS can be used for both; this is less a feature of the system than of the purpose to which the diagnostic information is put.

2. The purpose is to a large extent influenced by the audience, or the stakeholders, of the assessment outcomes. A CaAS needs to be designed so that it can accommodate multiple audiences' and multiple stakeholders' needs.

3. Data sources and context of the assessment: While for a CbAS the assessment context is highly controlled (a testing situation) and the item format fully known at design time, CaASs will be deployed in highly varying contexts and will ideally take into account a variety of information resources. In particular, the basic object of conventional assessment, the "item", is not sufficient for a CaAS. CaASs need to include methods that make use of performance data (e.g., log files, e-portfolio entries), thereby realizing embedded assessment, in addition to or instead of the stand-alone, or explicit, assessment that CbASs support.

4. Assessment design: A front-to-end computational assessment system should support the construction of "items", their deployment, and their analysis. While that is challenging enough for a CbAS, for CaASs it becomes even more so because of the loss of the "test item" as the basic object of system design. Instead, we have to use the notion of an "observation with diagnostic relevance". Such observations need to undergo more transformations than "test items" before they can become diagnostically useful.

5. Pedagogic decision making: One and the same CaAS would ideally be useful for different decision makers, the main cases being the student (self-guided learning based on feedback), the teacher (using diagnostic information to adapt learning paths), and the system (adaptive systems, such as Cognitive Tutors [1]).

In this paper we describe some of the implications for developing an approach to CaAS design in the context of the NEXT-TELL project.

II. DESIGN DECISIONS FOR CAAS SUPPORT IN NEXT-TELL

NEXT-TELL (www.next-tell.eu) is an Integrated Project in the ICT challenge of the European Commission's 7th Framework Programme. Our vision of the 21st century classroom is that of a technology- and data-rich environment that supports teachers and students in using the various sources of information generated in the classroom and during homework for pedagogical decision making. Such an information infrastructure will improve instruction, diagnosis, workflow, and productivity, as well as enhance collaboration and communication among students, teachers, and other stakeholders, especially parents. Teachers in particular will be supported in their function as diagnosticians who have to make decisions constantly and rapidly in a highly dynamic and complex environment.

Hence, one of the central goals of NEXT-TELL is to provide computational assistance to teachers' formative assessment of students' contributions in the classroom and in the course of homework. We are particularly interested in designing and developing tools that support teachers in embedded assessment: assessment that builds on the information available in a school's ICT infrastructure as well as in students' wider 'information ecology', thus lowering the need for additional explicit assessment. Our main design decisions to date are (1) to use Open Learner Models (OLMs) for displaying assessment information, (2) to employ a glass-box approach to assessment, and (3) to embed formative assessment in ICT-rich learning environments.

A. Open Learner Models for flexible provision of information to multiple stakeholder groups

Opening the learner model was suggested by Self [2] in the context of Intelligent Tutoring Systems, and has since been taken up in a range of research on adaptive teaching systems (see [3] for an overview). Such adaptive systems maintain a dynamically updated representation of a student's knowledge (skills, abilities, competencies) in order to guide the system's pedagogic decision making (e.g., which feedback to provide, which content/task/problem to present next). Opening up this internal representation of the learner for the learner has a number of purposes [3], amongst them:

• The user's right to view electronic data about themselves.
• Improve the accuracy of the learner model by letting students validate the model and contribute additional information.
• Increase learner trust in a system.
• Facilitate self-monitoring, promote learner reflection, and support planning.
• Provide a source of information for formative and summative assessment.

By generalizing the notions of 'model' (so that it can comprise a broad range of diagnostic information on learning-relevant, possibly latent, constructs) and of 'learning environment' (so that in addition to student and application it can contain teachers, peers, and possibly other actors, all involved in pedagogic decision making), we suggest that OLMs can play an important role outside their original field of invention. In technology-rich classrooms and home environments, learners' competencies can be modelled in many ways, building on many and diverse sources; OLMs constitute the means to tie these aspects together and make them accessible to various user categories, for various purposes.

1) OLM visualisation

OLMs have been presented in a variety of ways. Fig. 1 gives just four examples: extent of understanding represented in a concept map and in a hierarchical tree structure based on prerequisites [4]; a simple overview of knowledge at topic level [5]; and smiley faces indicating knowledge, for use by children [6]. Further examples from various authors can be found in the overview by Bull and Kay [3]. The important factor is that the OLM should be understandable by the user. It is therefore usually presented in a different form from the representation held inside an adaptive teaching system and used by the system in its decision making.

In addition to OLMs being defined as learner models that are 'open' to user inspection, OLMs may also allow direct interaction, probing, and/or user control over the contents. Regardless of the learner modelling technique used, the structure of the underlying learner model, and the presentation format of the model, learners can have some kind of interaction with their model beyond simple viewing. Flexi-OLM [4] (top of Fig. 1) and a recent extension to OLMlets [7] (bottom left of Fig. 1) illustrate that this can be achieved with both simple and detailed OLMs, and with models with different underlying structures.

In NEXT-TELL there is a variety of stakeholders relevant to the open learner model: students, teachers, parents, peers, school administrators, policy makers, and researchers. These stakeholders are likely to have different needs, and so may require different interfaces and different access to the learner model contents. We consider the various stakeholders further in the next section.
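To make the idea of an inspectable, user-modifiable learner model concrete, the following minimal Python sketch shows a model that the learner can both view and contribute to, in the spirit of the purposes listed above. All names and the naive averaging rule are our own illustrative assumptions, not part of the NEXT-TELL implementation.

```python
# Illustrative sketch only: a minimal inspectable learner model.
# Names and the averaging rule are assumptions, not the NEXT-TELL API.
from dataclasses import dataclass, field

@dataclass
class TopicBelief:
    system_estimate: float          # 0.0-1.0, derived from assessed performance
    self_estimate: float = None     # optional learner contribution

    def combined(self) -> float:
        # Naive combination: average system and learner views when both exist.
        if self.self_estimate is None:
            return self.system_estimate
        return (self.system_estimate + self.self_estimate) / 2

@dataclass
class OpenLearnerModel:
    student_id: str
    beliefs: dict = field(default_factory=dict)  # topic -> TopicBelief

    def view(self) -> dict:
        """The 'open' part: expose the model in a user-readable form."""
        return {t: round(b.combined(), 2) for t, b in self.beliefs.items()}

    def contribute(self, topic: str, self_estimate: float) -> None:
        """Let the learner validate/adjust the model (improves accuracy)."""
        self.beliefs.setdefault(topic, TopicBelief(0.0)).self_estimate = self_estimate

olm = OpenLearnerModel("s42", {"fractions": TopicBelief(0.7)})
olm.contribute("fractions", 0.5)   # learner disagrees with the system
print(olm.view())                  # {'fractions': 0.6}
```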

Figure 1. Example OLM visualisations

2) Accommodating the information needs of different stakeholders

The system is intended for use in schools, both inside and outside the classroom setting. Teachers are the facilitators of students' learning and have great influence over the activities a student undertakes for educational purposes. In the classroom, students often work alongside, and sometimes collaboratively with, other students (their peers). Outside the classroom, work is often completed at home, where parents play an influential role. Students' learning is ubiquitous and not necessarily confined to a specific time or place; being able to capture and model much of this learning is a primary aim of NEXT-TELL.

Table 1 describes some of the functionality different stakeholders might expect from their open learner model interface(s). The common theme is the transparency and up-to-date nature of information about students' learning, and how this can reinforce current roles, activities, and practices. For each of the key stakeholders, practices such as planning, reflection, and supporting knowledge transition are all important, particularly in classroom and home scenarios. The requirements of primary stakeholders who do not have personal contact with the student (school administrators, policy makers, researchers, etc.) relate to the monitoring of learner models for tertiary purposes, whether informing on the efficacy of policy, substantiating claims, or validating hypotheses.

As part of the research in the NEXT-TELL project, innovative OLM visualisations will be developed and empirically analysed that are appropriate for the various stakeholders' information needs and for their decision-making contexts. Two contexts important for teachers are classroom decision making (high time pressure, many parallel information streams) and the planning of individual learning paths (less time pressure, information on a single student).

The decision to use the OLM approach to make the outcomes of formative assessment accessible to (and in some situations modifiable by) users raises the question of how the information visualised in the OLM is generated. NEXT-TELL uses a range of formative assessment methods, briefly introduced in the following section, and a sketch of role-dependent access follows Table I.

TABLE I. OPEN LEARNER MODEL REQUIREMENTS FROM THE PERSPECTIVE OF EACH STAKEHOLDER

Student:
• Support for decision making in real time
• A source of reflection (stimulus for metacognition)
• Support for planning
• Support for collaborative learning

Teacher:
• Support for decision making in real time
• Support for analysis/reflection on students' learning (in the context of pedagogical projects, professional development, school development)
• Provision of a diagnostic of student ability
• Allowance for, and supplement to, planning
• Support for professional and personal development

Parent:
• Information about a child's progress (interpretable without pedagogical knowledge as a prerequisite)
• Help in identifying where to administer help/support

Other Students (Peers):
• Support for collaborative learning
• Encouragement of communication
• An extension of social/group skills (collaboration, organisation, management, etc.)

School Administrators:
• Information on students' progress
• Encouragement of teachers' professional/personal development

Third Parties:
• Support for proficiency claims about students made to third parties (e.g., potential employers, university selection committees)

Policy Makers:
• The ability to monitor the efficacy of current policy and inform new policy

Researchers:
• Support for responses to research hypotheses relating to pedagogical strategies, interaction preferences, the effect of time, and the variance between students
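One possible way to realise such role-dependent access is sketched below; the role names and filtering rules are hypothetical assumptions for illustration, not NEXT-TELL's actual access model.

```python
# Hypothetical sketch of per-stakeholder views onto one learner model;
# role names and filtering rules are illustrative assumptions.
ROLE_POLICIES = {
    "student":      {"detail": "topic"},
    "teacher":      {"detail": "topic"},
    "parent":       {"detail": "coarse"},     # interpretable without pedagogy
    "peer":         {"detail": "coarse"},     # would also be anonymised
    "policy_maker": {"detail": "aggregate"},  # summary statistics only
}

def render_view(model: dict, role: str) -> dict:
    """Filter one student's topic->mastery dict according to the viewer's role."""
    detail = ROLE_POLICIES[role]["detail"]
    if detail == "aggregate":
        return {"mean_mastery": sum(model.values()) / len(model)}
    if detail == "coarse":
        def band(v):
            return "green" if v >= 0.7 else "amber" if v >= 0.4 else "red"
        return {topic: band(v) for topic, v in model.items()}
    return dict(model)  # full topic-level detail

model = {"fractions": 0.6, "decimals": 0.85}
print(render_view(model, "parent"))        # {'fractions': 'amber', 'decimals': 'green'}
print(render_view(model, "policy_maker"))  # {'mean_mastery': 0.725}
```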

B. Glass-box approach to formative assessment

The information about KSAs (knowledge, skills, abilities) visualised in an OLM can come from many sources. In NEXT-TELL, we consider in particular teachers, students (in the role of self- and peer-assessors), parents, and software applications (computer-based assessment systems, KSA modelling tools). These can all produce diagnostically relevant information that is a candidate for visualisation in the OLM.

An important design consideration in NEXT-TELL is that all assessment methods, independent of who employs them (teacher, student, parent, software), should adhere to certain quality criteria, in particular concerning their validity and reliability. For establishing validity, we build on the Evidence-centred assessment Design methodology [8]; for establishing the reliability of assessments in NEXT-TELL, users need to document their assessment process, thus creating provenance data [9]. In short, we require that humans as well as computational assessment services describe how they come to conclusions about learners' KSAs, starting from observations of what learners do in the course of their learning activities (performance) and from the artefacts produced along the way. These conclusions are related to performance by a process of Collecting, Filtering, Transforming, Diagnosing, and Combining diagnostic information (see Fig. 2).
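To illustrate how such a Collect-Filter-Transform-Diagnose-Combine process could be composed computationally, here is a minimal sketch; the stage functions and the event format are our assumptions rather than the project's implementation.

```python
# Illustrative composition of the Collect/Filter/Transform/Diagnose/Combine
# stages as plain functions over event dicts; all names are assumptions.
from statistics import mean

def collect(sources):
    """Gather raw observations (e.g., log entries) from all registered sources."""
    return [event for source in sources for event in source()]

def filter_relevant(events, skill):
    """Keep only observations with diagnostic relevance for the target skill."""
    return [e for e in events if e.get("skill") == skill]

def transform(events):
    """Normalise heterogeneous observations into comparable scores in [0, 1]."""
    return [e["correct"] / e["attempts"] for e in events if e.get("attempts")]

def diagnose(scores):
    """Map normalised evidence onto a mastery estimate for one skill."""
    return mean(scores) if scores else None

def combine(estimates):
    """Merge estimates from several assessors (teacher, self, software)."""
    known = [v for v in estimates.values() if v is not None]
    return mean(known) if known else None

# Usage: two fake sources feeding the pipeline for the skill "fractions".
moodle = lambda: [{"skill": "fractions", "correct": 3, "attempts": 4}]
gdocs  = lambda: [{"skill": "essay_writing", "correct": 1, "attempts": 1}]
scores = transform(filter_relevant(collect([moodle, gdocs]), "fractions"))
print(combine({"software": diagnose(scores), "teacher": 0.8}))  # 0.775
```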

In sum, this approach makes formative assessment methodologically comparable with the quality standards used for summative assessment. Moreover, the assessment methods realized by software applications, in addition to following the same evidentiary logic as the human-deployed methods, are sufficiently explicated ("glass-box") that a human consumer of that information (teacher, student, parent) can make sense of the diagnostic data the application computes. (See the NEXT-TELL deliverables at www.next-tell.eu for fuller details.)

We treat any assessment process as an instantiation of an assessment model (indicated by the box on the left in Fig. 2). Technically, we use the Open Models approach [10] for modelling formative assessment processes (as well as learning activity sequences). In effect, then, our approach to the design of a CaAS consists of putting at the core of the assessment system an interpreter for declaratively represented assessment models. This is analogous to the distinction between an assessment item and the assessment engine in IMS QTI. However, the assessment model for embedded assessment is more complex than the item model for explicit assessment because of the need for Collecting, Filtering, and Transforming. This also poses significant challenges for integration into a run-time environment.
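A hypothetical example of a declaratively represented assessment model, together with a toy interpreter, is sketched below to make the model/engine split concrete; the schema is our own assumption, not the Open Models notation used in the project.

```python
# Hypothetical declarative assessment model and a toy interpreter, in the
# spirit of an item/engine split; the schema below is our own assumption.
ASSESSMENT_MODEL = {
    "skill": "fractions",
    "collect":   {"sources": ["moodle", "google_docs"]},
    "filter":    {"event_type": "quiz_response"},
    "transform": "correct / attempts",   # documented here; hardcoded below
    "diagnose":  {"rule": "mean", "mastered_above": 0.7},
}

def interpret(model, events):
    """Walk the declarative model: collect -> filter -> transform -> diagnose."""
    relevant = [e for e in events
                if e["source"] in model["collect"]["sources"]
                and e["type"] == model["filter"]["event_type"]]
    # Transform (this toy interpreter hardcodes the documented expression):
    scores = [e["correct"] / e["attempts"] for e in relevant]
    estimate = sum(scores) / len(scores) if scores else None
    mastered = estimate is not None and estimate >= model["diagnose"]["mastered_above"]
    return {"skill": model["skill"], "estimate": estimate, "mastered": mastered}

events = [{"source": "moodle", "type": "quiz_response", "correct": 3, "attempts": 4}]
print(interpret(ASSESSMENT_MODEL, events))
# {'skill': 'fractions', 'estimate': 0.75, 'mastered': True}
```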

Figure 2. An explicit inference process links OLM facets to learners' performance.

C. Embedded assessment: tracking and evaluating learning performance

The replacement of "assessment items" with "observations of diagnostic relevance" as the basic object necessitates a complex data processing stack, comprising the Collection, Filtering, Transformation, Diagnostic Interpretation, Combination, and Display of learning-relevant data. These steps need to be connected to the information providers available in a specific learning environment. In the project, as a proof of concept, we use various combinations of cloud services such as Google's office applications, the Moodle LMS, the Mahara e-portfolio tool, virtual world environments such as OpenSim or Second Life, and Facebook. Data collection is achieved by adaptors to these services and applications, as well as by supporting students in tracking their learning themselves.

Fig. 3 shows the conceptual architecture. As introduced above, an important component is a library of assessment models, the elements of which can be instantiated in the form of human activities, software services, or a combination of these. Not mentioned so far, but also available, are models of learning activity sequences (C7). Both model types, and certainly the latter, can and will be developed by teachers in a design phase, which is not covered in this paper. The model of the learning activity sequence is interpreted by the Activity Stepper (C2), a basic workflow engine that helps to orchestrate sequences of learning activities extending across specific learning applications (e.g., a combination of steps requiring looking at a Moodle page, writing some text in Google Docs, and entering an artefact into Mahara). Assessment methods can be integrated into an activity sequence model and, to the extent that they are sufficiently detailed, will get instantiated at run time. The Collection part of the assessment model describes what data to record, and the other parts (Filter, Transform, ...) (semi-)automatically translate observations into "values" of pedagogically relevant parameters that can be displayed in the OLM for a specific student. For example, an assessment model could describe how to analyse a solution to a math problem in order to identify how well a certain math skill has been mastered.
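The following toy sketch shows how an activity stepper might walk such a declarative activity sequence, dispatching each step to a tool-specific handler and logging an observation for the tracking infrastructure; all names and data shapes are illustrative assumptions, not the Activity Stepper's actual interface.

```python
# Toy sketch of an activity stepper walking a declarative activity sequence;
# step names and handler signatures are illustrative assumptions.
ACTIVITY_SEQUENCE = [
    {"tool": "moodle",      "action": "read_page",    "page": "intro-fractions"},
    {"tool": "google_docs", "action": "write_text",   "doc": "fractions-essay"},
    {"tool": "mahara",      "action": "add_artefact", "item": "fractions-essay"},
]

def run_sequence(sequence, handlers, on_event):
    """Execute each step with the tool-specific handler and log an observation."""
    for step in sequence:
        result = handlers[step["tool"]](step)       # launch/track the activity
        on_event({"step": step, "result": result})  # feed the tracking infrastructure

# Minimal fake handlers and event sink for demonstration.
handlers = {t: (lambda step: "done") for t in ("moodle", "google_docs", "mahara")}
run_sequence(ACTIVITY_SEQUENCE, handlers, on_event=print)
```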

The NEXT-TELL Portal (C13) provides a central entry point to the various tools and platforms supported by NEXT-TELL. The architecture has been designed to allow for easy integration of further tools and platforms not covered in the initial release: interfaces have been defined that enable virtually any web-based tool to feed data into the tracking infrastructure, represented as Learning Environment DBs (C5) in Fig. 3. For completeness, the full set of components and relations is listed in the Appendix.

In the first release of the NEXT-TELL tools, interfaces to the above-mentioned tools are provided. Documents and spreadsheets created by a student in Google Docs are stored along with their revision history, which allows access not only to the final product but also to the intermediary steps. Activity information from traditional systems such as Moodle or Mahara is included in the tracking infrastructure as well, which allows analysis of a student's progress across different web-based platforms and tools. Furthermore, data from virtual worlds such as OpenSim or Second Life is stored from virtual class sessions; this includes avatar positions and interactions as well as public chat. Early experiments also showed very high interest from students in using Facebook as an interaction and communication platform. It has therefore been decided to also allow the use of Facebook as a front-end to the NEXT-TELL tools and to analyse data from Facebook posts and Facebook apps.

In general, the open architecture of NEXT-TELL allows us to easily plug in various applications, both for the collection of data and for presentation to the users. This open approach ensures that the resulting platform is ready for future trends and requirements.
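One way to give every web-based tool a uniform entry point into the tracking infrastructure is a common adaptor interface, sketched below with canned data; the class and method names are our assumptions, not the interfaces actually defined in NEXT-TELL.

```python
# Sketch of a uniform adaptor interface for feeding heterogeneous tools into
# one tracking store; class and method names are our assumptions.
from abc import ABC, abstractmethod

class Adaptor(ABC):
    """Each supported tool implements fetch_events() in its own terms."""
    @abstractmethod
    def fetch_events(self) -> list[dict]: ...

class GoogleDocsAdaptor(Adaptor):
    def fetch_events(self):
        # Would call the Docs revisions API; here we return a canned revision.
        return [{"source": "google_docs", "type": "revision",
                 "doc": "fractions-essay", "chars_added": 240}]

class MoodleAdaptor(Adaptor):
    def fetch_events(self):
        # Would query Moodle's activity log; canned quiz event for illustration.
        return [{"source": "moodle", "type": "quiz_response",
                 "correct": 3, "attempts": 4}]

def ingest(adaptors, store: list):
    """Pull from every adaptor into the shared tracking store (C5 in Fig. 3)."""
    for adaptor in adaptors:
        store.extend(adaptor.fetch_events())

tracking_db: list = []
ingest([GoogleDocsAdaptor(), MoodleAdaptor()], tracking_db)
print(len(tracking_db))  # 2
```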

Figure 3. Description of the architecture of the NEXT-TELL system.

Figure 4. OLM views for students

Figure 5. OLM comparison views for teachers

III. INITIAL NEXT-TELL OLM VISUALISATIONS

As stated previously, the OLM is central to NEXT-TELL. We therefore show our early OLM examples, which are deliberately simple in nature, allowing for input from a range of CaAS data sources (more complex representations will be considered later in the project). Fig. 4 illustrates some of the OLM visualisations currently available. As indicated above, there are many stakeholders relevant to children's education in NEXT-TELL; we describe Fig. 4 with reference to the student, but these views of the learner model (i.e., the OLM), and similar representations, can also be used by other stakeholders where appropriate.

Fig. 4 provides a range of interfaces designed to help students remain (or become) aware of their progress and the levels of understanding they have achieved. This aims to support their own decision making with reference to their learning, strengthening their self-assessment, planning, and other metacognitive and independent learning skills. In the skill meters, the green filled portion indicates the extent of an individual's current understanding of each topic listed. The smileys show similar information, but indicated by faces, and a traffic-light metaphor is also available for this type of information. An alternative to the more highly graphical formats is the table, where highlighted cells show a learner's level of understanding of each topic. Sparklines show a learner's development over time, both as an individual and in comparison to the rest of their peer group. A histogram also provides this information, but specifically for the current state of knowledge.

In addition to the above, teachers may need more precise comparison data to help in their decision making. Fig. 5 shows two examples: a histogram with student names indicating their level of understanding, and a ranked list. This may facilitate both the teacher's support of individuals and their management of groups (e.g., forming sub-groups).

Other representations will be required for some of the stakeholders. For example, policy makers will likely benefit from a more coarse-grained geographical display to compare regional data. In more specific learning situations, more detailed presentations (see, e.g., the top of Fig. 1) may be helpful to break topics down to a more conceptual level. Other information, such as social graph data, may be relevant in situations of peer collaboration, both for the students involved and for their teachers.
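As a rough illustration of how one mastery value can drive the simple formats just described (skill meter, smiley, traffic light), consider the following sketch; the thresholds and glyphs are arbitrary choices of ours, not those used in the NEXT-TELL interfaces.

```python
# Illustrative rendering of one mastery value in three simple OLM formats;
# thresholds and glyphs are our assumptions.
def skill_meter(mastery: float, width: int = 10) -> str:
    filled = round(mastery * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {mastery:.0%}"

def smiley(mastery: float) -> str:
    return ":)" if mastery >= 0.7 else ":|" if mastery >= 0.4 else ":("

def traffic_light(mastery: float) -> str:
    return "green" if mastery >= 0.7 else "amber" if mastery >= 0.4 else "red"

for topic, mastery in {"fractions": 0.6, "decimals": 0.85}.items():
    print(f"{topic:10} {skill_meter(mastery)} {smiley(mastery)} {traffic_light(mastery)}")
```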

IV. CONCLUSIONS

Computer-assisted assessment systems that support teachers in formative assessment and students in self-assessment will need to be designed and deployed differently from the more traditional computer-based assessment systems. As part of the requirements process in the NEXT-TELL project, we have identified in particular the need for open, user-needs-driven access to diagnostic information, the need for a glass-box approach to the development of assessment methods, and the need for embedded assessment.

Not mentioned yet, but a major concern, is students' and teachers' privacy. While transparency of pedagogic decision making is commendable, and indeed a necessity in light of the growing accountability demands placed on schools and teachers, it also raises numerous questions regarding data protection and privacy. Hence, in addition to research on assessment and learning, research on privacy will be an important topic in NEXT-TELL's ongoing work.

ACKNOWLEDGMENT

The research leading to this paper has been supported by the European Community under the Information Society Technology (IST) priority of the 7th Framework Programme for R&D (FP7), under contract number 258114 (NEXT-TELL). Disclaimer: This document does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of its content.

REFERENCES

[1] K.R. Koedinger and A. Corbett, "Cognitive tutors," in The Cambridge Handbook of the Learning Sciences, R.K. Sawyer, Ed. New York: Cambridge University Press, 2006, pp. 61-77.
[2] J. Self, "Bypassing the intractable problem of student modelling," in Proceedings of Intelligent Tutoring Systems, Montreal, 1988.
[3] S. Bull and J. Kay, "Open learner models," in Advances in Intelligent Tutoring Systems, R. Nkambou, J. Bourdeau, and R. Mizoguchi, Eds. Berlin: Springer, 2010, pp. 318-338.
[4] A. Mabbott and S. Bull, "Student preferences for editing, persuading and negotiating the open learner model," in Intelligent Tutoring Systems: 8th International Conference, M. Ikeda, K. Ashley, and T.-W. Chan, Eds. Berlin: Springer, 2006.
[5] S. Bull, S. Quigley, and A. Mabbott, "Computer-based formative assessment to promote reflection and learner autonomy," Engineering Education: Journal of the Higher Education Academy Engineering Subject Centre, vol. 1, no. 1, pp. 8-18, 2006.
[6] A. Kerly, R. Ellis, and S. Bull, "CALMsystem: A conversational agent for learner modelling," Knowledge-Based Systems, vol. 21, no. 3, pp. 238-246, 2008.
[7] N. Ahmad et al., "A role for open learner models in formative assessment: Support from studies with editable learner models," in Proceedings of the Workshop on Technology-Enhanced Formative Assessment, EC-TEL 2010, 2010.
[8] R.J. Mislevy and M.M. Riconscente, "Evidence-centered assessment design," in Handbook of Test Development, S.M. Downing and T.M. Haladyna, Eds. Mahwah, NJ: Lawrence Erlbaum, 2006, pp. 61-90.
[9] P. Groth et al., "An architecture for provenance systems," technical report, 2006.
[10] D. Karagiannis, W. Grossmann, and P. Hoefferer, "Open Model Initiative: A feasibility study," University of Vienna, Vienna, 2008.

APPENDIX

Components (C):
C1: assessment method calculation engine
C2: activity sequence execution engine
C3: domain model repository
C4: learner model repository
C5: learning environment database(s)
C6: ePortfolio database
C7: activity model combination
C8: domain model
C9: learner model
C10: open learner model GUI
C11: ePortfolio GUI
C12: activity stepper GUI
C13: NEXT-TELL mashup/portal

Relations (R):
R1: domain model import interface
R2: data collection interfaces
R3: activity data collection interface
R4: ePortfolio database connector
R5: activity stepper GUI interface
R6: querying calculation results
R7: learner model GUI interface
R8: ePortfolio GUI controller
R9: mashup central controller
R10: activity model import
R11: assessment model import
R12: replication and combination interface
R13: feedback integration interface
