Student Competency Visualisation for Teachers

Matthew D. Johnson1, Susan Bull1 and Michael Kickmeier-Rust2

1 University of Birmingham, UK
2 Technical University of Graz, Austria
[email protected], [email protected], [email protected]
Abstract. This paper introduces several activity visualisations and competency-based open learner model views, suggesting that a competency approach can help teachers in their use of such data. We present examples from Next-TELL.

Keywords: activity visualisation, open learner model, teachers, competencies.

1  Introduction

Competency frameworks (describing skills or abilities) are increasingly being used in a variety of educational settings, for example: language learning [1]; geography [2]; STEM literacy [3]; student facilitation of meetings [4]. Such frameworks can also cross subjects: the latter example could apply in language learning classes as well as to 21st century (C21) skills; competency frameworks relating to experimental design or conducting experiments could be applicable across scientific fields; and so on.

Many visualisations have been described to support interpretation of student activity and progress, for example as dashboards that allow easy access to relevant information (see [5]). Visual analytics approaches often give activity data such as word counts, participation and collaboration levels. This data provides information about what students have done. In addition, teachers can benefit from information about students' competencies. This is the focus of this paper: (i) taking learning analytics data and (ii) integrating it into competency-based visualisations to further support teacher interpretation of student data.

2  Visualisations for Teachers

We present some of the Next-TELL project (http://www.next-tell.eu) visualisations. Figure 1 (left) gives the example of keyword search terms that may be optionally specified to review the content of a discussion forum or Google document. The keywords are presented as a bar chart showing the relative frequencies of the words. Keywords can help to indicate whether a discussion is on topic, matches a specific theme, or uses appropriate language. For example, in an English as a Second Language context, teachers could explore whether connectors (e.g. furthermore, consequently, however) are being used. Figure 1 (right) shows a word cloud visualisation. The size of a word represents the frequency of its occurrence. Stop words are removed to exclude content with little semantic meaning, and up to 100 words are presented. This can be used to determine themes of discussion, whether students are addressing each other directly (e.g. using names), and whether certain issues that are expected in the discussion are not in the foreground, or are even being avoided.

Figure 1. Word counts: keywords and frequency

Figure 2. Interaction patterns

Figure 3. Document revisions

Other information can also be visualised. The threads plot (left of Figure 2) shows users' contributions to threads within a discussion, across time. The x-axis shows time and the y-axis shows the word count of each submitted post; the nodes thus show the number of words a user has contributed to a thread at each point in time. The visualisation also shows the level of activity in each thread. It works well with smaller data sets (e.g. a specific discussion) and may be used, for example, to identify users who regularly initiate discussion, whether a given student often replies to their own posts, or whether there are specific patterns between groups of users. The network graph (right of Figure 2) shows which students are involved in the same threads/discussions. Each student is shown as a coloured node, allowing discussion 'groups' and participation levels to be easily seen. The graph is force directed, so nodes may be moved around and rearranged as the teacher desires.

Figure 3 gives the example of visualisations relating to document revision. The top part shows a chronological list of document revisions for different categories, which can also be shown graphically (bottom of Figure 3). This allows easy switching between visualisations so that, for example, teachers can focus on readability, length, complexity, etc., over time, gaining an overview of various aspects of document development.
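The keyword counting underlying such bar charts and word clouds can be sketched in a few lines. The stop-word list, function name and data below are illustrative assumptions, not the actual Next-TELL implementation:

```python
import re
from collections import Counter

# Hypothetical mini stop-word list; a real tool would use a fuller one.
STOP_WORDS = {"the", "a", "an", "and", "or", "is", "are", "to", "of", "in"}

def word_frequencies(text, keywords=None, limit=100):
    """Count word occurrences, dropping stop words. With `keywords`
    given, count only those words (the bar-chart case); otherwise
    return the `limit` most frequent words (the word-cloud case)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    if keywords is not None:
        return {k: counts.get(k.lower(), 0) for k in keywords}
    return dict(counts.most_common(limit))

posts = "However, the design is sound. Furthermore, the results are consistent. However, more tests are needed."
print(word_frequencies(posts, keywords=["furthermore", "consequently", "however"]))
# {'furthermore': 1, 'consequently': 0, 'however': 2}
```

Reporting a zero count for an absent keyword (here "consequently") is deliberate: it lets a teacher see at a glance which expected connectors are missing from a discussion.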
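The data behind the network graph, which students share threads, can be derived from thread membership alone. A minimal sketch, with a hypothetical input format rather than the Next-TELL data model:

```python
from collections import defaultdict
from itertools import combinations

def co_participation(threads):
    """Edge weights for a co-participation graph: each pair of students
    who posted in the same thread gets one edge, weighted by the number
    of threads they share."""
    edges = defaultdict(int)
    for participants in threads.values():
        for a, b in combinations(sorted(set(participants)), 2):
            edges[(a, b)] += 1
    return dict(edges)

# Hypothetical thread membership: thread id -> list of posters.
threads = {
    "t1": ["ana", "ben", "ana"],
    "t2": ["ben", "cara"],
    "t3": ["ana", "ben"],
}
print(co_participation(threads))
# {('ana', 'ben'): 2, ('ben', 'cara'): 1}
```

Such an edge-weight map is exactly what a force-directed layout consumes: heavier edges pull nodes closer, so discussion 'groups' emerge visually.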
These visualisations from the Next-TELL project, and the many other teaching analytics visualisations available, allow teachers to examine a range of activity information for a specific purpose, but are often less adept at clearly identifying user competencies. The data produced is often activity-oriented, as described above. This can offer important and targeted visualisations for specific needs. However, teachers still need time to interpret the information and to identify what it actually means from a learning perspective. For example, a word cloud or bar graph that shows word frequency does indeed provide accurate data on frequency. Pie charts can show proportions, and other methods such as line graphs can show data over time. Performance can also be easily visualised where scores or other performance indicators are available. However, to meet the increasing interest in competency-focussed education, we offer additional support using an open learner model (OLM).

OLMs visualise information that has been inferred about a user's current knowledge or skills, based on their activity, often with the purpose of facilitating metacognitive behaviours in students or supporting teacher planning or decision-making for individuals or groups [6]. In the Next-TELL OLM, the data can come from a variety of sources, including automated data from other tools as well as manual self, peer and teacher assessments. Eight OLM visualisations are currently provided in Next-TELL, each showing subtly different aspects of the same underlying learner model information relating to student competencies: skill meters, tables, smiley faces, histograms, word clouds, radar plots, treemaps, and network diagrams. Each visualisation may be turned on or off using the preferences page (upper left of Figure 4). Several of these visualisations are also illustrated in Figure 4.

Figure 4. OLM visualisation preferences and visualisations

On the radar plot, one competency is displayed per axis: the further from the centre, the stronger the competency. Data from different sources are displayed together on the same axis, highlighting the differences between, for example, teacher assessment, student self-assessment, peer assessment, and automated inferences. In the competency network, the size and brightness of a node shows the competency strength. The lines between nodes show relationships between sub- and super-competencies; all are linked to the black node. Clicking on a node will show or hide its sub-competencies, improving readability or allowing greater detail to be seen. The nodes are force-directed and may be moved. The word cloud is in two sections: the upper blue cloud shows competencies, with the stronger ones in larger text; the lower red cloud shows weaker competencies, with larger red text indicating a weaker competency. This is especially useful for teachers wanting to quickly identify strong and weak competencies as they develop during a classroom exercise. The size of each area in the treemap represents competency level. Drill down to sub-competencies is possible when a teacher wishes to investigate specific competency sets further. With the skill meters (also shown full screen), the proportion of green in the coloured bar represents the strength of each competency.
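As a sketch of how several such views can be driven by one underlying learner model, the snippet below renders text-mode 'skill meters' from a competency-to-strength mapping. The model format and rendering here are illustrative assumptions, not the Next-TELL implementation:

```python
def skill_meters(model, width=20):
    """Render each competency as a text 'skill meter': the filled part
    of the bar is proportional to the competency strength (0.0-1.0),
    strongest first."""
    lines = []
    for name, strength in sorted(model.items(), key=lambda kv: -kv[1]):
        filled = round(strength * width)
        lines.append(f"{name:<22} [{'#' * filled}{'-' * (width - filled)}] {strength:.2f}")
    return "\n".join(lines)

# Hypothetical learner model: competency name -> inferred strength.
model = {"reading comprehension": 0.85, "use of connectors": 0.40, "paragraph structure": 0.60}
print(skill_meters(model))
```

The same mapping could equally feed a radar plot, treemap or split word cloud; only the rendering changes, which is why turning individual views on or off in the preferences page needs no change to the model itself.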

Figure 5. Description of modelling

As stated above, OLM data may come from various sources, including other tools and user-contributed assessments. For electronic data that is not in competency form, and therefore requires additional transformation, an intermediate tool is available for teachers. ProNIFA can take data from specific subject-focussed learning tools, and from more general tools such as Google Docs, virtual world interactions or Moodle quizzes, amongst others. Using Competence-based Knowledge Space Theory [7], it provides a heuristic-based inferencing mechanism to convert learners' interactions into competencies, also allowing teachers to define their own rules if they so wish. To give a simple example, using a chat log:

[Rule1] Who=Teacher What=Excellent, . ASkill=1;2 AUpdate=0.4

(If the teacher says "Excellent" and a name (in a certain context), the probabilities of skills 1 and 2 for that learner are increased by 0.4.)

The resulting quantitative competency data is forwarded through the OLM API. Other activity data can also be transformed: we are currently working on competencies for writing, for example. Teacher-defined keywords such as in Figure 1 could be matched with student text to provide evidence contributing to the OLM for 'focusing on topic'; document revisions as in Figure 3 could give evidence about writing strategies.

Given the range of sources of data, the OLM can provide a description of how each value is arrived at, in relation to the underlying evidence (shown in Figure 5). Two aspects of the process are described: how the evidence is weighted with respect to time; and how the evidence is combined according to weightings that teachers may (optionally) specify. Different colours are used and a textual description is given at each level, stating what is calculated at each point in the algorithm. Thus, should teachers wish to trace the sources of information for competency levels, they can identify the specific activities, tools or manual assessments contributing to each value. They can therefore maintain their competency focus, but also easily return to activity data.
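The chat-log rule above can be read as a simple pattern-action pair. The sketch below applies such a rule to a log; the data structures and `apply_rule` function are hypothetical, intended only to illustrate the heuristic, not ProNIFA's actual rule engine:

```python
def apply_rule(log, skills, rule):
    """Apply a ProNIFA-style heuristic rule (hypothetical format) to a
    chat log: when the rule's speaker says the trigger word together
    with a learner's name, that learner's listed skill probabilities
    are increased by the rule's update amount (capped at 1.0)."""
    for who, message in log:
        if who != rule["who"] or rule["what"].lower() not in message.lower():
            continue
        for learner in skills:
            if learner.lower() in message.lower():
                for s in rule["skills"]:
                    skills[learner][s] = round(min(1.0, skills[learner][s] + rule["update"]), 2)
    return skills

log = [("Teacher", "Excellent, Ana!"), ("Ben", "thanks!")]
skills = {"Ana": {1: 0.2, 2: 0.5}, "Ben": {1: 0.3, 2: 0.3}}
rule = {"who": "Teacher", "what": "Excellent", "skills": [1, 2], "update": 0.4}
print(apply_rule(log, skills, rule))
# {'Ana': {1: 0.6, 2: 0.9}, 'Ben': {1: 0.3, 2: 0.3}}
```

Only Ana's skills change, since the praise names her; Ben's untouched values illustrate that the rule fires per learner, not per message.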
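A time-weighted combination of evidence of the kind described can be sketched as an exponentially decaying weighted average. The half-life decay and source-weighting scheme below are assumptions for illustration; the paper does not specify the exact algorithm:

```python
from datetime import date

def combine_evidence(evidence, today, half_life_days=30, source_weights=None):
    """Combine evidence items (value, date, source) into one competency
    level: older evidence decays with the given half-life, and teachers
    may optionally weight sources (e.g. teacher vs. self-assessment)."""
    source_weights = source_weights or {}
    num = den = 0.0
    for value, when, source in evidence:
        age_days = (today - when).days
        w = 0.5 ** (age_days / half_life_days) * source_weights.get(source, 1.0)
        num += w * value
        den += w
    return num / den if den else 0.0

evidence = [
    (0.9, date(2013, 3, 1), "teacher"),   # recent teacher assessment
    (0.5, date(2013, 1, 1), "self"),      # older self-assessment
]
level = combine_evidence(evidence, today=date(2013, 3, 15),
                         source_weights={"teacher": 2.0, "self": 1.0})
print(round(level, 2))  # the recent, teacher-weighted evidence dominates
```

Exposing the two factors separately, the time decay and the per-source weight, is what makes the layered textual description of Figure 5 possible: each intermediate product can be reported at its own level.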

3  Summary

This paper has argued for the benefits of not only using activity or activity-process visualisations, but also modelling and combining such information into a competency framework approach. This approach is applicable within and across many subjects, and competency frameworks are increasingly being used in educational contexts. OLMs have provided visualisations of learning for many years, and the increased data now available through the many learning analytics approaches can help us create richer and potentially more accurate models, as the sources of information are broadened. At the same time, the increasing use of visual analytics in learning can become more focused and, perhaps, more easily used by teachers.

Acknowledgement. This project is supported by the European Commission (EC) under the Information Society Technology priority FP7 for R&D, contract 258114 NEXT-TELL. This document does not represent the opinion of the EC, and the EC is not responsible for any use that might be made of its content.

References

1. Council of Europe: The Common European Framework of Reference for Languages, http://www.coe.int/t/dg4/linguistic/Cadre1_en.asp, accessed 18 March 2013
2. Rempfler, & Uphues, R.: System Competence in Geography Education: Development of Competence Models, Diagnosing Pupils' Achievement, European Journal of Geography 3(1), 6-22 (2012)
3. Bybee, R.W.: Advancing STEM Education: A 2020 Vision, Technology and Engineering Teacher 70(1), 30-35 (2010)
4. Reimann, P., Bull, S. & Ganesan, P.: Supporting the Development of 21st Century Skills: Student Facilitation of Meetings and Data for Teachers, Proceedings of the Workshop Towards Theory and Practice of Teaching Analytics, EC-TEL'12 (2012)
5. Verbert, K., Duval, E., Klerkx, J., Govaerts, S. & Santos, J.L.: Learning Analytics Dashboard Applications, American Behavioral Scientist (2013)
6. Bull, S. & Kay, J.: Open Learner Models as Drivers for Metacognitive Processes, in R. Azevedo & V. Aleven (eds), International Handbook of Metacognition and Learning Technologies, Springer, New York, 349-365 (2013)
7. Doignon, J.-P. & Falmagne, J.-C.: Knowledge Spaces, Springer Verlag, Berlin (1999)