Deploying and Visualising Teacher’s Scripts of Small Group Activities in a Multi-Surface Classroom Ecology: a study in-the-wild

Roberto Martinez-Maldonado, Andrew Clayphan, Judy Kay
School of Information Technologies, The University of Sydney, NSW 2006, Australia
ABSTRACT

There is a fast growing interest in the use of interactive surfaces in collaborative learning contexts. These devices hold the promise of enriching a typical collocated class by enabling learners to interact with digital content while maintaining face-to-face mutual awareness. However, little has been done to help teachers deploy and monitor the learning scripts for their planned small group activities in a classroom enhanced with these kinds of devices. We present an approach for deploying and visualising the teacher’s script for small group idea generation and problem solving activities in a multi-surface classroom ecology composed of multiple interactive tabletops, public vertical displays and a teacher’s dashboard. We frame our study by drawing on design guidelines for classrooms with multiple interactive surfaces and combine these with principles of scripting and orchestration of learning activities. The paper presents the results and experiences of implementing our approach in an authentic deployment of our classroom ecology, held over 8 weeks of a semester and involving 150 university students and 4 teachers. The paper concludes with remarks about the strengths and shortcomings of our approach, to be taken into account by learning practitioners and designers.

Keywords: Interactive tabletops; CSCW; CSCL; groupware; collocated collaboration; scripting; in-the-wild.
1. Introduction

Learning in groups can enhance a student’s thinking capacity by triggering particular learning mechanisms that cannot be activated by individual learning (Dillenbourg, 1998). Developing strong collaboration skills can result in improved critical thinking, reduced task workload, increased retention, and a more positive attitude towards the subject (Berland et al., 2009; Johnson et al., 1986). Additionally, collaboration skills are often considered to be key requirements for value generation beyond the classroom, into the workplace, particularly for problem solving (Roschelle et al., 1995), expertise sharing (Ackerman et al., 2013) and idea generation (Osborn, 1953) activities.
There is a fast growing interest in the use of interactive surfaces in collaborative learning contexts (Dillenbourg et al., 2011a; Evans et al., 2014). This includes research on educational interfaces based on technologies such as tablets, tabletops and whiteboards, and also on the interconnection of these technologies to create digital classroom ecologies. These emerging devices open up possibilities for both remote and collocated collaboration, in and out of the classroom. Their affordances offer promise to enhance communication and can be used to create ubiquitous opportunities for learning (Jones et al., 2004). For example, by using mobile touch devices students can access digital resources, receive contextualised learning materials, and communicate with peers or dialogue with tutors. By contrast, embedded pervasive devices, in the form of large shared interactive surfaces, can enrich the physical contexts that surround students at home, in the lab or in the classroom.

The research fields of Computer-Supported Cooperative Work (CSCW) and Computer-Supported Collaborative Learning (CSCL) have both studied the processes of group learning, in educational (CSCL) and community or workplace (CSCW) settings (Schmidt et al., 2013). In fact, research in Human-Computer Interaction (HCI/CSCW) (Kharrufa et al., 2010a; Kreitmayer et al., 2013; Piper et al., 2009; Scott et al., 2003) and the Learning Sciences (including CSCL) (Betcher et al., 2009; Dillenbourg, et al., 2011a; Evans, et al., 2014) has shown the potential of using pervasive shared devices, such as interactive whiteboards and tabletops, for supporting students to collaborate and teachers to monitor small group work.

Teachers often conduct small group activities because, when such activities are designed effectively, they may reduce social loafing and also facilitate the management of the class, so that teachers can provide feedback more effectively (Michaelsen et al., 1997). However, conducting small group activities in the classroom may require teachers to manage a more complex learning environment and design, where different groups develop different solutions and advance at different paces (Cohen, 1994). Emerging surface technologies can provide novel ways to design activities that teachers and researchers can take advantage of to improve instruction and learning (Dillenbourg et al., 2010). While interactive whiteboards have been more widely used for whole-class instructor-led learning (Higgins et al., 2007) (e.g. a teacher lecturing a class), interactive tabletops, in particular, have been regarded as devices that can potentially support small group collaboration (Ryall et al., 2004) in the classroom (Dillenbourg, et al., 2011a).
Tabletops hold the promise to enrich a typical small group collocated setting by offering a relatively large display interface, suitable for group work, that gives users similar opportunities to directly interact with digital content while maintaining mutual awareness and face-to-face communication (Dillenbourg, et al., 2011a; Evans, et al., 2014; Müller-Tomfelde et al., 2012). Although research in the lab has demonstrated a significant number of positive features of tabletops for supporting collaboration (Bellucci et al., 2014; Benko et al., 2009), as mentioned above, there has also been an over-expectation of their benefits for conducting authentic activities in the classroom (Dillenbourg, et al., 2011a). Similarly, as with other learning technologies, tabletop hardware, and most educational software, does not necessarily provide a direct improvement in learning (Müller-Tomfelde, et al., 2012; Smith et al., 2005). A comprehensive rethinking of the ways in which teachers can link their pedagogical intentions with the technical features of surface technologies is still needed (Higgins et al., 2011).

In particular, even though supporting awareness has been regarded as an important part of CSCW and CSCL research (Ganoe et al., 2003; Gross, 2013; Phielix et al., 2010), little has been done to enhance teachers’ awareness in the classroom (Bull et al., 2012; Gutiérrez Rojas et al., 2011; Martinez-Maldonado et al., 2013b). Previous research on tabletops in education (Do-Lenh, 2012; Kharrufa, 2010), including our first-hand experience (Martinez-Maldonado et al., 2011b; Martinez-Maldonado et al., 2012c), has emphasised the importance of providing teachers with light-weight, key information (for quick reference) about the progress of the groups working at each table, to be used to provide informed feedback. However, not much attention has been paid to enhancing their awareness at a class level. This includes supporting how teachers can effectively deploy the learning tasks intended for the class and monitor the execution of the lesson scripts. The design, deployment and monitoring of class scripts have been largely investigated in CSCL (Dillenbourg, 2002; Prieto et al., 2014), but not much work has been done in the classroom, in comparison to online learning settings (Jeong et al., 2010). It is therefore timely to investigate the functions and infrastructure that a multi shared-device classroom can provide to support teachers’ deployment and awareness of lesson scripts under authentic, in-the-wild learning conditions.

We present an approach for deploying and visualising teachers’ scripts of small-group collaborative activities in a multi-surface classroom ecology. This setting is composed of multiple interactive tabletops, public vertical displays and a teacher’s handheld dashboard (see Figure 1). The overarching aim is to develop a solution that allows teachers to monitor the deployment of their learning designs.
Specifically, we validate that our approach can provide teachers with tools for both controlling and monitoring the in-class enactment of the class script in the multi-surface classroom ecology shown in Figure 1. The paper presents results and experiences from an in-the-wild study where our approach and the associated technological infrastructure were deployed in two authentic university courses. This involved 150 students enrolled in Information Technology courses and 4 teachers. The learning activities were authentic, meaning that they were designed by the teacher and linked to the curriculum. The tasks consisted of problem solving and idea generation; both have been shown to be suitable for small-group collaboration at the tabletop (Clayphan et al., 2011; Martinez-Maldonado, et al., 2013b). We frame our approach and our in-the-wild study by drawing on design guidelines for deploying multiple surfaces in the classroom, and on principles of classroom orchestration. The contribution of this paper is two-fold:

1. Deployment. The approach, the technological infrastructure and the specification to translate teachers’ designed learning tasks into deployments in our technology-enhanced classroom. For example, a teacher may want to deploy a class script consisting of two small group problem-solving tasks with a brief whole-class reflection activity in between, and a whole-class sharing task at the end. To do this, the teacher may require students to use two different tabletop applications and adapt both to the pedagogical objective. Our approach addresses this need.

2. Awareness. Making the script (which is not necessarily designed by the in-class teacher) visible to the teacher at runtime, that is, in the classroom while the class is happening. We do this by providing visualisations that can inform in-class teachers about the progress of the enactment of the script in their class, and an alarm that warns teachers when a task takes more time than was originally planned in the designed learning script.
Figure 1: Our multi-surface classroom ecology composed of: 5 interactive tabletops, 3 public vertical displays, a teacher’s handheld dashboard and touch identification and logging capabilities.
The rest of the paper is organised as follows. Section 2 describes the principles and guidelines that our approach is grounded upon, including literature on scripting, orchestration and tabletop-based classroom ecologies. Section 3 describes the context of our in-the-wild study. Section 4 describes our approach to scaffolding the design, deployment and visualisation of the class script in our classroom ecology. Section 5 describes the design of the study and the results of the interventions. Section 6 presents our final remarks, including the limitations and potential of our approach, with pointers for future research.
2. BACKGROUND

2.1 Teacher’s Support in the Classroom
During the last decade, much research has been done to develop educational theories, principles and tools to assist learners and to understand their learning processes, through either individualised or collaborative learning (Baker et al., 2009; Jeong, et al., 2010). However, little attention has been paid to teachers, in particular to help them design or adapt their learning activities using emerging learning tools (Yacef, 2005). Teachers have a crucial role in the classroom as coordinators of all the resources involved in the collocated environment (Dillenbourg et al., 2011b). They also have an important role in the design and execution of the planned learning activities even when they are not physically present or delivering the activities in the classroom (Prieto et al., 2011); for example, this occurs in blended learning environments, where face-to-face classroom sessions are combined with computer-mediated activities, or when tutors deliver the class on behalf of the main teacher. This scenario is particularly common in tertiary education, where lectures are commonly delivered to more than a hundred students concurrently, who are then divided into smaller groups for tutorial or lab sessions (Kottasz, 2005).

In the latter case, the main teacher has to perform a series of design tasks before the class. These include, for example: designing the activities; planning the workflow of the class; choosing relevant learning materials and tools; writing the instructions for the students; and defining an ideal script for the enactment of the class. Then, during the class, the teacher attempts to adhere to the planned script, but often s/he will need to improvise according to the conditions and unexpected events that may occur in the classroom (Dillenbourg, et al., 2011b). Examples of these events include: students arriving late; students not understanding instructions; particular students being left behind; or the need to tune the duration of the activities, which may have been over- or under-estimated in the script design.
Although considerable research on interactive tabletops has targeted educational contexts (Evans, et al., 2014), little has been done to provide direct support to teachers. Research on tabletops for education has mostly focused on: understanding how students collaborate around the tabletop (Mercier et al., 2012); the impact of the interface design on collaboration (Marshall et al., 2008); and how students can be supported in the absence of a teacher (Morris, 2006; Morris et al., 2005). Some attempts have been made to aid teachers using interactive tabletops by presenting them with runtime visualisations and alarms, generated from the logs of activity of the groups working at each tabletop, to help decide which group to attend to next (Martinez-Maldonado, et al., 2013b). Other approaches that have considered the role of the teacher using tabletops include the use of visual analytics (Al-Qaraghuli et al., 2013) and teacher’s dashboards (Martinez-Maldonado, et al., 2012c) for post-hoc analysis of group collaboration; and the use of visualisations for teachers’ post-hoc self-assessment and reflection (Martinez-Maldonado et al., 2012b). Our approach goes beyond this previous work by proposing the infrastructure for translating teachers’ designs into a format that can be understood by the tabletop-based digital ecology, allowing the teacher to control and be aware of the progress of the designed script, both at runtime and for post-hoc analysis.
2.2 Scripting
Scripts in Computer-Supported Collaborative Learning (CSCL scripts) are often used as instructional scaffolds to structure the interaction between collaborators, enforcing certain activities that should facilitate effective collaborative learning (Weinberger et al., 2009). Overall, the scaffolding that scripts offer can range from the strict assignment of specific responsibilities and roles to learners, to more relaxed guidance through the set-up of associated activities and their sequencing (Dillenbourg, 2002). Dillenbourg (2002) defines two types of scripts: macro- and micro-scripts. At the micro level, scripts guide the low-level interactions of students by providing them with relatively detailed instructions about the roles, actions, tasks and activities that are intended to be performed (Kollar et al., 2006). By contrast, macro-scripts are concerned with organisational issues of collaborative learning, such as the description of groups, roles and how possible learning activities can be linked (Weinberger, et al., 2009). In this paper, we focus on this latter level of scripting.
Macro-scripts can structure and link the activities of a class or a number of classes; individual and collaborative learning phases; and face-to-face and remote settings (Weinberger, et al., 2009). We focus on scripts that include the activities of one session in the classroom, which we will refer to in this paper as the lesson script. Designing and deploying lesson scripts has been found to be important for targeting higher levels of student performance and teacher accountability (Gunter et al., 2001). An important feature of macro-scripts for face-to-face collaborative learning is that the sequencing of activities is often made explicit by the teacher, so that learners receive clear instruction concerning when to engage in an activity (Kollar, et al., 2006). Designing this kind of high-level lesson sequence is a common task that teachers already do by themselves for a range of different subjects, course content, and levels of education (Gunter et al., 2001). For example, a classic tertiary education scenario may start with the teacher designing a script that integrates the resources referenced in a lecture, to be discussed in depth in the face-to-face lessons, first within small groups, and then at the class level. Additional literature may be discussed in these sessions and specific material may need to be prepared beforehand (e.g. a problem or case posed to students that requires study of the lecture material in order to be solved). At the end of the face-to-face lesson, the teacher may finalise with a reflection activity and save the students’ outcomes in an online system for further consultation.

In this paper we address two dimensions of scripting included in the class macro-script that can be illustrated by the example described above: i) the sequencing of learning activities at a class session level (the macro-script of a class, or lesson script), and ii) the particular activities, resources and technological applications and hardware required for the sub-tasks (the resources and tools at each stage).

From a CSCW perspective, the notion of a lesson script can be compared to that of a workflow, and scripting systems to coordination mechanisms (Cabitza et al., 2013). From this perspective, there is also a separation between process description and process execution, which can be compared to the teacher’s activities of class design and enactment of that design, respectively. The notion of workflow is very broad and was originally conceived for contexts with business processes oriented towards reducing costs and increasing competitiveness (Cabitza, et al., 2013). Whilst a lesson script is a type of workflow, and CSCL scripting tools can be considered a type of coordination mechanism, the requirements for enacting scripts in the classroom are more specific: we refer to situations where an instructor has the responsibility to help students achieve determined learning goals, which are not necessarily linked to reducing learning time or resource usage, but to actual learning or collaboration. This can be illustrated with the concept of flexibility in a classroom scenario.
A lesson script, to be enacted in the classroom, often defines a relaxed workflow of ordered activities (Dillenbourg, 2002), giving teachers the flexibility to adapt the initiation, termination or order of some tasks, and allowing particular students or groups to advance at a different pace or skip some tasks. By contrast, workflows in manufacturing or other industrial areas may be complex and contain various alternative workflows that can be modelled as state networks (Cabitza, et al., 2013). Our approach takes a CSCL perspective by providing teachers with the means to enact a designed lesson plan in the form of a main workflow, allowing them to decide, at runtime, the initiation, termination or pace of the tasks defined at design time.
2.3 Classroom Orchestration
Another principle that our approach is grounded upon is the metaphor of orchestration (Dillenbourg, et al., 2011b). Orchestration describes the role of the teacher in terms of the design of learning activities and the runtime management of classroom resources, learning processes and teaching actions (Dillenbourg, et al., 2010). These teaching actions may include monitoring, provision of feedback, improvisation and assessment (Prieto, et al., 2011). Orchestration can be described as a loop of awareness and regulation: the teacher monitors the state of the classroom, compares it to some intended scenario, and performs actions to reach that state (Dillenbourg, et al., 2011b).

Prieto et al. (2011) performed a comprehensive analysis of the literature on orchestration, identifying that a key component of orchestration is designing and planning the learning activities that will be monitored and coordinated in the classroom. Our approach takes into account the information that can be captured by the tabletops to inform teachers with key indicators that they can use to re-configure the class script on-the-fly or to adapt the activity for future sessions. Another aspect identified by Prieto et al. (2011) is the management and regulation of the resources available in the classroom (technological, physical and human) to achieve the objectives of the learning activity. In our approach we address this by providing the means for the teacher to control the digital technology and activities deployed in the classroom through a teacher’s dashboard. A further aspect of the definition of orchestration is that both the learning scenario and the technology have to allow change and adaptation of the script, to accommodate unplanned events during the enactment of the learning activities. This can be done through teaching mechanisms or through the technology itself.
One of the main aspects of orchestration that our approach contributes to is providing teachers with mechanisms to improve their awareness of the state of the classroom and the activity workflows. The orchestration approach defines two key required processes: state awareness and workflow manipulation (Dillenbourg, et al., 2011b). Accordingly, the technology itself does not need to perform complex analysis or automated actions; instead, it should provide filtered key information about the classroom state, leaving the diagnosis of such data to the teacher.

Dillenbourg & Jermann (2010) defined a number of factors that provide a teacher-centred and integrated view of technologies for classroom orchestration. Next, we describe 10 (out of 15) of these key factors that directly apply to the learning context targeted in this paper:

1. Teacher-centred. Teachers should design the learning scenario and the activities, and lead the collective management of resources in the classroom.
2. Flexibility. Teachers should also be able to change the learning scenario or the script if needed, and the technology should provide the flexibility to allow this.
3. Control. The technology should provide teachers with the means to keep control over the class resources and students.
4. Integration. The technological resources should be accessible and consistent in all individual, small group or class level activities. The products of the students’ activity should also be accessible after class.
5. Linearity. The method is a simple sequence of activities that almost all students will perform at almost the same time. In this way it is easy to explain to the students.
6. Relevance. The activities should be designed according to their impact as specified in the regular curriculum.
7. Physicality. In contrast with other models that mostly address networked learning spaces, classroom orchestration refers to the concrete physical space of the classroom.
8. Awareness. The technology should help teachers to be aware of the state of the students’ activities and any trend or pattern of behaviour that may be relevant.
9. Minimalism. The functions offered by the technology should be simple but effective, providing services that are not already provided by other tools in the classroom and that empower classroom activities compared with not having such technology.
10. Sustainability. The approach should be easy for the teacher to repeat, adopt and implement in the classroom.
2.4 Multiple Surface Classroom Ecologies
Regarding the introduction of surface devices into the classroom, there have been considerable advances in research and practice with interactive vertical whiteboards, which have commonly been used to lead whole-class activities (by contrast with small group activities) in many modern classrooms (Higgins, et al., 2007).
However, it has been reported that one of the main issues teachers face with the introduction of these technologies is the ICT support needed to translate their class designs into actual deployments (Smith, et al., 2005). In the case of interactive tabletops, Kharrufa et al. (2013a) also reported the need to provide teachers with tools to manage the technology by themselves. Additional challenges may be triggered by introducing multiple interactive surfaces in the classroom without a careful solution design. For example, this may affect the teacher’s awareness and produce a larger orchestration load, because teachers have to manage not only the students and the activities from an epistemic perspective, but also the technology and the learning spaces it creates (Kharrufa et al., 2013b). This means that the technology needs to offer orchestration functions; we can say that the tools need to be orchestrable by the teacher. This refers to educational technology that can be adapted or configured by teachers, before or during its usage, providing them with more flexibility to tune the tools according to their pedagogical intentions (Tchounikine, 2013).

The most significant work on orchestration in a multi-tabletop classroom is SynergyNet (AlAgha et al., 2010; Higgins, et al., 2011). This is a laboratory setting used for experiments with elementary school students in extracurricular problem solving activities. It has four multi-touch tabletops, teacher tools and an interactive whiteboard that allows the movement of content from any table to the vertical display. In terms of orchestration, the authors focused on exploring different ways teachers can access classroom controls. They explored how the teacher was able to visualise, interact with or control each group’s tabletop from a teacher console fixed at one of the corners of the room; through a tablet; or using a series of predefined gestures in the air. Notably, those studies were conducted in laboratory conditions, with the students and their teachers working on tasks outside their regular curriculum, and the teacher was not involved in the design of the class activity.

Do-Lenh’s work (2012) evaluated the usability of orchestration tools used in a similar learning environment with four non-touch tangible tabletops that kept track of fiducial markers attached to physical objects. The system offered ways to help a teacher manage the classroom and get information about the progress of the task at each table. This was done through a wall display that showed progress visualisations and gave the teacher access to controls to compare two groups’ tables. The author also explored the use of paper cards that a teacher could use to control individual tables.
A third multi-tabletop environment is Tables-in-the-wild (Kharrufa, et al., 2013a). This consisted of up to 7 small tables deployed in a primary school and used for 6 weeks, with the purpose of observing students’ behaviours and teachers’ interaction with the technology, in order to identify the main needs in terms of classroom orchestration. The results of the study were a series of recommendations for designing multi-tabletop settings deployed in the classroom, highlighting aspects such as the need for tools to help teachers enhance their awareness of, and control over, the technology.

MTClassroom (Martinez-Maldonado, et al., 2012b) is a multi-tabletop ecology designed to help teachers orchestrate the classroom and enhance their awareness of each group within the class. This environment was the first to provide in-class touch differentiation, so it can log each individual student’s activity (Martinez-Maldonado et al., 2011a). It was also the first to link the learning activities to the curriculum and deploy the technology in an authentic classroom (Martinez-Maldonado, et al., 2013b). This environment also features a teacher’s dashboard, called MTDashboard, which can show the teacher live indicators of collaboration and alarms for detected misconceptions. The MTClassroom also allowed the connection of services providing access to the data captured by each tabletop, giving teachers or researchers a means for automatically detecting students’ strategies using data mining techniques (Martinez-Maldonado et al., 2013d). In this paper, we ran our studies using an updated version of the MTClassroom.

The requirement of interconnection and management of the digital classroom ecology can also be extended to the use of other surface devices such as handheld tablets or whiteboards. One example of this is EvoRoom (Sharples, 2013), an immersive classroom using multiple interactive whiteboards. This allows the deployment of learning activities for teaching topics on biodiversity and evolution, using simulation and posing problems that students are asked to solve. Another example is given by Twiner et al. (2010), who presented work on classroom orchestration by providing tools for the teacher to control the class using an interactive whiteboard. Another such environment is UniPad (Kreitmayer, et al., 2013), a multi-tablet environment used to run classroom-based simulations. It comprised four tablets that allowed students to interact face-to-face, a smartphone that the teacher could use to control the activity on the tablets, and a vertical display for whole-class discussion. The use of these kinds of smaller-scale devices, compared with tabletops and whiteboards, is important because it demonstrates that alternative technologies can be used for small-group classroom activities in the near future.
3. CONTEXT OF THE STUDY IN-THE-WILD

As a foundation to define our approach, we first introduce the context of our study. The study was conducted as part of two courses taught at the School of Information Technologies at the University of Sydney in Semester 2, 2013. The courses were “Human-Computer Interaction” (for third-year undergraduates) and “Pervasive Computing” (for postgraduates), with 108 and 42 enrolled students respectively. Each course had weekly lectures that all students were expected to attend. Students were divided into 1-hour tutorials of 20-25 people, with one session per week, where they worked on either small group activities or their design projects, with support from the main teacher or class teachers. The two courses had 6 and 2 tutorials respectively. Tutorial activities were designed to support students’ learning and to provide opportunities for applying concepts covered in the lectures. These activities usually required students to work in groups and typically included a reflection task involving the entire class at the end of the tutorial.

From Week 4, all tutorial classes were held in the MTClassroom (Martinez-Maldonado, et al., 2012b), a classroom comprising 5 interactive tabletops, each designed for face-to-face work by groups of up to 5 students. This environment is available at the University of Sydney and has previously been used to conduct small-group activities for a number of regular classes and casual workshops, as requested by teachers. Also from Week 4, students in both courses, within each tutorial, were organised into smaller groups of 3, 4 or 5, working together for the rest of the semester. Each group was assigned a specific tabletop that they remained at for each tutorial session, and each student was given a seat number that stayed the same for all the learning activities and tutorials in the classroom. During the semester, three different tabletop applications were used: concept mapping, idea generation and meeting support. The study presented in this paper concerns the concept mapping and idea generation activities organised during Weeks 5, 6, 7, 9 and 13, because they are representative of applications that are orchestrable by the teachers. More detail about these applications is presented in sub-sections 3.2 and 3.3.

The same weekly lesson script ran in each of the 6 tutorials of the HCI course, and a different lesson script was designed for the 2 tutorials of the Pervasive course. The activities in both courses were designed by the same main teacher. A total of four teachers were involved in the enactment of the tutorials for these courses. For the tutorials of the HCI course, the main teacher was in charge of conducting one class, and 2 other class teachers (tutors) had 2 and 3 classes respectively. All teachers had previous teaching experience at the host university and had a deep understanding of the topics covered in the course they had to teach.
For the Pervasive course, the same main teacher conducted one tutorial and another class teacher conducted the second. Figure 2 lists the tutorial classes that our study focuses on, including the learning activities and the learning goals in the selected weeks.
Figure 2: Scripted lessons for tutorials in the MTClassroom during the semester for the two subjects. For each, the following are described: the learning activities, the topics and the number of tutorials.
3.1 Multi-tabletop Classroom
The version of the MTClassroom used in our study consists of five 46” interactive tabletops, each with a keyboard per user (these can be removed for activities that do not require them). Each tabletop is enriched with CollAid (Clayphan et al., 2013b; Martinez-Maldonado, et al., 2011a), a system that can differentiate who is touching each part of the interactive surface.
Figure 3: MTClassroom ecology of the digital learning technologies.

The five tabletops are interconnected and synchronised, so that all the activity logs are recorded to the same repository and all the tabletops can be controlled by a central orchestration service. The classroom also features three public vertical displays connected to the same orchestrator service.
In this way, these displays can be used as regular projectors (for example, to show the teacher’s slides with instructions) or to run applications that show the content of specific tables, on demand, for whole-class discussion. Figure 3 illustrates the components of the MTClassroom while a teacher is enacting one of the tutorials. The teacher can orchestrate and manage the technology through a handheld device that runs the MTDashboard (Martinez-Maldonado, et al., 2013b). This tool allows control of the technology: for example, a teacher may block all the tabletops to focus students’ attention on instructions being given; start activities; control the classroom script; or send students’ artefacts from the tables to the vertical displays. Additionally, the dashboard interface can show live visual indicators of the progress of the task, or of the levels of participation, at each tabletop. Overall, the MTClassroom is the hardware and software infrastructure that provides connectedness to the multiple tabletops in the classroom, the sensing technology, the data structures and the teacher’s dashboard. Different learning applications can be loaded onto the tabletops; for example, ScriptStorm (used for Idea Generation) and CMate (used for Concept Mapping) are the two learning applications used in the study described in this paper.
3.2 Activity Type 1: Brainstorming - Idea Generation and Categorisation
One of the activities conducted in the tutorials was based on the well-established principles of the brainstorming technique (Osborn, 1953). This technique has been widely used to assist the spontaneous generation of ideas from all members of a group, commonly with the purpose of coming up with creative solutions to a posed problem. Osborn (1953) defined the technique based on four explicit principles: focus on quantity, withhold criticism, welcome unusual ideas, and combine and improve on existing ideas.
Figure 4: Idea Generation and Creative Design Activity. Left: a group using ScriptStorm in the classroom. Right: partial screenshot of the interface while a group categorises ideas.
The technique was used in the tutorials to support collaborative ideation and interaction design, building on all group members’ contributions and promoting a balance between quantity and quality, to creatively explore ideas for the topics: “Types of user tasks (for the group project)” (HCI – Week 5); “Tasks to gain formative feedback on lo-fi prototypes” (HCI – Week 7); “Group project goals” (Pervasive – Week 5); and “Tasks for users to do in your evaluation (of your prototypes)” (Pervasive – Week 6) (refer to Figure 2 for a complete listing).

To support the students’ generation and categorisation of ideas, we used an application called ScriptStorm (Clayphan et al., 2013a). This is a brainstorming application that can be configured to work with multiple keyboards connected, assigning one input device to each student, since physical keyboards have proven superior to on-screen keyboards for typing large amounts of text (Benko, et al., 2009; Clayphan, et al., 2011). The tool provides two basic types of consecutive tasks: idea generation, where students create as many ideas as possible in a limited time; and idea categorisation, where students create categories to group ideas that are similar or belong to the same domain. We will demonstrate that it can additionally support a number of more elaborate tasks beyond the sole generation of ideas (see Figure 4, left). The original design of this tool was not for the classroom; however, it was easy to integrate this application to operate in the classroom. More details about the linkage of the application to our classroom ecology are described in the next section.

The design of ScriptStorm is heavily based on macro-scripting, and the application easily allows the definition of sub-tasks that can be added, reordered or omitted. The provision of this form of scaffolding has a low learnability overhead for students (Clayphan, et al., 2013a), which is important for classroom activities that commonly have strict and defined time limitations. The tool allows students to perform the following tasks:

1. Negotiation: this is an optional task that is commonly added before the Idea Generation or Idea Categorisation tasks. It provides students with an interface to explicitly agree on configurations of the interface, such as activating the colour feedback indicating who did what (see Figure 4, right), or deactivating touch input during the idea generation phase to force the group to focus on the task rather than on rearranging ideas.

2. Idea Generation: this is a core activity of the brainstorming phase, where students use the keyboards to add ideas in parallel, ideally sparking off each other’s ideas to generate more creative ideas.

3. Idea Categorisation: this is the second core activity, which allows students to group ideas according to criteria they agree on. Students can create their own categories, delete them and associate ideas with them (see Figure 4, right).
4. Instructions: this optional task can be configured by a teacher to show the instructions for the next activity on the screen while s/he gives additional information to the class, either verbally or through text handouts.

5. Reflection: this optional task can be designed by the teacher, who can define expected ideas that may be relevant for the class topic. The task asks students to assess whether their generated ideas match the teacher’s expert knowledge. This aims to encourage students to think about the ideas they generated. The task also provides feedback on the idea generation activity, indicating a basic set of ideas the teacher intended students to identify. It can only be completed after the idea generation or idea categorisation task.

6. Questionnaire: this optional task shows individual questionnaire boxes in front of each student. The teacher can design a questionnaire before the class, to be displayed during class (commonly as the last step of the learning activity). These are usually true/false questions or Likert-style responses.

7. Sharing: this optional task establishes an active link between the teacher’s dashboard, the ScriptStorm instances at the tabletops and the vertical displays, so the teacher can send the brainstorm output from a specific tabletop to the public displays on demand.
3.3 Activity Type 2: Concept Mapping - Problem Solving
The second activity that we focus on in this study is concept mapping. This is an educational technique created by Joseph Novak (1990) that provides an excellent means for learners to externalise their understanding of almost any domain or posed problem. For small group work, it particularly offers students the opportunity to discuss ideas, present knowledge from multiple angles, identify misconceptions, reach agreement, or agree to disagree (Chaka, 2010; Gao et al., 2007; Novak, 1995; Stahl, 2006). Concept maps are directed graphs in which the nodes represent the concepts of the problem. These are defined as perceived regularities in events or objects of a domain (Novak et al., 2008); for example: usability technique, cognitive-walkthrough, or users. Propositions are indicated by a labelled edge linking two concepts. The direction of these links indicates the reading direction, generating a meaningful statement from the two concepts and the linking phrase. For example, “cognitive-walkthrough does not require users” is a proposition that links the concepts ‘cognitive-walkthrough’ and ‘users’ through the linking phrase ‘does not require’.
Concept mapping was used in the courses to build solutions for the following topics: “Characteristics of GOMS and Think-Aloud” (HCI – Week 7); “Types of prototypes and usability methods” (HCI – Week 13); “Critique on the think-aloud method” (Pervasive – Week 9); and “Revision of the main concept of pervasive computing” (Pervasive – Week 13) (refer to Figure 2 for a complete listing).

To support students’ concept mapping at the tabletops, we used CMate (Martinez-Maldonado et al., 2010). This application does not require connected keyboards (Figure 5, left). Its design allows a teacher to pre-define a list of concepts and linking words that students can use to create their map. This reduces the need for excessive typing and helps students consider the intended issues (Martinez-Maldonado et al., 2012a). Students can use an on-screen keyboard to promptly edit the concepts and linking words.

CMate provides a number of support options for teachers. It allows the teacher to define a teacher’s map as a representation of expert knowledge that is intended to be included in the students’ solutions. This representation can be matched automatically at runtime to show teachers the extent to which each group has matched the teacher’s perspective, a useful feature when the teacher needs to consider which group to attend to next. The teacher can also define a list of misconceptions to be used by the system to trigger alarms, so the teacher can be aware of any groups that may have potential misunderstandings. Finally, if the teacher considers that the students may need some initial inspiration, they can define a scaffolding map, so the students do not need to build a whole concept map from scratch. These maps (teacher map, misconceptions, and scaffold map) can be created using a freely available third-party tool called CmapTools, using its native CXL, an XML-like format (see the sketch after the task list below).
Figure 5: Problem Solving through Concept Mapping. Left: a group using CMate in the classroom. Right: a closer view of a concept map under construction.
CMate is significantly less scripted than ScriptStorm. The tool provides four basic tasks:

1. Brainstorming concepts: this is an optional task that is commonly added at the beginning of the concept mapping activity. It provides students only with the option of adding the main concepts for the concept map and arranging them according to the layout that best describes their intended solution (e.g. hierarchical or concentric layouts).

2. Linking (or concept mapping): this is the main concept mapping task, where students can add concepts, create links, re-orient the map and, overall, build their solution in the form of a concept map. Links and concepts are coloured according to the student who created the item (see Figure 5, right).

3. Reflection: this optional task can be added by the teacher so that the system automatically highlights possible surface-level mistakes in students’ maps (e.g. the same concept added more than once, or automatically detected misconceptions). This task was not added to any of the scripts in our study.

4. Sharing: similarly to ScriptStorm, this task allows the teacher to conduct whole-class discussion about specific groups’ concept maps, sent to the wall displays on demand.
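To make the CXL representation concrete, the snippet below encodes the example proposition “cognitive-walkthrough does not require users” as a minimal teacher’s map. This is a hedged sketch only: it follows the general CXL convention of separate concept, linking-phrase and connection lists, but the exact namespace and metadata that CmapTools emits may differ, and the identifiers used here are illustrative assumptions rather than output from our system.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal CXL-style teacher's map encoding one proposition:
     "cognitive-walkthrough does not require users".
     Ids and the omitted metadata sections are illustrative. -->
<cmap xmlns="http://cmap.ihmc.us/xml/cmap/">
  <map>
    <concept-list>
      <concept id="c1" label="cognitive-walkthrough"/>
      <concept id="c2" label="users"/>
    </concept-list>
    <linking-phrase-list>
      <linking-phrase id="lp1" label="does not require"/>
    </linking-phrase-list>
    <!-- A proposition is two directed connections routed through the linking phrase -->
    <connection-list>
      <connection id="cn1" from-id="c1" to-id="lp1"/>
      <connection id="cn2" from-id="lp1" to-id="c2"/>
    </connection-list>
  </map>
</cmap>
```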
4. CONCEPTUAL AND TECHNICAL APPROACH

In this section we describe our approach for designing, deploying and visualising teachers’ scripts of small-group idea generation and problem solving activities in our classroom ecology. A requirement for the digital ecology to be able to deploy the teacher’s designed activities is that the tools should be orchestrable (i.e. configurable/adaptable) to some extent. This goes beyond orchestration tools (which are non-adaptable). As defined by Tchounikine (2013), an orchestration technology can support teachers in orchestrating (e.g. managing and monitoring) the learning activities, or perform automated actions to reduce the teacher’s orchestration load. By contrast, an orchestrable technology allows teachers to configure or adapt the use of the technology for different purposes, before the class and/or while the class is being conducted. This can help teachers target the technology at a range of pedagogical objectives, rather than restricting the teacher to a specific technology usage (Tchounikine, 2013).
4.1 Design Guidelines
In general, the search for improved learning and productivity in the classroom through the introduction of new technologies has delivered very modest benefits to teaching practice, effectiveness of instruction or the quality of students’ learning (Cuban, 1986; Fabos, 2001; Muir-Herzig, 2004). There has been a recurring over-expectation of the effects of introducing new technologies, such as radio, films, television and different kinds of computing tools, into educational environments (Cuban et al., 2001). This is a problem, since effort has been devoted to designing and deploying educational technologies that have often failed to provide the intended learning benefit.
In order to define an effective solution and allocate infrastructure to help teachers design, deploy and visualise their classroom designs at runtime, we scaffolded our approach on sound (HCI/CSCW) design guidelines (Kharrufa, et al., 2013b; Scott, et al., 2003) and the most recent surveys on the use of interactive tabletops (Bellucci, et al., 2014), including their particular usage in CSCL environments (Evans, et al., 2014). Next, we describe the principles that motivate the definition of our approach.

1. Using tabletops for small-group activity: interconnected tabletops. Tabletops may support collaborative learning more effectively for certain activities than other shared surface devices, such as vertical displays or interconnected computers (Kharrufa et al., 2010b; Rogers et al., 2004). Tabletops enhance awareness of others’ actions, a situation that may not occur if people interact through multiple remote input devices like pointers or mice (Hornecker et al., 2008; Verma et al., 2013). They are commonly suitable for small groups of more than two people, whereas the use of shared tablet devices is generally restricted to pairs (Kreitmayer, et al., 2013). They may offer more equal opportunities for contribution and quick transitions of who has control; in other devices, such as interactive whiteboards, the person who interacts with the device typically holds control, or may occlude the display, making it hard for others to interact with the device at the same time (Rogers, et al., 2004; Scott, et al., 2003).

2. Dividing the process into stages: lesson scripts. Kharrufa et al. (2013b) recommended dividing classroom activities on surface devices into stages, as this allows the provision of instructions and scaffolding for reflection at stage boundaries, and helps structure large problems into sets of smaller ones. Whilst some forms of scripting have been implemented in single-tabletop environments (Kharrufa, et al., 2010a; Martinez-Maldonado et al., 2013c; Shaer et al., 2011), they had not yet been explored in multi-tabletop environments (Bellucci, et al., 2014).

3. Provide a private teacher space: teacher’s dashboard. Kharrufa et al. (2013b) proposed that the factors with the largest impact on the outcome of a classroom session are the teacher’s awareness of, and control over, the learning activities. The provision of a private space should include not only information about the activity happening at each tabletop (AlAgha, et al., 2010) but also simple key indicators of the progress of the task at a class level. It has been suggested that the script of the class should be explicitly defined and displayed in the teacher’s private space, helping teachers assess the overall progress of the class (Kharrufa, et al., 2013b). Our work is the first study in-the-wild to address this.
4. Support sharing of resources across spaces: a digital classroom ecology. As mentioned above, interactive whiteboards are suitable for whole-class learning; they can serve as a source of joint attention that the teacher can use to deliver instructions or clarifications (Kharrufa, et al., 2013b). Tabletops are most compelling for small group work, whilst handheld devices can serve as private interfaces as required (Evans, et al., 2014). This ecology of devices is more or less orchestrable depending on the degree of interconnection and the capabilities for resource sharing between devices, but also on the degree to which the teacher can modify, adapt or design their learning activities for deployment on such devices (Tchounikine, 2013). Our approach aims to formalise the interactions among the ecology of devices through the definition of a lesson script at design time. This allows us to offer the teacher the option of defining small group tasks at the tabletops (linked with specific materials for the classes) and of using the vertical displays for guiding tasks at a class level.

5. The need for an infrastructure for device ecologies. Bellucci et al. (2014) presented the most recent review of the literature on interactive tabletops, highlighting the need to develop infrastructures and to consider interaction design for creating device ecologies. We argue that the definition of scripts may help a facilitator or instructor orchestrate the technology to meet user requirements. This may be extended to scenarios where orchestration is not necessarily centralised in one user but distributed among users (Sharples, 2013). Our approach in this paper focuses precisely on presenting the infrastructure (Contribution 1) and then validating the impact of providing teachers with additional services, such as the opportunity to effectively control and monitor the progress of the classroom script at runtime (Contribution 2).
4.2 Approach

In our target environment we have to deal with different pieces of technology. For example, the tabletop applications provide different levels of orchestrability.
As described in the previous section, CMate and ScriptStorm provide different affordances and allow their possible tasks to be arranged in different orders. The MTClassroom can itself be an orchestrable ecology, where the teacher’s dashboard plays an important role in supporting the teacher’s control and awareness, but also where lesson scripts can be defined to tune the ecology to serve different pedagogical intentions. In previous studies with the MTClassroom, the teacher did not have any flexibility to define or configure the digital environment for specific learning activities (Martinez-Maldonado, et al., 2013b; Martinez-Maldonado, et al., 2013d).
As a result, the learning activities were fixed and the teacher depended completely on ICT support to configure the application to be used. There was no protocol to follow, so other tabletop applications could not easily be adapted to be orchestrated by, and used in, the MTClassroom ecology. This has been the typical scenario for all other attempts at using surface devices in the classroom where the tools were not orchestrable (Do-Lenh, 2012; Higgins, et al., 2011; Kharrufa, et al., 2013a; Kreitmayer, et al., 2013).

To achieve our goal, we designed mechanisms to make the MTClassroom ecology an orchestrable tool, so that it provides the capabilities that allow teachers to tune and adapt the affordances of the technology for their intended learning activities. We do this by providing a means to explicitly define lesson scripts that can be deployed by the digital classroom ecology. Unlike mainstream Workflow Management Systems (Abbott et al., 1994), scripts for classroom activities are commonly restricted by the limitations imposed by classroom conditions, such as class duration, the number of students and the availability of technology (Graham et al., 2006). As a consequence, teachers are not commonly able to make drastic changes to the class script in real time.

The motivation for our work builds on our previous work with multi-tabletop classrooms (Martinez-Maldonado, et al., 2012b), which focused on supporting a teacher in accomplishing a post-activity assessment of the enactment of their intended learning design. In that previous work the teacher wanted to know about each group’s progress on the task, and how the script played out in the classroom. The teacher wanted this so that she could see, at a glance, how the class as a whole was progressing, along with indicators of the progress of each group in creating group solutions and an overall level of activity. We acknowledge that there was a parallel influence, in that the teacher was learning about the affordances of the technology whilst transforming them into a form that served her needs.

Our approach is grounded on the principle that the elements involved in the implementation and use of an orchestrable tool include three main teacher activities: 1) defining the pedagogical objectives; 2) designing the lesson script of the learning activities; and 3) conducting the script in the learning environment (and possibly modifying or adapting the script in real time) (Tchounikine, 2008, 2013). Figure 6 illustrates our approach in terms of these three processes. The first activity consists of the teacher defining the learning and pedagogical objectives of a lesson or group of lessons (Figure 6-1). This includes making decisions about what kind of activity or activities need to be conducted, how they will be conducted in the target setting, and the resources available or required.
The mental processes associated with this teacher activity, and the design tools to aid teachers in this process, are beyond the scope of our approach. However, the teacher needs to translate these pedagogical intentions into a form that can be understood by the orchestrable digital ecology. The MTClassroom Orchestration service is the element of the infrastructure in charge of the execution of the script, linking all the software elements running on the tabletops, the walls and the teacher’s dashboard.
Figure 6: Conceptual approach to translate a teacher’s design into a deployment and visualisation of the lesson script.

Finally, the MTClassroom database contains information about the students in each tutorial and the formation of the groups, and is also the repository of the log events captured by all the devices in the digital classroom ecology. The following sub-sections describe the technical details of our approach to help teachers design, deploy and visualise, at runtime, their lesson scripts for our multi-surface classroom ecology.
4.3 Scripting the Lesson
In order to design the lesson script to be run in the classroom, the teacher can explicitly define the structure of the learning activities and tasks. This is the second teacher activity illustrated in Figure 6-2 (scripting the classroom activity). Our approach provides an XML-based specification to represent the teacher’s design. The output is a lesson script (an XML file) that can be run in the MTClassroom.
definition of: the learning activities; the applications that will be used to support students in these activities; the sub-tasks within these activities; the scheduling; associated resources needed for the activities; and what will be shown on the teacher’s dashboard. Figure 6 depicts the lesson script, which caters for the integration of additional resources depending on the requirements of each application. In the same figure, the central element that links the “Scripting” and “Conducting” teacher activities is the MTClassroom Orchestration service. This element of our infrastructure can read the lesson script and then deploy the activities in the classroom. Our approach allows the definition of a main workflow of tasks that can be deployed consecutively in the classroom environment under the control of the class teacher, with the flexibility for the teacher to alter any of the initially planned timings. This flexibility is provided because the actual events in a classroom are not predictable (Dillenbourg, et al., 2010). For example, the teacher may discover that several students have a misconception that is important to address; the teacher would then alter their initial orchestration plan to deal with that, while bearing in mind the need to keep the class in synchronisation with all the other tutorial groups. In this sense, our approach needs to deal with a very specific context: university tutorial classes with very tight time limits, and teachers needing to ensure that the learning activities cover the curriculum, making the best use of the limited time available with the tutor and the learning groups in the tutorial class. Table 1 shows a simplified excerpt of an example lesson script. This is an XML file that can be generated manually or using a specific design tool. As our study is focused on investigating how class scripts can be deployed and visualised in the classroom, the scripts were manually translated into an XML format based on the main teacher’s requirements. We illustrate the specification of our script with a scenario. In this case, the teacher aims to conduct a 1-hour tutorial with two main learning activities: first, students will be asked to build a concept map about a case, and then they will brainstorm about possible goals for their semester project. The teacher wants to use the MTDashboard visualisations to get information about the participation of students at each table and also about the runtime script (see Table 1, Line 4, node “script”, property “show_in_orchestrator”, and Line 5, property “visualisation”; more details about the visualisations are provided in the upcoming sub-sections). The property “tutorial” (Line 7) indicates to the MTClassroom Orchestration service the specific tutorial session that will be running. In this way this service can obtain from
the MTClassroom database, information about the students who should be at each seat (e.g. group membership, full names, student identifiers). For the first activity the teacher will use the CMate concept mapping application; and for the second activity, ScriptStorm (see Lines 8 and 30). The concept mapping activity is sub-divided into three tasks: Instructions (Line 9), Linking (Line 14) and Sharing with the whole class (Line 22). For the idea generation activity the teacher chose to have four tasks: Instructions, Idea Generation, Idea Categorisation and Reflection (not shown in Table 1 for simplification).
[Table 1 listing not reproduced here; a surviving fragment reads: planned_duration="15" show_in_orchestrator="1" orchestrator_text="Concept Mapping".]
Table 1: Example lesson script XML file depicting the specification of our approach. The example includes the use of CMate (with three tasks) and ScriptStorm (details not included for simplicity).
For each task the teacher can define the planned duration (planned_duration, in minutes), whether the task will be shown in the teacher’s dashboard (show_in_orchestrator) and, if so, the name that will be displayed (orchestrator_text). The “description” parameter of the task node creates a link between the
24
tasks of the script (the task nodes) and functions of the learning application (CMate in this case). In other words, this is what allows other orchestrable tabletop tools to be plugged in. The property “uses_wall” associated with a task triggers an application that will be ready to receive commands from the teacher’s dashboard to show a specific tabletop’s content on the public displays (see Table 1, Line 26). Finally, the teacher can add any resources that may be associated with specific tasks. For example, for concept mapping it is possible to include: the sources of expert knowledge; misconceptions; and an initial scaffolding map (mapped in Lines 18, 19 and 20 for the case of the CMate application).
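To make the specification concrete, the sketch below is a hedged reconstruction of the kind of lesson script described above, not a verbatim copy of Table 1. The attribute names (tutorial, show_in_orchestrator, visualisation, planned_duration, orchestrator_text, description, uses_wall) are those quoted in the text, but the exact element nesting, the tutorial identifier and the resource names are our illustrative assumptions.

    <script tutorial="hci_tue_9am" show_in_orchestrator="1" visualisation="participation_radar">
        <activity application="CMate">
            <task description="instructions" planned_duration="5"
                  show_in_orchestrator="1" orchestrator_text="Instructions" />
            <task description="linking" planned_duration="15"
                  show_in_orchestrator="1" orchestrator_text="Concept Mapping">
                <resource type="expert_knowledge" file="expert_map.xml" />
                <resource type="misconceptions" file="misconceptions.xml" />
                <resource type="scaffolding_map" file="initial_map.xml" />
            </task>
            <task description="sharing" planned_duration="7" uses_wall="1"
                  show_in_orchestrator="1" orchestrator_text="Sharing" />
        </activity>
        <activity application="ScriptStorm">
            <!-- Instructions, Idea Generation, Idea Categorisation and
                 Reflection tasks, structured as above (details omitted). -->
        </activity>
    </script>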
4.4 Conducting the Lesson Script: Deployment
Once in the classroom, the teacher conducts the learning activities by communicating instructions to the students so that they can complete the intended tasks. In this process, the teacher often needs to tune or adapt parts of the original lesson script according to the way the class is functioning at runtime (Tchounikine, 2013). As described in the previous section, the MTClassroom Orchestration service uses the script specification to allow the teacher to deploy the activities to the target technologies.
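As an illustration of what reading the lesson script might involve, the following minimal Python sketch parses a script of the shape sketched in Section 4.3 into an ordered task list. The element and attribute names follow our illustrative reconstruction above, not the actual MTClassroom code.

    from xml.etree import ElementTree

    def load_script(path):
        """Parse a lesson script into (tutorial id, ordered task list)."""
        root = ElementTree.parse(path).getroot()
        tasks = []
        for activity in root.findall("activity"):
            app = activity.get("application")
            for task in activity.findall("task"):
                tasks.append({
                    "application": app,                      # tabletop app that runs this task
                    "description": task.get("description"),  # links the task to an app function
                    "planned_minutes": float(task.get("planned_duration", "0")),
                    "show": task.get("show_in_orchestrator") == "1",
                    "label": task.get("orchestrator_text", ""),
                    "uses_wall": task.get("uses_wall") == "1",
                })
        return root.get("tutorial"), tasks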
[Figure 7 image: the Lesson Script is read by the MTClassroom Orchestration service, which controls Dispatchers (running applications and activating tasks) on the interactive tabletops and vertical displays, drives the MTDashboard (UI) used for visualising and managing, and exchanges session information, logged activity and group indicators with the MTClassroom Database.]
Figure 7: Deployed script in the MTClassroom digital ecology.
Figure 7 illustrates the technological infrastructure used to deploy the lesson script, and how the different keywords in the lesson script provide the information needed to deploy content or trigger actions in the target technology. First, each interactive tabletop has a controlling process, which we call a dispatcher, controlled by the
Orchestration service to: select and launch applications; restart the system; or automatically shift from one application to another. If a lesson script contains learning activities using two different tabletop applications, the dispatchers can automatically stop the current application and start the next one on all the tabletops simultaneously, for the transition between a task belonging to one application and a task belonging to a second application, so the teacher does not have to deal with stopping and re-launching applications at runtime. For example, in Table 1, when the sequence of the enacted script finishes the last task of the first activity (Line 22, task description="sharing") and finds another activity node, the dispatchers will automatically handle the transition at the tabletops (e.g. they will close CMate and open ScriptStorm for all the tables simultaneously). A similar dispatcher controls the applications to be launched on the wall displays when a task has the property “uses_wall” activated (see Figure 7, left). In the same way, the properties in the script can control what is to be shown in the MTDashboard (see Figure 7, right, “show_in_orchestrator” and “visualisation”), which itself gets logging information from the database. The MTDashboard allows the teacher to advance the sequence and control the duration of each task in the lesson script. The next section provides details about the user interface, highlighting the ways we visualised the deployment of the lesson script.
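The following is a minimal sketch of what a per-tabletop dispatcher could look like, assuming a simple JSON-over-TCP command channel. The command vocabulary, port and application launch commands are hypothetical, as the paper does not detail the actual protocol.

    import json
    import socket
    import subprocess

    # Hypothetical map from application names in the script to launch commands.
    APPLICATIONS = {"CMate": ["cmate"], "ScriptStorm": ["scriptstorm"]}

    def dispatcher(port=9000):
        """Listen for orchestration commands and switch tabletop applications."""
        current = None
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("", port))
        server.listen(1)
        while True:
            conn, _ = server.accept()
            # Simplification: assumes each command arrives in a single packet.
            msg = json.loads(conn.recv(4096).decode("utf-8"))
            if msg["cmd"] == "launch":
                if current is not None:      # stop the running application first
                    current.terminate()
                    current.wait()
                current = subprocess.Popen(APPLICATIONS[msg["application"]])
            elif msg["cmd"] == "restart" and current is not None:
                args = current.args          # re-launch the same application
                current.terminate()
                current.wait()
                current = subprocess.Popen(args)
            conn.close()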
4.5 Monitoring the Lesson Script: Visualisation
The original version of MTDashboard (Martinez-Maldonado, et al., 2013b) provided a means for teachers to
control multiple tabletops in the classroom. For this, it provided the teachers with controls that one specific tabletop application could follow (see Figure 8, left). However, in the study described in this paper, the controls manage the sequence of the script independently of the application(s) to be used in the classroom. Figure 8 also shows the controls that teachers can use to send the content from specific tables to the wall displays (these functions are active when the task is linked to the use of the wall, as indicated by the script). Finally, drawing on previous studies, a set of visualisations of small-group participation and task progress can be shown in the dashboard. In the example shown in Figure 8, multiple radars of students’ participation are displayed, showing the number of touches that each student performs at each table. More details about these types of visualisations are provided in (Martinez-Maldonado, et al., 2013b). The focus of our study is on the visualisation of the lesson script being deployed in the MTClassroom. We present two visualisations that can
help teachers enhance their awareness of the enactment of the lesson script at runtime. These are: a) the Runtime scripting timeline; and b) the Overtime alarm visualisations.
[Figure 8 image: MTDashboard screenshot labelled with: Controls of the Lesson script; a) Run-time Scripting Visualisation (task segments including Start, Instructions, Idea Gen., Categorisation, Concept Map., Sharing, Summary, End, with a Send to the Wall control); Small group visualisations; b) Over-time Alarm Visualisation ("06:25 out of 05:00"); Controls to Send Content to the Wall on-demand.]
Figure 8: Visualisations for monitoring the enactment of the script, added to the MTDashboard (Martinez-Maldonado, et al., 2013b): a) Runtime scripting and b) Overtime alarm visualisations.
Runtime script visualisation. This visualisation offers a simplified view of all the tasks in the plan as a sequence of events. For simplification, it does not show details such as the names of the learning applications being used, the duration of sub-tasks and other information contained in the lesson plan. The first example in Figure 9-A corresponds to the script described in Section 4.3 (Table 1). The blue progress bar represents the tasks that have been completed and the current task (e.g. in Figure 9-A, the current task is Idea generation). The progress bar advances each time the teacher selects the option “Next Phase” from the MTDashboard (see Figure 8), triggering the relevant actions in the digital ecology according to the “description” property of the task node in the script. Figures 9-B and 9-C show real visualisations from two sessions, HCI – Week 7 and Pervasive – Week 5 respectively. Figure 9-B shows all the tasks corresponding to both the brainstorming and concept mapping activities for the HCI course in that week. The red section of the progress bar indicates that the teacher spent more time than planned on that task at runtime (e.g. in Figure 9-B, the teacher spent more time explaining instructions after the idea generation task than initially allotted). Similarly, Figure 9-C depicts the tasks of a richer
brainstorming learning activity where the teacher took more time than expected giving initial instructions and allowed extra time for the actual idea generation task as well.
[Figure 9 image: three Runtime script visualisation timelines (A, B and C), each a horizontal sequence of task segments drawn from: Start, Instructions, Negotiation, Idea Gen., Categorisation, Concept Mapping, Sharing, Reflection, Questionnaire, End.]
Figure 9: Examples of Runtime script visualisations for the lesson scripts in different tutorials. A: Lesson script from the example tutorial in Section 4.3 (Table 1); B: Brainstorming and concept mapping lesson script for Week 7 of the HCI course; C: A more detailed brainstorming activity for Week 5 of the Pervasive course.
Overtime alarm visualisation. This visualisation is a timer that shows both the elapsed and the planned time for the current task during runtime (see Figure 10, left). This aims to help the teacher be aware of the time management designed for the class. It is especially important for cases where the teacher enacting the script was not the one who designed it. The visualisation turns red to indicate to the teacher that the time allowed for the current task has been exceeded (Figure 10, right). When this happens, even when the teacher advances the class to the next task, a mark of the exceeded time will remain in the Runtime script visualisation (as described above).
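A minimal sketch of the state that could sit behind these two visualisations is given below, assuming the task list produced by the parsing sketch in Section 4.4. The names and structure are illustrative, not the actual MTDashboard implementation.

    import time

    class ScriptTimeline:
        """Tracks progress through the script and the overtime state of the
        current task, feeding both the timeline and the alarm display."""

        def __init__(self, tasks):            # tasks as returned by load_script()
            self.tasks = tasks
            self.index = 0                    # current task in the sequence
            self.started = time.monotonic()
            self.overtime_marks = set()       # tasks that ran over plan (red marks)

        def elapsed_minutes(self):
            return (time.monotonic() - self.started) / 60.0

        def is_overtime(self):                # drives the red alarm state
            return self.elapsed_minutes() > self.tasks[self.index]["planned_minutes"]

        def next_phase(self):                 # teacher taps "Next Phase"
            if self.is_overtime():
                self.overtime_marks.add(self.index)  # the mark persists in the timeline
            if self.index < len(self.tasks) - 1:
                self.index += 1
                self.started = time.monotonic()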
[Figure 10 image: timer readings "4:32 out of 5:00" (within the planned time) and "6:25 out of 5:00" (overtime, shown in red).]
Figure 10: Overtime Alarm Visualisation that warns the teacher when the current activity has lasted longer than planned.
5. ANALYSIS AND DISCUSSION
In this section we describe the results of the analysis of the data collected from our in-the-wild study, where the technological infrastructure and the lesson scripting approach described above were put into practice. As described earlier, the MTClassroom ecology, enhanced with our scripting infrastructure to make the technology orchestrable, was deployed in the HCI and Pervasive courses.
5.1 Sources of evidence
In order to demonstrate the usefulness of our approach, we focus on exploring the impact of making the lesson script visible to teachers and warning them about over-time classroom tasks during runtime. For this,
we conducted semi-structured interviews with the class teachers, at mid-semester and at the end of the courses, to understand their preferences regarding the information that was made available to them and how these data influenced them to vary the enactment of the tutorials. We focused our interviews on the three class teachers who were not involved in the design of the plan. In addition to not being involved in the design, they had also not used the MTDashboard before. We additionally inspected quantitative information about the enactment of the tasks of the script to triangulate the qualitative evidence from the interviews.
5.2 Conditions
Even though the study was framed within a realistic, in-the-wild scenario, we defined a series of conditions for some of the weekly tutorials. The two main conditions consisted of showing (Condition V) or hiding (Condition NV) both the Runtime scripting timeline and the Overtime alarm visualisations. We counter-balanced these two conditions among tutorials, teachers and courses. As the teaching load of the teachers differed (teaching 1, 2 or 3 sessions), the counterbalancing was not perfect, but it provides a good basis for understanding the impact of showing the visualisations, within the challenges imposed by a real in-the-wild classroom study. The distribution of these two conditions among tutorials is presented in Table 2. We identify each teacher with a letter. The letter A is for the main teacher, who designed the activity and who conducted the first tutorial for the HCI course in Week 5 and the last tutorial for the Pervasive course in Weeks 5 and 6. The class teachers are identified with the letters B, C and D. The semi-structured interviews with the class teachers were conducted in Week 6. As a result of these interviews, the visualisations of the script (Condition V) were always provided thereafter, for the rest of the tutorial sessions (e.g. for Weeks 9 and 13), as requested by the teachers. Thus, the tutorials in Weeks 9 and 13 are not relevant to this paper because no conditions were tested in them.
Course/Week         Time/Tutor and Condition
HCI Week 5          9am-A: NV   10am-B: V    11am-C: NV   12pm-C: V    1pm-B: NV   2pm-B: V
HCI Week 7          9am-B: V    10am-B: NV   11am-C: V    12pm-C: NV   1pm-B: V    2pm-B: NV
Pervasive Week 5    3pm-D: V    4pm-A: NV
Pervasive Week 6    3pm-D: NV   4pm-A: V
Table 2: Weekly tutorial sessions in which the two scripting visualisation conditions were tested: showing (Condition V) or hiding (Condition NV) the scripting visualisations.
In parallel with these conditions, the dashboard presented different small-group indicators to the teachers. An example of one of these small-group visualisations was presented in Figure 8. Two other group visualisations can be displayed on the MTDashboard. Figure 11 shows the three visualisation types: 1) radars of participation
(described earlier); 2) a visual indicator of the size of the students’ solution; and 3) a textual indicator of the size of the students’ solution. It has been demonstrated that this information makes an impact on the decision-making process of teachers in providing feedback to certain groups (Martinez-Maldonado, et al., 2013b). However, the information shown in these indicators is not linked to the script and does not make an impact on its enactment. More information about these visualisations can be found in (Martinez-Maldonado, et al., 2013b). These visualisations were randomly shown to, and used by, the teachers during the semester, making up six different versions of the MTDashboard that the teachers had contact with when combined with our script visualisation conditions. We present these as a matrix in Figure 11. We explored whether teachers prioritised being informed about the runtime lesson script or about the progress of each small group. In sub-sections 5.3 and 5.4 we present the results of this approach.
Figure 11: Six versions of the teacher’s dashboard. Two conditions: with (V) and without (NV) script visualisations. Three types of small-group visualisations (Participation, Solution size, and Text).
5.3 Teachers’ preferences: to be informed about the runtime script versus the small groups’ progress
Teachers were asked about their preference regarding the different types of information that they could see on the MTDashboard. For this, they were presented with all of the different versions of the MTDashboard that they had used during the semester (the six versions of the MTDashboard in Figure 11). Teachers were asked to rank the three versions that they found most useful. The ranking was from 3 to 1, where 3 represented the version they most preferred. They were also asked to justify their responses. The objective was to learn whether they prioritised information that enhanced their awareness of the lesson script, compared to the small-group visualisations, which would allow them to know which group may have needed closer attention or feedback. Figure 12 shows the teachers’ rankings, accumulated (on the vertical axis) for each of the conditions shown in Figure 11 (horizontal axis). These show that teachers adopted the script visualisation and the overtime alarm very well, with the version of the MTDashboard that included such visualisations preferred by the three class teachers who had not designed the class script. In only two out of the nine rankings (3 rankings from each of the three teachers) did two different teachers consider the visualisation of the groups’ solution size more important than being aware of the script. One of the teachers justified her answer as follows: “it was really important that I could see the script visualisation in all conditions of the dashboard, along with the information about the size of student’s solutions… those are the things I wouldn’t be able to see by myself”. Another teacher added: “[without using the dashboard] I can be relatively aware about what students [are] discussing, but not about the script or the progress of student’s solutions”.
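As a hedged illustration of how the cumulative scores in Figure 12 are obtained, each teacher contributes 3, 2 and 1 points to their three preferred dashboard versions; the version names and choices below are placeholders, not the study data.

    from collections import Counter

    # Placeholder rankings: one dict per class teacher, mapping a dashboard
    # version (script condition + small-group visualisation) to 3/2/1 points.
    rankings = [
        {"V+Participation": 3, "V+SolutionSize": 2, "V+Text": 1},
        {"V+Participation": 3, "NV+SolutionSize": 2, "V+Text": 1},
        {"V+Text": 3, "V+Participation": 2, "NV+SolutionSize": 1},
    ]

    scores = Counter()
    for ranking in rankings:
        scores.update(ranking)          # accumulate points per version
    print(scores.most_common())         # highest cumulative score first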
Figure 12: Cumulative rankings of the three class teachers regarding the two conditions of script visualisations and the three types of small-group visualisations shown in the MTDashboard. The ranking was from 3 to 1, where 3 represented the version they most preferred, accumulated on the vertical axis for each condition shown in Figure 11 (horizontal axis).
During the post-tutorial interviews with the class teachers, all gave very positive feedback about the use of both the runtime script visualisation and the overtime alarm visualisation. For the first visualisation, one of the class teachers stated this as follows: “the information provided by the script visualisation was very useful and easy to understand”. Another of the class teachers added: “looking at the script visualisation helped me have a better idea about the tasks I was supposed to conduct”. Teachers also highlighted the importance of the overtime alarm visualisation, as “it conveys key information, similar to the script visualisation, with the difference that I can read the time elapsed for each task”. One teacher commented on the importance of being aware of the planned duration of the tasks, explaining: “it was useful to know what was the expected duration of the current stage (task)”. Teachers also explained that both visualisations regarding the runtime lesson script were valuable and complemented each other: the runtime script visualisation informed teachers about the enactment of the whole script without showing details of each task; and the overtime alarm visualisation depicted information about the current task, informing the teacher when the planned time was consumed. Overall, teachers preferred the condition of the dashboard where both visualisations about the script were shown. Teachers justified this preference, arguing positively for both forms of visualisation discussed above. One of the teachers described an example of how s/he used these visualisations to enact the runtime script: “the script visualisation was very useful in general since it tells you where are you up to in the tutorial, it was useful to see if you are going okay with the timing for each task; and the [overtime alarm] visualisation was useful to be aware of the amount of time, to do the current task”. Teachers also reported other uses of the visualised script, such as learning about the progress of the students when combining it with the visual indicators of the group work. One of the teachers explained this as follows: “it was also useful to be aware of the partial time in a task and link it to the student’s activity to realise that, for example, [if] a group generated 50 ideas in the first 5 minutes and another group has produced only 10 [ideas], it made me be aware about the different pace of groups related to time”. This demonstrates that teachers were open to using the visualisations related to the class script and that they found them useful for keeping track of the tasks designed by the main teacher to be enacted in the classroom. However, a careful triangulation of evidence is needed to know whether the teachers’ behaviour was indeed affected as a result of providing them with these visual aids. This is discussed in the next sub-section.
5.4 Analysing the impact of the script visualisations on the runtime lessons
This analysis consisted of measuring the extent to which the enactment of the classes matched the planned script. Figure 13 depicts the duration of the different tasks (horizontal axis) of the planned lesson script and the runtime scripts for the six sessions of Week 5 in the HCI course (vertical axis). The graph also indicates, as a reference, the class teacher who conducted each tutorial (represented with the letters A, B, C and D). We focus on the first part of the tutorials, which consisted of a brainstorming learning activity (to be ideally conducted in 33 minutes). The main tasks in the original plan included 5 minutes for idea generation (Figure 13, first line, green horizontal bar), 5 minutes for idea categorisation (red bar) and 14 minutes for reflection activities to be performed at each tabletop (dark blue bar). Overall, the plan was that the teacher would spend a total of 33 minutes of the class on the whole learning activity (including instructions and a questionnaire at the end of the activity) so students could work on their projects for the rest of the class. Tasks such as instructions and negotiation have been unified to simplify the graph.
Figure 13: Brainstorming learning activity for the HCI course in Week 5.
For the condition where the script visualisations were shown on the MTDashboard (Condition V), we can see that the class teachers adapted the designed script, giving more time to students for the idea generation process (see the green bars in the tutorials at 10AM, 12PM and 2PM, almost exactly 10 minutes each). Similarly, the duration of the idea categorisation was longer than planned (from 5 to 8.5 minutes on average for Condition V). However, these two main stages were somewhat similarly enacted for the tutorials where the script visualisations were not provided (Condition NV), with key differences in the other tasks, such as the provision of instructions, the reflection and the questionnaire. The median of the difference between the duration of each planned task and its enactment for Condition NV was 15% overtime. By contrast, for the tutorials in
Condition V the median was -6.9%, meaning that, overall, individual tasks were adapted by teachers so that the entire runtime script’s duration was similar to that planned. We ran a Mann-Whitney U test to evaluate the significance of this difference. We found a significant effect of the script visualisations (the total mean difference between each task’s planned duration and its enactment was 19.1% (±52) for Condition NV and 13.6% (±30) for Condition V; U = 232, Z = 2.76, p < 0.05). Whilst the tutorial at 11AM was not different from the ones in Condition V, there were anomalies in the other two tutorials. For the tutorial at 9AM, the duration of the learning activity extended above 40 minutes. And for the tutorial at 1PM, the class teacher skipped the last of the tasks. These anomalies are not necessarily associated with problems of teachers’ awareness linked to not having access to a visual representation of the script. In the teachers’ interviews, they expressed that “if it was needed I had to adjust the plan of the script many times depending on what happened in the classroom”. So we found, for example, that the first tutorial may have been extended because of students arriving late, and that the other teacher was not aware of the time s/he had for the last task of the activity, making the decision to reduce the duration of the whole brainstorming activity. Given the limited number of cases that the weekly tutorials provide at this level of analysis (whole class script), we further analysed the situation in other weeks. Figure 14 shows a similar analysis for Week 7 of the HCI course. This time, each class teacher was present in both the V and NV tutorial conditions. In this case, the tutorials consisted of a brainstorming activity and a concept mapping activity (Figure 15). The brainstorming activity was to be enacted first, ideally for 18 minutes (see Figure 14, first line, Lesson Script). In this case, we observed a visible difference between the conditions. For Condition V, even though the three tutorials ran overtime, the class teachers gave similar amounts of time to each task, keeping the overall activity below the 25-minute mark. This means that: 1) the lesson script may not have been properly calibrated to give adequate time to the idea generation and categorisation tasks (one of the teachers involved in these tutorials specifically said that “sometimes the timing for a planned task [as in the main teacher’s design] was not right”); 2) all teachers adapted the runtime script, giving more time to the main brainstorming activities and reducing the time for the questionnaire; and 3) as indicated by two of the teachers, the visualisations helped the teachers to be aware of the tasks in which they had run overtime, so they could compensate accordingly using time from other, less important tasks (in this case, the questionnaire). Teachers confirmed that the visualisations enhanced their awareness so they could make more informed decisions about how to adapt the script at runtime. One of the teachers explained this as follows: “the script
visualisation specially helped me figure out when I had to compensate the duration of certain tasks, for example, if I could see many red sections (overtimed tasks) I knew I had to consider that for the current stage”. For Condition NV, the idea generation task generally took longer than in the tutorials in Condition V (13.2 and 8.6 minutes respectively). This caused an extension of the activity beyond 28 minutes (as in the case of the tutorials at 10AM and 2PM), or a large reduction of the time for idea categorisation and reflection, which are important tasks in this learning activity (tutorial at 12PM). We found a significant effect of the script visualisations on the match between each planned task and its enactment (the total mean difference between each individual task’s planned duration and its enactment was 36.2% (±34) and 78% (±85) over the planned time for Conditions V and NV respectively; U = 36.5, Z = -3.66, p < 0.01).
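As a hedged illustration of this kind of comparison (not the authors’ actual analysis script), the per-task percentage deviations from the plan in each condition can be compared with a Mann-Whitney U test; the durations below are placeholders.

    from scipy.stats import mannwhitneyu

    def pct_deviation(planned, enacted):
        """Percentage over (+) or under (-) the planned task duration."""
        return 100.0 * (enacted - planned) / planned

    # Placeholder (planned, enacted) durations in minutes, one pair per task;
    # the study used the logged durations of every task in every tutorial.
    cond_v = [pct_deviation(p, e) for p, e in [(5, 5.5), (5, 4.2), (14, 13.0), (8, 9.1)]]
    cond_nv = [pct_deviation(p, e) for p, e in [(5, 10.1), (5, 8.4), (14, 17.3), (8, 6.0)]]

    u_stat, p_value = mannwhitneyu(cond_v, cond_nv, alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.3f}")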
Figure 14: Brainstorming learning activity for the HCI course in Week 7.
For Week 7 of the HCI course, the second activity, concept mapping, was also scripted with multiple tasks. Figure 15 shows the three tasks that made up the concept mapping activity (instructions, concept mapping itself, and sharing each table’s solution with the class). For each tutorial we have coloured the preceding brainstorming activity in dark brown (details previously shown in Figure 14). According to the plan, after the brainstorming activity the teacher would have enough time to explain the instructions for the problem to solve (8 minutes), allow students to build their concept maps (25 minutes) and conduct a whole-class activity using the vertical displays, in which each group would explain and justify their maps (7 minutes). Results show that for Condition V the class teachers managed to keep constant durations for the three tasks in order to finish the tutorials exactly on time (less than 60 minutes). We can also note that, even though the brainstorming activity in each tutorial finished at different times, in the three tutorials the concept mapping activity started exactly at minute 30 (see the red vertical rectangle in Figure 15). The duration of the concept
mapping task was shorter (an average of 19 minutes compared with 25 minutes in the plan), with teachers prioritising the sharing task and giving more time to it (an average of 10 minutes compared with 7 minutes in the plan). By contrast, for the case of Condition NV, the lack of the awareness provided by the scripting visualisations significantly reduced the time that teachers used for explaining the instructions and for conducting the sharing task (U = 162.5, Z = -2.65, p < 0.05). This generated problems in the 10AM tutorial, where the class teacher had to use part of the 11AM tutorial time slot. In the tutorial at 2PM there was almost no time for the Sharing task; therefore, that important learning task was skipped.
Figure 15: Concept mapping learning activity for the HCI course in Week 7 (as the second activity of the week after Brainstorming, see Figure 14).
In particular, class teacher C, who taught the tutorials at 11AM and 12PM, managed to compensate the tasks of the learning activities, delivering the runtime script similarly for both conditions. We acknowledge that different class teachers would react differently to visualisation support, and some class teachers may need help to different extents (Martinez-Maldonado, et al., 2012c). However, in this case, the class teacher had contact with the script visualisation in the first tutorial of the day (at 11AM), so s/he could have just replicated the same tutorial without any visual aid. In Week 5, when s/he had the visual support in the second tutorial (at 12PM), the delivery of the runtime script for Condition NV was longer by 6 minutes than the script for Condition V (see Figure 13, tutorials at 11AM and 12PM). Similarly, we could not find a relationship between tutorials held by the same class teacher. This may highlight the impact of making the script explicit and visible from design to runtime. For the case of the Pervasive course, the results need to be analysed case by case. Figure 16 shows the match between the plan and the enactment of the brainstorming activity scheduled for Weeks 5 and 6 of this course. Similar to the HCI
course, a different week featured a different brainstorming topic. The plan was that this activity would last 19 and 23 minutes respectively (see Figure 16 and Figure 17). There were two enactments of this script: one by the teacher who designed the activity (A), and one by a class teacher (D). No relationship can be sought given the small sample. However, we can observe that the designer teacher also varied the runtime script. For example, in Week 5, the main teacher did not give time to explaining the instructions of the activity (0.1 minutes for instructions; s/he therefore had to explain the activity while students already had the user interface ready for idea generation) and also shortened the idea generation task below the planned time (Condition NV). The class teacher who received the script visualisations in Week 5 (class teacher D), even though the tutorial ran overtime, managed to follow the plan more rigorously, giving time to each of the stages of the learning activity.
Figure 16: Brainstorming learning activity for the Pervasive course in Week 5.
In Week 6 (see Figure 17), the main teacher (A) had access to the visualisations of the script. As a result, the main teacher kept the runtime script closer to the planned script, following the task structure of the learning activity.
Figure 17: Brainstorming learning activity for the Pervasive course in Week 6.
6. CONCLUSIONS AND FUTURE WORK
We presented an approach, at the intersection of the CSCW and CSCL fields, for deploying and visualising teachers’ scripts of small-group collaborative activities in a multi-surface classroom ecology. This was instantiated in an authentic deployment of university-level tutorials held during one semester in the MTClassroom. We drew upon a set of qualitative and quantitative evidence collected during the enactment of the teacher-designed scripts and from multiple interviews with the class teachers. The technological infrastructure that was built based on the proposed approach provides means for teachers to deploy and monitor the execution of the lesson script in the classroom on the fly. This allows the teacher to adapt the use of the surface devices and the educational software and, at the same time, be aware of the status of the execution of the planned tasks that are needed to meet the learning and pedagogical objectives. In the next sub-section we list our closing remarks. Then, we finish with a discussion of the scope of our approach, potential research strands for future work and the limitations of the study described in the paper.
6.1 Remarks
Positive impact of showing script visualisations at runtime: The classroom technology can also provide
information to the teacher that is not easily available during runtime. We provided teachers with visual information about the progress of the runtime script, with warnings for cases when certain tasks ran over the planned time. Teachers’ preferences favoured the information about the runtime scripting visualisations over the indicators of small-group participation and progress. These two types of information can help the teacher in different ways: the first, to regulate and adapt the planned activities according to the state of the class; the second, to identify groups that may need closer attention or feedback. We also demonstrated that showing the runtime scripting timeline and the overtime alarm visualisation helped teachers to either: 1) keep the duration of the tasks similar to what was prescribed in the plan; 2) adapt the duration of tasks at runtime to prioritise the activities that they considered more important; or 3) expand the duration of tasks in cases when they noticed that the duration of certain tasks was not accurately calibrated in the plan. In all cases, our approach proved useful for enhancing teachers’ awareness of the class script, especially for class teachers who had to enact the script but were not involved in its design. Nevertheless, even the teacher who designed the activity responded differently in the condition where the scripting visual aid was not provided.
Towards a digital classroom ecology: The orchestrability and visibility functions provided by our technological infrastructure are made possible by the data capture and connectivity affordances of the MTClassroom. This highlights the importance of, and need for, a structured integration of different learning technologies. Although this paradigm of educational technology integration has mostly focused on online learning tools (Prieto et al., 2012), it is also important for collocated learning sessions and, in particular, for deploying surface technologies in the classroom (Bellucci, et al., 2014; Martinez-Maldonado et al., 2013a; Muñoz-Cristóbal et al., 2014). Our approach contributes to this vision by showing, through a real instantiation in-the-wild, the importance of a technology architecture that supports teachers in having a degree of control over the devices used in the classroom, so that they can focus on the design and management of the learning activities and the provision of feedback to students.
6.2 Scope, Limitations and Future Work
Generalisation of our approach: The aim of our approach is that it can be generalised to other learning contexts that require some degree of scripting: from very basic scripting, where multiple applications can be used and orchestrated on multiple surface devices for small-group work (e.g. tabletops, tablets, whiteboards), to more complex scripts where the learning activity is divided into tasks (e.g. applications that scaffold students’ activities, negotiation and reflection). For example, in the semester covered by our study, a third, less-scripted tabletop application, Well-Met, was referenced by our technological infrastructure and used in the classroom. The protocol to translate teachers’ designs into deployments can be used as a general solution to connect multiple applications to the classroom ecology, so that the teacher can conduct different learning activities. Generalisation of our findings: We demonstrated the usefulness of visualising the lesson script to enhance teachers’ awareness of the management of small-group tasks. However, even though the study included 4 different educators over two different courses, a more comprehensive study could explore other aspects of classroom orchestration that are out of the scope of this paper but should be considered for future exploration. This includes, for example, testing other user interface designs for the teacher’s dashboard, a deeper analysis of design patterns for lesson scripting, and the use of third-party tabletop applications. Further work should also be done to include a broader range of educators using the system, covering roles such as learning designers, teachers and assistants.
Studying deployments in-the-wild: Implementing our approach in an authentic classroom scenario, including real teachers, students and learning activities linked to the curricula, offers both limitations and advantages. The main limitation is that it makes it difficult to establish experimental conditions and control certain variables. For example, it was not possible to compare the use of the infrastructure with a control group (e.g. a more traditional class where tabletops were not used), since the teacher had to offer similar learning conditions and opportunities to all students, so that no student received any (dis)advantage. By contrast, deploying our system in-the-wild demonstrates the feasibility of our approach being effectively implemented in real scenarios and not only in the lab. We demonstrated that we can aid real teachers under authentic classroom conditions, despite all of the unexpected events that may occur in a regular class, such as time restrictions, students arriving late, students asking questions, etc. Use of third-party design tools: Designing learning activities is often a demanding task that usually requires the use of tools and external representations (Goodyear et al., 2010). Whilst this paper focused on the enactment of a class script, further work still needs to be done with regard to the design of the scripts. We showed how our proposed architecture can serve as a basis for a teacher to deploy and orchestrate learning tasks by themselves, or with little support from ICT personnel. Plug-ins for Learning Design tools (e.g. WebCollage (Villasclaras-Fernández et al., 2009)) could feasibly be developed to present teachers with a UI that can help them model their lesson scripts and then automatically generate the script file (Prieto, et al., 2012). Further exploration needs to be done to evaluate how a design tool can be adapted and used by teachers to produce their class designs, to be translated into the specification proposed by our approach. Teacher’s handheld dashboard: In our study, the script visualisations were displayed on the MTDashboard, which was deployed on a handheld touch device. Using a tablet to show key information about the status of the classroom to the teacher in real time has been recommended by Kharrufa et al. (2013b) and Martinez-Maldonado et al. (2013b), as it provides privacy and mobility affordances. Overall, teachers identified both advantages and disadvantages of carrying a tablet to control the classroom ecology. Teachers generally liked having access to live information at any time (e.g. one of the teachers said in the post-tutorial interviews: “I liked having the information conveyed in the tablet”). Another teacher explained the advantages of the portability of the dashboard as follows: “it is good having the tablet because you decide when to check visualisations no matter where in the classroom you are”. However, further research needs to be done to explore other ways in which this information can be displayed, as teachers pointed to problems
such as the restriction imposed by using one hand to carry the tablet (e.g. one of the teachers said that “the problem is that you cannot use one of your hands, this makes it not easy at times where you need both hands to revise work or other devices that students may want you to check”). One of the teachers also explained that, in some cases, a more intrusive alert may have been appreciated: “I would have liked to receive even more feedback, such as say, vibrating the tablet or some sound when I was overtime. Even though the information was helpful, sometimes students were asking questions and it was not easy to realise that I was overtime”. Flexibility: For the study described in this paper, our approach offers runtime script control at the class level. It does not offer individual control of the script at the table level, or alternative workflows. However, the literature supports the idea that in higher education environments there are various constraints (e.g. limited class duration, class size, location, and availability of technology) that can make it very difficult to make transformative changes to the planned workflow (Graham, et al., 2006). In our post-tutorial interviews, one of our teachers said: “sometimes it may be useful if I can advance individual tables to the next task or even let students advance it themselves”. This functionality can be supported by our current script specification; however, the mechanisms of control need to be more flexible to allow teachers or students to control the enactment of the class script at the table level. Do-Lenh’s (2012) approach, using Tinker lamps in the classroom, allowed teachers to stop and advance individual tables using paper cards that the teacher could place on the tables. However, this approach did not allow the teacher to define the class script; it was totally pre-defined by the researcher. A combination of our approach with that kind of functionality would enhance both. Post-analysis of the visualisations for reflection and re-design: The scripting data presented in this paper can have other uses that should be considered for future research. This includes using the information about the enactment of the script to reflect on possible re-configuration and re-design of the script. The main teacher may use visualisations to see what happened across classes. This might be used in several ways. For example, if the first class were treated as a test run, the teacher could review its timing and then revise the script for the rest of the classes that week. Even if that were not possible, as in the case of classes scheduled close together (as occurred in the study described in this paper), the visualisations in the results section of this paper can serve to inform revisions to the learning design for future years. In either case, the forms of data our system provides would not otherwise be available to the main teacher; the best
they could expect would be debriefs from the class teachers. As the class teachers had many things to keep in mind as they taught, it would be unlikely that they could accurately recall such information.
REFERENCES
Abbott, Kenneth R., and Sunil K. Sarin. (1994). Experiences with workflow management: issues for the next generation. In Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work (CSCW), Chapel Hill, North Carolina, USA, 22-26 October 1994. New York: ACM. pp. 113-120. Ackerman, Mark, Juri Dachtera, Volkmar Pipek, and Volker Wulf. (2013). Sharing Knowledge and Expertise: The CSCW View of Knowledge Management. Computer Supported Cooperative Work (CSCW), vol. 22, no. 4-6, August 2013, pp. 531-573. Al-Qaraghuli, Ammar, Halimah Badioze Zaman, Azlina Ahmad, and Jihan Raoof. (2013). Interaction patterns for assessment of learners in tabletop based collaborative learning environment. In Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration (OZCHI 2013), Adelaide, Australia, 26-29 November 2013. New York: ACM. pp. 447-450. AlAgha, Iyad, Andrew Hatch, Linxiao Ma, and Elizabeth Burd. (2010). Towards a teacher-centric approach for multi-touch surfaces in classrooms. In Proceedings of the International Conference on Interactive Tabletops and Surfaces 2010 (ITS 2010), Saarbrücken, Germany, 7-10 November 2010. New York: ACM. pp. 187-196. Baker, Ryan S., and Kalina Yacef. (2009). The State of Educational Data Mining in 2009: A Review and Future Visions. Journal of Educational Data Mining (JEDM), vol. 1, no. 1, Fall 2009, pp. 3-17. Bellucci, Andrea, Alessio Malizia, and Ignacio Aedo. (2014). Light on horizontal interactive surfaces: Input space for tabletop computing. ACM Computing Surveys (CSUR), vol. 46, no. 3, January 2014, pp. 1-42. Benko, Hrvoje, Meredith Ringel Morris, A.J. Bernheim Brush, and Andrew D. Wilson. (2009). Insights on interactive tabletops: A survey of researchers and developers, Report: MSR-TR-2009-22. Redmond, USA: Microsoft Research. Berland, Leema Kuhn, and Brian J. Reiser. (2009). Making sense of argumentation and explanation. Science Education, vol. 93, no. 1, January 2009, pp. 26-55. Betcher, Chris, and Mal Leicester. (2009). The interactive whiteboard revolution: Teaching with IWBs. Camberwell, Australia: ACER Press. Bull, Susan, Barbara Wasson, Matthew D Johnson, Dean Petters, and Cecilie Hansen. (2012). Helping Teachers Effectively Support Group Learning. In Proceedings of the Workshop on Intelligent Support for Learning in Groups - Conference on Intelligent Tutoring Systems (ITS'12), Chania, Crete, Greece, 15 June 2012. pp. 1-4. Cabitza, Federico, and Carla Simone. (2013). Computational Coordination Mechanisms: A tale of a struggle for flexibility. Computer Supported Cooperative Work (CSCW), vol. 22, no. 4-6, August 2013, pp. 475-529. Chaka, Chaka. (2010). Collaborative Learning: Leveraging Concept Mapping and Cognitive Flexibility Theory. In P. L. Torres & R. d. C. V. Marriott (Eds.), Handbook of Research on Collaborative Learning Using Concept Mapping. Birmingham, UK: Idea Group Inc, pp. 152-170. Clayphan, Andrew, Christopher Ackad, Anthony Collins, Bob Kummerfeld, and Judy Kay. (2011). Firestorm: A brainstorming application for collaborative group work at tabletops. In Proceedings of the International Conference on Interactive Tabletops and Surfaces 2011 (ITS 2011), Kobe, Japan, November 2011. New York: ACM. pp. 162-171. Clayphan, Andrew, Judy Kay, and Armin Weinberger. (2013a). Scriptstorm: scripting to enhance tabletop brainstorming. Personal and Ubiquitous Computing, vol. 18, no. 6, August 2014, pp. 1433-1453.
Clayphan, Andrew, Roberto Martinez-Maldonado, Christopher Ackad, and Judy Kay. (2013b). An approach for designing and evaluating a plug-in vision-based tabletop touch identification system. In Proceedings of the 25th Australian Computer-Human Interaction Conference (OZCHI 2013), Adelaide, Australia, 26-29 November 2013. New York: ACM. pp. 373-382. Cohen, Elizabeth G. (1994). Restructuring the Classroom: Conditions for Productive Small Groups. Review of Educational Research, vol. 64, no. 1, Spring 1994, pp. 1-35. Cuban, Larry. (1986). The Classroom Use of Technology Since 1920. New York: Teachers College Press, Columbia University. Cuban, Larry., Heather Kirkpatrick, and Craig Peck. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, vol. 38, no. 4, December 2001, pp. 813-834. Dillenbourg, Pierre. (1998). What do you mean by 'collaborative learning'? In P. Dillenbourg (Ed.), Collaborative Learning: Cognitive and Computational Approaches. Advances in Learning and Instruction Series. Oxford: Elsevier Science, pp. 1-19. Dillenbourg, Pierre. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL. Can we support CSCL. Heerlen: Open Universiteit Nederland, pp. 61-91. Dillenbourg, Pierre, and Michael Evans. (2011a). Interactive tabletops in education. International Journal of Computer-Supported Collaborative Learning, vol. 6, no. 4, December 2011, pp. 491-514. Dillenbourg, Pierre, Guillaume Zufferey, Hamed Alavi, Patrick Jermann, Son Do-Lenh, Quentin Bonnard, . . . Frédéric Kaplan. (2011b). Classroom orchestration: The third circle of usability. In Proceedings of the International Conference on Computer Supported Collaborative Learning 2011 (CSCL 2011), Hong Kong, China, 4-8 July 2011. ISLS. pp. 510-517. Dillenbourg, Pierre., and Patrick. Jermann. (2010). Technology for classroom orchestration. In M. S. Khine & I. M. Saleh (Eds.), New science of learning. New York: Springer, pp. 525-552. Do-Lenh, Son. (2012). Supporting Reflection and Classroom Orchestration with Tangible Tabletops. PhD dissertation. École Polytechnique Fédérale de Lausanne, Switzerland: CRAFT group, School of Computer Science. Evans, Michael, and Jochen Rick. (2014). Supporting Learning with Interactive Surfaces and Spaces. In J. M. Spector, M. D. Merrill, J. Elen & M. J. Bishop (Eds.), Handbook of Research on Educational Communications and Technology. New York: Springer, pp. 689-701. Fabos, Bettina. (2001). Media in the Classroom: An Alternative History. In Proceedings of the Annual Meeting of the American Educational Research Association (AERA 2001), Seattle, USA, 10-14 April 2001. pp. 1-77. Ganoe, Craig H., Jacob P. Somervell, Dennis C. Neale, Philip L. Isenhour, John M. Carroll, Mary Beth Rosson, and D. Scott McCrickard. (2003). Classroom BRIDGE: using collaborative public and desktop timelines to support activity awareness. In Proceedings of the 16th annual ACM symposium on User interface software and technology, Vancouver, Canada, 2-5 November 2003. New York: ACM. pp. 21-30. Gao, Hong, E Shen, Susan Losh, and Jeannine Turner. (2007). A Review of Studies on Collaborative Concept Mapping: What Have We Learned About the Technique and What Is Next? Journal of Interactive Learning Research, vol. 18, no. 4, October 2007, pp. 479-492. Goodyear, Peter, and Symeon Retalis. (2010). Learning, technology and design. In P. Goodyear & S. 
Retalis (Eds.), Technology-enhanced learning: design patterns and pattern languages. Rotterdam: Sense Publishers, pp. 1-28. Graham, Charles, and Charles Dziuban. (2006). Blended Learning Systems. In D. H. Jonassen & M. P. Driscoll (Eds.), Handbook of Research on Educational Communications and Technology (Second edition). New York: Springer, pp. 269-276. Gross, Tom. (2013). Supporting Effortless Coordination: 25 Years of Awareness Research. Computer Supported Cooperative Work (CSCW), vol. 22, no. 4-6, August 2013, pp. 425-474.
Gunter, Philip, Marti Venn, and John H Hummel. (2001). Improving the Dynamics of Academic Interactions: How to Script Lessons. Valdosta, GA: Valdosta State University. Gutiérrez Rojas, Israel, Raquel Crespo García, and Carlos Delgado Kloos. (2011). Orchestration and Feedback in Lab Sessions: Improvements in Quick Feedback Provision. In C. Kloos, D. Gillet, R. Crespo García, F. Wild & M. Wolpers (Eds.), Towards Ubiquitous Learning. Berlin Heidelberg: Springer, pp. 424-429. Higgins, Steven, Gary Beauchamp, and Dave Miller. (2007). Reviewing the literature on interactive whiteboards. Learning, Media and Technology, vol. 32, no. 3, September 2007, pp. 213-225. Higgins, Steven, Emma Mercier, Elizabeth Burd, and Andrew Hatch. (2011). Multi-touch tables and the relationship with collaborative classroom pedagogies: A synthetic review. International Journal of Computer-Supported Collaborative Learning, vol. 6, no. 4, December 2011, pp. 515-538. Hornecker, Eva, Paul Marshall, Nick Sheep Dalton, and Yvonne Rogers. (2008). Collaboration and interference: awareness with mice or touch input. In Proceedings of the International Conference on Computer Supported Cooperative Work 2008 (CSCW 2008), San Diego, CA, USA, 8-12 November 2008. New York: ACM. pp. 167-176. Jeong, Heisawn, and Cindy E. Hmelo-Silver. (2010). An Overview of CSCL Methodologies. In Proceedings of the 9th International Conference of the Learning Sciences (ICLS 2010), Chicago, USA, 29 Jun - 2 Jul 2010. ISLS. pp. 920-921. Johnson, Roger T, and David W. Johnson. (1986). Action Research: Cooperative Learning in the Science Classroom. Science and Children, vol. 24, no. 2, October 1986, pp. 31-32. Jones, Vicki, and Jun H Jo. (2004). Ubiquitous learning environment: An adaptive teaching system using ubiquitous technology. In Proceedings of the 21st Australasian Society for Computers in Learning in Tertiary Education Conference, Perth, Australia, 5-8 December 2004. ASCILITE. pp. 468-474. Kharrufa, Ahmed, Madeline Balaam, Phil Heslop, David Leat, Paul Dolan, and Patrick Olivier. (2013a). Tables in the Wild: Lessons Learned from a Large-Scale Multi-Tabletop Deployment. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'13), Paris, France, 27 Apr - 2 May 2013. New York: ACM. pp. 1021-1030. Kharrufa, Ahmed, David Leat, Paul Dolan, and Patrick Olivier. (2010a). Digital mysteries: designing for learning at the tabletop. In Proceedings of the International Conference on Interactive Tabletops and Surfaces (ITS 2010), Saarbrücken, Germany, 7-10 November 2010. New York: ACM. pp. 197-206. Kharrufa, Ahmed, Roberto Martinez-Maldonado, Judy Kay, and Patrick Olivier. (2013b). Extending tabletop application design to the classroom. In Proceedings of the International Conference on Interactive Tabletops and Surfaces (ITS 2013), St Andrews, UK, 6-9 October 2013. New York: ACM. pp. 115-124. Kharrufa, Ahmed Sulaiman. (2010). Digital tabletops and collaborative learning. PhD dissertation. Newcastle University, UK: School of Computing Science. Kharrufa, Sulaiman, and Patrick Olivier. (2010b). Exploring the requirements of tabletop interfaces for education. International Journal of Learning Technology, vol. 5, no. 1, February 2010, pp. 42-62. Kollar, Ingo, Frank Fischer, and Friedrich W. Hesse. (2006). Collaboration scripts–a conceptual analysis. Educational Psychology Review, vol. 18, no. 2, June 2006, pp. 159-185. Kottasz, Rita. (2005). Reasons for student non-attendance at lectures and tutorials: An analysis. 
Investigations in university teaching and learning, vol. 2, no. 2, Spring 2005, pp. 5-16. Kreitmayer, Stefan, Yvonne Rogers, Robin Laney, and Stephen Peake. (2013). UniPad: orchestrating collaborative activities through shared tablets and an integrated wall display. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UBICOMP 2013), Zurich, Switzerland, 8-12 September 2013. New York: ACM. pp. 801-810.
Marshall, Paul, Eva Hornecker, Richard Morris, Nick Sheep Dalton, and Yvonne Rogers. (2008). When the fingers do the talking: A study of group participation with varying constraints to a tabletop interface. In Proceedings of the 3rd IEEE International Workshop on Horizontal Interactive Human Computer Systems, 2008 (TABLETOP 2008), Amsterdam, Netherlands, 1-3 October 2008. Los Alamitos, CA: IEEE. pp. 33-40. Martinez-Maldonado, Roberto, Anthony Collins, Judy Kay, and Kalina Yacef. (2011a). Who did what? who said that? Collaid: an environment for capturing traces of collaborative learning at the tabletop. In Proceedings of the International Conference on Interactive Tabletops and Surfaces (ITS 2011), Kobe, Japan, November 2011. New York: ACM. pp. 172-181. Martinez-Maldonado, Roberto, Yannis Dimitriadis, Andrew Clayphan, Juan A. Muñoz-Cristóbal, Judy Kay, Luis P. Prieto, and Maria J. Rodríguez-Triana. (2013a). Integrating orchestration of ubiquitous and pervasive learning environments. In Proceedings of the 25th Australian Computer-Human Interaction Conference (OZCHI 2013), Adelaide, Australia, 26-29 November 2013. New York: ACM. pp. 189-192. Martinez-Maldonado, Roberto, Yannis Dimitriadis, Judy Kay, Kalina Yacef, and Marie-Theresa Edbauer. (2013b). MTClassroom and MTDashboard: supporting analysis of teacher attention in an orchestrated multi-tabletop classroom. In Proceedings of the International Conference on Computer Supported Collaborative Learning (CSCL 2013), Madison, USA, 15-19 June 2013. ISLS. pp. 119-128. Martinez-Maldonado, Roberto, Yannis Dimitriadis, Alejandra Martinez-Monés, Judy Kay, and Kalina Yacef. (2013c). Capturing and analyzing verbal and physical collaborative learning interactions at an enriched interactive tabletop. International Journal of Computer-Supported Collaborative Learning, vol. 8, no. 4, November 2013, pp. 455-485. Martinez-Maldonado, Roberto, Judy Kay, and Kalina Yacef. (2010). Collaborative concept mapping at the tabletop. In Proceedings of the International Conference on Interactive Tabletops and Surfaces (ITS 2010), Saarbrücken, Germany, 7-10 November 2010. New York: ACM. pp. 207-210. Martinez-Maldonado, Roberto, Judy Kay, and Kalina Yacef. (2011b). Visualisations for longitudinal participation, contribution and progress of a collaborative task at the tabletop. In Proceedings of the International Conference on Computer Supported Collaborative Learning 2011 (CSCL 2011), Hong Kong, China, 4-8 July 2011. ISLS. pp. 25-32. Martinez-Maldonado, Roberto, Judy Kay, and Kalina Yacef. (2012a). Analysing knowledge creation and acquisition from individual and face-to-face collaborative concept mapping. In Proceedings of the International Conference on Concept Mapping 2012 (CMC 2012), Valletta, Malta, 17-20 September 2012. IHMC. pp. 17-24. Martinez-Maldonado, Roberto, Judy Kay, Kalina Yacef, Marie-Theresa Edbauer, and Yannis Dimitriadis. (2012b). Orchestrating a Multi-tabletop Classroom: from Activity Design to Enactment and Reflection. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces 2012 (ITS 2012), Cambridge, USA, 11-14 November 2012. New York: ACM. pp. 119-128. Martinez-Maldonado, Roberto, Kalina Yacef, and Judy Kay. (2013d). Data Mining in the Classroom: Discovering Groups’ Strategies at a Multi-tabletop Environment. In Proceedings of the International Conference on Educational Data Mining 2013 (EDM 2013), Memphis, USA, 6-9 July 2013. IEDMS. pp. 121-128.
Martinez-Maldonado, Roberto, Kalina Yacef, Judy Kay, and Beat Schwendimann. (2012c). An interactive teacher's dashboard for monitoring multiple groups in a multi-tabletop learning environment. In Proceedings of the International Conference on Intelligent Tutoring Systems 2012 (ITS 2012), Chania, Greece, 14-18 June 2012. Berlin Heidelberg: Springer. pp. 482-492.
Mercier, Emma, Steven Higgins, Elizabeth Burd, and Andrew Joyce-Gibbons. (2012). Multi-Touch Technology to Support Multiple Levels of Collaborative Learning in the Classroom. In Proceedings of the International Conference of the Learning Sciences 2012 (ICLS 2012), Sydney, Australia, 2-6 July 2012. ISLS. pp. 187-191.
Michaelsen, Larry K., L. Dee Fink, and Arletta Knight. (1997). Designing effective group activities: Lessons for classroom teaching and faculty development. In D. DeZure (Ed.), To Improve the Academy: Resources for Faculty, Instructional and Organizational Development. Stillwater, OK, USA: New Forums, pp. 373-398.
Morris, Meredith Ringel. (2006). Supporting Effective Interaction with Tabletop Groupware. PhD dissertation, Computer Science Department, Stanford University, Stanford, CA, USA.
Morris, Meredith Ringel, Anne Marie Piper, Anthony Cassanego, and Terry Winograd. (2005). Supporting Cooperative Language Learning: Issues in Interface Design for an Interactive Table. Stanford University Technical Report. Stanford, CA, USA: Stanford University.
Muir-Herzig, Rozalind G. (2004). Technology and its impact in the classroom. Computers & Education, vol. 42, no. 2, February 2004, pp. 111-131.
Müller-Tomfelde, Christian, and Morten Fjeld. (2012). Tabletops: Interactive Horizontal Displays for Ubiquitous Computing. Computer, vol. 45, no. 2, February 2012, pp. 78-81.
Muñoz-Cristóbal, Juan A., Luis P. Prieto, Juan I. Asensio-Pérez, Alejandra Martínez-Monés, Iván M. Jorrín-Abellán, and Yannis Dimitriadis. (2014). Deploying learning designs across physical and web spaces: Making pervasive learning affordable for teachers. Pervasive and Mobile Computing, vol. 14 (Special Issue on Pervasive Education), October 2014, pp. 31-46.
Novak, Joseph. (1990). Concept maps and Vee diagrams: two metacognitive tools to facilitate meaningful learning. Instructional Science, vol. 19, no. 1, January 1990, pp. 29-52.
Novak, Joseph. (1995). Concept mapping to facilitate teaching and learning. Prospects, vol. 25, no. 1, March 1995, pp. 79-86.
Novak, Joseph, and Alberto Cañas. (2008). The Theory Underlying Concept Maps and How to Construct and Use Them. Technical Report IHMC CmapTools 2006-01. Pensacola, FL, USA: Florida Institute for Human and Machine Cognition.
Osborn, Alex. (1953). Applied Imagination: Principles and Procedures of Creative Problem Solving. New York: Charles Scribner's Sons.
Phielix, Chris, Frans J. Prins, and Paul A. Kirschner. (2010). Awareness of group performance in a CSCL-environment: Effects of peer feedback and reflection. Computers in Human Behavior, vol. 26, no. 2, March 2010, pp. 151-161.
Piper, Anne Marie, and James D. Hollan. (2009). Tabletop displays for small group study: affordances of paper and digital materials. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'09), Boston, MA, USA, 4-9 April 2009. New York: ACM. pp. 1227-1236.
Prieto, Luis P., Pierre Tchounikine, Juan I. Asensio-Pérez, Pericles Sobreira, and Yannis Dimitriadis. (2014). Exploring teachers' perceptions on different CSCL script editing tools. Computers & Education, vol. 78, September 2014, pp. 383-396.
Prieto, Luis Pablo, Martina Holenko-Dlab, Mahmoud Abdulwahed, Israel Gutiérrez, and Walid Balid. (2011). Orchestrating Technology Enhanced Learning: a literature review and a conceptual framework. International Journal of Technology Enhanced Learning, vol. 3, no. 6, February 2011, pp. 583-598.
Prieto, Luis Pablo, Juan Alberto Muñoz-Cristóbal, Juan Ignacio Asensio-Pérez, and Yannis Dimitriadis. (2012). Making Learning Designs Happen in Distributed Learning Environments with GLUE!PS. In A. Ravenscroft, S. Lindstaedt, C. Kloos & D. Hernández-Leo (Eds.), 21st Century Learning for 21st Century Skills. Berlin Heidelberg: Springer, pp. 489-494.
Rogers, Yvonne, and Siân Lindley. (2004). Collaborating around vertical and horizontal large interactive displays: which way is best? Interacting with Computers, vol. 16, no. 6, December 2004, pp. 1133-1152.
Roschelle, Jeremy, and Stephanie Teasley. (1995). The Construction of Shared Knowledge in Collaborative Problem Solving. In C. O'Malley (Ed.), Computer Supported Collaborative Learning. Berlin Heidelberg: Springer, pp. 69-97.
Ryall, Kathy, Clifton Forlines, Chia Shen, and Meredith Ringel Morris. (2004). Exploring the effects of group size and table size on interactions with tabletop shared-display groupware. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work (CSCW'04), Chicago, USA, 6-10 November 2004. New York: ACM. pp. 284-293.
Schmidt, Kjeld, and Liam Bannon. (2013). Constructing CSCW: The First Quarter Century. Computer Supported Cooperative Work (CSCW), vol. 22, no. 4-6, August 2013, pp. 345-372.
Scott, Stacey D., Karen D. Grant, and Regan L. Mandryk. (2003). System guidelines for co-located, collaborative work on a tabletop display. In Proceedings of the European Conference on Computer Supported Cooperative Work 2003 (ECSCW 2003), Helsinki, Finland, 14-18 September 2003. Norwell, MA, USA: Kluwer Academic Publishers. pp. 159-178.
Shaer, Orit, Megan Strait, Consuelo Valdes, Taili Feng, Michael Lintz, and Heidi Wang. (2011). Enhancing genomic learning through tabletop interaction. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'11), Vancouver, BC, Canada, 7-12 May 2011. New York: ACM. pp. 2817-2826.
Sharples, Mike. (2013). Shared orchestration within and beyond the classroom. Computers & Education, vol. 69, no. 1, November 2013, pp. 504-506.
Smith, Heather J., Steven Higgins, Kate Wall, and Jen Miller. (2005). Interactive whiteboards: boon or bandwagon? A critical review of the literature. Journal of Computer Assisted Learning, vol. 21, no. 2, April 2005, pp. 91-101.
Stahl, Gerry. (2006). Group Cognition: Computer Support for Building Collaborative Knowledge. Cambridge, MA, USA: MIT Press.
Tchounikine, Pierre. (2008). Operationalizing macro-scripts in CSCL technological settings. International Journal of Computer-Supported Collaborative Learning, vol. 3, no. 2, June 2008, pp. 193-233.
Tchounikine, Pierre. (2013). Clarifying design for orchestration: orchestration and orchestrable technology, scripting and conducting. Computers & Education, vol. 69, no. 1, November 2013, pp. 500-503.
Twiner, Alison, Caroline Coffin, Karen Littleton, and Denise Whitelock. (2010). Multimodality, orchestration and participation in the context of classroom use of the interactive whiteboard: a discussion. Technology, Pedagogy and Education, vol. 19, no. 2, July 2010, pp. 211-223.
Verma, Himanshu, Flaviu Roman, Silvia Magrelli, Patrick Jermann, and Pierre Dillenbourg. (2013). Complementarity of input devices to achieve knowledge sharing in meetings. In Proceedings of the 2013 ACM Conference on Computer Supported Cooperative Work (CSCW'13), San Antonio, Texas, USA, 23-27 February 2013. New York: ACM. pp. 701-714.
Villasclaras-Fernández, Eloy D., Julio A. Hernández-Gonzalo, Davinia Hernández-Leo, Juan I. Asensio-Pérez, Yannis Dimitriadis, and Alejandra Martínez-Monés. (2009). InstanceCollage: A Tool for the Particularization of Collaborative IMS-LD Scripts. Journal of Educational Technology & Society, vol. 12, no. 4, October 2009, pp. 56-70.
Weinberger, A., I. Kollar, Y. Dimitriadis, K. Mäkitalo-Siegl, and F. Fischer. (2009). Computer-supported collaboration scripts: perspectives from educational psychology and computer science. In N. Balacheff, S. Ludvigsen, T. de Jong, A. Lazonder & S. Barnes (Eds.), Technology-Enhanced Learning: Principles and Products. Dordrecht: Springer.
Yacef, Kalina. (2005). The Logic-ITA in the Classroom: A Medium Scale Experiment. International Journal of Artificial Intelligence in Education, vol. 15, no. 1, April 2005, pp. 41-62.