Workshop on Intelligent and Innovative Support for Collaborative Learning Activities
In conjunction with the International Conference on Computer-Supported Collaborative Learning (CSCL), June 8-13, 2009, University of the Aegean, Rhodes, Greece
PROCEEDINGS
Organizing Committee
Jacqueline Bourdeau, Riichiro Mizoguchi, Seiji Isotani, Barbara Wasson, Weiqin Chen, Jelena Jovanovic
Workshop Information
CSCL (Computer Supported Collaborative Learning) is an interdisciplinary field that integrates researchers with different backgrounds but one common goal: to support new learning experiences where students can interact mediated by technology and learn more effectively through exploration, discussion and self-regulation. Collaboration support is a complex issue due to the context of group learning, where the synergy among learners’ interactions affects learning processes and hence the learning outcome. To tackle this problem, CSCL research has always been innovative. Traditional technologies, theoretical frameworks and methodologies that work well in individual learning settings usually cannot appropriately address the intrinsic challenges of learning in CSCL environments. These challenges often prevent teachers from enjoying the benefits of CSCL environments. Students also have difficulties following group activities, which are often neither well structured (due to the teachers’ lack of knowledge and/or experience with CSCL environments) nor provided with sufficient guidance.

Nowadays this problem has become more evident with the emergence of Web technologies that allow people to work and learn collaboratively. How to make effective use of blogs, wikis, podcasts, social networking sites and other Web 2.0 technologies to create more engaging, interesting and fruitful collaborative learning (CL) activities is still an open question. To use these technologies adequately and create pedagogically sound CL activities, it is necessary to create more intelligent and innovative ways to support both teachers and learners. In this context, Artificial Intelligence (AI) technologies in general, and ontologies and semantic technologies in particular, can be leveraged for the development of intelligent support systems that help teachers and students. For teachers, the intelligent support consists in offering better ways to create and deploy CL activities and analyze students’ interactions. For students, it allows for provision of intelligent guidance in real time that stimulates the occurrence of good interactions. However, to accomplish the development of intelligent systems for CSCL, we need to re-think the state-of-the-art in AI in general and semantic technologies in particular, and push the research trends in CSCL towards a new generation of technologically sophisticated systems and methods capable of enhancing and improving collaborative learning.

The Workshop on Intelligent and Innovative Support for Collaborative Learning Activities (WIISCOLA) aims to bring together researchers from the AI, Semantic Web and CSCL communities to think about and share the vision of the next generation of systems that support collaboration in educational settings. We hope to give participants the opportunity to:
• Investigate and discuss new architectures, systems and techniques that explore the full potential of CSCL environments
• Gain an understanding of the current research in AI and Semantic Web technologies within CSCL contexts
• Investigate the development of intelligent and innovative support systems for CSCL and draw up the research directions and agenda to create the next generation of intelligent systems for CSCL.
INDEX
Process and Product Analysis to Initiate Peer-to-Peer Collaboration on System Dynamics Modelling ………………………………………………………………01 Anjo Anjewierden (Univ. of Twente), Weiqin Chen (Univ. of Bergen), Astrid Wichmann (Univ. of Duisburg-Essen), Sylvia van Borkulo (Univ. of Twente)
e-Class in Ontology Engineering: integrating Ontologies to Argumentation and Semantic Wiki technology ……………………………………………………………09 Konstantinos Kotis, Andreas Papasalouros, George A. Vouros, Pappas Nikolaos, Zoumpatianos Konstantinos (Univ. of the Aegean)
Using Avatar’s Nonverbal Communication to monitor Collaboration in a Task-oriented Learning Situation in a CVE ………………………………………………19 Adriana Peña Pérez Negrón, Angélica de Antonio Jiménez (Univ. Politécnica de Madrid)
Collaborative Educational Virtual Environments Evaluation: The case of Croquet ……………………………………………………………………………………27 Thrasyvoulos Tsiatsos, Konstantinidis Andreas, Pomportsis Andreas (Univ. of Thessaloniki)
Collaborating in Social Networks: The Problem Solving Activity Leading to Interaction - ‘Struggle’ Analysis Framework (SAF) ………………………………………37 Tzanavaris Spyros, Sepetis Anastasios, Gymnasio of Liapades, Corfu (TEI Athens)
Process and Product Analysis to Initiate Peer-to-Peer Collaboration on System Dynamics Modelling
Anjo Anjewierden, Weiqin Chen*, Astrid Wichmann** and Sylvia P. van Borkulo
Department of Instructional Technology, Faculty of Behavioural Sciences, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands, {a.a.anjewierden,s.p.vanborkulo}@utwente.nl
* Department of Information Science and Media Studies, University of Bergen, PO Box 7802 Bergen, Norway,
[email protected] ** Department of Computational and Cognitive Sciences, Faculty of Engineering, University of Duisburg-Essen, Lotharstrasse 63, 47057 Duisburg, Germany,
[email protected]

Abstract: In this paper, we focus on expanding and optimizing learning opportunities by means of effective collaboration for a complex learning task: system dynamics modelling. Specifically, we present an approach to initiate educational dating and facilitate just-in-time collaboration by suggesting peer-to-peer interaction when it is needed, taking into account learners’ observed actions and the models they create.
Introduction
Developing models using system dynamics has been shown to lead to deeper processing and understanding of scientific concepts (van Borkulo, 2009). However, learners also face difficulties while engaging in model constructing activities. Students who are inexperienced in creating models often use superficial strategies such as trial and error, which are not effective when it comes to developing system dynamics models. Guidance is therefore necessary throughout the modelling process. Former approaches to support learners in developing a model have been successful in providing advice depending on the quality and state of the model that the learner is creating, by comparing the learner’s model to a reference model (Bravo, van Joolingen, & de Jong, 2006). While this advice took into account the learner’s current difficulties, it did not provide extensive means to overcome them. Means for support should be provided within a learner’s zone of proximal development, which refers to an area in which learners cannot proceed alone but can proceed when guidance is provided. This guidance can be offered by a teacher or by more capable peers (Vygotsky, 1978). In a computer-supported collaborative environment, learners can exchange ideas with peers through various channels (e.g., chat, learning artefacts created by others or by themselves). A learner might start out developing a model autonomously but get stuck at some point. Peer-to-peer collaboration can then help the learner progressively master the modelling task.

The approach presented in this paper focuses on expanding and optimizing learning opportunities by means of effective collaboration. Specifically, we present an approach to initiate educational dating and facilitate just-in-time collaboration by suggesting peer-to-peer interaction when it is needed, taking into account learners’ observed capabilities. One of the missions in the SCY collaborative learning environment (www.scy-net.eu) is for learners to design a CO2 friendly house. In this mission, system dynamics modelling plays an important role in discovering processes of environmental science. Below, we investigate how an analysis of learners’ behaviour during modelling can contribute to collaboration in an inquiry-learning environment such as SCY. First, we present a brief overview of system dynamics modelling and discuss how others have addressed the problem of assessing models. Then we describe a SCY-based scenario, which illustrates what we want to achieve. In the subsequent sections, we describe how the scenario can be realised based on the idea of patterns obtained from analysing both learner actions and models created by learners.
System Dynamics Modelling
System dynamics modelling is the computer modelling of dynamic phenomena, in which learners acquire an understanding of the domain at hand by building, using, and testing computer models (de Jong & van Joolingen, 2007; Löhner, 2005; Sabelli, 2006). Learning by computer modelling is well suited to inquiry learning, because computer models are runnable and learners get immediate feedback on their actions. Previous research has investigated different ways of supporting learners in inquiry modelling tasks. As described above, one way of support is based on tracking intermediate models and comparing them to a reference model (Bravo et al., 2006). A reference model is the optimal solution developed by an expert. Based on the state of the intermediate model, feedback is given to the learner, drawing on an advice knowledge base of hints for improving the model. Another way of support is based on tracking the changes in the model, i.e. looking at the steps taken in the construction of the model (Schaffernicht, 2006). Both ways of support provide means to help students improve on various aspects of modelling.
Figure 1. Example of a system dynamics model created by a learner (van Borkulo, 2009).
Scenario
The following scenario shows a narrative related to the computer-based learning environment under development in the SCY project. In SCY-Lab, it is foreseen that learners can create and share artefacts such as models. SCY-Lab emphasizes adaptive support by offering advice in terms of resources, tools, scaffolds and collaboration opportunities. The scenario presents one example of how collaboration is facilitated taking into account the learner’s learning situation.

Lars (beginner), Adam (intermediate) and Stefan (expert) are classmates who work on the “CO2 friendly house” mission. They start out independently to develop a quantitative model to better understand the relevant variables and how they interact. Stefan, who is already in a second cycle of inquiry, knows how to relate variables and has a reasonable understanding of creating a runnable model. Lars works for the first time on the mission and has less experience in system dynamics modelling; until recently he has only been running simulations. Stefan, on the other hand, has developed many models already. Even better, he has helped other learners improve their models. Stefan starts working on the temperature-carbon dioxide model and after half an hour, he is happy with it and saves the model. A message appears saying: “Your model seems to be very good.” In the meantime, Lars is struggling a little. He has been running his model repeatedly, modifying the formulas all the time but making no progress in the right direction. He is confused about the difference between auxiliaries and stocks. A message appears: “You have made good progress on the model. Adam has developed a similar model. Together you can try to improve it.” Lars likes that idea, contacts Adam and together they discover that a feedback loop was missing. After a while, a message tells them “Your model is nearly finished. Have a look at Stefan’s model.” Lars and Adam look at Stefan’s model for a while and do some more testing. Unfortunately, they don’t make much progress, and then a message appears: “You know you can ask Stefan to explain the model to you”. Lars and Adam have learned a lot today; they finish their session and log out. Stefan gets a message from the environment: “Adam and Lars have used your model to improve theirs”!

To realise this scenario, pedagogical agents are being developed in SCY. These software agents continuously monitor the actions of learners and the objects (e.g., models) they produce and provide just-in-time interventions based on the analysis of both actions and models. In the following sections, we describe how pedagogical agents, and the techniques they are based on, contribute to an implementation of the scenario.
Approach
The general problem we are addressing is widely known as adaptation: can we determine what the learner is doing and use the result of this analysis as input for adapting the learning environment to the learner’s needs? Adaptation is also a key motivation for research on educational data mining (EDM) (de Baker, 2009; de Baker, Barnes, & Beck, 2008). However, examples in which the results of data mining are directly applied in learning environments are difficult to find. Hübscher and Puntambekar (2008) have developed a conceptual framework which tries to define the connection between learner actions, pedagogy and adaptation. Their argument is that in EDM there has been too large a focus on the technology involved in data mining and too little focus on pedagogical aspects. They propose a framework based on the heuristic classification problem solving method (Clancey, 1984). Heuristic classification is described as follows. “From a set of symptoms (data) a doctor abstracts to a class of symptoms (data abstraction) which then requires a certain type of treatment (solution abstraction) which is found by applying medical knowledge (heuristic match). Using contextual information about the patient and treatment, the type of treatment can be refined, e.g., the dosage can be adjusted (refinement).” The main point of the intermediate reasoning steps (abstraction, heuristic match, refinement) in heuristic classification is that in expert problem solving it is not normally possible to relate data (symptoms) and solutions (treatment) deterministically. Figure 2 summarizes the idea of heuristic classification when applied to relating learner actions and adaptations in learning environments. From learner actions, we abstract patterns. Heuristic matches, based on pedagogical knowledge, can relate the patterns found to interventions, partly based on knowledge about an individual learner and other learners using the same environment.
Figure 2. Relating patterns derived from learner actions and products to interventions using heuristic matches based on pedagogical knowledge

To realise the above, the following is required. First, we need to define and extract patterns (left). Data mining techniques can be used to analyse learner behaviour, and products created by learners can be evaluated. Pedagogically motivated rules (top) take the patterns as input and determine whether intervention is appropriate. Finally, the intervention is forwarded to the learner.
Pattern Discovery
A principal source for patterns in system dynamics modelling is the (intermediate) models created by learners. Evaluating models has been studied by Bravo et al. (2006) to give advice to learners. When the learner wants help, she asks for it and the system compares the learner model to a set of reference models. The feedback itself is a list of advice messages, which are provided as prompts. The content of a message that is shown to the learner varies depending on how much the learner model deviates from the reference model. For instance, a learner may be prompted to reflect in the initial modelling phase: “You may need more constants in your model”. If the model has not improved for a while, the advice message is more specific: “You should add constant C”. Given Bravo’s research (Bravo et al., 2006), we can conclude that it is possible to meaningfully compare a learner model with reference models and determine the differences. Of course, there are many different models that exhibit exactly the same behaviour. For example, one can make a constant part of a formula or model it explicitly as a constant. In general, we assume it is possible to enumerate the most sensible reference models for a given modelling task in learning contexts.

As suggested by the scenario, one of our objectives is to establish when there is a need to suggest an “educational date” between a learner and one of her peers. This not only depends on the quality of the model, but also on the actions of the learner. Initially, the model will be poor, as it still needs to be constructed, and no interventions are necessary. At a certain point in time the learner will consider the model more or less complete and start running it. The transition between constructing and testing the model has to be detected by studying the actions learners perform. Another source for patterns is therefore the actions and the order in which these actions are executed. In modelling, the action types are related to editing the model (creating,
deleting and relating model elements), connecting the model to the real world (giving names to elements, assigning formulas) and testing (running and examining the behaviour of the model). Gassner and colleagues (Gassner, Jansen, Harrer, Herrmann, & Hoppe, 2003) have developed an analysis framework for log files from collaborative environments. The main distinction in their framework is oriented towards the raw data in log files and contrasts activity-based and state-based analysis. We re-use the distinction between activity-based and state-based analysis to identify several indicators. The indicators are determined by the following analysis methods.

Activity-based analysis. A structural analysis of learner actions can be used to identify short action sequences which point to particular types of behaviour. For example, the sequence run-model followed by add-auxiliary, add-relation can be an indication of a conclusion and its consequence: the learner has observed the outcome of running the model and decided to add an auxiliary. Longer sequences of actions might point to a particular high-level activity in a cyclic learning process.

State-based analysis. A structural analysis of a model can be used to determine differences between a learner model and a reference model and contribute an indication of how far the learner has progressed in the modelling task.

Often the above types of analysis will have to be combined to obtain a pedagogically relevant intervention. For instance, when activity-based analysis suggests the learner is creating an initial structure for the model, there is no need to intervene even when the model quality is low.
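As a minimal illustration of activity-based analysis, a short action sequence of this kind could be detected over a logged list of action types roughly as sketched below; the predicate name and the action encoding are assumptions made for this sketch, not part of the described system:

% Hedged sketch: succeed if the contiguous sequence
% run-model -> add-auxiliary -> add-relation occurs in the action log.
conclusion_and_consequence(Actions) :-
    append(_Before, [run_model, add_auxiliary, add_relation | _After], Actions).

% Example query:
% ?- conclusion_and_consequence([add_stock, run_model, add_auxiliary, add_relation]).
% true.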
Figure 3. Action sequences and corresponding measures obtained for several learners (see text)

We developed ModIC, a tool which supports the analysis of log files from the Co-Lab system dynamics model editor (van Joolingen, de Jong, Lazonder, Savelsbergh, & Manlove, 2005). ModIC was applied to a data set resulting from experiments in which 38 learners created a model of the energy of the Earth (van Borkulo, 2009). The basic reference model is shown in Figure 1. In the remainder of this section, we briefly describe the methods used by ModIC and some results. Analysing longer sequences of learner actions, in order to assign a label that summarizes the process the learner is involved in, has turned out to be difficult. In several other areas, for instance speech recognition, hidden Markov models (HMMs) are used to analyse sequences and find the hidden states that generate them (Rabiner, 1989). An example of applying HMMs to collaborative learning is given by Soller (2002). We trained an HMM using sequences containing the types of actions performed by the learners as input (e.g., an input sequence might contain add-auxiliary, add-stock, add-relation, run-model). The two-state HMM generated from our data set, using the algorithms included in UMDHMM (Kanungo, 1999), resulted in states that correspond to the general activities of constructing a model (creating, relating and naming model elements) and testing the model (running the model, changing formulas). This implies that it is possible to automatically determine whether the learner is engaged in the construction or the test activity (see also Figure 3).
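As a simplified, hypothetical stand-in for the outcome of this analysis (not the HMM itself, which infers hidden states from whole sequences rather than from single actions), the correspondence between action types and the two activities could be written as:

% Assumed action-to-activity mapping, for illustration only.
activity_of(add_stock,      constructing).
activity_of(add_auxiliary,  constructing).
activity_of(add_relation,   constructing).
activity_of(name_element,   constructing).
activity_of(run_model,      testing).
activity_of(change_formula, testing).

% Labelling a logged sequence:
% ?- maplist(activity_of, [add_stock, add_relation, run_model], Activities).
% Activities = [constructing, constructing, testing].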
As mentioned earlier, the standard method to evaluate a learner model (LM) is to compare it to a reference model (RM). The difference between LM and RM provides information about the quality of a model. For instance, Manlove (2007) has used recall as a measure for model quality by counting the number of elements and relations in the LM that are also present in the RM. In the study by Bravo et al. (2006), the system generates messages that can refer both to missing and to unnecessary elements, implicitly suggesting that precision is a more appropriate measure as, compared to recall, the score is lower when there are unnecessary elements and relations in the LM. We opted for the F-measure from information retrieval (van Rijsbergen, 1979), which is the harmonic mean of precision and recall: F = (2 * precision * recall) / (precision + recall). We use the term model score for the model quality based on the F-measure. In the case of multiple reference models, a model score is computed for each of them and the reference model which obtains the highest model score is selected. Comparing models only makes sense when the model elements are named. In many modelling tools, including Co-Lab, learners can create a model without providing names. In our sample log files, learners often initially draw an outline of the model without assigning names. Such models obtain a low model score (close to zero; for instance student 5 in Figure 3) because none of the element names match. We therefore define an additional measure of model quality called the structure score. The structure score is computed in the same way as the model score, with the difference that unnamed elements in the learner model match any element of the same type in the reference models. Figure 3 provides a partial overview of the analysis. The four rows, from top to bottom, are the following: activity, symbol for the action, model score (as a percentage) and the structure score.
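A minimal sketch of the model score computation follows; the list-based model representation and the predicate name are assumptions of this sketch, not the actual ModIC implementation:

% Models are represented here as lists of named element identifiers.
% The model score is the F-measure of the learner model (LM) against
% a reference model (RM).
model_score(LM, RM, F) :-
    intersection(LM, RM, Common),
    length(Common, C),
    length(LM, NL),
    length(RM, NR),
    NL > 0, NR > 0,
    (   C =:= 0
    ->  F = 0
    ;   Precision is C / NL,
        Recall is C / NR,
        F is 2 * Precision * Recall / (Precision + Recall)
    ).

% ?- model_score([temperature, energy_in], [temperature, energy_in, energy_out], F).
% F = 0.8 (approximately).

With multiple reference models, this score would be computed for each and the maximum kept; the structure score would use the same computation with unnamed elements matched by element type.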
Pedagogical Rules
In this section, we give examples of how the analysis approach presented in the previous section can result in pedagogically relevant interventions. Figure 4 shows the process from patterns to interventions. The patterns are obtained from analysis of activities, evaluating models and specific action sequences. These patterns can then be used by a pedagogical agent to fire rules (heuristic matching) and produce interventions based on pedagogical knowledge and learner history.
Figure 4. Pedagogical rules

Pedagogical rules represent the knowledge about which type of intervention is appropriate based on patterns identified. The patterns are represented as a triple: learner’s activity (either constructing or testing), quality of the model and the most recent actions taken by the learner. Rules are given in both descriptive English and formal Prolog syntax. The basic form is:

rule(Intervention) :-
    pattern(Activity, ModelQuality, RecentLearnerActions).
which reads as: if the pattern is true, then apply the intervention. For example, in one episode of the scenario, Stefan received a message “Your model seems to be very good” after saving the model. This intervention can be based on the rule that requires the constructing activity, a good model and the save-model action:

rule(message('Your model seems to be very good')) :-
    pattern(constructing, good, [save_model]).
Suppose the learner has performed the following action sequence: run-model, add-auxiliary, add-relation. When the model quality has increased because of adding the auxiliary, we might want to provide positive feedback to the learner. The rule is:

rule(compliment_learner(add_auxiliary)) :-
    pattern(constructing, increased, [run_model, add_auxiliary]).
For generating interventions that support collaboration, we also need knowledge about other learners including the competence level and the evaluation of the learner model. This is represented similarly to that of the current learner, but with another learner added as a fourth argument. pattern(Activity, ModelQuality, Actions, Peer).
Prolog tries to logically solve the statements we write down. For example, we can ask Prolog to give us a Peer who is currently in the testing activity and has produced a good model, regardless of the recent actions taken:

?- pattern(testing, good, [], Peer).
Peer = 'Stefan'
This type of rule can be used to provide recommendations of models constructed by other learners with better quality (sample models), possible collaborators who are in a similar situation (dating), or other learners who can provide help (dating). In the scenario, Lars received a message: “Adam has developed a similar model. Together you can try to improve it”. It can result from a rule that recommends a collaborator who is in a similar situation:

rule(has_developed_similar_model(Peer)) :-
    pattern(constructing, poor, []),           % Lars
    model(Model),                              % model of Lars
    model(PeerModel, Peer),                    % model of some peer
    compare_model(Model, PeerModel, Score),
    Score > 90.                                % very similar
In another episode, both Lars and Adam received a message “Your model is nearly finished. Have a look at Stefan’s model”. This can come from the rule that recommends a model with better quality:

rule(look_at_better_model(Peer)) :-
    pattern(testing, good, [change_specification]),
    pattern(finished, excellent, [], Peer).
Lars and Adam look at the model for a while and do some more testing, but still make little progress; then a message appears “You know you can ask Stefan to explain the model to you”. Here we need to include history; for example, we can define “not much progress” as zero improvement of the model over the last 5 minutes:

rule(talk_to_peer(Peer)) :-
    pattern(testing, fair, []),
    pattern(testing, improvement(minutes(5), 0), []),
    pattern(finished, excellent, [], Peer).
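One possible way to evaluate the history-based improvement(minutes(5), 0) pattern is sketched below, assuming that model scores are logged as timestamped facts; the score_at/3 predicate and the improvement_over/3 helper are hypothetical and not part of the described system:

% score_at(Learner, Timestamp, ModelScore) is assumed to be asserted
% whenever a learner's model is (re-)evaluated.
improvement_over(Learner, minutes(N), Delta) :-
    get_time(Now),
    Earliest is Now - N * 60,
    findall(T-S, (score_at(Learner, T, S), T >= Earliest), Pairs),
    Pairs \= [],
    msort(Pairs, Sorted),              % chronological order
    Sorted = [_-FirstScore | _],
    last(Sorted, _-LastScore),
    Delta is LastScore - FirstScore.   % zero (or less) means no progress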
Conclusions and Outlook
In this paper, we have outlined an approach to offer adaptive guidance during a complex learning task: creating system dynamics models. This approach can be used both to support individual learners, for example by helping them improve the models they create, similarly to the objectives of Bravo and colleagues (Bravo et al., 2006), and to initiate peer-to-peer collaboration through “educational dating” when this appears sensible. The contribution lies in investigating multiple indicators of learner behaviour simultaneously: the activities pursued, the quality of their models and the actions they undertake. Issues that require further thought are how and by whom the pedagogical rules are created and whether these rules need to be adjusted for a specific learning context. It is usually difficult for teachers to directly specify pedagogical rules. Some systems provide graphical interfaces for users to program rules, customize rules or set up parameters, which are converted to an internal representation when saved (Cao & Greer, 2003; Smith, Cypher, & Spohrer, 1994; Terveen & Murray, 1996). Currently, the ModIC tool makes it possible to search for action sequences interactively. The learning context may make a large difference: a model quality score of 0.5 may be very low for a simple model and very high for a complex model. Therefore, it is necessary to provide a mechanism for adjusting the parameters in the pedagogical rules for different learning contexts. Future work, as part of the SCY project, will include trials with teachers for detecting patterns based on log file data and for rule specification.
References
Bravo, C., van Joolingen, W. R., & de Jong, T. (2006). Modeling and simulation in inquiry learning: Checking solutions and giving intelligent advice. Simulation: Transactions of the Society for Modeling and Simulation International, 82, 769-784.
Cao, Y., & Greer, J. (2003). Agent programmability in a multi-agent learning environment. Paper presented at the 11th International Conference on Artificial Intelligence in Education.
Clancey, W. J. (1984). Heuristic classification. Artificial Intelligence, 27, 289–350.
de Baker, R. S. J. (2009). Data mining for education. In International Encyclopedia of Education (3rd ed., in press). Oxford, UK: Elsevier.
de Baker, R. S. J., Barnes, T., & Beck, J. E. (Eds.). (2008). 1st International Conference on Educational Data Mining (EDM 2008). Montreal, Canada.
de Jong, T., & van Joolingen, W. R. (2007). Model-facilitated learning. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer & M. P. Driscoll (Eds.), Handbook of research on educational communication and technology (3rd ed., pp. 457-468). Lawrence Erlbaum.
Gassner, K., Jansen, M., Harrer, A., Herrmann, K., & Hoppe, H. U. (2003). Analysis methods for collaborative models and activities. In B. Wasson, S. Ludvigsen & H. U. Hoppe (Eds.), Designing for change in networked learning environments: Computer support for collaborative learning (CSCL 2003), Bergen, Norway, June 2003, 369-378.
Hübscher, R., & Puntambekar, S. (2008). Integrating knowledge gained from data mining with pedagogical knowledge. In 1st International Conference on Educational Data Mining (EDM 2008) (pp. 97–106). Montreal, Canada.
Kanungo, T. (1999). UMDHMM: Hidden Markov model toolkit. In A. Kornai (Ed.), Extended Finite State Models of Language (support material). Cambridge, UK: Cambridge University Press.
Löhner, S. (2005). Computer Based Modeling Tasks: The Role of External Representation. Unpublished Ph.D. thesis, University of Amsterdam, Amsterdam.
Manlove, S. (2007). Regulative Support During Inquiry Learning with Simulations and Modeling. Unpublished Ph.D. thesis.
Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, 257–286.
Sabelli, N. H. (2006). Complexity, technology, science, and education. Journal of the Learning Sciences, 15, 5-9.
Schaffernicht, M. (2006). Detecting and monitoring change in models. System Dynamics Review, 22, 73-88.
Smith, D. C., Cypher, A., & Spohrer, J. (1994). KidSim: Programming agents without a programming language. Communications of the ACM, 37(7), 54-67.
Soller, A. L. (2002). Computational analysis of knowledge sharing in collaborative distance learning. Unpublished Ph.D. thesis, University of Pittsburgh.
Terveen, L. G., & Murray, L. (1996). Helping users program their personal agents. In M. J. Tauber (Ed.), Proc. of the SIGCHI Conference on Human Factors in Computing Systems: Common Ground (CHI ’96) (pp. 355-361). New York, NY: ACM.
van Borkulo, S. P. (2009). The assessment of learning outcomes of computer modelling in secondary science education. Unpublished Ph.D. thesis, University of Twente, Enschede.
van Joolingen, W. R., de Jong, T., Lazonder, A. W., Savelsbergh, E., & Manlove, S. (2005). Co-Lab: Research and development of an on-line learning environment for collaborative scientific discovery learning. Computers in Human Behaviour, 21, 671–688.
van Rijsbergen, C. J. (1979). Information Retrieval. Newton, MA, USA: Butterworth-Heinemann.
Vygotsky, L. (1978). Interaction between learning and development. In Mind in society. Cambridge, MA: Harvard University Press.
Acknowledgments
The research described in this document was carried out as part of the SCY project (Science Created by You) funded by the European Commission (project IST-212814). We would like to thank Crescencio Bravo for providing the advice knowledge base of his system and the anonymous reviewers for their constructive comments on an earlier version of this paper.
e-Class in Ontology Engineering: Integrating Ontologies to Argumentation and Semantic Wiki technology
Konstantinos Kotis, Andreas Papasalouros, George A. Vouros, Pappas Nikolaos, Zoumpatianos Konstantinos
AI-Lab, Dept. of Information and Communications Systems Engineering, University of the Aegean, Karlovassi 83200, Greece
Email:
[email protected],
[email protected],
[email protected]
Abstract: The aim of this paper is to present an approach for constructing e-classes for ontology engineering education. Such classes concern the shaping of specific information spaces into ontologies in a collaborative and human-centered way. The work is based on a collaborative and human-centered ontology engineering methodology and a meta-ontology framework for developing ontologies, namely HCOME and HCOME-3O respectively. The integration of key technologies such as Semantic Wiki and argumentation models with Ontology Engineering (O.E) serves as an enabler of e-class construction for different domain-specific information spaces. Such e-classes provide two kinds of lessons, i.e. a) lessons concerning how an information space is shaped into an ontology (development knowledge) and b) lessons concerning the analysis of the domain itself (domain knowledge). An e-class captures the created knowledge (both domain and development) in ontological models for future reference and teaching. The paper also reports on the integration of an automatic method for creating assessment material from the generated domain knowledge.
Introduction
Ontologies are evolving and shared artefacts that are collaboratively and iteratively developed, evolved, evaluated and discussed within communities of knowledge workers, shaping domain-specific information spaces. To enhance the potential of information spaces to be collaboratively engineered and shaped into ontologies within and between different communities, these artefacts must be accompanied by all the necessary information (namely meta-information) concerning the conceptualization they realize, implementation decisions and their evolution. In the HCOME-3O framework (Vouros et al., 2007), the authors proposed the integration of three (meta-)ontologies that provide information concerning the conceptualization and the development of domain ontologies, the atomic changes made by knowledge workers, the long-term evolutions and the argumentation behind decisions taken during the lifecycle of an ontology. Figure 1 depicts ontology engineering tasks for a domain ontology and its versions (domain knowledge), i.e. editing, argumentation, exploiting and inspecting, during which meta-information is captured and recorded (development ontologies), either as information concerning a single task or as information concerning the interlinking of tasks. This framework has been proposed in the context of the HCOME collaborative engineering methodology (Kotis & Vouros, 2006). HCOME places major emphasis on the conversational development, evaluation and evolution of ontologies, which implies the extended sharing of the constructed domain ontologies together with the meta-information that would support the interlinking, combination, and communication of knowledge shaped through practice and interaction among community members.
Fig. 1. The HCOME-3O framework for recording interlinked meta-information concerning ontology engineering tasks
In the context of the HCOME-3O framework, information spaces are shaped into ontologies through discussions. Such discussions are supported by an argumentation task which follows an argumentation model (an OWL implementation of the argumentation ontology can be accessed from http://www.icsd.aegean.gr/aiab/projects/HCONEv2/ontologies/HCONEarguOnto.owl). Such a model provides a schema for representing meta-information about the issues, positions, and arguments that contributing parties make during an argumentation dialogue upon the collaborative evolution of shared ontologies that shape domain-specific information spaces. Specifically, an argument may raise an issue that either suggests changes in the domain conceptualization, or questions the implementation of the conceptualized entities/properties. Based on this issue, a collaborating party may respond by publicizing a position, i.e. a new conceptualization of the information space, or by suggesting the change of a specific conceptualization. A new argument may be placed for or against a position, and so on. Issues may be generalized or specialized by other issues. The connection of the recorded arguments with the conceptualizations discussed by specific contributing parties and with the changes made during a period is performed through the properties of the argumentation item and position classes (formal item, contributing party, period and evolving ontology). The argumentation ontology (Figure 2) supports capturing the structure of the entire argumentation dialogue as it evolves among collaborating parties within a period. It allows tracking atomic changes and/or ontology versions and the rationale behind them. It is generic and simple enough to support argumentation both on the conceptual and on the formal aspects of an ontology.
Fig. 2. The argumentation ontology
The use of open and Web community-driven technology (e.g. Wiki technology) to enable collaborative and community-driven engineering of information spaces (giving Web users the opportunity to contribute their knowledge) is not new (Siorpaes & Hepp, 2007; Dellschaft et al., 2008). In order to integrate open and Web community-driven argumentation functionality to support the collaborative engineering of information spaces, existing Semantic Wiki technology can be re-used. We conjecture that for designing and developing an open and Web community-driven argumentation functionality, embedded in a collaborative and human-centered ontology engineering environment that is based on the HCOME-3O framework, the following requirements should be met:

1. Use an Argumentation Model (ontology) to represent meta-information concerning the recording and tracking of the structured conversations. Record such meta-information as individual elements (instances), e.g. of OWL Argumentation Ontology classes. Any argumentation model can be used,
given that it will be interlinked with the two other HCOME-3O (meta-)ontologies, i.e. the HCONE-Administration (meta-)ontology that records administration information of the domain ontologies and their versions, and the HCONE-Evolution (meta-)ontology that records individual changes on the domain knowledge that are projected in a new ontology version.
2. Use the HCOME-3O (meta-)ontology framework to record meta-information concerning the interlinking between conversations and ontology development and evolution (changes and versions of a domain ontology). The recording of the interlinking between (meta-)ontologies is what really supports the sharing of consistently evolved and living ontologies within and across different communities (Vouros et al., 2007).
3. Use Semantic Wiki technology for openness and Web community-driven engineering. Developing collaborative ontology engineering functionalities, such as ontology argumentation, is much easier and more efficient when we use technologies that were devised for such purposes.

The proposed method for ontology engineering learning aligns with the knowledge building approach proposed by Scardamalia & Bereiter (2006). Knowledge building considers learning as the collaborative production and continual improvement of ideas shared by a community. Emphasis is placed not on individual achievement but rather on collective knowledge advancement. While the domain knowledge itself is captured as a created ontology, knowledge evolution is captured by using the HCOME-3O (meta-)ontologies. Furthermore, we assert that collective knowledge advances through collaborative argumentation, by engaging participants in on-line critical discussions supported by wiki technology. The types of argumentation taken into account are captured in the ontology depicted in Figure 2. By making these types explicit to discussion participants, argumentation is supported or 'scaffolded' (Andriessen, 2006), which is a widely adopted approach in the relevant literature.
Related Work
In the myOntology project (Siorpaes & Hepp, 2007), the challenges of collaborative, community-driven, and wiki-based ontology engineering are investigated. The simplicity of Wiki technology and support for consensus finding by exploiting the collective intelligence of a community are used to collaboratively develop lightweight ontologies. The goal of myOntology is not only to allow the co-existence and interoperability of conflicting views but, more importantly, to support the community in achieving consensus, similar to Wikipedia, where one can observe that the process of consensus finding is supported by functionality allowing discussion (however, not structured dialogues). In the NeOn project (Dellschaft et al., 2008), the Cicero web-based tool supports asynchronous discussions between several participants. This social software application is based on the idea of Issue Based Information Systems (IBIS) and the DILIGENT argumentation framework (Pinto et al., 2004). The DILIGENT argumentation framework was adapted for Cicero in order to make it easier to apply to discussions and in order to reduce the learning effort for users. In Cicero, a discussion starts with defining the issue which should be discussed. Then possible solutions can be proposed. Subsequently, the solution proposals are discussed with the help of supporting or objecting arguments. Both works provide strong evidence that collective intelligence in the form of (semantic) Wikis can be used to support collaborative ontology engineering, with the advantages of openness and scalability. As far as reaching a consensus on a shared ontology during argumentation is concerned, both works provide mechanisms to record the actual dialogues, but meta-information concerning the interlinking between conversations and ontology evolution (versions of a domain ontology) is not recorded.

CSILE (Computer Supported Intentional Learning Environments) was first used in an early prototype version in 1983 in a university course and more fully implemented in 1986 in an elementary school (Scardamalia & Bereiter, 2006). The motive guiding the design of CSILE was a belief that students themselves represented a resource that was largely wasted and that could be brought into play through network technology. The classroom, as a community, could indeed have a mental life that is not just the aggregate of individual mental lives but something that provides a rich context within which those individual mental lives take on new value. CSILE restructured the flow of information in the classroom, so that questions, ideas, criticisms, suggestions, and the like were contributed to a public space equally accessible to all, instead of all of it passing through the teacher or (as in e-mail) passing as messages between individual students. By linking these contributions, students created an emergent hypertext that represented the collective rather than only the individual knowledge of the participants. By the 1990s the idea of knowledge building as the collaborative creation of public knowledge forced a major redesign of CSILE to boost it as an environment for objectifying ideas and their interrelationships and to support collaborative work aimed at improving ideas.
The next generation of CSILE, called Knowledge Forum (Scardamalia, 2004), provides a knowledge building environment for communities (classrooms, service and health organizations, businesses, and so forth) to carry on socio-cognitive practices that are constitutive of knowledge- and innovation-creating organizations. Knowledge Forum undergoes continual revision as theory advances and experience uncovers new problems and opportunities. It is an extensible environment supporting knowledge building at all educational levels, and also in a wide range of non-educational settings.
The characteristics of Knowledge Forum are perhaps most easily grasped by comparing it to the familiar technology of threaded discussion, which is to be found everywhere on the World Wide Web and also as part of instructional management systems (e.g. Blackboard, WebCT and FLE). Belvedere (Suthers et al., 1995) is a collaborative environment aiming to support students’ acquisition of skills in scientific argumentation. Belvedere supports the diagrammatic definition of arguments for or against certain scientific theories, defined by students in the form of concept maps. Relations in these concept maps depict argumentation for or against interrelated arguments. The system also utilizes an on-demand automated advisor for suggesting argument extensions or revisions based on matching of patterns in diagrams as well as in the textual descriptions of arguments. Design models for specifying hypertext structures based on argumentation have been proposed by a number of authors. Streitz et al. (1989) adopt a well-known argumentation model, namely the Toulmin Argumentation Schema, as a basis for hypertext design with no support for collaboration. In this model, hypertext nodes (pages) represent statements, that is, arguments, where links represent relations such as 'contributes' or 'contradicts'. Selvin (1999) focuses on collaborative modeling of hypertext systems by adding support for argumentation of complex hypertext systems during the design phase.

The work presented in this paper has been motivated by the related technologies discussed above. We conjecture that these technologies must aim at integration with the HCOME-3O framework, since the recording of the interlinking between (meta-)ontologies is what really supports the sharing of consistently evolved and living ontologies within and across different communities (Vouros et al., 2007). By re-using such technologies and extending them to be compliant with the HCOME-3O framework it is possible to achieve this goal. Furthermore, the related existing approaches do not “see” O.E from a learning perspective. The presented approach strives towards laying the foundations of e-learning in O.E via a mapping of wiki-based collaborative O.E functionalities and objectives to the functionalities and objectives that have been proposed in foundational work and research-directions papers on the Semantic Web and Education (Bittencourt et al., 2008).
Shaping Information in shared learning spaces
To support ontology engineering tasks collaboratively, we have introduced the distinction between personal and shared working spaces (Kotis & Vouros, 2006). In this paper we present an architecture for the design of a system that combines, in the shared space, Semantic Wiki technology with HCOME-3O-based ontology engineering functionality to support the shaping of information into shared and agreed ontologies (Figure 3), supported by argumentation dialogues. The architecture, following the “Exploitation” phase of the HCOME methodology, supports the following tasks:
1. Inspecting shared ontologies (reviewing, evaluating and criticizing specified conceptualizations)
2. Inspecting (comparing) shared versions of an ontology, to identify the differences (tracking changes) between them
3. Posting arguments upon versions of ontologies to support decisions for or against specifications
Fig. 3. An HCOME-3O-based ontology engineering architecture integrating Semantic Wiki technology
The ontology engineering tasks presented above should be integrated in environments that have been designed according to the HCOME-3O framework for recording meta-information and its interlinking. Concerning the argumentation task, we have integrated an extension of the Wiki-based tool CICERO (Dellschaft et al., 2008) with a prototype ontology engineering tool, namely HCONE [1]. The integration of such tools requires design effort in order to communicate meta-information concerning ontology argumentation (on the Ontology Argumentation Wiki side, e.g. CICERO) and ontology evolution (on the editing tool side, e.g. HCONE). In order to evaluate the proposed technology, we are currently developing an experimental Argumentation
Semantic Wiki for discussing the engineering of information spaces (namely, SharedHCONE). We have re-used the freely available APIs of MediaWiki (www.mediawiki.org), Semantic MediaWiki (http://semanticweb.org/wiki/Semantic_MediaWiki), and CICERO. The SharedHCONE system supports users in sharing and discussing domain-specific information in a guided manner towards shaping it into ontologies. It provides a “shared space” where users are able to “discuss” specific conceptualizations of an information space, to upload and download their personal conceptualizations, to browse them, and to agree/disagree with specific conceptualizations. More specifically, the system allows users to publicize the conceptualizations they have for an information space by inviting other HCONE users (knowledge workers, ontology engineers or/and domain experts) to join a discussion, to form interest groups and to provide support for discussion, following a Wiki-based structured dialogue. Conceptualizations of information spaces that have been viewed, discussed, and agreed by all group members are shaped into public, shared and agreed ontologies. The system is integrated in an ontology engineering environment, namely HCONE. HCONE registered users can use the proposed system in order to send an invitation to all HCONE registered users for joining (or not) a discussion group that concerns a domain-specific information space. The notified users are then able to accept the invitation and join that specific discussion group. Concurrent dialogues for different information spaces can be supported. A dialogue does not evolve in a sequential way, i.e. always starting from the last argumentation item: it can evolve starting from any previously made dialogue/argumentation item. A SharedHCONE host-user initiates a discussion group in order to argue for or/and against specific conceptualizations shaped into a personal ontology within her HCONE environment. An invitation to other HCONE registered users of the community is sent automatically. Invited HCONE-registered users may accept an invitation by simply entering the discussion page (the URL is provided in the e-mail invitation); the invitation is considered rejected if no action is taken for a specific period of time. HCONE-registered users that accept the invitation become Contributors of the shared ontology under discussion. Host-Contributors and Contributors are able to access their related argumentation dialogues directly from their HCONE environment. When a Contributor enters an existing argumentation dialogue, she is able to view the already recorded dialogue items that she and other users have contributed, linked with ontology versions and ontology changes between each version (essentially she is able to follow the whole ontology-evolution history and the reasons/arguments behind it). Browsing/viewing of uploaded ontologies during argumentation dialogues is supported within the Wiki. A Web-based ontology browser, seamlessly integrated with the argumentation-dialogue functionality, is provided. The meta-information concerning the recording and tracking of the dialogues is recorded as individual objects of the “Argumentation-Item” class (specifically, individuals of its subclasses) of the argumentation meta-ontology, related to other entities of the HCOME-3O framework. The “Argumentation-Item” individual objects are related to a specific ontology item (“Formal Item” class of the Administration meta-ontology) through their “formal item” property in the Argumentation meta-ontology.
The “Argumentation-Item” individual objects are returned from queries executed over the Argumentation meta-ontology, e.g. “Find all the Argumentation items (individuals of all subclasses of the Argumentation item class) which are related to a specific ontology element (Formal item property)”. HCONE community users are able to: a) access/view history information concerning all details of argumentation dialogues made in the HCONE community, b) access/edit (join) the argumentation dialogues within a pre-specified time-frame, c) browse and download agreed and shared conceptualizations that have been shaped into ontologies. SharedHCONE system users (Wiki registered users that are not necessarily members of the HCONE community) are able to a) access/view history information concerning generic details of argumentation dialogues of the HCONE community (administration details only), b) browse and download agreed ontologies. Such an authentication strategy ensures that the recorded knowledge can also be disseminated to non-creators, i.e. learners that have not participated in the learning process (e-class participation). The SharedHCONE system supports OWL-DL ontologies (both domain and meta-ontologies). The system is currently under implementation using Semantic Wiki technology. The system follows the HCONE-Argumentation (meta-)ontology for the structuring/recording of dialogues. An already implemented Wiki-based argumentation system following the IBIS model, namely CICERO, is partially re-used to support the SharedHCONE design requirements. Furthermore, SharedHCONE is designed for integration with the HCONE environment as far as the interlinking of the Wiki-based dialogues with the Administration and Evolution meta-information recorded in the HCONE personal space is concerned.
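Purely as an illustration of the query described above, the argumentation meta-information can be pictured as a small knowledge base; here it is written as Prolog facts instead of the actual OWL individuals and Semantic MediaWiki machinery, and all names are hypothetical:

% Each argumentation item has a type (a subclass of Argumentation-Item)
% and is linked to an ontology element through its formal item property.
argumentation_item(issue_1,    issue,               oil_spill_class).
argumentation_item(position_1, position,            oil_spill_class).
argumentation_item(argument_1, supporting_argument, oil_spill_class).

% "Find all Argumentation items related to a specific ontology element":
items_for_formal_item(FormalItem, Items) :-
    findall(Item-Type, argumentation_item(Item, Type, FormalItem), Items).

% ?- items_for_formal_item(oil_spill_class, Items).
% Items = [issue_1-issue, position_1-position, argument_1-supporting_argument].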
e-Class in ontology engineering education
Computer-supported collaborative learning (CSCL) is one of the most promising innovations to improve teaching and learning with the help of modern information and communication technology. Collaborative or
group learning refers to instructional methods whereby students are encouraged or required to work together on learning tasks. In this paper and the work presented in it, we consider ontology engineering as a learning task, where learners, i.e. knowledge workers, are guided (taught) by instructors, i.e. knowledge engineers or/and domain experts or/and even other more experienced knowledge workers, towards shaping their collective knowledge into ontologies. Instructions are unfolded in the form of argumentation items (elements of the argumentation meta-ontology) and recorded in a structured manner for future reference. In this way, educational material is created on the fly during the unfolding of discussions. It is important to point out that during the learning process there are two kinds of learning objects created: a) elements of the domain knowledge (i.e. domain ontology classes, properties, individuals) and b) instances of the argumentation items (classes of the argumentation model) and the evolution meta-information. The first refers to the domain knowledge learned, i.e. knowledge about the conceptualizations of the domain-specific information space (i.e. the shaping of information into ontology classes, subclasses, properties, restrictions, axioms), and the latter to the development knowledge learned, i.e. knowledge about the steps/actions taken during the specification of the domain knowledge (e.g. the change of a class name, the deletion of a property, the introduction of a disjointness axiom between two classes, an argument upon a new version of a conceptualization, etc.). Both types of knowledge are important in ontology engineering education, since both domain and development expert knowledge must be taught to knowledge workers to support their future involvement in the refinement of already shaped information spaces or in the shaping of new ones.
O.E e-class creation and users’ roles
As already mentioned, the community of HCONE users comprises domain experts, knowledge engineers and knowledge workers, collaborating towards shaping their information spaces into ontologies. Although they carry different expertise, complying with the HCOME methodology, they may contribute equally to the improvement of conceptualizations. In this methodology, there is no restriction concerning “who takes the final decision” or “who should be the host of the dialogue”. In practice, any member of the community can join a discussion group and any member of this group can be the host that invites others for discussion. All members participate in the discussion following an argumentation model and reach a consensus after several equally weighted arguments/positions/issues made by the members of the group. Based on this principle, all community members that join an O.E e-class in the SharedHCONE system can have the role of “learner” or the role of “instructor” at different periods of time. Thus, a knowledge engineer participating in an O.E e-class will serve as an instructor of development knowledge, whereas she will serve as a learner of the domain knowledge that domain-expert instructors teach. The opposite roles are assigned to domain experts. Knowledge workers, on the other hand, learn from both knowledge engineers and domain experts (“learner” role) and share this knowledge with other workers in a unique collaborative fashion. An O.E e-class can be initiated and hosted by an “instructor” but also, quite possibly, it can be initiated and hosted by a learner, i.e. a knowledge worker without much previous experience who invites other, more experienced community members to join and assist her in the shaping of her information space. The organization of a class by a learner rather than by an instructor accentuates the power of collective intelligence and social networking and underlines the need to design new e-learning systems based on open and Web community-driven technology, such as Wiki technology. As stated in Scardamalia & Bereiter (2006), “the proof of knowledge building is in the community knowledge that is publicly produced by the students—in other words, in visible idea improvement achieved through the students’ collective efforts”.

The Host-contributor of an O.E e-class states the title and the aim of the e-class and places her first argumentation item, i.e. her first suggesting position concerning the domain-specific information space to be discussed. Such a position is linked with her first conceptualizations that have been put in draft shape in the form of a kick-off ontology. This is considered the kick-off educational material of the O.E e-class that is provided to the joining learners for consideration and argumentation. Learners can view the educational material by browsing the ontology and place questions (arguments or issues) responding to the kick-off position of their instructor. This process implies that the domain knowledge represented in the kick-off domain ontology is learned through the “learning by doing” paradigm.
Domain knowledge Learning
The SharedHCONE system captures the domain knowledge that is discussed in the argumentation dialogues in the following ways: a) as ontology versions linked to position items, b) as single ontology elements (class, property and individual formal items) linked with any type of argumentation item. Learners and/or instructors link the domain knowledge created in their personal space (using tools such as HCONE) with the argumentation items they create in the shared space. Learners can view chunks of others' personal domain knowledge and learn from them. Such learning is supported by the argumentation made during the shaping of this knowledge, i.e. the rationale behind its creation.
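To make this kind of linking concrete, the following is a minimal sketch, in Python with rdflib, of how a Position item might be tied to an ontology version and to a single domain element; the namespaces, property names and URIs are illustrative assumptions and not the actual HCOME-3O vocabulary.

from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

# Hypothetical namespaces; the real HCOME-3O meta-ontology vocabularies differ.
ARG = Namespace("http://example.org/argumentation#")
DOM = Namespace("http://example.org/oil-pollution#")

g = Graph()
g.bind("arg", ARG)
g.bind("dom", DOM)

# A Position item placed by a contributor, linked both to the ontology
# version it proposes and to a single domain element it discusses.
position = URIRef("http://example.org/eclass/oil-pollution/position-1")
g.add((position, RDF.type, ARG.Position))
g.add((position, ARG.statedBy, Literal("HCONE USER 2")))
g.add((position, ARG.refersToOntologyVersion,
       URIRef("http://example.org/versions/oil_pollution_2.owl")))
g.add((position, ARG.refersToElement, DOM.Oil_Spill))

print(g.serialize(format="turtle"))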
Development knowledge Learning
The SharedHCONE system also captures the development knowledge of domain ontologies: the atomic changes made by knowledge workers in their personal space, the long-term evolutions, and the argumentation behind decisions taken during the lifecycle of an ontology. Such knowledge is recorded in the HCOME-3O (meta-)ontologies repository. Learners can view the ontology-based knowledge building and learn from it (e.g. comparisons between ontology versions, changes within an ontology version, etc.). Such learning is only partially supported in the SharedHCONE system (through the interlinking of argumentation and ontology versions); learners should use their personal space environment in order to view development details.
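As a rough illustration of what an ontology-version comparison can look like (this is not the HCONE/SharedHCONE diff functionality, only a minimal class-level sketch with hypothetical file names), two versions of a domain ontology can be compared as follows:

from rdflib import Graph, URIRef
from rdflib.namespace import RDF, OWL

def named_classes(path):
    """Collect the URIs of the named OWL classes declared in an ontology file."""
    g = Graph()
    g.parse(path)  # rdflib guesses the syntax (RDF/XML, Turtle, ...) from the file
    return {c for c in g.subjects(RDF.type, OWL.Class) if isinstance(c, URIRef)}

# Hypothetical file names for two successive versions of the same domain ontology.
old_classes = named_classes("oil_pollution_1.owl")
new_classes = named_classes("oil_pollution_2.owl")

print("Classes added:    ", sorted(new_classes - old_classes))
print("Classes removed:  ", sorted(old_classes - new_classes))
print("Classes in common:", len(old_classes & new_classes))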
Creation and re-use of educational material
Another important aspect of the SharedHCONE system is that the educational material, apart from a draft kick-off version of the domain knowledge, is not present at the beginning of the O.E e-class. Both domain knowledge and development knowledge are constructed during the collaboration and discussion that community members carry out in the shared space. Lesson material is created on-the-fly, as learning objects are recorded as instances of the HCOME-3O meta-ontologies. Recording such knowledge in this structured and meaningful way allows its re-use as educational material, especially for learners in O.E e-classes in which no expert participates (knowledge engineers and domain experts are absent). Knowledge workers are able to query the recorded knowledge for learning purposes, either by browsing the meta-ontologies in a stand-alone fashion or by unfolding the stored structured argumentation dialogues of an ontology engineering e-class using SharedHCONE. Advanced query implementations using special query languages such as SPARQL (http://www.w3.org/TR/rdf-sparql-query/) can be used to combine knowledge from the three different HCOME-3O meta-ontologies; a sketch of such a query is given below, after the figures. Screenshots of the proof-of-concept SharedHCONE system are provided below (Figures 4 and 5), depicting an in-progress learning task of the "Oil Pollution" e-class. The discussion group ("Oil Pollution") comprises three contributing members (the host contributor, HCONE USER 2, and HCONE USER 3).
Fig. 4. SharedHCONE wiki-based system for learning “Oil Pollution” domain
Fig. 5. Unfolding of discussion between SharedHCONE users for “Oil Pollution” domain
Three "Issue" argumentation items have been recorded, two of them specializing and generalizing the issue placed as a response to the kick-off suggested position of the host-contributor. The oil_pollution_2.owl ontology was uploaded as a kick-off ontology for the "Oil Pollution" discussion. Figure 5 depicts the unfolding of the related discussion, showing two Positions made on an Issue and two different arguments (an objecting and a supporting argument respectively) on Position 1, made by different contributors.
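As mentioned earlier, SPARQL can be used to combine knowledge recorded across the HCOME-3O meta-ontologies. A minimal sketch of such a query follows (in Python with rdflib); the file names and the vocabulary used in the query are illustrative assumptions, not the actual HCOME-3O terms.

from rdflib import Graph

# Illustrative export files for the populated meta-ontologies (names assumed).
g = Graph()
for part in ("argumentation_instances.rdf",
             "evolution_instances.rdf",
             "administration_instances.rdf"):
    g.parse(part)

# Which argumentation items refer to which domain elements, and who stated them?
query = """
PREFIX arg: <http://example.org/argumentation#>
SELECT ?item ?author ?element
WHERE {
    ?item arg:statedBy ?author ;
          arg:refersToElement ?element .
}
ORDER BY ?element
"""
for item, author, element in g.query(query):
    print(f"{element}  <-  {item}  (stated by {author})")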
e-Assessment of Individual Knowledge
Computer-aided assessment (also referred to as e-assessment), ranging from automated multiple-choice tests to more sophisticated systems, is becoming increasingly common. With some systems, feedback can be geared towards a student's specific mistakes, or the computer can navigate the student through a series of questions adapting to what the student appears to have learned or not learned. The SharedHCONE system integrates an ontology comparison (diff algorithm) functionality in order to evaluate the quality of the created domain knowledge against, e.g., a reference or "gold" domain ontology. Such an evaluation approach, however, does not provide a means to assess the learned knowledge, i.e. "what" and "how" knowledge has been learned by each learner individually. Currently we are exploiting a novel in-house-developed approach, namely QuGAR-OWL (Automatic Generation of Question items from Rules and OWL ontologies), as an e-learning approach towards an ITS (Intelligent Tutoring System) that generates multiple-choice questionnaires from populated OWL ontologies in an automatic fashion (Papasalouros et al, 2008). The approach utilizes ontologies that represent both domain and multimedia knowledge. Multimedia questionnaires are currently restricted to items with images. For evaluation and experimental purposes we have produced results with a number of domain ontologies for text-based questionnaires. The approach is open to any source of knowledge that can be mapped to OWL semantics and, of course, to any source that already uses OWL semantics to represent its knowledge. Heterogeneous and distributed domain-specific knowledge can also be automatically transformed into a QuGAR-OWL-generated questionnaire, given that there is an OWL model to which these resources can be mapped (and aligned).
Certain strategies have been identified and used for selecting the correct answers in question items, as well as for selecting distractors. The selected strategies are analytically presented in Papasalouros et al (2008). Below we provide a simple strategy and a related example question automatically generated for an example ontology (Maritime environmental pollution ontology).
• Strategy A (text-based): For distractors, choose individuals which are not members of the given class, provided that they are members of one of its superclasses; the correct answer asserts membership of an individual in the class itself. More specifically, if A(a) for some individual a, then the correct answer is A(a). For distractor selection, we assume that B is a superclass of A; then, if B(b), b ≠ a and b is not an individual of A, A(b) is a distractor.
• Generated Question A: Which of the following sentences is true?
A. PERM01 is a pollution event response for minor oil spill. (C)
B. PERS01 is a pollution event response for minor oil spill. (D)
C. PERD01 is a pollution event response for minor oil spill. (D)
D. PERD02 is a pollution event response for minor oil spill. (D)
In the above, only choice A is a correct answer, indicated with (C), since PERM01 is an individual of the ontology class pollution_event_response_for_Minor_oil_spill. The other choices, indicated with (D), are distractors containing individuals which belong to disjoint sibling classes of the above class (the OWL disjointWith axiom has been utilized). Preliminary work extends the QuGAR-OWL approach to also handle rules (specifically SWRL rules), used in problem-solving-related domains such as the environmental protection/pollution domain. We have evaluated such functionality as a proof-of-concept tool for improving the effectiveness of decision making via education and awareness of diagnosis/response policies. More specifically, we identify a number of new strategies that extend our previous work with text-based and multimedia-based strategies. In this paper we present an example rule-based strategy (Strategy B).
• Strategy B (rule-based): Given that d1 ∧ d2 ∧ ... ∧ dm → v1 ∧ v2 ∧ ... ∧ vk is a rule in the knowledge base, where x is a variable and C is a class, and one of the atoms v1, v2, ..., vk in the head of the rule is of the form C(x), then a multiple-choice question item can be formed as follows: the rule provides the semantics for the correct answer, and distractors are selected among the disjoint siblings of C or among its subclasses. As an example we assume that the following rule exists in the knowledge base:
1. oil_pollution_event(?e) ∧ has_oil_spill_volume_severity(?e, oil_spill_volume_severity_disastrous) ∧ has_recovery_time_severity(?e, recovery_time_severity_disastrous) ∧ oil_spill_region_size_on_photo(?e, "large") → pollution_event_Disastrous_oil_spill(?e)
Based on the concept pollution_event_Disastrous_oil_spill, which appears in the head of the above rule, this strategy generates question items as in the following example.
• Generated Question B: If an oil pollution event has disastrous oil spill volume severity, disastrous recovery time and large region size on photo, then the pollution event is a(n):
A. Disastrous oil spill pollution event (C)
B. Oil spill pollution event
C. Minor oil spill pollution event (D)
D. Significant oil spill pollution event (D)
In the above example, the correct answer is indicated by (C), while the wrong answers (distractors) are indicated by (D) (for presentation reasons in the paper only). In the current version of QuGAR-OWL, natural language generation is based on the names of ontology classes and properties, provided that they follow certain conventions. Future work should tackle the problem of generating natural language items from domain-specific OWL and SWRL semantics, with further study of OWL-to-NLG techniques (e.g. the work presented in Karakatsiotis et al (2008)).
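For illustration, a simplified sketch of Strategy A-style item construction is given below (Python with rdflib). It is not the QuGAR-OWL implementation: it only picks a correct individual of the target class and draws distractors from classes declared disjoint with it, and the ontology file name and class URI are assumptions.

import random
from rdflib import Graph, URIRef
from rdflib.namespace import RDF, OWL

def local_name(uri):
    """Readable label derived from the URI fragment (relies on naming conventions)."""
    return str(uri).split("#")[-1].split("/")[-1].replace("_", " ")

def strategy_a_item(g, target_class, n_distractors=3):
    """One multiple-choice item: the correct option asserts membership of an individual
    in target_class; distractors use individuals of classes disjoint with it."""
    correct = random.choice(list(g.subjects(RDF.type, target_class)))

    # Classes declared disjoint with the target class (either direction).
    disjoint = set(g.objects(target_class, OWL.disjointWith))
    disjoint |= set(g.subjects(OWL.disjointWith, target_class))
    candidates = [i for cls in disjoint for i in g.subjects(RDF.type, cls)]
    distractors = random.sample(candidates, min(n_distractors, len(candidates)))

    options = [(f"{local_name(correct)} is a {local_name(target_class)}.", True)]
    options += [(f"{local_name(i)} is a {local_name(target_class)}.", False)
                for i in distractors]
    random.shuffle(options)
    return "Which of the following sentences is true?", options

# Hypothetical usage with the maritime pollution example ontology.
g = Graph()
g.parse("maritime_pollution.owl")  # assumed file name
cls = URIRef("http://example.org/maritime#pollution_event_response_for_Minor_oil_spill")
stem, options = strategy_a_item(g, cls)
print(stem)
for text, is_correct in options:
    print(("(C) " if is_correct else "(D) ") + text)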
Conclusion and Future Work
The work presented in this paper has been motivated by technologies already proposed in collaborative and wiki-based Ontology Engineering, as well as in collaborative e-learning, in order to support the design of systems that create e-classes for learning ontology engineering collaboratively. The proposed technologies are integrated with the HCOME-3O framework in order to support the sharing of consistently evolved and living ontologies within and across different communities. The proposed approach "sees" O.E from a learning perspective, striving to lay the foundations of e-learning in O.E via a mapping of wiki-based collaborative O.E functionalities and objectives to functionalities and objectives that have been proposed in collaborative e-learning. The paper reports not only on the creation of e-classes and the recording/use of learning knowledge by users in different roles, but also on the automatic creation of assessment material from the domain ontologies. Although several technologies of the proposed approach have been individually evaluated (HCONE, HCOME-3O, QuGAR-OWL) or are under development and evaluation (SharedHCONE), the evaluation of the integrated approach as a whole remains a future goal of our work, i.e. identifying potential learners in a community of knowledge workers, providing the system for exploitation, creating e-classes towards learning the knowledge of their domain (domain knowledge) as well as the ontology development knowledge, and finally using the proposed e-assessment technology to evaluate the learning method. Having said that, the presented approach could integrate alternative tools and methodologies that also propose the use of a) argumentation models and b) Semantic Wiki technology. For instance, the DILIGENT methodology and the CICERO tool, as well as tools that integrate collaborative ontology engineering functionality (e.g. Protégé 3.4), could also be used to support (partially or fully) the presented approach.
References
Andriessen, J. (2006). Arguing to Learn. In Sawyer, R. K. (Ed.), The Cambridge Handbook of the Learning Sciences, pp. 443-459, Cambridge University Press.
Bittencourt, I. I., Isotani, S., Costa, E., Mizoguchi, R. (2008). Research Directions on Semantic Web and Education. Journal of Scientia - Interdisciplinary Studies in Computer Science, 19(1), pp. 59-66.
Dellschaft, K., Engelbrecht, H., MonteBarreto, J., Rutenbeck, S. and Staab, S. (2008). Cicero: Tracking Design Rationale in Collaborative Ontology Engineering. Proceedings of the ESWC 2008 Demo Session.
Karakatsiotis, G., Galanis, D., Lampouras, G., Androutsopoulos, I. (2008). NaturalOWL: Generating Texts from OWL Ontologies in Protege and in Second Life. System demonstration, 18th European Conference on Artificial Intelligence, Patras, Greece.
Kotis, K. & Vouros, G. A. (2006). Human-Centered Ontology Engineering: the HCOME Methodology. International Journal of Knowledge and Information Systems (KAIS), 10(1): 109-131.
Papasalouros, A., Kotis, K., Kanaris, K. (2008). Automatic generation of multiple-choice questions from domain ontologies. IADIS e-Learning 2008 (eL 2008), Amsterdam.
Pinto, H. S., Staab, S., Sure, Y., Tempich, C. (2004). OntoEdit empowering SWAP: a case study in supporting DIstributed, Loosely-controlled and evolvInG Engineering of oNTologies (DILIGENT). In: 1st European Semantic Web Symposium, ESWS 2004, Springer.
Scardamalia, M. (2004). CSILE/Knowledge Forum®. In Educational technology: An encyclopedia, pp. 183-192, Santa Barbara: ABC-CLIO.
Scardamalia, M., & Bereiter, C. (2006). Knowledge building: Theory, pedagogy, and technology. In K. Sawyer (Ed.), Cambridge Handbook of the Learning Sciences (pp. 97-118). New York: Cambridge University Press.
Selvin, A. M. (1999). Supporting Collaborative Analysis and Design with Hypertext Functionality. Journal of Digital Information, 1(4).
Siorpaes, K. & Hepp, M. (2007). myOntology: The Marriage of Collective Intelligence and Ontology Engineering. In Proceedings of the Workshop Bridging the Gap between Semantic Web and Web 2.0, ESWC.
Streitz, N. A., Hannemann, J. & Thüring, M. (1989). From Ideas and Arguments to Hyperdocuments: Travelling through Activity Spaces. Hypertext Conference 1989, pp. 243-364.
Vouros, G. A., Kotis, K., Chalkiopoulos, C., Lelli, N. (2007). The HCOME-3O Framework for Supporting the Collaborative Engineering of Evolving Ontologies. ESOE 2007 International Workshop on Emergent Semantics and Ontology Evolution, ISWC 2007, November 12th, Busan, Korea. CEUR-WS.org/Vol. 292, ISSN 1613-0073.
Using Avatar's Nonverbal Communication to monitor Collaboration in a Task-oriented Learning Situation in a CVE
Adriana Peña Pérez Negrón, Angélica de Antonio Jiménez
Universidad Politécnica de Madrid – Facultad de Informática, Campus Montegancedo s/n, Boadilla del Monte 28660 Madrid, Spain
[email protected],
[email protected]
Abstract: Nonverbal communication gives significance to communication, and in a collaborative task accomplishment situation it is a helpful mechanism to support peers' awareness. Besides speech, actions and gestures help students to create a shared ground and the group process in a learning scenario. In virtual environments, the in-context nonverbal communication displayed by the users' avatars could be the means to understand collaborative interaction, at least to the extent that collaboration can be distinguished from other working-together organization types not proper for collaborative learning, such as division of labor, hierarchical organization or brute force trials. In this paper we discuss some considerations about the nonverbal communication cue patterns displayed by the users' avatars in different working-together situations in an immersive virtual scenario, with their gaze behavior highlighted.
Introduction
Monitoring students in different geographical locations while a collaborative learning session takes place should allow the needed just-in-time group assistance. But if there is no human facilitator available, then a pedagogical agent could take this place, at least to a certain point. In Collaborative Virtual Environments (CVEs) several communication channels other than speech might serve this purpose, either as a complement to, or when required a substitute for, speech analysis. Nonverbal communication (NVC) comprises all the wordless messages people interchange (Argyle, 1988; Knapp & Hall, 2007; Mehrabian, 1969). During people's interaction NVC involves three factors: environmental conditions, physical characteristics of the communicators, and behaviors of the communicators (Knapp & Hall, 2007), all of them clearly restricted in CVEs by computer conditions. In a CVE for learning, environmental conditions have to do with the pedagogical strategy, which determines the session's goal, such as discussing around a theme, solving a problem or accomplishing a task. Depending on the purpose of the learning session, the emphasis on the environment will be on the communication media, the conditions of the workspace, the surrounding objects and/or the features of the virtual scene. As for the physical characteristics of the communicators, they are determined by the avatar's appearance, which in learning environments is usually established by the developer, with few options available to be changed by the students; physical characteristics also include those, more interesting here, related to the avatar's possibilities of expressing NVC via facial expressions, navigation or specific body movements. CVEs are especially useful both for interaction and for collaborative spatial tasks (Park & Kenyon, 1999), while in the learning session the task to be accomplished is the students' common goal and the main cognitive entity (Whitworth, Gallupe, & McQueen, 2000). While a task is being collaboratively accomplished there is a rich interplay between the speech and the actions that take place (Bekker, Olson & Olson, 1995; Goodwin, 1996). Thus, sharing views of the workspace is very helpful in physical tasks (Gutwin & Greenberg, 2001; Kraut, Fussell, & Siegel, 2003). Here, the NVC cues can generate shared knowledge about previously spoken discourse and behavioral actions (Clark, 1996). Also, the visual information provided by the CVE is one of the strongest sources for verifying mutual knowledge (Clark & Marshall, 1981). A shared visual environment provides cues to others' comprehension; both speakers and listeners will make use of these cues to the extent possible to reduce their collaborative effort (Gergle, Millan, Kraut, & Fussell, 2004). But working together to accomplish a task does not necessarily mean that the final outcome is due to collaboration; it could be the result of splitting the task and then putting the parts together, or the task could be accomplished by some participants giving orders while others just follow them. In order to obtain the proper advantages of collaborative learning, the students must create a shared common ground and group process through a joint effort (Collazos, Guerrero, Pino, & Ochoa, 2002; Johnson and Johnson, 1975; Soller, Jermann, Muehlenbrock, & Martínez Monés, 2004). In this paper we discuss the possibility of distinguishing patterns of NVC cues, with an emphasis on gaze behavior, while a task is accomplished in an immersive CVE.
These patterns are intended to distinguish collaboration from other organizational patterns, in order to allow a pedagogical agent to successfully guide the learning session.
Nonverbal Communication Patterns in Virtual Environments
The rich nonverbal communication of a face-to-face situation is not readily available in virtual environments; succinct metaphors have to substitute for it, although in some cases it can be almost directly tracked from the user to his/her avatar. To date, the most common setup in an immersive virtual environment is oral communication, head and hand trackers, and navigation within the environment usually through a joystick (Wolff et al., 2008). From these, the available NVC cues relate to paralinguistics, proxemics, deictic gestures and head movements. Visual feedback plays an important role; speakers and addressees take into account what others can see (Schober & Clark, 1989), and they notice where others' attention is focused (Boyle, Anderson, & Newlands, 1994). By seeing the actions of the partner, the speaker gets immediate feedback regarding whether or not the addressee understood the given suggestion or instruction (Brennan, 2004; Gergle et al., 2004). While NVC is usually related to personality, whenever it is task oriented it tends to follow regular patterns (Chopra-Khullar & Badler, 1999; Patterson, 1982). These observed patterns, in context, can give us a clue of what occurs in a task-oriented learning session. Paralinguistic features like tone of voice or irony are a computational challenge for analysis, but the amount of talk and/or the turn-taking patterns could be useful for the interaction analysis. It has been demonstrated that some proxemic behaviors are kept in VEs (Bailenson, Blascovich, Beall, & Loomis, 2003); thus the position maintained in relation to peers in the scenario while a task is being accomplished could indicate, for example, a discussion period or a working-apart one. Deictic terms such as here, there, that, are interpreted from the communication context, and when the conversation is focused on objects and their identities they are crucial to identify the objects quickly and securely (Clark & Brennan, 1991); consequently, deictic gestures, especially those directed to the workspace, should be useful to determine whether students are talking about the task. The manipulation of objects related to the task accomplishment can also contribute to the interaction analysis. For an effective collaborative learning session we expect each student to take part both while the decisions are taken and while the implementation takes place (Dillenbourg, 1999; Jermann, 2004; Webb, 1995), which means that the students have to have a more or less symmetrical participation in discussion and in execution. Some hypotheses can be formulated about a session that is not following a collaborative organization. For example, if only some of the students are giving orders and the others are following them, then the expected pattern could be asymmetry in discussion and execution; as another example, those that give orders while the task is being accomplished should be pointing to what they are referring to, while the executors should be paying attention. In division of labor, the participants could be positioned in different working areas without caring whether their partners can see what they are doing or without discussing it with them. If the session follows a brute force trial organization, implementation will probably start from the beginning, without reviewing periods.
In order to verify whether these assumptions hold in an immersive virtual environment, four sessions with the task of setting furniture in different situations were observed.
Case Study: Arranging furniture
Test sessions were carried out in two remotely connected CAVE™ installations in two different locations, one in the Centre for Virtual Environments, University of Salford, and the other in the Department of Computer Science, University of Reading. A human-like avatar of the remote user was displayed in the scenario, with each user provided with head and one-hand tracked motions, and real eye movements tracked from the user (see Wolff et al., 2008 for the EyeCVE system characteristics). The eye-tracking allowed the avatars to represent the users' exact eye movements. Both avatars looked the same, but the Salford user's avatar had a black t-shirt and the Reading user's avatar had a blue t-shirt. The actions allowed were grabbing, rotating and relocating the furniture. The two participants were male expert CAVE™ users who did not require any explanation about how to navigate in the scenario or how to manipulate the furniture. The selected task was to furnish a room, since it does not require any special knowledge, it is a spatial task and it implies some planning. The trials followed this order: first the collaborative one, then the hierarchical, followed by division of labor, and at the end the brute force trial. We tried to create the conditions for the collaborative, hierarchical, division of labor and brute force trials by changing the furniture and task situation, without explicitly telling the participants what the real intention was, trying to avoid bias. The conditions and the furniture were set as follows:
1) Collaborative trial
a. Furniture. With the intention of encouraging consensus, some objects with different shapes and no evident use were available; for example, a box that could be used either to sit on or as a table (left of Figure 1).
b. Instructions. Participants were told to arrange the furniture considering that they were going to share the room and that they could get rid of anything they did not want in it. They were also told to agree as much as possible on every decision they made.
2) Hierarchical trial
a. Furniture was already set in the room, but they needed to make room to put a billiard table in it (center of Figure 1).
b. Instructions. One of the participants was supposed to own the room (they decided which one) and the other one was a helper. The "owner of the room" was not allowed to move big things, so he had to ask the helper to move the billiard table, among other furniture. The room's owner was allowed to get rid of the furniture he did not want in the room.
3) Division of labor trial
a. Furniture had two colors. Part of the furniture was green (the usual elements in a living room) and the rest was red (the usual elements in a dining room) (right of Figure 1).
b. Instructions. They were told they were allowed to move only furniture of one color and they could choose which color each one was going to set.
4) Brute force trial
a. Furniture. It was the same as in the collaborative trial.
b. Instructions. The participants were told to set the furniture as quickly as they could.
The participants knew each other, thus they did not need to be introduced, and since they were regular CAVE™ users and partners they skipped the social introductory phase. There was only a very short greeting when they started: the Reading participant, who will be called subject A from now on, waved to the Salford one, who will be called subject R from now on, and he waved back. The nonverbal communication displayed by the participants presented restrictions compared with a real life situation. Although the communication was oral and they could have regular turns and make comments, they always heard the other one in their ear, thus they did not have the real-life acoustic information necessary to distinguish where the other one was located if they were not seeing each other. On one occasion subject R was talking to subject A while behind his back, and subject A had to ask "where are you?". In order to avoid losing the eye-tracker calibration, the participants were not free to make head nods or jerks. Since the avatars had only one hand tracked and its shape could not be changed, pointing gestures were made by extending the whole arm (see Figure 2). When they referred to an extended area they moved the arm from one side to the other, and sometimes the pointing was made with an object attached to the hand. Although they could see each other's head and point of view, they always used the arm-hand pointing or a verbal reference.
Stages during the task
The analysis was made with the help of a tool to replay the sessions, which allows observing both connected CAVE™ installations as one single scenario and from different points of view (Murgia et al., 2008). The first three sessions followed the same stages: first they observed what they had at hand, then they planned the furniture arrangement, then they implemented while making small reviews and changing some plans, and by the end they did a review that sometimes led to small changes. The main differences observed were in the implementation stage. The brute force trial started with the implementation and ended with a very quick review, more like a session closure. First stage. The participants started the first three trials by observing what they had at hand in the scenario. In the collaborative trial, by touching and moving some of the furniture to get used to its shape, which they usually put back where it was located (left of Figure 1). In the hierarchical trial, they first observed the billiard table and then they put it inside before deciding what to do (center of Figure 1). In the division of labor trial, they only observed the furniture while they made comments and jokes about what each one had got (right of Figure 1). In the brute force trial, they skipped this first step since they were already familiar with the available furniture.
[Figure 1 panels, left to right: Collaborative | Hierarchical | Division of labor]
Figure 1. Observing what is at hand before starting planning
Second stage. The next step was the planning phase. The NVC behavior was pretty much the same in the first three trials (collaborative, hierarchical and division of labor). They stayed one in front of the other while they discussed how to set the furniture; at the same time they were pointing towards the furniture or the area they were referring to (see Figure 2). Although they barely moved around the scenario, their gazes went all around it. In the left part of Figure 3, the trace of their heads' position is displayed by small tetrahedrons, red for subject A and green for subject R, and in the right side of Figure 3 the gaze target points are the small spheres, green for subject A and red for subject R. This phase can thus be distinguished by the talking turns, their position in the scenario (one in front of the other), the interchange of deictic gestures, and the dispersed gazes.
[Figure 2 panels, left to right: Collaborative | Hierarchic | Division of labor]
Figure 2. Making plans to set furniture. Left in the collaborative situation, center hierarchical and right division of labor
[Figure 3 layout: columns – Head trace, Gazes target point; rows – Collaborative, Hierarchic, Division of labor]
Figure 3. The moving around area compared to dispersed gazes
Third stage. After they made some planning, they started the implementation. The implementation was considered to start from the moment they began to move furniture in order to set the scenario. In this stage important differences among NVC behaviors can be observed. Turns of talk. The definition of a turn of talk shifts from one researcher to another according to the criterion used to determine its limits (Feldstein, Aberti & BenDebba, 1979). Here we will only establish the difference between
chatting and clear periods of silence. An audio editor was used to show the audio over time. The most significant difference we found is the longer silent periods in the division of labor trial, as can be seen in Figure 4, surrounded by a black box.
[Figure 4 panels: Brute force | Collaboration | Division of labor | Hierarchic]
Figure 4. The audio track of the implementation stage for the four trials
Manipulation of objects. Another difference that can help the differentiation between the trials is the time the participants spent manipulating furniture. In Table 1 it can be seen that in the brute force and the division of labor trials they spent most of the time moving furniture. The biggest difference between the two participants was in the hierarchical trial, where subject A, who gave the orders, did clearly less moving.

Table 1. Weighted average of the time of object manipulation during implementation
                     Collaboration   Brute Force   Division of Labor   Hierarchical
Manipulation time    58%             98%           78%                 41%
Subject A            22%             54%           39%                 15%
Subject R            36%             44%           38%                 26%
Gazes. It came to our attention that, although the participants were aware that the avatars were representing the real users' eye movements, they never made eye contact or even mutual gaze during the sessions. Gazes went either to the scenario or to the other participant. When they looked at each other, they gazed at the object the other was moving, the body of the other one, the moving hand or the head. The first notable difference is the proportion of the stage time they spent looking at each other: for collaboration 22%, for the hierarchical trial 20%, for brute force 17%, and for division of labor only 6%. In the collaborative trial (upper right side of Figure 5), the proportion of gazes between participants can be seen: subject R, in red, made 58% of them and subject A, in green, made 42%. In the hierarchical trial (upper right side of Figure 5), subject A did 65% of the looking at the other one, while he was giving orders, and subject R did only 35%. Subject A increased head gazes with respect to the collaborative trial to 4%, and subject R did the same towards the other's hand, to 4%. In division of labor (lower side of Figure 5), subject R did most of the gazes, 75%, and almost half of them to what subject A was doing, while subject A did only 25% of the gazing to the other one. In the brute force trial (lower left side of Figure 5), again as in division of labor, subject R did most of the gazes, with 71%, and subject A did only 29%. While some of the gazes directed to each other can be explained by the working-together context, others might have a different explanation. In collaboration they looked at each other more or less the same amount of time, which is consistent with an opinion-sharing situation: part of the time to the other one and part to what the other was doing. The hierarchical situation is also consistent with the feedback of being understood that the one who is giving instructions requires. In division of labor the amount of gazes is quite small compared to the other trials, due to the fact that they were implementing on their own most of the time; but that does not explain why subject R gazed that much at subject A without the same response from subject A, as also happened in the brute force trial, although a good proportion of that time was directed to what the other was doing and not actually to the other one. This could be explained as subject R requiring more feedback about what his partner was doing than subject A did.
Figure 5. Gazes to each other while implementing
Fourth stage. Once they had finished moving the furniture, they reviewed what they had done and made small changes. In this stage the participants stood where they could get the best view of the area they were reviewing, regardless of where the other one was standing. Gazes went around with some pointing, and small changes to the furniture set were made in the hierarchical and the collaborative trials. The left side of Figure 6 shows how the scenario looked once they ended the implementation, and the right side of Figure 6 shows how the scenario looked when they finished the review, after some small changes were made.
[Figure 6 panels: Collaborative | Hierarchical]
Figure 6. Small changes after review
Discussion
A detailed observation of four situations related to working together in an immersive collaborative virtual environment while a task was being accomplished was made with the purpose of searching for NVC cues that
could be used to distinguish them. The four situations were collaborative, hierarchical, division of labor and brute force trial, and the task was setting furniture within a room. The first three situations followed four stages: the participants observing the scenario in order to see what they had at hand to work with; planning the furniture setting; making the implementation; and reviewing. The brute force trial had only the implementation stage. The main differences in NVC cues among the stages are summarized in Table 2. When they are exploring the scenario, planning or reviewing, the same NVC behaviors can be observed in the collaborative, hierarchical and division of labor sessions, while the brute force trial skips these stages. In the implementation stage in collaboration they chat, move around the same area, usually only one of the two of them is moving objects, there are some sporadic deictic gestures, and gazes go from each other to the scenario. In the hierarchical implementation, the talk and the gazes come mainly from the one who gives orders, while the implementation is made by the one who follows orders. In the division of labor they gazed mainly at what they were doing, when they talked it was most of the time statements rather than attempts to get an answer as in conversation, and the implementation is made at the same time by both of them. Finally, in the brute force trial the main difference in implementation with respect to division of labor is the amount of gazes that they direct to each other.

Table 2. Different nonverbal communication behaviors among the stages
(NVC cues per stage: Talk; Proxemics; Manipulation of objects; Deictic gestures; Gazes)
Exploring the scenario. Talk: turns; Proxemics: allowing to see each other; Manipulation of objects: touching; Deictic gestures: some pointing; Gazes: around the scenario and the objects.
Planning. Talk: turns; Proxemics: allowing to see each other, around a small area; Manipulation of objects: not; Deictic gestures: interchange of pointing; Gazes: around the scenario, and to each other.
Review. Talk: turns; Proxemics: to get the best point of view; Manipulation of objects: barely; Deictic gestures: great amount; Gazes: around the objects.
Collaboration implementation. Talk: turns; Proxemics: around the same area; Manipulation of objects: most of the time from only one person; Deictic gestures: some pointing; Gazes: mainly to the objects and to each other.
Hierarchical implementation. Talk: turns, main talk from the one who was giving orders; Proxemics: allowing to see each other; Manipulation of objects: mainly from those that followed the orders; Deictic gestures: mainly from the one that gave the orders; Gazes: mainly from the one that gives orders to the one that followed them.
Division of labor implementation. Talk: barely; Proxemics: each one on their own working area; Manipulation of objects: at the same time in different areas; Deictic gestures: barely; Gazes: to the working area.
Brute force implementation. Talk: barely; Proxemics: mostly each one on their own working area; Manipulation of objects: at the same time in different areas; Deictic gestures: barely; Gazes: around the area and to each other.
While this is only a first review of NVC cues, our hypothesis is that different working-together situations can be distinguished solely by observing NVC. The final goal is to distinguish collaboration while a group of students is working together in a learning session, in order to create a pedagogical agent that guides them towards the advantages expected from collaborative learning.
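As a rough illustration of how such NVC-cue patterns might eventually be operationalized by a pedagogical agent, the sketch below encodes the tendencies summarized in Table 2 as simple rules over aggregate session features. The feature names and thresholds are illustrative assumptions, not values derived from the reported observations.

from dataclasses import dataclass

@dataclass
class SessionFeatures:
    """Aggregate NVC features for an implementation stage (all values in [0, 1])."""
    talk_symmetry: float              # balance of talk turns between the two participants
    manipulation_symmetry: float      # balance of object-manipulation time
    simultaneous_manipulation: float  # fraction of time both manipulate objects at once
    shared_area: float                # fraction of time spent in the same working area
    gaze_to_partner: float            # fraction of time gazing at the partner or their work

def classify_working_style(f: SessionFeatures) -> str:
    """Map aggregate NVC features to a working-together style (illustrative thresholds)."""
    if f.shared_area < 0.3 and f.simultaneous_manipulation > 0.6:
        # Working apart at the same time: division of labor vs. brute force,
        # distinguished mainly by how much the partners still watch each other.
        return "brute force" if f.gaze_to_partner > 0.15 else "division of labor"
    if f.talk_symmetry < 0.4 or f.manipulation_symmetry < 0.4:
        return "hierarchical"
    return "collaboration"

# Example: balanced talk, one mover at a time, shared working area.
print(classify_working_style(SessionFeatures(0.8, 0.7, 0.2, 0.9, 0.22)))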
References
Argyle, M. (1988). Bodily Communication (2nd ed.). London: Methuen & Co. Ltd.
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. (2003). Interpersonal distance in immersive virtual environments. Personality and Social Psychology, 29, 819-833.
Bekker, M. M., Olson, J. S., & Olson, G. M. (1995). Analysis of gestures in face-to-face design teams provides guidance for how to use groupware in design. Proceedings of DIS 95, 157-166. NY: ACM Press.
Boyle, E. A., Anderson, A. H., & Newlands, A. (1994). The effects of visibility on dialogue and performance in a cooperative problem solving task. Language & Speech, 37(1), 1-20.
Brennan, S. E. (2004). How conversation is shaped by visual and spoken evidence. In J. Trueswell & M. Tanenhaus (Eds.), World Situated Language Use: Psycholinguistic, Linguistic and Computational Perspectives on Bridging the Product and Action Traditions. Cambridge, MA: MIT Press.
Chopra-Khullar, S., & Badler, N. I. (1999). Where to look? Automating attending behaviors of virtual human characters. Proceedings of the 3rd Annual Conference on Autonomous Agents, pp. 16-23, Seattle, Washington, United States.
Clark, H. H., & Marshall, C. R. (1981). Definite reference and mutual knowledge. In A. K. Joshi, B. L. Webber, & I. A. Sag (Eds.), Elements of Discourse Understanding (pp. 10-63). New York: Cambridge University Press.
Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, and S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127-149). Washington, DC, USA: American Psychological Association.
Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press.
Collazos, C., Guerrero, L., Pino, J., & Ochoa, S. (2002). Evaluating Collaborative Learning Processes. Proceedings of the 8th International Workshop on Groupware (CRIWG 2002), Springer Verlag LNCS, 2440, Heidelberg, Germany.
Dillenbourg, P. (1999). 'What do you mean by collaborative learning?'. In Dillenbourg, P. (Ed.), Collaborative Learning: Cognitive and Computational Approaches (pp. 1-19). Oxford: Elsevier.
Feldstein, S., Aberti, L., & BenDebba, M. (1979). Self-attributed personality characteristics and the pacing of conversational interaction. In Siegman, A. W., Feldstein, S. (Eds.), Of Speech and Time: Temporal Speech Patterns in Interpersonal Contexts. Lawrence Erlbaum, Hillsdale, New Jersey.
Gergle, D., Millan, D. R., Kraut, R. E., & Fussell, S. R. (2004). Persistence matters: Making the most of chat in tightly-coupled work. CHI 2004 (pp. 431-438). NY: ACM Press.
Goodwin, C. (1996). Professional vision. American Anthropologist, 96, 606-633.
Gutwin, C. & Greenberg, S. (2001). A Descriptive Framework of Workspace Awareness for Real-Time Groupware. Journal of Computer-Supported Cooperative Work, 3(4), 411-446.
Jermann, P. (2004). Computer Support for Interaction Regulation in Collaborative Problem-Solving. PhD thesis, University of Geneva, Geneva, Switzerland. Retrieved May 2005 from http://craftsrv1.epfl.ch/~colin/thesis-jermann.pdf.
Johnson, D., & Johnson, R. (1975). Learning Together and Alone: Cooperation, Competition and Individualization. Prentice Hall Inc., Englewood Cliffs, New Jersey.
Knapp, M. L., & Hall, J. A. (2007). Nonverbal Communication in Human Interaction (5th ed.). Wadsworth: Thomas Learning.
Kraut, R. E., Fussell, S. R., & Siegel, J. (2003). Visual information as a conversational resource in collaborative physical tasks. Human Computer Interaction, 18(1), 13-49.
Mehrabian, A. (1969). Significance of Posture and Position in the Communication of Attitude and Status Relationships. Psychological Bulletin, 71, 359-72.
Murgia, A., Wolff, R., Steptoe, W., Sharkey, P., Roberts, D., Guimaraes, E., et al. (2008). A Tool for Replay and Analysis of Gaze-Enhanced Multiparty Sessions Captured in Immersive Collaborative Environments. Proceedings of the 12th IEEE International Symposium on Distributed Simulation and Real Time Applications (IEEE DS-RT 2008), Vancouver, Canada.
Park, K. S., & Kenyon, R. V. (1999). Effects of Network Characteristics on Human Performance in a Collaborative Virtual Environment. Proceedings of the IEEE Virtual Reality, p. 104.
Patterson, M. L. (1982). A sequential functional model of nonverbal exchange. Psychological Review, 89, 231-249.
Schober, M. F., & Clark, H. H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21(2), 211-232.
Soller, A., Jermann, P., Muehlenbrock, M., & Martínez Monés, A. (2004). Designing computational models of collaborative learning interaction: Introduction to the workshop proceedings. Proceedings of the 2nd International Workshop on Designing Computational Models of Collaborative Learning Interaction at ITS. Maceió, Brazil.
Webb, N. M. (1995). Group collaboration in assessment: multiple objectives, processes, and outcomes. Educational Evaluation and Policy Analysis, 17(2), 239-261.
Whitworth, B., Gallupe, B., & McQueen, R. J. (2000). A cognitive three process model of computer-mediated groups: Theoretical foundations for groupware design. Group Decision and Negotiation, 9(5), 431-456.
Wolff, R., Roberts, D., Murgia, A., Murray, N., Rae, J., & Steptoe, W. (2008). Communicating Eye Gaze across a Distance without Rooting Participants to the Spot. Proceedings of the 11th IEEE ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT), pp. 111-118.
Collaborating in Social Networks: The Problem Solving Activity Leading to Interaction - 'Struggle' Analysis Framework (SAF)
Tzanavaris Spyros, Gymnasio of Liapades, Corfu, [email protected]
Sepetis Anastasios, TEI Athens, Athens, [email protected]
Tzanavaris Steve, TopoSat Co, Athens, [email protected]
Abstract: This paper examines the effect of problem solving activity during computer-supported collaborative problem solving in digital social networks and introduces the 'Struggle Analysis Framework (SAF)' for analyzing interactions in these networks; these interactions will be called 'struggle'. A study of collaborative modeling has been conducted in the frame of an authentic educational activity among students in a secondary school and in a university, using the Facebook social network. The students involved were separated into two groups. Analysis of the students' peer interaction revealed that, in line with our expectations, the group provided with an activity appeared to have much more struggle than the group that had nothing special to do. They were more active, they exchanged more messages and documents, they were involved in deeper discussions, they adopted mentor/tutor roles and they collaborated more. The reported findings, as well as the SAF model, can have implications for various settings of collaboration in social networks.
Introduction
A variety of new web tools and web technologies supporting computer-supported collaborative learning (CSCL) has recently appeared and established itself on the Internet (Beldarrain, 2006; Bryant, 2006). What is of particular importance in the Web context for CSCL researchers is the integration of so-called social software (Tzanavaris Sp., Korakianiti & Tzanavaris St, 2008; Kesim & Agaoglu, 2007; Kolbitsch and Maurer, 2006). Social software refers to systems which facilitate human communication, interaction and collaboration in large communities (Wagner & Bolloju, 2005; Ward, 2006). These systems support the constitution and maintenance of self-organizing social networks and communities (Köhler & Fuchs-Kittowski, 2005; Lin, Chen, Yu, 2006; Moore & Serva, 2007; Wasko & Faraj, 2005). Digital social networks, weblogs (blogs), file-sharing communities, and wikis loom large in this social-software context (Wagner & Bolloju, 2005). In this empirical research we try to figure out the contribution of a problem solving activity to the interaction (struggle) during collaboration in digital social networks. To this end we use the hereby introduced Struggle Analysis Framework (SAF). Modern approaches to teaching and learning put emphasis on problem solving activities that involve collaboration. There seems to be wider acceptance of the fact that these approaches encourage construction of knowledge and building of meaning. The main benefits of collaborative learning are related to the active character of the learning process, the deep level of information processing and the requirement of deep understanding from the students involved (Scardamalia & Bereiter, 1994; Dillenbourg, 1999). Through such approaches skills of critical thinking, communication and coordination can be developed and conscious knowledge construction mechanisms can be built (Steeples & Mayers, 1998; Stahl, 2002). Network-based computer systems offer new possibilities in this context and at the same time raise new questions related to the feasibility and effectiveness of distance collaboration. Further questions are related to the factors that affect collaboration, the role of the symbolic and physical tools that support human activity and communication in this context, as well as the role of human interaction and peer support during collaborative learning. Knowledge building takes place through student peer interaction, interaction between students and external representations, between students and teachers, or between students and software agents. Such interactions found in digital social networks will be called 'struggle'. It has been shown that struggle (social interaction and scaffolding) affects the learning process in computer-supported collaborative problem solving and that it can be evaluated (Soller, Lesgold, Linton & Goodman, 1999; Soller, 2001; Wu, Farell & Singley, 2002). Communication often takes place through specially designed tools/platforms, which should remain transparent in order not to interfere with students' problem solving activity (Reiser, 2002). Social network platforms are a combination of synchronous and asynchronous tools, as well as a combination of open and closed environments. Synchronous tools such as video and chat are the virtual transformation of physical presence and speech.
For example, synchronous communication can take place through exchanged text messages using chat tools (Baker, Hansen, Joiner and Traum, 1999; Weinberger, Fischer and Mandl, 2002) and through shared activity boards, in which problem solutions are constructed. There are also asynchronous tools such as bulletin boards, forums and asynchronous collaborative platforms such as wikis (Reinhold, 2006). In this way the collaborating partners share representations of cognitive artifacts, supporting common understanding. In this context special interest has been shown in the investigation of the conditions under which struggle (computer-supported collaborative problem solving) can be effective. Investigation of these conditions
often involves the design of collaborative learning environments, e.g. environments which provide learning resources and, in particular, primitive artifacts that can be used in the process. In most cases these primitive artifacts belong to a pre-determined closed set. Examples of such primitive artifacts can be tools like chat, forum and e-mail, as well as abstract objects like rectangles, ellipses, squares, different statement types, etc., as is the case in Belvedere (Suthers and Jones, 1997), COLER (Constantino & Suthers, 2001), C-CHENE (Baker and Lund, 1997) and the Modeler Tool (Koch, Schlichter and Trondle, 2001). These can take on special meaning for the students during struggle. Common understanding among collaborators is thus based on the existence of these common basic primitives, and the solution is built using these shared available resources. This is one of the mechanisms provided for struggle (scaffolding) in the collaborative activity. These common primitives are the items about which the users struggle (argue and discuss) before converging to a commonly acceptable solution (Suthers, 2000). According to Stahl (2002), the students can start their struggle (argumentation) only after they have built a common understanding of their meaning and use it in the modeling activity. However, this 'closed environment' assumption is not always true. Today struggle (collaborative problem solving activity) can take place within open systems, which permit additional resources to be built or sought by the students themselves (photos, documents, videos). In addition, pedagogical motivations often encourage this 'open' approach (Dillenbourg, Baker, Blaye, O'Malley, 1995; Muehlenbrock, Tewissen, and Hoope, 1998; Baker, de Vries, Lund & Quignard, 2001). On the one hand, as a consequence the building resources are not shared among all the partners, who therefore need to struggle (negotiate) the available resources before even getting engaged in problem solving (Fidas, Komis, Avouris, Dimitracopoulou, 2002). The collaborators struggle (search) for artifacts in a wider space like the Internet, or even struggle (build) new artifacts themselves during the process. On the other hand, in digital social networks, large groups of like-minded people are able to struggle (work collaboratively) on one and the same text about a certain topic, and/or to struggle (jointly discuss) an uploaded photo (e.g. the Facebook 'wall'). All these activities allow the collaborative generation of knowledge (Fuchs-Kittowski and Köhler, 2005; Köhler and Fuchs-Kittowski, 2005). Generally, the potential of digital social software for collaborative learning (social networks, wikis, blogs) lies in its ability to allow for struggle (debate-based learning) experiences (Chong and Yamamoto, 2006) or to facilitate the shaping of knowledge (Reinhold, 2006). Digital social applications can be regarded as media which support learning due to their ability to facilitate collaboration (Kim, Han H. & Han S., 2006; Notari, 2006), to enhance inventiveness (Guzdial, Rich & Kehoe, 2001), and to support inquiry learning and the co-construction of knowledge (Yukawa, 2006). In digital social networks, due to the nature of the medium, struggle does not necessarily take place between collaborating partners who share common conventions, cultural and cognitive backgrounds, tools and resources. This heterogeneity of learning material may reinforce struggle (Fidas, Komis, Tzanavaris and Avouris, 2005). Consequently, 'struggle' is not equal to 'interaction'.
It is a social interaction, taking place in an open social environment, between collaborators or between a collaborator and an artifact, in a scaffolding context that uses the problem solving activity. Struggle is a deeper, more internal notion than interaction, based on pedagogical/psychological aspects (Tzanavaris and Sepetis, 2009). In the reported research we have attempted to investigate the 'struggle' aspect of collaborative learning in digital social networks by studying the role of problem solving. This is a key question relating to interaction in the context of collaborating partners who do not necessarily share common conventions, cultural and cognitive backgrounds, tools and resources. Building a common understanding in such a case is considered particularly difficult. In order to achieve this objective we set up an experiment involving collaborative problem solving by pairs of students at a distance through Facebook. The struggle analysis was based on the hereby introduced Struggle Analysis Framework (SAF). In the following sections of the paper we describe the experiment and the context of the study, we analyze the collected data and discuss the findings of the empirical study, and finally the results are summarized and the implications of the reported research are outlined.
The study
The Context
This study took place among students of a Greek Gymnasium (ages 14-15) and of a Greek Technological Educational Institute (TEI) (ages 18-20), in the frame of an educational activity in Facebook. Twenty-four (24) students and their two tutors took part in the experiment, which took place in the context of the classes on 'Internet Technology' in both the Gymnasium and the TEI, during the winter semester of 2008. The students volunteered to participate in the study and they were not evaluated for their performance in this problem solving activity. They were separated into two groups (A and B), each group divided into six (6) pairs of students, each pair having one partner from the Gymnasium and one from the TEI. The pairs of group A, which was used as a control group, were provided with a problem activity (presented later), while the pairs of group B had nothing special to do. The physical location of the students was in Corfu and in Athens: group A (6 students in Corfu and 6 in Athens), group B (6 students in Corfu and 6 in Athens). The members of each pair, selected at random, collaborated using the
Facebook environment. They did not know each other, so they first had to find the given name of their partner using a Facebook search tool and then '+add recipient'. Subsequently, the pairs of group A were given an activity sheet and instructions on the problem solving strategy to be followed during the provided time (50-55 min.), while the pairs of group B were told to do anything they liked with their new friend (chat, exchange photos, exchange documents through e-mail). It should be mentioned that every student used the computer he/she had been working on during the semester, and thus already had a personal library of photos and documents. The problem that was given to each pair of group A was to form a joint one-page poster describing their common favorite band or singer. They could use photos and texts they already had or found on the internet during the experiment time. Both groups were provided with an appropriate form (presented later) on which they were asked to note every time they did an upload or download using the Facebook interface, their e-mail account or the web (mainly Google).
The Digital Social Network Environment
The digital social network environment selected was the popular Facebook, at www.facebook.com, since all students already had an account there. This social network, like most social networks, contains tools for the exchange of text messages (chat) between collaborating partners. It also contains a 'wall' for uploading photos and commenting on them. Simple messages, photos, videos and hyperlinks can be exchanged through an e-mail-like process. A user can locate someone whose name he/she already knows and send an invitation to become a friend. If a friend is online at the same time, the user is notified and can start a chat. Struggle in this context takes place through chat messages exchanged between students, through actions in the Facebook activity space such as '+add recipient', uploading a photo and commenting on it, and through exchanged documents (Tzanavaris Sp., Korakianiti K. & Tzanavaris St., 2008; Tzanavaris and Sepetis, 2009).
Figure 1. The Facebook entrance page.
Educational Scenario for Collaborative Problem Solving
The educational scenario was based on studies by Pinelle & Gutwin (2002) on collaborative problem solving and by Ogata, Yano, Furugori and Jin (2001) on computer supported social networking for augmenting cooperation. The scenario involved producing a joint poster of the pair's common favorite band or singer for the '2nd annual world competition: poster of my favorite singer/band'. The students could use photos and documents they already had or found on the internet during the experiment. They had to negotiate the characteristics of the poster using the provided tools (Facebook artifacts: chat and a collaborative tool, the 'wall'; their e-mail accounts for exchanging the poster). Many researchers argue that the problems users are asked to solve need to be authentic and based on realistic scenarios (Hansen, Holmfeld, Lewis & Rugelj, 1999; Gutwin & Greenberg, 2000). The scenario given to the pairs was the following (Figure 2):

Suppose that you and your friend are making a one-page poster of your favorite singer or band for the 2nd annual world competition 'poster of my favorite singer/band'. Collaborate with your friend using Facebook (chat, 'wall', comments) and your e-mail account (for exchanging the poster and photos) and build a joint poster. Decide on the singer/band and discuss the factors you need to take into consideration using the chat tool. Find the photos you prefer using your gallery and the internet. Upload photos on your wall, check your friend's photos and discuss with him/her which is the right one. Build the poster in Microsoft Word. Study the effect of the various factors and establish which ones affect the poster more. Use your e-mail account to exchange the poster as many times as you need. Send your poster to the 'my favorite singer/band' committee:
[email protected]
Figure 2. The educational scenario for collaborative problem solving activity.
Note that the pairs of group A were given an activity sheet and instructions on the problem solving strategy to follow during the allotted time (50-55 min.), while the pairs of group B were told to do whatever they liked with their new friend (chat, exchange photos, exchange documents through e-mail).
Analysis of Struggle Interaction in a Digital Social Network
The findings of the study discussed in this section are based on the following data collected during the field experiment: (a) logfiles of activity, which include the exchanged dialogue messages in chronological order, obtained by copying and pasting from the Facebook chat application into the MS Windows XP Notepad application; (b) the solutions sent by e-mail, produced by the six pairs of students of Group A (these six (6) uploads were not included in the upload actions of Group A); and (c) the forms filled in during the study, on which students could easily record the number of uploads/downloads. Analysis of these data is based on the following dimensions: (a) analysis of struggle. In this study context, struggle comprises the dialogue and activity interaction. Dialogue interaction is produced by the exchanged text messages. Activity interaction is produced by uploading and downloading artifacts (photos, documents) using Facebook, e-mail and the web: through Facebook as described previously in the digital social network environment section, through e-mail for the attached documents/solutions of Group A, and through the web (mainly Google) for locating photos; (b) quantitative and qualitative analysis of interaction; (c) micro-analysis of dialogues concerning the struggle. In the following section we provide a brief introduction to the main principles of the methodology used for the analysis of dialogues and actions.

10:15amEri hi. eimai i eri. i ti knc?
10:15amΑgeliki geia sou. eimai i aggeliki. ti m kaneis????
10:16amEri kala eimai ec??????? ti 8a kanoume me tin afisa??????????
10:17amΑgeliki kala ti les na broume kamia afisa????
10:19amΑgeliki ok!!!! na psa3oume kiolas.....mhpws na boume sto google na broume foto kai plurofories????
10:20amEri ναι!!!!!!!! να βρούμε και κανενα τραγούδι του........
Figure 3. The chat dialogues after being copied to MS Windows XP Notepad. They appear in Greeklish (Greek written with Latin characters) and in Greek.
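For illustration only, the following minimal Python sketch shows one way such copied logfiles could be turned into structured records. It assumes one message per line in the 'HH:MMamSpeaker message' layout visible in Figure 3; this layout, and the helper names used here, are our assumptions rather than part of the study's instrumentation.

```python
import re
from dataclasses import dataclass

# Assumed layout of one chat line copied from the Facebook chat window
# into Notepad, e.g. "10:15amEri hi. eimai i eri." (cf. Figure 3).
LINE_RE = re.compile(r"^(\d{1,2}:\d{2})(am|pm)(\w+)\s+(.*)$", re.UNICODE)

@dataclass
class ChatMessage:
    time: str      # e.g. "10:15am"
    speaker: str   # e.g. "Eri"
    text: str      # raw message, possibly in Greeklish or Greek

def parse_log(lines):
    """Turn raw copied chat lines into structured messages; lines that do not
    match the assumed layout (blank lines, headers, etc.) are skipped."""
    messages = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            hour, meridiem, speaker, text = m.groups()
            messages.append(ChatMessage(hour + meridiem, speaker, text))
    return messages

# Two example lines in the Figure 3 format:
sample = ["10:15amEri hi. eimai i eri. i ti knc?",
          "10:16amEri kala eimai ec??????? ti 8a kanoume me tin afisa??????????"]
print(parse_log(sample))
```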
Methodology of Analysis
The Struggle Analysis Framework (SAF) used in this study is particularly suitable for the analysis of collaborative problem solving activity in digital social network environments for four reasons: a) the interleaving of actions and dialogue; b) its combination of qualitative and quantitative perspectives of analysis; c) its combination of technical and pedagogical/psychological approaches; and d) its open structure, which can easily be expanded in the future. It is based on studies of the OCAF model (with its emphasis on the artifacts of communication) for collaborative problem solving activity (Avouris, Dimitracopoulou and Komis, 2003) and on studies of textual interaction during problem solving (Komis, Avouris and Fidas, 2002; Soller, Lesgold, Linton and Goodman, 1999; Mc Manus and Aiken, 1995; Webb, 1992). The two main kinds of struggle found in every widely known digital social network are: a) chat and b) sharing (uploading/downloading) artifacts (text, photos and video). Accordingly, the analysis is based on these interactions:
(a) Struggle Chat analysis is based on the qualitative evaluation pointers 'Struggle Chat Share' and 'Struggle Chat Action': (i) 'Struggle Chat Action' is a pointer indicating the main actions of a partner during a discussion in a digital social network: propose, argue, agree, disagree, noise (off task). If needed, it can be expanded to include new actions (e.g. contestation, verification). For example, as shown in Table 1, the pairs of group A made a total of 86 proposals during chat. (ii) 'Struggle Chat Share' is a pointer indicating the main kinds of interaction shared between partners during a discussion in a digital social network: strategy or control, task related, related to the usage of artifacts, noise (off task). If needed, it can be expanded to include new kinds of interaction (e.g. investigation, compilation). For example, as shown in Table 4, the pairs of group A had a total of 85 strategy interactions during chat.

Table 1: An example of a table for 'Struggle Chat Action' analysis
'Struggle Chat Action' analysis between groups
Group        Propose   Argue   Agree   Disagree   Noise (Off Task)   Total Struggle
A            86        106     81      70         62                 405
B            39        29      27      10         110                215
Total        125       135     108     80         172                620
Difference   47        77      54      60         -48                190
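As a sketch of how the 'Struggle Chat Action' pointer can be tallied once each message has been hand-coded, the hypothetical snippet below counts coded actions per group and derives the Total and Difference values of a table such as Table 1. The category labels follow the pointer definition above; the coded input is invented for illustration and is not the study data.

```python
from collections import Counter

ACTIONS = ["propose", "argue", "agree", "disagree", "noise"]

def tally(coded_messages):
    """coded_messages: list of (group, action) pairs produced by a human coder."""
    counts = {"A": Counter(), "B": Counter()}
    for group, action in coded_messages:
        counts[group][action] += 1
    return counts

def table_rows(counts):
    """Build one row per action with the group counts, their sum and difference."""
    rows = {}
    for action in ACTIONS:
        a, b = counts["A"][action], counts["B"][action]
        rows[action] = {"A": a, "B": b, "Total": a + b, "Difference": a - b}
    return rows

# Invented example coding (not the study data):
coded = [("A", "propose"), ("A", "agree"), ("B", "noise"), ("A", "argue")]
for action, row in table_rows(tally(coded)).items():
    print(action, row)
```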
(b) Struggle Artifacts analysis is based on the quantitative evaluation pointers 'Struggle Artifact Share' and 'Struggle Artifact Action': (i) 'Struggle Artifact Action' is a pointer indicating the actions on a certain kind of artifact (photo, video, document) used during collaboration in a digital social network, including artifacts handled through sharing tools outside the network, such as e-mail accounts. If needed, it can be expanded into 'Struggle Artifact Photo', 'Struggle Artifact Document', 'Struggle Artifact Video' etc. It can be applied to pairs and to groups. For example, as shown in Table 2, the pairs of group A performed a total of 114 uploads or downloads of photos during collaboration. (ii) 'Struggle Artifact Share' is a pointer indicating the number of artifacts shared (uploaded and downloaded) during collaboration in a digital social network, including sharing tools that do not belong to the network but can be used in the collaboration process, such as e-mail accounts. If needed, it can be expanded into 'Struggle Artifact Upload' and 'Struggle Artifact Download'. It can be applied to pairs and to groups. For example, as shown in Table 3, the pairs of group A performed a total of 86 uploads of different kinds of artifacts during collaboration.

Table 2: An example of a table for 'Struggle Artifact Action' analysis
'Struggle Artifact Action' analysis between groups
Group        Photo   Documents   Total
Group A      114     63          177
Group B      31      4           35
Total        145     67          212
Difference   83      59          142
In the following sections, we discuss the quantitative and qualitative analysis of our data through the SAF model.
Quantitative Struggle Analysis
Some results of the SAF quantitative analysis are included here. Each partner was given a form on which to record each upload or download. Table 3 shows an analysis of the 'Struggle Artifact Share' evaluation pointer. In this summary table, the average number of struggles (uploads and downloads) is shown. Partners of group A produced on average 14,75 struggles, while partners of group B produced 2,92; overall, group A was more active. A considerable difference appears for every kind of struggle: uploaded artifacts (69) and downloaded artifacts (73). The total difference between the groups is large: 142 artifacts uploaded or downloaded.
Another quantitative evaluation pointer is 'Struggle Artifact Action'. The results summarized in Table 2 describe the actions that took place on each kind of artifact. Partners of group B produced significantly less action than partners of group A on both photo and document artifacts: photos (83) and documents (59).

Table 3: An example of a table for 'Struggle Artifact Share' analysis
'Struggle Artifact Share' analysis between groups
Group        Upload   Download   Total Struggle   Symmetry Average
Group A      86       91         177              14,75
Group B      17       18         35               2,92
Total        103      109        212
Difference   69       73         142              11,83
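The Symmetry Average column can be reproduced as the total number of struggles divided by the number of partners in a group (twelve, i.e. six pairs of two). This reading of the 14,75 and 2,92 figures is our inference rather than a formula stated by the authors; the short sketch below checks it.

```python
# Sketch: symmetry average = total struggles / number of partners in the group.
# The divisor of 12 (six pairs, two partners each) is our assumption, chosen
# because it reproduces the reported values 14,75 and 2,92.
PARTNERS_PER_GROUP = 12

def symmetry_average(uploads, downloads, partners=PARTNERS_PER_GROUP):
    return (uploads + downloads) / partners

print(round(symmetry_average(86, 91), 2))  # group A -> 14.75
print(round(symmetry_average(17, 18), 2))  # group B -> 2.92
```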
The degree of symmetry of interaction indicates the contribution of each partner to the exchanged artifacts. The average value of this index, 14,75 for group A and 2,92 for group B, shows that each partner in group A was more 'collaborative'. The conclusion of this part of the analysis is therefore that group A is more active and performs actions that indicate collaborative activity, such as the Upload operator, more often than group B. In the following, a more detailed analysis of this activity is presented.
Qualitative Struggle Analysis
The SAF model is well suited to qualitative analysis of computer supported collaborative problem solving through digital social networks. A comparison of the overall chat activity of the pairs of Group A and Group B is included in Table 5. In this summary table, the average number of actions and exchanged text messages per struggle is shown. Partners of group A produced on average 28,58 struggles, while partners of group B produced 8,75; overall, group A was more active. In summary Table 1, a considerable difference appears for every struggle: proposals (47), argues (77), agrees (54) and disagrees (60), while for noise (off-task messages) the difference is negative. This is expected, since the collaborators in group B had nothing to collaborate on; they were not prompted to propose, argue, agree or disagree, and thus discussed general topics. According to Avouris et al. (2003), these struggle functional roles are a strong indication of ownership of artifacts and relations, as well as a strong indication of participation in collaboration and peer support. An additional point of view concerns the Struggle Chat Share textual interaction (Table 4) that took place during problem solving through the social network, following a methodology also used by Komis et al. (2002). As expected, communication between the members of group A was more productive. In all categories group B exchanged fewer messages. The overall number of exchanged messages was 405 between partners of group A and 215 between partners of group B. Messages in the 'Usage of Artifacts Related' category were significantly more numerous in group A. This is expected, since the scenario involved transactions of artifacts (photos, documents, e-mail, chat); for the same reason, this category had the most exchanged interaction messages inside group A. Within group B, in contrast, 'Task Related' messages formed the largest on-task category, since in the absence of a scenario there was no real need for 'Strategy' or 'Usage of Artifacts' messages.

Table 4: An example of a table for 'Struggle Chat Share' analysis
'Struggle Chat Share' analysis between groups
Group        Strategy/Control   Task Related   Usage of Artifacts Related   Noise (Off Task)   Total Struggle
A            85                 103            155                          62                 405
B            10                 80             15                           110                215
Total        95                 183            170                          172                620
Difference   75                 23             140                          -48                190
An analysis of the distribution of messages among the partners of the pairs of groups A and B has also been performed. The degree of symmetry of interaction (Table 5) indicates the contribution of each partner to the exchanged messages, without counting the noise messages. The average value of this index, 28,58 for group A and 8,75 for group B, shows that each partner in group A was more 'collaborative'.
Workshop on Intelligent and Innovative Support for Collaborative Learning Activities
33
Table 5: Symmetry Average Struggle
The Noise (Off Task) messages (chat analysis between groups)
Group        Total Struggle Chat   Struggle Noise   Total Struggle minus Noise   Symmetry Average
Group A      405                   62               343                          28,58
Group B      215                   110              105                          8,75
Total        620                   172              448
Difference   190                   -48              238                          19,83
The conclusion of this part of the analysis is therefore that group A is more active and performs actions that indicate collaborative activity, such as the Propose operator and the Strategy operator, more often than group B. In the following, a mentor/tutor analysis of this activity is presented.
Mentor/Tutor Enrolling
So far it has been observed that the activities of the pairs of group A involved more struggle than those of group B. The existence of a problem solving activity seemed to be the reason for this observed behaviour. However, it was considered important to examine in more detail the activity concerning mentor/tutor enrolling.

Table 6: Evaluation of Mentor/Tutor Enrolling

(a) Evaluation of Mentor/Tutor role in group A
Pair      Evaluation (1-10)
A1        7
A2        7
A3        6
A4        5
A5        8
A6        9
Average   7

(b) Evaluation of Mentor/Tutor role in group B
Pair      Evaluation (1-10)
A1        3
A2        2
A3        2
A4        4
A5        0
A6        2
Average   2,17
An initial quantitative look at Table 3 does not reveal any degree of Mentor/Tutor enrolling, since uploads and downloads within each group are nearly equal: group A (86/91) and group B (17/18). A qualitative analysis of Mentor/Tutor enrolling was therefore performed through the logfiles. The evaluation results are presented in Table 6. Each pair received an evaluation on a 10-point scale, depending on how much it engaged in Mentor/Tutor activities. For example, in Figure 3, in the 10:19 action Aggeliki encourages Eri to try to find photos and information using Google; in the 10:20 action Eri responds positively (in Greek) and also proposes to try to find a song. As this evaluation table shows, collaborators of group A took on Mentor/Tutor roles, while partners of group B did not seem to do so. This enrolling shows a significant difference, about 3,2 times more, between the groups: group A (7) and group B (2,17) (Figure 4). This matches the relation between the symmetry average struggle chat values, also about 3,2 times more, between the groups: group A (28,58) and group B (8,75) (Table 5).
[Bar chart: average Mentor/Tutor evaluation as a percentage of the 10-point scale; Group A 70,00%, Group B 21,67%.]
Figure 4. Percentage evaluation of the average Mentor/Tutor role between groups (10-point scale).

The conclusion of this part of the analysis is that group A appears to exhibit mentor/tutor enrolling, while group B did not seem to do so.
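The bars of Figure 4 and the reported 3,2-times relation can be checked with simple arithmetic, as in the sketch below; interpreting the percentages as the group averages rescaled from the 10-point scale is our reading of the figure.

```python
# Average mentor/tutor evaluations on the 10-point scale (Table 6).
avg_a = 42 / 6          # group A: (7+7+6+5+8+9)/6 = 7.0
avg_b = 13 / 6          # group B: (3+2+2+4+0+2)/6 ~= 2.17
print(round(avg_a * 10, 2), round(avg_b * 10, 2))  # 70.0 and 21.67 (% of the scale, cf. Figure 4)
print(round(avg_a / avg_b, 2))                     # ~3.23, the "3,2 times" relation
print(round(28.58 / 8.75, 2))                      # ~3.27, the matching chat symmetry ratio (Table 5)
```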
Conclusions
This study focuses on the effect of a problem solving activity during computer-supported collaborative problem solving in digital social networks and introduces the 'Struggle Analysis Framework (SAF)'. 'Struggle' is a social interaction, taking place in an open social environment, between collaborators or between a collaborator and an artifact, in a scaffolding context built around the problem solving activity. Struggle is a deeper, more internal notion than interaction, grounded in pedagogical and psychological considerations (Tzanavaris and Sepetis, 2009).

The findings of this study are summarized here. Two groups were formed, made up of six pairs each, combining secondary education students (ages 14-15) and tertiary education students (ages 18-20); each pair had one partner from secondary and one from tertiary education, and the students were assigned at random. The digital social network environment selected was the popular Facebook, at www.facebook.com, since all students already had an account there. The two groups differed only in terms of the collaborative problem solving activity: the first group was provided with an activity while the second one had no specific task. The activity was to build a joint poster for the '2nd annual world competition: poster of my favorite singer/band'. The discussions produced, and the activity that led to these discussions, were analyzed using the hereby introduced 'Struggle Analysis Framework (SAF)'. The SAF used in this study is particularly suitable for the analysis of collaborative problem solving activity in digital social network environments for four reasons: a) the interleaving of actions and dialogue; b) its combination of qualitative and quantitative perspectives of analysis; c) its combination of technical and pedagogical/psychological approaches; and d) its open structure, which can easily be expanded in the future.

Some distinct differences were observed between the two groups. Group A was overall more active in terms of struggle (actions and dialogue). From the quantitative analysis (Struggle Artifact), by studying the artifacts exchanged (Struggle Artifact Action), we found that partners of group B produced significantly less action than partners of group A on photo (83) and document (59) artifacts, and consequently did less artifact sharing (Struggle Artifact Share), with differences in the upload (69) and download (73) functions. Both these pointers indicate stronger collaboration between the partners of group A. From the qualitative analysis (Struggle Chat), a considerable difference appears for every struggle (Struggle Chat Action): proposals (47), argues (77), agrees (54) and disagrees (60). According to some researchers (Avouris, Dimitracopoulou and Komis, 2003; Fidas, Komis, Tzanavaris and Avouris, 2005), these struggle functional roles are a strong indication of ownership of artifacts and relations, as well as a strong indication of participation in collaboration and peer support. A considerable difference also comes from the Struggle Chat Share analysis: Strategy/Control (75), Task Related (23), Usage of Artifacts Related (140). The qualitative analysis results were expected, since the collaborators in group B had been given nothing to collaborate on; they were not prompted to propose, argue, agree, disagree or follow a certain strategy, and thus discussed general topics.
In addition, it was observed that, excluding noise (off-task) messages, each partner of group A produced more than three times as much chat struggle as each partner of group B (a symmetry average of 28,58 against 8,75), an indication of stronger collaboration. An additional analysis of Mentor/Tutor enrolling performed through the logfiles found that, in contrast to what the quantitative action analysis alone suggested, partners of group A appeared to adopt Mentor/Tutor techniques while partners of group B did not seem to do so.

From these observations, we concluded that in group A there was more discussion and collaboration relating to the constituent parts of the solution. This was mainly due to the collaborative problem solving activity, which gave the partners more subjects for discussion and negotiation. It seems that the existence of the collaborative problem solving activity, instead of creating additional difficulty for the collaborating partners as originally expected, was a reason for more involvement and deeper discussions, without any deterioration in the quality of the produced solutions. Taking into account the newness of the field of computer supported collaborative learning through digital social networks, the findings of this study are of more general value. It should be observed that in the reported experiment the students did not share a common cognitive and social context, as they belonged to different education levels and lived in different places; they did, however, more or less share a common Greek cultural context. The lack of this condition could have inhibited further sharing of understanding in relation to the problem solving activity, a premise requiring further validation.

In conclusion, social collaboration environments such as Facebook, MySpace and hi5, as they become available, pose new challenges for collaborative problem solving and offer new functionalities for social interaction in open social collaborative environments based on a scaffolding context, i.e. struggle. It was shown that the problem solving activity in these environments, analyzed here with the Struggle Analysis Framework (SAF), can provoke semantically richer patterns of struggle, the development of new grounding mechanisms and the adoption of Mentor/Tutor roles, since the participants feel obliged to negotiate their contributions with their social peers, deepening the social collaborative activity further.
Acknowledgements Special thanks are due to the students of Gymnasium of Liapades/Corfu and to students of TEI of Athens, who participated in our study. Also to Sella Helena –teacher of English language- for the English revision and to Kermanidou Katia –assistant professor in Informatics Department of the Ionian University- for her support.
References
Avouris, N., Dimitracopoulou, A. & Komis, V. (2003). On analysis of collaborative problem solving: An object-oriented approach. Computers in Human Behavior.
Wu, A. S., Farell, R. & Singley, K. M. (2002). Scaffolding Group Learning in a Collaborative Network Environment. Proceedings of CSCL 2002, p. 252.
Baker, M., Hansen, T., Joiner, R. & Traum, D. (1999). The role of grounding in collaborative problem solving tasks. In P. Dillenbourg (Ed.), Collaborative-learning: Cognitive and Computational Approaches, pp. 31-64, Advances in Learning and Instruction series, Pergamon, Elsevier.
Baker, M. & Lund, K. (1997). Promoting reflective interactions in a computer-supported collaborative learning environment. Journal of Computer Assisted Learning, 13(3), 175-193.
Baker, M. J., de Vries, E., Lund, K. & Quignard, M. (2001). Computer Epistemic Interactions for co-constructing scientific notions: Lessons Learned from a five-year research program. In P. Dillenbourg & A. Eurelings (Eds.), Proceedings of Euro Computer Supported Collaborative Learning, Maastricht, March 22-24, 2001, pp. 89-96.
Beldarrain, Y. (2006). Distance education trends: Integrating new technologies to foster student interaction and collaboration. Distance Education, 27, 139–153.
Bryant, T. (2006). Social software in academia. Educause Quarterly, 29, 61–64.
Constantino-Conzalez, M. & Suthers, D. (2001). Coaching Collaboration by Comparing Solutions and Tracking Participation. In P. Dillenbourg & A. Eurelings (Eds.), Proceedings of Euro Computer Supported Collaborative Learning, Maastricht, March 22-24, 2001, pp. 173-180.
Chong, N. S. T. & Yamamoto, M. (2006). Collaborative learning using Wiki and flexnetdiscuss: A pilot study. Proceedings of the Fifth IASTED International Conference on Web-based Education 2006, pp. 150–154.
Dillenbourg, P. (Ed.) (1999). Collaborative-learning: Cognitive and Computational Approaches. Advances in Learning and Instruction series, Pergamon, Elsevier.
Dillenbourg, P., Baker, M., Blaye, A. & O'Malley, C. (1995). The evolution of research on collaborative learning. In E. Spada & P. Reiman (Eds.), Learning Human and Machine: Towards an interdisciplinary learning science, pp. 189-211, Oxford: Elsevier.
Fuchs-Kittowski, F. & Köhler, A. (2005). Wiki communities in the context of work processes. WikiSym 2005—Conference Proceedings of the 2005 International Symposium on Wikis, pp. 33–39.
Fidas, C., Komis, V., Avouris, N. & Dimitracopoulou, A. (2002). Collaborative Problem Solving using an Open Modelling Environment. In G. Stahl (Ed.), Computer Support For Collaborative Learning: Foundations For A CSCL Community, Proceedings of CSCL 2002, Boulder, Colorado, USA, January 7-11, 2002, Lawrence Erlbaum Associates, Inc., pp. 654-655.
Fidas, C., Komis, V., Tzanavaris, S. & Avouris, N. (2005). Heterogeneity of learning material in synchronous computer-supported collaborative modelling. Computers & Education (Elsevier), Vol. 44, Iss. 2, pp. 135-154.
Guzdial, M., Rick, J. & Kehoe, C. (2001). Beyond adoption to invention: Teacher-created collaborative activities in higher education. Journal of the Learning Sciences, 10, 265–279.
Gutwin, C. & Greenberg, S. (2000). The Mechanics of Collaboration: Developing Low Cost Usability Evaluation Methods for Shared Workspaces. Proceedings of the 9th IEEE WETICE Workshop, 2000, pp. 98-103, IEEE Press.
Hansen, T., Holmfeld, L. D., Lewis, R. & Rugelj, J. (1999). Using Telematics for Collaborative Knowledge Construction. In P. Dillenbourg (Ed.), Collaborative-learning: Cognitive and Computational Approaches, pp. 169-196, Advances in Learning and Instruction series, Pergamon, Elsevier.
Kim, S.-H., Han, H.-S. & Han, S. (2006). The study on effective programming learning using wiki community systems. WSEAS Transactions on Information Science and Applications, 3(8), 1495–1500.
Koch, J. H., Schlichter, J. & Trondle, P. (2001). Munics: Modeling the flow of Information in Organisation. 1st EuroCSCL 2001, pp. 348-355.
Komis, V., Avouris, N. & Fidas, C. (2002). Computer supported collaborative concept mapping: Study of Interaction. Education and Information Technologies, 7(2), pp. 169-188.
Kesim, E. & Agaoglu, E. (2007). A paradigm shift in distance education: Web 2.0 and social software. Turkish Online Journal of Distance Education, 8, 66–75.
Köhler, A. & Fuchs-Kittowski, F. (2005). Integration of communities into process-oriented structures. Journal of Universal Computer Science, 11, 410–425.
Kolbitsch, J. & Maurer, H. (2006). The transformation of the web: How emerging communities shape the information we consume. Journal of Universal Computer Science, 12(2), 187–213.
Lin, S.-C., Chen, Y.-C. & Yu, C.-Y. (2006). Application of wiki collaboration system for value adding and knowledge aggregation in a digital archive project. Journal of Educational Media and Library Science, 43, 285–307.
Mc Manus, M. & Aiken, R. (1995). Monitoring Computer-Based Problem Solving. International Journal of Artificial Intelligence in Education, pp. 307-336.
Moore, T. D. & Serva, M. A. (2007). Understanding member motivation for contributing to different types of virtual communities: A proposed framework. Proceedings of the 2007 ACM SIGMIS Computer Personnel Research Conference: The Global Information Technology Workforce, SIGMIS-CPR 2007, 153–158.
Muehlenbrock, M., Tewissen, F. & Hoppe, H. U. (1998). A framework system for intelligent support in open distributed learning environments. International Journal of Artificial Intelligence in Education, 9, 256-274.
Notari, M. (2006). How to use a Wiki in education: Wiki based effective constructive learning. Proceedings of WikiSym'06—2006 International Symposium on Wikis, pp. 131–132.
Ogata, H., Yano, Y., Furugori, N. & Jin, Q. (2001). Computer Supported Social Networking For Augmenting Cooperation. Computer Supported Cooperative Work, 10, 189–209.
Pinelle, D. & Gutwin, C. (2002). Groupware Walkthrough: Adding Context to Groupware Usability Evaluation. Proceedings of CHI 2002, 455-462. New York, NY: ACM.
Reinhold, S. (2006). Wikitrails: Augmenting Wiki structure for collaborative, interdisciplinary learning. Proceedings of WikiSym'06—2006 International Symposium on Wikis, pp. 47–57.
Reiser, B. J. (2002). Why Scaffolding Should Sometimes Make Tasks More Difficult for Learners. In G. Stahl (Ed.), Computer Support For Collaborative Learning: Foundations For A CSCL Community, Proceedings of CSCL 2002, Boulder, Colorado, USA, pp. 255-264.
Scardamalia, M. & Bereiter, C. (1994). Computer Support for Knowledge-Building Communities. The Journal of the Learning Sciences, 3(3), pp. 265-283.
Soller, A. L. (2001). Supporting Social Interaction in an Intelligent Collaborative Learning System. International Journal of Artificial Intelligence in Education, 12, pp. 40-62.
Soller, A., Lesgold, A., Linton, F. & Goodman, B. (1999). What Makes Peer Interaction Effective? Modeling Effective Communication in an Intelligent CSCL. American Association for Artificial Intelligence, pp. 1-7.
Stahl, G. (2002). Introduction: Foundations For a CSCL Community. In G. Stahl (Ed.), Computer Support For Collaborative Learning: Foundations For A CSCL Community, Proceedings of CSCL 2002, Boulder, Colorado, USA, pp. 1-2.
Steeples, C. & Mayers, T. (1998). A Special Section On Computer-Supported Collaborative Learning. Computers & Education, Vol. 30, 3/4, 219-221.
Suthers, D. & Jones, D. (1997). An Architecture for Intelligent Collaborative Educational Systems. In B. du Boulay & R. Mizoguchi (Eds.), 8th World Conference on Artificial Intelligence in Education (AIED'97), pp. 55-62.
Suthers, D. (2000). Initial Evidence for Representational Guidance of Learning Discourse. Proceedings of the International Conference on Computers in Education, November, Taiwan.
Tzanavaris, Sp., Korakianiti, K. & Tzanavaris, St. (2008). From a 'republic communistic' tool to a 'democratic' tool. Day Next: Internet for the trainee. Proceedings of the 1st International Conference on Scientific Dialogue for the Hellenic Education, Athens, 28-30 November, Vol. 1, pp. 285-292.
Tzanavaris, Sp. & Sepetis, A. (2009). Towards a Democratic Collaborative Internet. Proceedings of the 5th Conference 'Teachers and ICT Technologies', Siros, Greece, in press.
Wagner, C. & Bolloju, N. (2005). Supporting knowledge management in organizations with conversational technologies: Discussion forums, weblogs, and wikis. Journal of Database Management, 16, i–viii.
Ward, R. (2006). Blogs and wikis: A personal journey. Business Information Review, 23, 235–240.
Wasko, M. M. & Faraj, S. (2005). Why should I share? Examining knowledge contribution in networks of practice. MIS Quarterly, 29, 35–58.
Webb, N. (1992). Testing a Theoretical Model of Student Interaction and Learning in Small Groups. Interaction in Cooperative Groups: The Theoretical Anatomy of Group Learning, New York: Cambridge University Press, pp. 102-104.
Weinberger, A., Fischer, F. & Mandl, H. (2002). Fostering Computer Supported Collaborative Learning with Cooperation Scripts and Scaffolds. Proceedings of CSCL 2002, Workshop: Computational Models of Collaborative Learning Interaction, p. 76.
Yukawa, J. (2006). Co-reflection in online learning: Collaborative critical thinking as narrative. International Journal of Computer-Supported Collaborative Learning, 1, 203–228.
Collaborative Educational Virtual Environments Evaluation: The case of Croquet
Thrasyvoulos Tsiatsos, Konstantinidis Andreas, Pomportsis Andreas
Department of Informatics, Aristotle University of Thessaloniki, PO BOX 114, GR-54124, Thessaloniki, Greece
Email: [email protected], [email protected], [email protected]

Abstract: E-learning systems have gone through a radical change from the initial text-based environments to more stimulating multimedia systems. In this paper we present and compare 3D multi-user virtual environments for supporting collaborative learning. After a pre-selection phase the most promising solution seems to be the Croquet platform. An educational environment has been implemented on top of this platform. Furthermore, this paper presents a case study carried out in a tertiary education department, to assess the educational environment. This environment has been evaluated based on a hybrid evaluation methodology for uncovering usability problems, collecting further requirements for additional functionality to support collaborative learning environments, and determining the appropriateness of different kinds of learning scenarios.
Introduction
In the past few years, due to the evolution of networking and telecommunication technologies, a number of interactive Virtual Environments (VEs) have been developed. In order to discern VEs from typical software applications, they can be defined as interactive, multisensory, three-dimensional (3D), computer synthesized environments (Barfield et al., 1995). In other words, all VEs provide a 3D space which presents the environment according to the perspective or view of the user, as well as include interactive components that allow the user to manipulate objects in the virtual world (Schwan & Buder, in press). Three-dimensional VEs come with varying features. However, most provide three main components: (a) the illusion of 3D space, (b) avatars that serve as the visual representation of users, and (c) an interactive chat environment for users to communicate with one another (Dickey, 2005).

Specific types of VEs can be distinguished based on their use or purpose. VEs are most commonly used for commercial gaming (e.g. Everquest, World of Warcraft), socializing or online community building (e.g. Second Life, Google Lively, There) and as educational (e.g. AWEdu, Second Life, EVE, AquaMOOSE 3D) or working environments (e.g. Tixeo, I-maginer). An Educational Virtual Environment (EVE) (Bouras et al., 2001) is a special case of a VE. EVEs are actually Collaborative Virtual Environments (CVEs) (Oliveira et al., 2000) that can be used for educational applications such as collaborative e-learning. As described in Chee & Hooi (2002), CVEs are powerful and engaging collaborative environments for e-learning, because they are capable of supporting several important learning objectives. These objectives include experiential learning, simulation-based learning, inquiry-based learning, guided exploratory learning, community-based learning and Collaborative Learning (CL).

A CVE is a computer-based, distributed, virtual space or set of places. In such places, people can meet and interact with others, with agents, or with virtual objects. CVEs might vary in their representational richness from 3D graphical spaces, 2.5D and 2D environments, to text-based environments. Access to CVEs is by no means limited to desktop devices, but might well include mobile or wearable devices, public kiosks, etc. (Churchill et al., 2001). It is probable that CVEs will play an important role in future education, since continuous enhancements in computer technology and the current widespread computer literacy among the public have resulted in a new generation of students that expect increasingly more from their e-learning experiences. To keep up with such expectations, e-learning systems have gone through a radical change from the initial text-based environments to more stimulating multimedia systems.

In this paper we will focus on a specific category of EVEs that aims to support Collaborative Learning. We call these environments Collaborative Educational Virtual Environments (CEVEs). Collaborative learning is group-based learning, regardless of whether it takes place face-to-face, via computer networks, or through a mixture of both modalities. From a constructivist perspective, collaborative learning can be viewed as one of the pedagogical methods that can stimulate students to negotiate information (abstract, ill-defined and not easily accessible knowledge and open-ended problems) and to discuss complex problems from different perspectives.
This can support learners in elaborating, explaining and evaluating information in order to re- and co-construct (new) knowledge or to solve problems (Veerman & Veldhuis-Diermanse, 2001). According to Petraglia (1997), a technologically sophisticated collaborative learning environment, designed following cognitive principles, could provide advanced support for a distributed process of inquiry, and
facilitate the advancement of a learning community's knowledge as well as the transformation of the participants' epistemic states through a socially distributed process of inquiry.

Bruckman & Bandlow (2002) summarize the most important benefits of using Computer Supported Collaborative Learning (CSCL) in education. Among others, they mention that, using CSCL, teacher-student interactions become more balanced and gender differences are reduced. They also point out that the exploitation of a virtual environment (a) lowers inhibitions; (b) provides strong anchors from which classroom discussions can emerge; (c) can be enjoyed by multiple different personality types; and (d) aids students in discovering aspects of their own identity. Furthermore, Bruckman & Bandlow (2002) note that CSCL, due to its student-centered learning process, could (a) increase the likelihood that students will absorb and remember what they learn; (b) allow students to form personal connections with powerful ideas; (c) help students exhibit higher levels of attention; (d) support the students in becoming more honest and candid toward those in a position of authority; and (e) provide students with motivation. For the above reasons, the combination of collaborative e-learning and CEVEs seems to be an effective solution for supporting CSCL processes.

This paper focuses on the comparison of CEVEs, the pre-selection of the most promising solution and its evaluation for supporting computer supported collaborative learning scenarios. Therefore, this paper briefly reviews available commercial and open source collaborative virtual environments, based on the process described by Dimitracopoulou (2005), and through their support for specific high-level functions that should be performed during collaboration. Furthermore, this paper presents a concrete evaluation methodology to evaluate the selected environment for (a) uncovering usability problems; (b) collecting further requirements for additional functionality to support collaborative learning environments; and (c) determining the appropriateness of different kinds of learning scenarios. Finally, this paper presents a case study carried out in a tertiary education department to assess the selected platform based on the proposed evaluation methodology.

This paper is structured as follows: the next section presents the state of the art in 3D multi-user collaborative virtual environments and their comparison. Following this, the need for an evaluation methodology is identified, and in the fourth section a proposed evaluation methodology for CEVEs is presented. Afterwards, the case study is presented. The paper concludes with the presentation of the evaluation results and future work.
Preselection of a CEVE platform
In this section we present the state of the art in 3D multi-user collaborative virtual environments and their comparison. The presented platforms were initially chosen based on their popularity, proven educational and collaborative value (Bransford, 1990; Bedford et al., 2006), respective user testimonials and support of the generic features and advantages of current systems. The platforms we present are Second Life (SL, http://secondlife.com), Active Worlds (AW, http://www.activeworlds.com/), Croquet (http://www.croquetconsortium.org), I-maginer (http://www.i-maginer.fr/), and Workspace 3D (W3D, http://www.tixeo.com). All the above platforms are (fully or partially) commercial except Croquet, which is free and open source. In the next paragraphs we briefly present specific tools and services offered by the aforementioned environments. An analysis of the existing collaborative systems shows that a number of tools and functions have been designed and implemented in order to facilitate or better support the collaborative process. Furthermore, these tools could support the collaborative learning process. In order to pre-select a tool for further evaluation we need a quick way to review their collaborative features. For that reason, during the pre-selection phase, we adopted the process described by Dimitracopoulou (2005) and reviewed the above environments through the lens of their support for specific high-level functions that should be performed during collaboration. The results of this review are presented in Table 1 and discussed in the following paragraphs:
• The appropriate means for dialogue and action: These provide the essential means for the collaborative learning activity itself. In this category we include the following tools: text chat, e-mail, forum, video conference, voice over Internet (VoIP). Other tools supported by some of the examined platforms are shared text processors, shared web browsers and shared whiteboards. Through a shared text processor, users can co-author a document or a presentation. In most of the environments (e.g. Workspace 3D) users can view the document through a 2D top-down perspective, since this is the simplest, most accessible and most familiar viewpoint. Salaheddin & Qaraeen (2007), for example, reveal that users are satisfied with the current traditional 2D representation of a shared word processor, requesting only the depiction of the document in higher resolution and a translation mechanism. Other useful tools in this category are simulations and argumentation tools. Argumentation tools are used for the augmentation and presentation of arguments. Other tools include designing tools, brainstorming tools, structured chat mechanisms etc. The goal here is to satisfy needs and activities on a cognitive and social level. These types of activities include conversation,
design and programming, sharing of ideas and data, evaluation, role entrusting, coordination and social interaction (Schoder & Fischbach, 2005).
• The functions for workspace awareness: These relate to up-to-the-minute knowledge about partners' actions in a closed collaborative scheme or in a wide community of collaborators. As presented in Table 1, many collaborative applications, like Active Worlds and Croquet, support project- and scenario-based learning through role playing. These tools have been realized in 2D collaborative environments with success, but their application in a 3D environment is where the advantages of this type of learning can be fully utilized. As already mentioned in the definition of CEVEs in the introductory section, users interact with the virtual world and its inhabitants through an avatar. Some of the basic advantages of using a 3D avatar are summarized by Zhigeng et al. (2005): (a) perception, the user's ability to perceive the presence of others; (b) tracking, the user can identify the location of others; (c) recognition, the user can recognize others from their avatars; (d) visualization of concentration; (e) visualization of actions and gestures; (f) social representation of the self through the avatar's attire, meaning that users can recognize the task someone is involved with and his/her place in a hierarchy. Finally, avatars enhance the feeling of trust and security between the members of a group. Another function is action key support (used in Workspace 3D, for example): the user possessing the action key is the only one with access to the common workspace. The rest of the users can ask to obtain the key from the current owner. In the 3D environment the action key can be represented by a virtual object which can be transferred between the users. In Ang & Wang (2006) users report the transfer of a virtual microphone between them as a pleasant experience.
• The functions for supporting students' self-regulation or guidance: These support or directly guide students' reasoning on a metacognitive level. Some users request the integration of a private space, where they can keep notes which could then be shared with the rest of the team. In most 3D collaborative environments, the shared text editor (and most shared applications) is embedded into a virtual desk or work space. Also, activity replay, the very useful functionality of recording and viewing all the actions that took place during a collaborative session, serves the post-hoc examination of the co-authoring of a document and the co-planning of a project or a simulation. Unfortunately none of the examined applications integrates this capability. Furthermore, many platforms offer programming tools or application programming interfaces that can be used to create agents for supporting students' self-regulation or guidance.
• The facilities related to teachers' assistance: These are essential, especially when the systems are addressed to students of primary and secondary education. The number of tools offered in this category is very small. Useful tools are activity replay and log files.
• The functions related to community level management: These provide significant tools and functions for the management of the activities and material produced amongst a wide community. Tools supported in this direction are file sharing, forums, and voting systems.
After considering all of the above, we chose to utilize the Croquet platform in order to design and develop a 3D educational environment. As is evident from Table 1, none of the examined platforms support every reviewed feature. Therefore, modification and integration of more features seems to be necessary. We selected Croquet mainly because it has cross platform capabilities and is also an open source software application. Croquet’s cross platform capabilities and virtual machine framework guarantee a simple and quick installation on any operating system. Finally, being an open source application grants designers the freedom of creating a multitude of user interfaces, simulations and environments and enhancing Croquet with needed functionality. In addition, through Croquet’s multi-user 3D virtual environment users can share files and applications, collaboratively browse the web, co-author documents and presentations and communicate through text, VoIP or video. Also, out of the five platforms examined, Croquet is the only one to feature portals which link virtual worlds together. Portals allow users to peer into other environments and share files. Looking through nested portals is also supported. Finally, Croquet houses a physics engine which is capable of simulating vector fields such as wind and gravity. Although Croquet has many useful features, there are some tools and services which are yet to be integrated into the platform. For example, from Table 1 we gather that Croquet is missing valuable coordination tools, necessary for the management of a collaborative session. A more concrete evaluation methodology is needed to evaluate Croquet in order to: • Uncover usability problems • Collect further requirements for additional functionality for supporting collaborative learning environments • Determine the appropriateness of different kinds of learning scenarios The following paragraph reviews the current evaluation processes for evaluating CEVEs in the directions described above.
Table 1: Review of 3D multi-user collaborative environment platforms

Category / Tool or Functionality              AW    Croquet   I-maginer   SL    W3D
Dialogue and action
  Text chat                                   Yes   Yes       Yes         Yes   Yes
  E-mail                                      No    No        No          No    No
  Forum                                       No    No        No          No    No
  Video conference                            Yes   Yes       Yes         Yes   Yes
  VoIP                                        Yes   Yes       Yes         Yes   Yes
  Shared whiteboard                           No    Yes       Yes         No    Yes
  Shared text processor                       No    Yes       Yes         No    Yes
  Simulations                                 Yes   Yes       Yes         Yes   No
  Co-browsing                                 No    Yes       Yes         Yes   Yes
  Argumentation tools                         No    No        Yes         No    Yes
Workspace awareness
  Role playing scenarios                      Yes   Yes       Yes         Yes   Yes
  Portals                                     No    Yes       No          No    No
  Avatars' interaction with objects           Yes   No        Yes         Yes   Yes
  Create objects, build                       Yes   No        No          Yes   No
  Avatars' teleportation                      Yes   Yes       No          Yes   No
  Avatars' manipulation                       Yes   No        Yes         Yes   No
  Avatars' perspective control                No    Yes       Yes         Yes   No
  Avatars' gestures                           Yes   No        Yes         Yes   Yes
  Avatars' facial expressions (e-motes)       No    No        No          Yes   No
  Avatars' interactions with other users      Yes   Yes       Yes         Yes   Yes
  Action key                                  No    No        No          No    Yes
  Save meeting room condition                 No    Yes       No          No    Yes
Students' self-regulation/guidance
  Annotation                                  No    Yes       No          No    No
  Programmable through agents                 Yes   Yes       No          Yes   No
Teachers' assistance
  Activity replay                             No    No        No          No    No
  Log files                                   No    No        Yes         Yes   Yes
Community level management
  File sharing                                Yes   Yes       Yes         No    Yes
  Forum                                       No    No        No          No    No
  Voting system                               No    No        Yes         No    No
  Group creation tools                        Yes   No        Yes         Yes   Yes
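A compact way to work with a review such as Table 1 is to encode it as a feature matrix and filter the platforms against a set of required features. The sketch below uses only a small excerpt of the table, and the 'required' feature list is an invented example rather than the selection criterion actually applied in this paper.

```python
# Excerpt of Table 1 as a feature matrix: platform -> {feature: supported?}
FEATURES = {
    "AW":        {"text chat": True,  "shared text processor": False, "co-browsing": False, "portals": False, "file sharing": True},
    "Croquet":   {"text chat": True,  "shared text processor": True,  "co-browsing": True,  "portals": True,  "file sharing": True},
    "I-maginer": {"text chat": True,  "shared text processor": True,  "co-browsing": True,  "portals": False, "file sharing": True},
    "SL":        {"text chat": True,  "shared text processor": False, "co-browsing": True,  "portals": False, "file sharing": False},
    "W3D":       {"text chat": True,  "shared text processor": True,  "co-browsing": True,  "portals": False, "file sharing": True},
}

def candidates(required):
    """Return the platforms supporting every required feature."""
    return [p for p, feats in FEATURES.items()
            if all(feats.get(f, False) for f in required)]

# Invented requirement set, for illustration only:
print(candidates(["shared text processor", "co-browsing", "portals"]))  # ['Croquet']
```

With a requirement of a shared text processor, co-browsing and portals, only Croquet remains, which is consistent with the observation below that Croquet is the only reviewed platform featuring portals.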
The need for an evaluation methodology for CEVEs
In this section we briefly present some methodologies applied by researchers in the studied bibliography for the evaluation of CEVEs, either from the usability or from the pedagogical point of view. Prasolova-Førland (2008) has presented a case study in which place metaphors in a number of 3D educational CVEs are analyzed. The methodology followed was based on an exploratory case study designed to answer a number of questions such as: (a) How should 3D CEVEs be designed to suit different educational purposes? (b) What place metaphors are typically used? (c) Which design features are beneficial and which are not? (d) How could the virtual place design in such worlds be analyzed in a systematic way? According to this methodology, Prasolova-Førland (2008) concludes that a characterization framework for 3D CEVEs could be based on the terms of learner, place and artefact. This framework is inspired by Activity Theory: activities are performed by learners and are mediated by artefacts, while both learners and artefacts are contained in a place. Furthermore, the characterization of the place dimension could be presented in more detail in terms of outlook, structure and role. Even though the characterization framework presented by Prasolova-Førland (2008) is not an evaluation framework, we believe that it could give some guidelines toward the determination of specific metrics in an evaluation methodology for CEVEs.

Another significant work concerning the way that virtual reality aids complex conceptual learning has been presented by Salzman et al. (1999) and applied successfully in the project "Science Space". Although this work is focused on an immersive virtual environment, it is possible to adopt the model's main ideas for scientific investigations concerning other types of virtual environments as well, such as desktop-based VEs. Koubek & Müller (2002) and Salzman et al. (1999) point out that features of VEs (or settings based on virtual reality) do not act in isolation. Learning content (concept) and learner characteristics influence learning processes and outcomes as well. Moreover, interaction experience (simulator sickness and usability) and learning experience (immersion, motivation and meaningfulness of representations) work together to determine the process and the results. According to this model, the first step is to specify the learning concepts and which of the VE's characteristics are able to facilitate knowledge acquisition. The next step is to gather background
information about the learners. Furthermore, parameters of the learning process and the learning outcome have to be assessed. Lee & Wong (2008) have also observed that there is a need for a detailed theoretical framework for VR-based learning environments that could guide future development efforts. They point out that a critical step towards achieving an informed design of a VR-based learning environment is the investigation of the relationship among the relevant constructs or participant factors, the learning process and the learning outcomes. According to the above, we can conclude that there is no focused and concrete evaluation framework for evaluating CEVEs. On the one hand, there are many techniques and evaluation frameworks for evaluating either the pedagogical or the technical nature of CSCL systems, but none of them is focused on the CVE nature of the e-learning system. On the other hand, there are several evaluation approaches for CVE systems; however, none of them is focused on the pedagogical and educational nature of the e-learning system. Therefore, having identified the absence of an evaluation framework for CEVEs, we propose a new hybrid model, as described in the following paragraphs.
Toward an evaluation methodology for CEVEs
Our rationale concerning the evaluation methodology for CEVEs is to organize the evaluation process around the idea of an iterative and incremental development process for an educational virtual environment (Bouras et al., 2002). An iterative and incremental development process specifies continuous iterations of design solutions together with users. According to Goransson (2001), an iteration includes: (a) a proper analysis of the user requirements and the context of use; (b) a prototype design and development phase; and (c) a documented evaluation of the prototype. Therefore, concerning the evaluation process, which is the main interest of this paper, we need to conduct several evaluation cycles in order to assess each prototype of the system. The evaluation of each prototype will result in suggestions for modifications in the design of the next prototype version. In the case of a CEVE we propose to conduct three phases in each evaluation cycle (Table 2), namely: (a) a pre-analysis phase; (b) a usability phase; and (c) a learning phase. A brief and general description of the necessity of the above phases, as well as of useful techniques for conducting them successfully, is presented in the following paragraphs. Afterwards, we describe in detail the specific steps followed for evaluating the Croquet platform.
Pre-analysis Phase
The pre-analysis phase is used to define the evaluation goals and to determine the characteristics of the evaluators. Evaluation goals can be defined in the form of questions. Two types of questions can be formed (Bruckman & Bandlow, 2002): (a) evaluation methodology questions and (b) evaluation questions. The evaluation methodology questions concern the general process of evaluation. Usually, the selected evaluation methodology contains specific inherent goals. Therefore, before selecting a specific evaluation methodology, the evaluator's desired outcomes should be in consonance with the possible results of a framework. A list of questions like those presented by Asensio et al. (2006) can aid evaluators in detecting the appropriate metrics and selecting a fitting methodology. The evaluation questions concern usability and learning outcomes. Usability refers to the ability of the system to support the learning process. This covers the effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments (ISO 9241-12, 1998). Furthermore, the usability questions evaluate the characteristics of the system (e.g. tools, modules, avatars, non-verbal signals, and communication). The learning outcome questions concern pedagogical research.
Usability Phase
Usability inspections of the initial applications are necessary in order to uncover the main design flaws and allow a clean-up of the design, while adapting the method to 3D collaborative aspects. Usability and interaction are closely interrelated. Concerning interaction, social-arbitrary knowledge (language, values, rules, morality, and symbol systems) can only be learned in interactions with others. Several human-computer interaction rules for display design must be taken into account when implementing any e-learning system, such as consistency of data display (labeling and graphic conventions), efficient information assimilation by the user, use of metaphors, minimal memory load on the user, compatibility of data display with data entry, flexibility for user control of data display, presentation of information graphically where appropriate, standardized abbreviations, and presentation of digital values only where knowledge of the numerical value is necessary and useful.
Learning Phase
The main goal of the learning evaluation phase is to conduct pedagogical research. As proposed by Lee & Wong (2008), the model presented by Salzman et al. (1999) can be a useful guide in designing, developing and
evaluating a VR learning environment. Although this model focuses on immersive virtual environments, it can also be applied to desktop CEVEs in order to address which CEVE features are appropriate, what kinds of students might benefit from learning in CEVEs, and how CEVEs enhance learning, by looking into the interaction and the learning experience. Therefore, our proposed evaluation methodology is based on the rationale of this model in order to test scientific hypotheses concerning learning and communicating in VEs through experimental studies. Thus, the main questions to be addressed in the learning phase of the proposed evaluation are:
• What effects does the VE have on process parameters?
• What effects does learning in the VE have on outcome parameters?
• How do users rate the system in terms of attitude toward learning in VEs?
• What effects does the VE have on interpersonal parameters?
• In which parameters does the system differ significantly from a comparable platform without VR technology?
Case study: Usability and Learning Evaluation of Croquet
In this section we describe how we applied the methodology described above to the evaluation of the open source platform Croquet, followed by a presentation of the results obtained from questionnaires filled in by the participants at the end of each phase. Using the open source platform Croquet, we created a 3D virtual environment that can be used for collaboration and for carrying out online lectures. The design of the environment consists of two interconnected rooms: a lecture hall (Figure 1) where presentations and classes can be held, and a room where student teams can meet to collaborate. Our proposed evaluation methodology was applied with a group of students interacting within the educational environment we designed.
Figure 1. The lecture hall of the environment we designed in Croquet

In October 2008, a presentation of the Croquet platform took place within the context of the course “Internet Learning Environments”, taught during the winter semester of the fourth year of the Undergraduate Studies Programme at the Computer Science Department of our university. The presentation was held in a computer lab with the participation of twenty-four postgraduate students (eleven male and thirteen female), split into two groups of twelve members each. The evaluation methodology we applied in our case study comprises three phases spread across three days; these phases and their individual steps and goals are described briefly in Table 2. In the learning phase, which is described in more detail in a following section, we chose to utilize the jigsaw teaching technique and evaluate its effectiveness in a 3D CEVE. The jigsaw technique is a cooperative learning method with a three-decade track record (Aronson & Bridgeman, 1979) of successfully reducing racial conflict and increasing positive educational outcomes. Just as in a jigsaw puzzle, each piece, that is, each student's part, is essential for the completion and full understanding of the final product.

Table 2: Case study evaluation methodology
Phase | Step | Goals
Pre-analysis | Pre-test | Set the evaluation goals, separate the learners into advanced and novice CEVE users, and determine learners’ learning styles
Usability | Familiarization session (usability session 1) | Uncover usability problems of the most important parts of the user interface concerning the basic functionalities of the first prototype
Usability | Co-presence session (usability session 2) | Uncover usability problems of the communication and collaborative functionalities of the first prototype
Learning | Learning scenario-based session | Collect further requirements and additional functionality, discover the pros and cons of the virtual environment, and determine the appropriateness of different kinds of learning scenarios
Several pedagogical advantages have been attributed to the jigsaw process (Aronson & Patnoe, 1997). These educational benefits include encouraging listening, engagement and empathy by giving each member of the group an essential part to play in the academic activity. Group members must work together as a team to accomplish a common goal, so each student depends on everyone else; no student can succeed completely unless everyone works together. The jigsaw technique is also a typical method for researching certain collaborative interactions in a virtual environment. This "cooperation by design" facilitates interaction among all students in the class, leading them to value each other as contributors to their common task.
Pre-analysis Phase
Before participating in the evaluation, participants were asked to complete a questionnaire regarding their familiarity with distance collaboration and with 3D virtual environments in general. This questionnaire allowed the research team to split the evaluators into two groups based on their previous experience with similar environments. The majority of participants responded that they had used distance collaboration software in the past (15 out of 24) and that they were generally familiar with 3D interactive environments (19 out of 24). This allowed the research team to define the advanced and novice groups of CEVE users. In addition to the previous-experience questionnaire, the participants were asked to fill in a learning styles modality preference inventory. This test included three sections: visual modality, auditory modality and kinesthetic/tactile modality. After the scores of each section are totaled, a score of 21 points or more in a modality indicates strength in that area; the highest of the three scores indicates the most efficient method of information intake, and the second highest score indicates the modality which boosts the primary strength. Through this questionnaire we could add more weight to the opinions of individuals with strength in the tactile or visual modalities, since they are the ones we would expect to benefit most from the use of a CEVE. Results indicated that most of the participants had strength in the visual modality (16 out of 24), with the tactile modality boosting the primary strength.
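As an illustration of the scoring rule just described, the following minimal Python sketch totals the three section scores and classifies a participant's modalities. The inventory items, the example scores and the function name are hypothetical; only the scoring rule (section totals, the 21-point threshold, highest total as primary modality, second highest as boosting modality) is taken from the description above.

```python
# Minimal sketch of the modality-scoring rule described above. The example
# scores are hypothetical; only the scoring rule itself is taken from the text.

THRESHOLD = 21  # a section total of 21 points or more indicates strength

def score_modalities(section_scores):
    """section_scores: dict mapping modality name -> summed item scores."""
    ranked = sorted(section_scores.items(), key=lambda kv: kv[1], reverse=True)
    strengths = [m for m, s in section_scores.items() if s >= THRESHOLD]
    primary, secondary = ranked[0][0], ranked[1][0]
    return {"strengths": strengths, "primary": primary, "secondary": secondary}

# Hypothetical participant with a strong visual score boosted by the tactile modality.
example = {"visual": 27, "auditory": 18, "kinesthetic/tactile": 23}
print(score_modalities(example))
# {'strengths': ['visual', 'kinesthetic/tactile'], 'primary': 'visual',
#  'secondary': 'kinesthetic/tactile'}
```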
Usability Phase
Following a general presentation of the platform and its functionality, the students had the chance to navigate through the 3D environment we had developed, as part of the familiarization session. As mentioned in Table 2, the goal of this session was to uncover usability problems regarding the most important parts and basic functionality of the user interface. In this session the users were alone in the environment and were allowed to experiment with the user interface and the navigation controls. After 45 minutes of experimenting, the participants were asked to complete a questionnaire recording their experience. The results indicate a positive initial reaction to the general feel of the platform. Users had no difficulty learning how to operate the user interface or managing basic functionalities such as traversing the portal in order to enter another room. In addition, the rooms of the virtual environment were deemed satisfactory, but the 3D graphics were considered disappointing; this is probably due to an unfavourable comparison with modern commercial computer games made by the expert users. Functionality such as the ability to change the viewpoint and the interaction with 3D windows also garnered positive reviews. Opinions were divided regarding the navigation scheme and the ease of orientation within the platform: some students mastered the controls rather quickly, while others stumbled even after 45 minutes of practice. The Sketch tool, which creates a 3D object from a 2D drawing, was considered useful but of little educational value. Finally, students had no difficulty distinguishing 2D from 3D windows, or identifying which windows were collaborative and therefore shared between them.

After the familiarization session, users were asked to test the collaboration tools in groups of two or three. This was the co-presence session and, as mentioned in Table 2, its goal was to uncover problems regarding the communication and collaboration tools of the platform. After 45 minutes of this session, users were asked to complete a questionnaire documenting their experience. The results indicated disappointment with the networking performance of the platform; in general, we can surmise that users considered the platform a hindrance to collaboration. The questionnaires reveal dissatisfaction with system stability and system response time. On the other hand, features such as representing an avatar’s viewpoint with an arrow and being able to see the other users and follow their actions were commended. According to the users, the major advantages of the platform are the 3D graphics, the support
for collaboration and the communication mechanism. Croquet’s problems, as revealed by the questionnaires, mainly concern technical difficulties, the 3D graphics, the navigation scheme, the user interface and the system response time. The double mention of the 3D graphics as both an advantage and a disadvantage is due to the variety of experience within the evaluator group: users experienced with 3D graphical environments found Croquet’s graphics disappointing (11 of 24), while novice users were either unsure or satisfied.
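A breakdown such as the one above can be obtained by cross-tabulating the questionnaire answers by experience group. The sketch below is purely illustrative and is not the analysis code used in the study; the groups and responses in the example are hypothetical, and only the idea of splitting answers by advanced versus novice users comes from the text.

```python
# Illustrative cross-tabulation of questionnaire answers by experience group.
# The data below are hypothetical.
from collections import Counter

def cross_tab(responses):
    """responses: iterable of (group, answer) pairs."""
    table = {}
    for group, answer in responses:
        table.setdefault(group, Counter())[answer] += 1
    return table

sample = [("advanced", "disappointing"), ("advanced", "disappointing"),
          ("advanced", "satisfied"), ("novice", "satisfied"),
          ("novice", "not sure"), ("novice", "satisfied")]
for group, counts in cross_tab(sample).items():
    print(group, dict(counts))
# advanced {'disappointing': 2, 'satisfied': 1}
# novice {'satisfied': 2, 'not sure': 1}
```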
Figure 2. The major advantages and disadvantages of the Croquet platform according to the users
Learning Phase
For this phase, the research team attempted to carry out an educational scenario through the platform, implementing a jigsaw-type teaching technique. The phase was split into two sessions, and the students were organized into four groups of three. Each group member was given a subtopic to research. Next, the individual members of each group broke off to work with the members of the other groups who had been assigned the same subtopic, thus forming three groups of four. They then returned to their original group in the role of instructor for their subtopic. In the context of evaluating the CEVE, the process of forming groups required the students to join the same virtual world. Students were allowed to communicate solely through the chat mechanism, while a think-aloud protocol was in effect in order to register the users’ attitudes, reflections and the problems they faced. One hour into the phase, users were asked to complete a questionnaire. Most of the users agreed that a number of technical difficulties hindered the scenario process. Despite this, they still spoke in favour of the platform, albeit with a few suggested improvements. The results of the questionnaire indicate several features the users would like to see implemented.
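For illustration, the following minimal sketch reproduces the group arrangement described above: twelve students form four home groups of three, each member takes one of three subtopics, expert groups of four are formed by subtopic, and members finally return to their home group to teach their part. The student identifiers, subtopic names and function name are hypothetical.

```python
# Minimal sketch of the jigsaw grouping described above. Student ids and
# subtopic names are hypothetical.

def jigsaw_groups(students, subtopics):
    """students: list of ids; subtopics: list of topics (one per home-group seat)."""
    group_size = len(subtopics)
    home_groups = [students[i:i + group_size]
                   for i in range(0, len(students), group_size)]
    # One subtopic per member within each home group.
    assignment = {member: subtopics[k]
                  for group in home_groups
                  for k, member in enumerate(group)}
    # Expert groups gather every student who was assigned the same subtopic.
    expert_groups = {topic: [s for s, t in assignment.items() if t == topic]
                     for topic in subtopics}
    return home_groups, assignment, expert_groups

students = ["S%d" % i for i in range(1, 13)]
subtopics = ["subtopic A", "subtopic B", "subtopic C"]
home, assignment, experts = jigsaw_groups(students, subtopics)
print(home[0])                # ['S1', 'S2', 'S3']        -- a home group of three
print(experts["subtopic A"])  # ['S1', 'S4', 'S7', 'S10'] -- an expert group of four
```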
Figure 3. Results of the Learning Phase questionnaires, regarding the educational value of the Croquet platform

The students used the chat tool extensively and found it convenient, but they had difficulty identifying which user was chatting; in other words, they could not easily relate the user avatars to the chat nicknames. Most students suggested either using speech bubbles or having the nicknames hover above the avatars. Users also agreed that a map of the environment should be added somewhere in the user interface, and suggested augmenting communication through avatar gestures and facial expressions. Specifically regarding
avatar functionality, the results show that users would prefer the ability to modify their avatar’s appearance. They also believed that collaboration would be enhanced if roles could be distinguished from avatar appearance alone; on the other hand, users did not consider humanoid avatars a necessity for meaningful learning. Private spaces for the users, argumentation and voting tools, recording tools and file-sharing capabilities were also discussed and recommended by the users. From the students’ answers we can surmise an uncertainty concerning the pedagogical value of the Croquet platform. As one student mentioned in her questionnaire: “I found the software application entertaining even though I am not used to 3D environments, but I am also not sure about its educational value”. Moreover, the students kept a neutral attitude regarding how easy they consider it to organise and follow courses through the virtual space.
Conclusion and Future work
The main goal of this paper was to evaluate the exploitation of CEVEs for supporting computer-supported collaborative learning scenarios. We reviewed available commercial and open source collaborative virtual environments (Second Life, Active Worlds, Croquet, I-maginer, and Workspace 3D) in terms of their means for dialogue and action, their functions for workspace awareness, their functions for supporting students’ self-regulation or guidance, their facilities related to teachers’ assistance, and their functions related to community-level management. Based on this review we chose the Croquet platform to design and develop a 3D educational environment. Furthermore, we found that none of the examined platforms supports every reviewed feature; thus, modification and integration of additional features seems necessary. Our next step was to follow a more concrete evaluation methodology in order to evaluate the selected Croquet platform for (a) uncovering usability problems, (b) collecting further requirements for additional functionality to support collaborative learning environments, and (c) determining the appropriateness of different kinds of learning scenarios. Therefore, having identified the absence of an established methodology for CEVE evaluation, we proposed a new evaluation methodology for CEVEs consisting of three phases, namely: (a) a pre-analysis phase; (b) a usability phase; and (c) a learning phase. The proposed methodology was applied in order to evaluate the selected Croquet platform. On the negative side, the questionnaire results indicate disappointment with the Croquet platform with respect to its use for collaboration. This disappointment mainly centres on system stability and system response time, which hindered the collaboration process; other problems concern the implemented navigation scheme and the user interface. On the other hand, despite these disappointments, most students considered the platform both inspiring and entertaining. Users believe that, by overcoming the technical difficulties and implementing their suggestions, the educational process could be revitalized through the use of such novel technology. Future work includes two alternative steps: we could augment the Croquet platform based on the suggestions and observations of the evaluators and proceed with a repeat evaluation, or we could examine a different platform and thereby carry out a meaningful comparison with Croquet. Either way, we will continue to assess and enhance the effectiveness of our proposed evaluation framework.
References
Ang, K.H., & Wang, Q. (2006). A case study of engaging primary school students in learning science by using Active Worlds, Proceedings of the First International LAMS Conference: Design the future of learning, Sydney, Australia, December 6-8, 2006, pp. 5-14.
Aronson, E., & Bridgeman, D. (1979). Jigsaw groups and the desegregated classroom: In pursuit of common goals. Personality and Social Psychology Bulletin, 5, 438-446.
Aronson, E., & Patnoe, S. (1997). The Jigsaw Classroom: Building Cooperation in the Classroom, Longman, 2nd Edition, ISBN: 978-0673993830.
Asensio, M., Hodgson, V., & Saunders, M. (2006). Developing an inclusive approach to the evaluation of networked learning: the ELAC experience, Proceedings of the Fifth International Conference on Networked Learning, 10-12 April, Lancaster: Lancaster University.
Barfield, W., Zeltzer, D., Sheridan, T., & Slater, M. (1995). Presence and Performance within Virtual Environments. In W. Barfield & T. Furness (Eds.), Virtual Environments and Advanced Interface Design (pp. 473-513). Oxford: Oxford University Press.
Bedford, C., Birkedal, R., Erhard, J., Graff, J., & Hempel, C. (2006). Second Life As An Educational Environment: A Student Perspective, Proceedings of the First Second Life Education Workshop, Fort Mason Centre, San Francisco, CA, August 20th, pp. 25-27.
Bouras, C., Philopoulos, A., & Tsiatsos, T. (2001). E-Learning through Distributed Virtual Environments, Journal of Network and Computer Applications, Academic Press, Vol. 24, No. 3, July, pp. 175-199.
Bouras, C., Triantafillou, V., & Tsiatsos, T. (2002). A Framework for Intelligent Virtual Training Environment: The Steps from Specification to Design, Journal of Educational Technology & Society, Special Issue on "Innovations in Learning Technology", Vol. 5, Issue 4, pp. 11-26.
Bransford, J.D. (1990). Anchored instruction: Why we need it and how technology can help. In D. Nix & R. Spiro (Eds.), Cognition, education and multimedia. Hillsdale, NJ: Erlbaum Associates.
Bruckman, A., & Bandlow, A. (2002). HCI for Kids. In J. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications. Lawrence Erlbaum and Associates.
Chee, Y., & Hooi, C. (2002). C-VISions: Socialized learning through collaborative, virtual, interactive simulation, Proceedings of CSCL ’02: Conference on Computer Support for Collaborative Learning, Boulder, CO, USA, pp. 687-696. Hillsdale, NJ: Lawrence Erlbaum.
Churchill, E., Snowdon, D., & Munro, A. (2001). Collaborative Virtual Environments: Digital Places and Spaces for Interaction, Springer-Verlag, London, Great Britain. ISBN: 1852332441.
Dickey, M. (2005). Brave new (interactive) worlds: A review of the design affordances and constraints of two 3D virtual worlds as interactive learning environments, Interactive Learning Environments, 13(1), pp. 121-137.
Dimitracopoulou, A. (2005). Designing collaborative environments: Current Trends and Future Research Agenda. In D. Suthers & T. Koschmann (Eds.), Computer Support for Collaborative Learning, 30 May-4 June, Taipei, Taiwan.
Goransson, B. (2001). Usability Design: A Framework for Designing Usable Interactive Systems in Practice. IT Licentiate theses 2001-06, ISSN 1404-3203, Department of Human Computer Interaction, Information Technology, Uppsala University, Uppsala, Sweden.
International Organization for Standardization (1998). ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability. Geneva, Switzerland.
Koubek, A., & Muller, K. (2002). Collaborative and Virtual Environments for Learning, ACM SIG Proceedings, New Orleans, Louisiana, USA, November 16-20.
Lee, E.L., & Wong, K.W. (2008). A Review of Using Virtual Reality for Learning. In Z. Pan et al. (Eds.), Transactions on Edutainment I, Lecture Notes in Computer Science, LNCS 5080, pp. 231-241, Springer, ISSN 0302-9743 (Print) 1611-3349 (Online), DOI: 10.1007/978-3-540-69744-2.
Oliveira, C., Shen, X., & Georganas, N. (2000). Collaborative Virtual Environment for Industrial Training and e-Commerce, Workshop on Application of Virtual Reality Technologies for Future Telecommunication Systems, IEEE Globecom Conference, San Francisco, November-December.
Panagiotakopoulos, H., Pierrakeas, H., & Pintelas, P. (2003). Educational software and its evaluation (in Greek). Metaihmio Publications, ISBN: 960-375-579-6.
Petraglia, J. (1997). The rhetoric and technology of authenticity in education. Mahwah, NJ: Lawrence Erlbaum. ISBN: 978-0805820423.
Prasolova-Førland, E. (2008). Analyzing place metaphors in 3D educational collaborative virtual environments, Computers in Human Behavior, Volume 24, Issue 2, March, pp. 185-204.
Salaheddin, O., & Qaraeen, O. (2007). Evaluation Methods and Techniques for E-Learning Software for School Students in Primary Stages, International Journal of Emerging Technologies in Learning, Vol. 2, No. 3, ISSN: 1863-0383.
Salzman, M., Dede, C., Loftin, R., & Chen, J. (1999). A model for understanding how virtual reality aids complex conceptual learning. Presence, 8(3), 293-316.
Schoder, D., & Fischbach, K. (2005). Core Concepts in Peer-to-Peer (P2P) Networking. In R. Subramanian & B. Goodman (Eds.), P2P Computing: The Evolution of a Disruptive Technology, Idea Group Inc, Hershey.
Schwan, S., & Buder, J. (in press). Learning and knowledge acquisition in virtual realities (in German). In G. Bente (Ed.), Digitale Welten. Virtuelle Realität als Gegenstand und Methode der Psychologie. Göttingen: Hogrefe.
Veerman, A., & Veldhuis-Diermanse, E. (2001). Collaborative learning through computer-mediated communication in academic education, Proceedings of Euro-CSCL 2001, Maastricht, Netherlands.
Zhigeng, P., Zhu, J., Zhang, M., & Hu, W. (2005). Collaborative Virtual Learning Environment Using Synthetic Characters, Springer-Verlag Berlin Heidelberg.