Simulating Pedagogical Agents in a Virtual Learning Environment

Silje Jondahl
Department of Information Science
University of Bergen
[email protected]

Anders Mørch
InterMedia
University of Oslo
[email protected]

ABSTRACT
This paper presents and analyses data from a usability experiment conducted at the University of Bergen in Norway. The experiment is entitled CoPAS: Collaboration Patterns Agent Simulation. CoPAS is a simulation study carried out with the Wizard of Oz technique. Students used a groupware system, TeamWave Workplace, to solve a learning task in the domain of object-oriented analysis and design using UML (Unified Modelling Language) without meeting each other face-to-face. Simulated software agents (human experts) gave advice for various knowledge-intensive tasks, including tool use, domain (UML) understanding and peer-to-peer collaboration. The study is part of a multidisciplinary project (DoCTA-NSS) between the University of Bergen, the University of Oslo and Telenor R&D. One of the goals of this project is to assess the usability and utility of knowledge-based pedagogical agents for virtual learning environments. Our findings indicate that agents can have an effect on collaboration by making users aware of collaboration patterns (division of labour, explicit roles, etc.) and by creating focus shifts in the users' interaction.

Keywords
Groupware, distributed collaborative learning, Wizard of Oz technique, software agent simulation, collaboration patterns

INTRODUCTION
The general research areas addressed in this paper are Human Computer Interaction (HCI), Computer Supported Collaborative Learning (CSCL), and Artificial Intelligence (AI). We focus on the design and evaluation of user interfaces to interactive learning environments. Further, we are interested in interfaces that embody software agents to support human learning in knowledge-intensive domains. We call these agents pedagogical interface agents. Our interest is not so much in the development of algorithms or student models for these agents, but rather in how they should intervene and behave in interaction with a group of collaborative peers. Our premise is that agents should be developed on the basis of their perceived benefits for end users. A challenge is to develop agents that are both usable and useful in the context of interactive learning environments, or to paraphrase Maulsby, Greenberg and Mander (1993): "Agents must be designed around our understanding of what people require and expect of them." One often perceived benefit of interface agents is that they can function as an active help system available in the user interface of a difficult-to-use system (Fischer, Lemke & Schwab, 1985). However, for such agents to be perceived as useful and not intrusive when carrying out the primary task with a system, the agent system must be built such that the users will trust the help and advice it provides (Maes, 1994). This paper addresses some of these issues through a simulation study carried out to examine the following working hypothesis: "Using pedagogical agents in virtual learning environments will have an effect on the collaboration in the group."

Virtual Learning Environments
Virtual learning environments are systems developed to support learning through distributed collaboration in some domain of inquiry. Research within the CSCL and CSCW traditions gives examples and directions for developing such environments (e.g. Fjuk, Sorensen & Wasson, 1999).
One concept closely related to virtual learning environments is how the environment supports awareness of the activities carried out by collaborating peers (Dourish and Bellotti, 1992). Gutwin, Stark & Greenberg (1995) divide awareness into four different categories: social, task, conceptual and workspace awareness. These are all critical to the quality of collaboration and coordination in the work of a distributed group. TeamWave Workplace (abbreviated Workplace or just TW) supports workspace awareness and was chosen as the environment for the CoPAS experiment, based on previous experience in the VisArt scenario in DoCTA I (Wasson, Guribye & Mørch, 2000). TW is based on the metaphor of shared networked places, and provides a permanent shared working place for groups. A group of people working together remotely over the Internet can use the environment to conduct meetings, store documents, share URL links, communicate by chat, and coordinate individual contributions. The environment supports both real-time (synchronous) and asynchronous work (Wasson et al., 2000). The CoPAS experiment involved primarily real-time collaboration (over a period of 90 minutes). TW provides a number of different tools for communicating and collaborating. Figure 1 gives an overview of the tools used in CoPAS.

Figure 1. TW before the experiment started. The Concept maps were used for creating design diagrams and the Post-it notes for class definitions. The agents are represented as icons in the top left corner. From left to right: Domain agent, Collaboration agent, and Tool agent.

Pedagogical Agents
Our focus is on pedagogical interface agents. We take as a starting point a working definition of pedagogical agents adopted from Johnson (1999): "Pedagogical agents can be autonomous and/or interface agents that support human learning, by interacting with students in the context of an interactive learning environment." Pedagogical agents support human learners in the context of their actions and activities in a virtual learning environment, and these agents can support both individual learning and group learning since they are seamlessly integrated in the environment. It can sometimes even be difficult to distinguish an agent's actions from those of a user taking the role of an advising peer helping other users to solve a task in the environment. Pedagogical agents can be classified in two different categories: animated pedagogical agents (Johnson, Rickel & Lester, 2000) and reactive pedagogical agents (Müller, 1998). The qualifiers "animated" and "reactive" are partially overlapping with respect to the phenomena they refer to, and are sometimes also used when describing autonomous agents. Nevertheless, we maintain the distinction in this paper. Both kinds of pedagogical agents can have different roles in the environment, including a more capable peer, a facilitator, or a novice. They can also have different degrees of domain knowledge and behave like a guide, critic (Fischer & Morch, 1988), coach (Suthers & Weiner, 1995), or wizard.

Animated agents use various techniques for simulating human behaviour, such as facial expression, body movement and gesture (Cassell, 2000). Contrary to this, reactive agents merely react to events in the environment and take action by displaying messages when certain threshold values have been reached. As such, the behaviour of reactive agents is analogous to the operation of a thermostat: when a certain temperature value has been reached, some action is taken (such as turning off the heat). Reactive agents may be passive or active. A passive agent waits until it receives a request to act. An active agent takes action on its own when it sees an opportunity for doing so. Both kinds of agents follow a set of rules for when to act (precondition) and what to say (intervention strategy). Fischer and colleagues (1985) were the first to introduce the distinction between passive and active agents (later referred to as critics). Examples of active agents are the "design critics" developed by Fischer and Morch (1988) for domain-oriented design environments. These critics inform the users about general principles of the domain in which they operate. The knowledge embedded in a pedagogical agent system has to be represented in a formal way. There are various ways to accomplish this. In AI systems, for example, the following schemes are commonly used: model-based, rule-based, and case-based representation (Luger & Stubblefield, 1997). The pedagogical agents in CoPAS have a rule-based representation scheme.

Pedagogical Agents in CoPAS
The pedagogical agents in CoPAS are reactive agents. There are three types of them: 1) Tool agent, 2) Domain agent, and 3) Collaboration agent. The Tool agent represents technical knowledge about how to use the TeamWave tools. The Domain agent gives advice about the concepts and relations in UML (Unified Modelling Language) as well as the basic principles of object-oriented analysis and design (Hoffer, George and Valacich, 1998).
The Collaboration agent's knowledge base is built on theories from CSCL (Koschmann, 1996), awareness (Gutwin et al., 1995), the principles of Genuine Interdependence (Salomon, 1992), and Collaboration Patterns (Wasson et al., 2000; Wasson & Mørch, 2000). Reactive pedagogical agents use different techniques to get the users' attention. In CoPAS we decided on text messages in pop-up boxes because this technique was readily available in Workplace. An example of an agent displaying a message is shown in Figure 2.

Figure 2. A pop-up box from the Collaboration agent shows a message regarding division of labour. The message is written in Norwegian and reads: "It can be useful to divide the work amongst yourselves". This piece of advice was triggered when prolonged work by many users on the same object did not produce substantial change.

In CoPAS the Collaboration agent and the Tool agent were active agents. The Domain agent combines the two techniques, serving both as an active and a passive agent.
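The rule structure described earlier, a precondition for when to act paired with an intervention strategy for what to say, can be sketched in code. This is a minimal illustration only, not the CoPAS implementation (in CoPAS the agents were simulated by human wizards); the event fields and threshold values are invented for the example:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """A reactive-agent rule: a precondition (when to act)
    paired with an intervention message (what to say)."""
    precondition: Callable[[Dict], bool]
    message: str

class ReactiveAgent:
    """Minimal active agent: it checks every workspace event
    against its rules and fires the matching messages."""
    def __init__(self, name: str, rules: List[Rule]):
        self.name = name
        self.rules = rules

    def observe(self, event: Dict) -> List[str]:
        return [f"{self.name} says: {rule.message}"
                for rule in self.rules if rule.precondition(event)]

# Hypothetical rule mirroring Figure 2: advise division of labour
# when several users have worked on one object for a while
# without producing substantial change (all thresholds invented).
division_of_labour = Rule(
    precondition=lambda e: (e["users_on_object"] >= 3
                            and e["minutes_elapsed"] >= 10
                            and e["net_change"] < 5),
    message="It can be useful to divide the work amongst yourselves")

agent = ReactiveAgent("Collaboration agent", [division_of_labour])
advice = agent.observe(
    {"users_on_object": 3, "minutes_elapsed": 12, "net_change": 2})
```

A passive agent could reuse the same rule structure but evaluate it only on an explicit request, which is the distinction drawn above.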

METHOD
We used several techniques for capturing data in the CoPAS experiment. First, we observed the activity in the virtual learning environment as it unfolded. Second, we held a personal interview with each participant afterwards. In addition, the chat logs in Workplace were examined in detail and the trigger counts of the wizards' comments were enumerated.

The Wizard of Oz Technique
We used the Wizard of Oz technique to simulate the usability and utility of pedagogical agents in a virtual learning environment. With this method, the participants are led to believe that they are using an implemented system, when in fact they are interacting with a simulation staged by human operators (e.g. Dahlbäck, Jönsson, & Ahrenberg, 1993; Maulsby et al., 1993; Ericsson, 1999). The Wizard of Oz technique was chosen for our study because it is a frequently used method for testing new ideas without having to build a working prototype of the system first. When using state-of-the-art educational technology the cost of implementing new ideas can be high, especially when the technology is not built for radical tailorability. With the Wizard of Oz technique we have been able to make a realistic simulation of a future learning environment in a shorter time than it would have taken us to program the agents in the learning environment. The method is even more useful when experimenting with techniques that may require technology that is not yet available, but within reach (Ericsson, 1999). Three graduate students at the Department of Information Science acted as wizards. Each of them had a preassigned task: to simulate one of the three agents (Tool agent, Domain agent or Collaboration agent). Their behaviours were defined by a set of rules for when to act and what to say upon acting.

There are some ethical dilemmas surrounding the use of Wizard of Oz (Ericsson, 1999). First, it is participation on false premises, since the participants were not aware that they were part of a simulation experiment. They were told they were interacting with a prototype system with computer-based pedagogical agents. Only later were the participants informed that it was in fact a simulation (they were informed in an interview the following week). The researcher responsible for the experiment had a personal interview with each of them, in which they were told they were free to withdraw from the experiment. None of the participants withdrew, and the misinformation did not harm them or put them in any awkward situation.

Research Design
The simulation study was carried out during one week of March 2000. Six groups participated, and each group was comprised of three first-semester students from the Department of Information Science at the University of Bergen. All groups were given the same task: to make object-oriented analysis and design diagrams for an Internet banking system using the UML notation (use case and class diagrams). Each group had 90 minutes at their disposal for solving the task.
The task constraints together with the constraints imposed by the Wizard of Oz technique were the main reasons for conducting a short experiment (i.e., the assignment was taken from a previous final exam, and the wizards might have revealed themselves if exposed for a longer period of time). The experiment was conducted in the TeamWave Workplace environment. The participants were given a short introduction to TW before the experiment started. This included how to use the different tools (see Figure 1) and how to go about querying the Domain agent for advice (described below). As previously mentioned there were three types of agent support:
• Technical support using the TW tools (Tool agent),
• Object-oriented modelling using UML (Domain agent), and
• Peer-to-peer collaboration within the group (Collaboration agent).
The human agents (wizards) had a list of rules for when and how to act and a corresponding list of predefined messages for what to say. The messages were written down in a Word document and defined the "knowledge base" for the agents. The text messages shown in Figures 2, 3, 4 and 5 are examples of the agents' knowledge. The total numbers of messages issued by the wizards are listed in Table 1. Two of the agents (the Tool agent and the Collaboration agent) were active agents. This means they actively took the initiative and responded to the situation whenever they saw an opportunity. Figure 3 shows a response from the Collaboration agent. It displays a "message to all users," which is a technique for sending shared feedback in TW.

Figure 3. This shows a pop-up box with a "message to all users". The message is sent from the Collaboration agent to all users and reads: "Collaboration agent says: If you have any problems please consult the Domain agent".

The Domain agent is a passive agent. Figure 4 illustrates a typical action sequence for requesting help from this agent. First, clicking with the mouse on the Domain agent icon allows the user to page the agent. Second, this brings up a dialog box where the user can type in a question. Finally, the question is sent to the agent.

Figure 4. Steps for querying the Domain agent by a TW "Page" message. The question reads: "What is a class diagram?"

The Domain agent responded to the query in Figure 4 by sending the message shown in Figure 5. This is another example of a TW Page message. A Page message is sent only to a single user (or agent).

Figure 5. Example of a message sent in response to the question shown in Figure 4. It is a private message sent as a Page to a single user. The message reads: "Domain agent says: Class diagrams give an overview over the classes in the system. Classes follow the reuse principle of object-orientation, and Java is an example of an object-oriented programming language."

When none of the predefined messages would apply, the Domain agent made up a new message to salvage the situation (see the next section), or responded with the following canned message: "Domain agent cannot understand your request. Please reformulate your question."
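A passive agent of the kind the wizards simulated for the Domain agent could be sketched as a simple keyword matcher over a table of predefined messages, with the canned fallback quoted above. This is a hedged illustration, not the CoPAS system (which was operated by a human); the keyword-matching strategy and the single knowledge-base entry are assumptions made for the example:

```python
class PassiveAgent:
    """Minimal passive agent: it waits for a paged question and
    answers from a table of predefined messages, falling back to
    a canned reply when no predefined message applies."""
    FALLBACK = ("Domain agent cannot understand your request. "
                "Please reformulate your question.")

    def __init__(self, knowledge_base):
        self.knowledge_base = knowledge_base  # keyword -> answer

    def page(self, question: str) -> str:
        q = question.lower()
        for keyword, answer in self.knowledge_base.items():
            if keyword in q:
                return f"Domain agent says: {answer}"
        return self.FALLBACK

# One hypothetical entry; the wizards' Word document held many
# such predefined answers.
kb = {"class diagram":
      "Class diagrams give an overview over the classes in the system."}
agent = PassiveAgent(kb)
reply = agent.page("What is a class diagram?")
unknown = agent.page("What is polymorphism?")  # canned fallback
```

A human wizard could improvise a new message where this sketch can only fall back, which is exactly the adaptation gap discussed in the findings below.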

FINDINGS
The findings are divided into two parts. First, we enumerate the number of times the agents fired, and then we present and analyse some of the interaction data.

Frequency of Agent Intervention
Ericsson (1999) has identified three factors he judges as important when measuring the quality of feedback provided by the wizards in a Wizard of Oz experiment:
• Type of feedback (e.g. the range of message alternatives to choose from)
• Number of instances of each type
• Time delay in delivery

The way the CoPAS experiment was conducted allowed us to measure the first two factors but not the last. We could not measure the time delay between user request and agent response with the software we used [1]. The last factor is therefore not included in the findings we report below.
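The third factor could be captured in a future implementation by timestamping each request/response pair. The following is a sketch of such instrumentation under our own assumptions, not something CoPAS recorded:

```python
import time

class DelayLog:
    """Timestamps each request/response pair so the delay between
    a user request and the agent's reply can be reported."""
    def __init__(self):
        self.pending = {}  # request id -> time the request arrived
        self.delays = []   # measured delays in seconds

    def request(self, request_id):
        self.pending[request_id] = time.monotonic()

    def respond(self, request_id):
        started = self.pending.pop(request_id)
        self.delays.append(time.monotonic() - started)

    def mean_delay(self):
        return sum(self.delays) / len(self.delays)

log = DelayLog()
log.request("q1")   # user pages the agent
log.respond("q1")   # agent answers; the delay is recorded
```

This kind of measurement is chiefly relevant for passive agents, where a user is waiting for a reply.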

Type of agent       | Number of message alternatives | Message alternatives actually used | Number of messages sent to all five groups
Domain Agent        | 19 (6 new)                     | 63%                                | 34
Collaboration Agent | 15 (1 new)                     | 86.5%                              | 56
Tool Agent          | 23 (12 new)                    | 82.5%                              | 42
Total               | 57 (19 new)                    | 77%                                | 132
Table 1. Agent message types and the number of times they fired. The "new" items refer to the number of new messages created during the course of the experiment.

Table 1 shows the total number of messages sent by the agents to the groups during the experiment. It is worth noting that not all messages were actually used. Of the original 38 messages, 25 (65.5%) were used at least once. This is higher than the number reported by Ericsson (1999), who found that 54% (18 of 33) of the possible messages were used. However, some caution must be taken with respect to our data, since the wizards themselves were responsible for collecting the counts, and this was done manually. Nevertheless, the overall picture is that there is a relatively good match between the types of messages prepared in advance and the ones actually used. It is also interesting to note that the wizards made a total of 19 new messages during the course of the experiment as a supplement to the pre-made messages. The wizards were not told to do so, nor were they prevented from doing it. During the interviews the wizards (graduate students) said they had to make up the new messages in order to help the participants out of difficult problem situations which we had not anticipated in advance. This points to the importance of having agent systems that can modify themselves and adapt to the environment as they interact with the users. In future implementations, alternative representation schemes may have to be considered in order to support agent adaptation better than rule-based systems currently can. One such alternative is a case-based representation scheme. This is part of future work.

System, Tool, and Domain
The data we collected are from notes the experimenter made, from recorded chat logs, and from an interview with each participant after the experiment was completed. The dialog excerpt [2] below is taken from a chat communication between the three members of one of the groups (group 1).
One of the members (B1.1) sends a question to the others regarding the definition of classes. She attempts to answer it herself, but later receives a response from B1.3. An agent then intervenes and the dialog turns into a discussion at a different level of abstraction:
B1.1 says: where in the diagram do we write in class attributes?
B1.1 says: is it in the yellow frame? [3]
B1.3 says: I have already done that in the yellow rectangles on the top

[1] This is relevant for passive agents only.
[2] This and subsequent excerpts have been translated from Norwegian to English by the authors.
[3] What B1.1 calls the "yellow frame" and B1.3 the "yellow rectangle" are references to a TW Post-it note. Post-it notes were used for writing class definitions.

B1.3 says: I was able to change the name in both of the two classes
Domain agent says: Is it possible to make a superclass here?
B1.3 says: Should we add another class?
B1.3 says: Yeah.., let's make a common superclass
B1.2 says: How?
B1.3 says: we can call it "Customer"
The example shows that the Domain agent intervenes and gives advice regarding what classes to include in the diagram. All participants who are in the same virtual room see this message. As a result they shift their focus from a low-level operation (class definition mechanics) to a higher-level, domain-oriented concept ("Customer" as superclass name). The following dialog illustrates peer-to-peer commenting (an explanation regarding the use of Post-it notes):
B1.1 says: How do we distinguish savings from checking accounts?
B1.3 says: don't know, how'd we do it
B1.3 says: How did you make a yellow rectangle, need one for Customer
B1.3 says: What do you think about the arrows in the diagrams
B1.2 says: no idea
B1.1 says: don't know, but we need to distinguish savings from checking
B1.3 says: Didn't one of you make a yellow rectangle before?
B1.2 says: Yellow rectangle: tools and post-it
B1.2 says: The arrows are wrong
The above excerpt has three dialogs running in parallel, each at a different level of abstraction (identified by the markers on the left). B1.1 tries to distinguish customers with checking accounts from customers with savings accounts (first and sixth utterance). B1.3 asks a question regarding how to make "yellow rectangles", and he is also uncertain about the relations ("arrows") between the classes in the diagram. B1.2 answers the questions from B1.3. The dialog on arrows had been triggered by an intervention from the Domain agent: "The link 'objects-to' is incorrect to use in a class diagram. Use 'extends', 'implements' or 'contains'." The point we wish to communicate here is that peer-to-peer commenting may suffer from poor language phrasing that may be perceived as discouraging.
Indeed, several users felt their peers' comments could be phrased in more constructive language, as in the following suggestion by B5.1 when asked how to phrase a comment about something being "wrong": "I think you are wrong". A reason for this was taken up later in the interview and explained as: "One has to be cautious about what one says." In the above dialog excerpt the utterance "Yellow rectangle: tools and post-it" is more constructively phrased than "The arrows are wrong."

Reflection on Collaboration
One of the messages frequently issued by the Collaboration agent was the message regarding division of labour (see also Figure 2), as in the following example:
Collaboration agent says: It can be useful to divide the work amongst yourselves
B3.1 says: we should divide the tasks
B3.1 says: I can start with the class diagram...
B3.3 says: what about me, what should I do?
There were two main tasks to be solved in the assignment (class diagram and use case diagram). Since each group had three members there was no obvious way to divide the tasks equally. The above comment by B3.3, "what about me, what should I do?", was discussed in the group interview. Two of the group members knew each other quite well, whereas the third did not know the others. She felt left out of much of the conversation. B3.2 gave the following comment when asked if knowing each other influenced the division of work in their group: "It certainly influenced our work, because one knows how to talk to a person one already knows, and that can complicate collaboration with a third person who is supposed to take part in solving the same task." In another group, where none of the participants knew each other in advance, one of the participants (B4.1) gave the following comment regarding language use: "I think that because all of us did not know each other we were more formal and careful in our discussions than we would be if we knew each other." This experience is

also reflected in the "collaboration patterns" that were identified in the VisArt scenario (Wasson et al., 2000). One of these patterns, Informal Language, describes how the use of language changes over time: it often starts in a formalistic style and gradually becomes more informal as the participants get to know each other (Wasson & Mørch, 2000). The final point we would like to bring up in this paper is the importance of having a designated coordinator in the group. It was a frequent comment by our participants and points to a need for designated roles in the learning environment, especially someone who will take responsibility and make sure the task is completed and progresses towards its goal in measurable steps. One of the participants (B2.1) said the following: "I think it is very difficult for three people to collaborate when there is no designated role for this. Someone who could be the leader, someone you could ask questions and who would distribute tasks." The role of coordinator in a virtual learning environment can be assigned to both human and virtual collaboration agents. How to distribute this responsibility between human facilitators and computer agents is part of future work.

DISCUSSION AND IMPLICATIONS FOR DESIGN
It is important not to focus on the technology alone, but to consider the resources that human agents in fact are in the learning environment. A virtual agent cannot easily match the experience of a human facilitator, and human agents are better at fulfilling some roles. Different users also have contrasting preferences regarding how help and advice from virtual partners are perceived. However, the use of virtual agents makes the learning environment richer and more advanced from a computational point of view. Computational agents are good at providing shared feedback on recurring tasks in well-defined domains, but less useful for giving individual student assistance. At the present level of agent technology, a human expert will fill the role of a passive agent better than a virtual agent can, whereas a virtual agent may be better suited to the role of active agent. The kind of help given by our wizards to the groups about collaboration was not always optimal and has to be revised. In one situation, illustrated in the Reflection on Collaboration section above, one of the group members felt excluded after a comment from the Collaboration agent. A human facilitator would have resolved this situation in a more appropriate manner. On the other hand, there were situations where the teams were happy to get advice and to shift their focus of interaction. This can be explained by the term "breakdown". A breakdown makes room for learning and reflection about the work situation (e.g. Fischer, 1994; Bødker, 1996). Agents that create breakdowns may cause a shift in focus and initiate a new activity at a different level of abstraction. Computational support for this will depend on more advanced techniques (such as access to multiple representations in the learning environment and in its domain of inquiry). Agent messages were presented in pop-up boxes, and this would sometimes disturb the collaboration.
One suggestion we have for future work is to make a permanent window to display the questions asked by students and the answers given. The messages could be organized as a list sorted by sender. This may also improve genuine interdependence, since making these messages explicitly available to others will promote awareness in the group. A combination of "just in time" messages displayed in pop-up boxes (e.g. for collaboration patterns) and "more lasting" messages in the permanent window for other issues (e.g. tool, system and domain) could be explored. To prevent the participants from finding out that it was a Wizard of Oz simulation and not an implemented system, the participants were given a very short introduction to the system before the experiment started. They also did not have much time to get to know each other. When using a real agent system this introductory phase should have priority, since knowledge about the system functionality and about the other members of the team is paramount for achieving quality work during collaboration.
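The proposed permanent window could organize the message history as a sender-sorted list. The following is a minimal sketch of that organization; the message log contents are invented for illustration:

```python
from collections import defaultdict

def group_by_sender(messages):
    """Group (sender, text) pairs into a sender-sorted history,
    as a permanent message window might display them."""
    history = defaultdict(list)
    for sender, text in messages:
        history[sender].append(text)
    # Rebuild as a plain dict with senders in sorted order.
    return {sender: history[sender] for sender in sorted(history)}

# Hypothetical log mixing a student's question with agent answers.
log = [("Domain agent", "Class diagrams give an overview..."),
       ("B1.1", "What is a class diagram?"),
       ("Domain agent", "Use 'extends', 'implements' or 'contains'.")]
window = group_by_sender(log)
```

Keeping the full per-sender history visible is what would make the agents' answers available to the whole group rather than to a single user, supporting the awareness argument above.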

ACKNOWLEDGMENTS We would like to thank all the people involved in the CoPAS experiment. A special thanks goes to the three graduate students acting as Wizards: Rune Baggetun, Karianne Omdahl and Trond Pedersen. They did a great job! We are grateful for DoCTA NSS project funding by ITU: the IT in Education program of KUF (the Norwegian Department of Church affairs, Education and Research).

REFERENCES
Bødker, S. (1996). Applying activity theory to video analysis: How to make sense of video data. In B. A. Nardi (Ed.), Context and Consciousness: Activity Theory and Human-Computer Interaction. Cambridge, MA: MIT Press, pp. 147-174.
Cassell, J. (2000). Embodied Conversational Interface Agents. Communications of the ACM, 43(4).

Dahlbäck, N., Jönsson, A. and Ahrenberg, L. (1993). Wizard of Oz studies – why and how. In Workshop on Intelligent User Interfaces, Orlando, FL.
Dourish, P. and Bellotti, V. (1992). Awareness and Coordination in Shared Workspaces. Proceedings of CSCW'92. ACM Press, pp. 107-114.
Ericsson, M. (1999). Supporting the Use of Design Knowledge. Linköping Studies in Science and Technology, Dissertation No. 592.
Fischer, G. (1994). Turning Breakdowns into Opportunities for Creativity. Knowledge-Based Systems, Special Issue on "Creativity and Cognition", 7(4), pp. 221-232.
Fischer, G., Lemke, A., & Schwab, T. (1985). Knowledge-based Help Systems. Proceedings CHI'85. ACM, New York, pp. 161-167.
Fischer, G. & Morch, A. (1988). CRACK: A Critiquing Approach to Cooperative Kitchen Design. ITS-88 Proceedings. Montreal, Canada, pp. 176-185.
Fjuk, A., Sorensen, E.K. & Wasson, B. (1999). ICT-mediated collaborative learning in work organisations. NIN Research Project Final Report, Telenor Scientific R&D Report 12/99, 106 pgs.
Gutwin, C., Stark, G., and Greenberg, S. (1995). Support for Workspace Awareness in Educational Groupware. In Proceedings of the ACM Conference on Computer Supported Collaborative Learning, pp. 147-156. Hillsdale, NJ: Lawrence Erlbaum Associates.
Hoffer, J.A., George, J.F., and Valacich, J.S. (1998). Object-Oriented Analysis and Design. Chapter 12 in Modern Systems Analysis & Design, 2nd Ed. Addison Wesley.
Johnson, W.L. (1999). Pedagogical Agents. Invited paper at the International Conference on Computers in Education.
Johnson, W.L., Rickel, J.W. and Lester, J.C. (2000). Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments. International Journal of AI in Education, 11.
Koschmann, T. (1996). Paradigm shifts and instructional technology: An introduction. In T. Koschmann (Ed.), CSCL: Theory and Practice of an Emerging Paradigm, pp. 1-23. Mahwah, NJ: Lawrence Erlbaum Associates.
Luger, G. and Stubblefield, W. (1997). Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Addison Wesley.
Maes, P. (1994). Agents that Reduce Work and Information Overload. Communications of the ACM, 37(7).
Maulsby, D., Greenberg, S. and Mander, R. (1993). Prototyping an intelligent agent through Wizard of Oz. Proceedings of the ACM HCI'93 Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands. ACM Press.
Müller, J.P. (1998). Architectures and applications of intelligent agents: A survey. The Knowledge Engineering Review, 13(4), 353-380.
Salomon, G. (1992). What does the design of effective CSCL require and how do we study its effects? SIGCUE Outlook, Special Issue on CSCL, 21(3), 62-68.
Suthers, D. and Weiner, A. (1995). Groupware for Developing Critical Discussion Skills. Proceedings of the ACM Conference on Computer Supported Collaborative Learning. Hillsdale, NJ: Lawrence Erlbaum Associates.
Wasson, B., Guribye, F., & Mørch, A. (2000). DoCTA: Design and Use of Collaborative Telelearning Artefacts. ITU Report No. 5. Department of Information Science, University of Bergen.
Wasson, B. & Mørch, A. (2000). Identifying Collaboration Patterns in Collaborative Telelearning Scenarios. Educational Technology & Society, 3(3).