Developing a Virtual Reality Learning Environment for Medical Education

Hsiu-Mei Huang
Department of Information Management, National Tai-Chung Institute of Technology, Taichung, Taiwan
e-mail: [email protected]

Shu-Sheng Liaw
Department of General Education, China Medical University, Taiwan
e-mail: [email protected]

Wen-Ting Chen
Graduate School of Computer Science & Information Technology, National Tai-Chung Institute of Technology, Taichung, Taiwan
e-mail: [email protected]

Yi-Chun Teng
Department of Information Management, National Tai-Chung Institute of Technology, Taichung, Taiwan
e-mail: [email protected]

Abstract: As part of our research into virtual reality and educational virtual environments, we built a learning system that realizes the features and characteristics of both virtual reality and role-playing based learning. We apply this learning system to the discipline of human anatomy in medical education. The system establishes a role-playing based virtual learning environment in which an individual student takes a role and learns. Students can walk around, explore the scene, watch videos and images displayed on TV screens inside the virtual hospital, listen to the virtual instructor's lectures, and interact with organ objects and other characters. In this paper, we discuss the theoretical background and motivations from an educational perspective, then present our system and explain its features.

1. Introduction

Human anatomy is one of the most important foundation disciplines in medical education. Most teaching materials used in introductory human anatomy courses consist of 2D images and plain text. The information conveyed by two-dimensional course materials cannot cover everything in our three-dimensional world; a flat organ image, for example, does not tell the relative position of the organ along the missing dimension. The learning efficiency of 2D course materials is therefore limited, and specimens, plastinated organs, or artificial anatomical models are necessary complements to the curriculum. However, anatomical donations are rare, and artificial organ models are scarce and costly; these complementary teaching aids cannot be distributed to every student. To provide inexpensive and abundant learning materials, as well as a safe and interesting learning environment, we turn to virtual reality (VR) technologies. Virtual reality technologies allow people to visualize and interact with computer-generated 3D objects: various 3D objects mimic the real world, and VR technologies enable real-time simulations in this world (Burdea, 1999). Adopting virtual reality in education has become extremely popular in recent years, and several educational applications have successfully employed VR learning environments (VRLEs) (Chittaro & Ranon, 2007; John, 2007; Monahan, McArdle & Bertolotto, 2008; Pan, Cheok, Yang, Zhu & Shi, 2006; Rauch, 2007). Traditional VR systems carry high costs due to expensive high-end development hardware and software and specialized control hardware (e.g. head-mounted displays, projectors, 3D I/O devices). The high cost of VR hardware has been the barrier that keeps people from adopting VR in virtual learning systems. Fortunately, with advanced computer graphics technologies and new software languages, VR multimedia content can now be easily implemented and rendered.
In contrast to traditional virtual reality, users only need a basic personal computer and a browser plug-in to interact with the virtual environment (Chittaro & Ranon, 2007). With these new technologies, we can now achieve the Educational Virtual Environment (EVE). Such a 3D virtual world is used to support cooperative tasks or social role-playing. By using virtual avatars, which are 3D human-shaped models, users can increase the sense of reality and show their identities, attendance status, locations, current actions and so on (Joslin et al., 2004). Through avatars, users can interact with each other in the virtual world; they can also communicate through other types of media such as voice, images and text (Benford, Greenhalgh, Rodden & Pycock, 2001).

In our research, we designed and built a medical virtual learning system by deploying VR and computer graphics technologies. This paper presents the software product of our research, the VR teaching hospital (VR-TH). Our learning system demonstrates the concepts of EVE and shows how current 3D technologies can implement an inexpensive learning environment for medical education. In the remainder of this paper, we first present background information on current VR technologies and the features of virtual learning environments. The following section presents the architecture and features of our system, along with its functionality and some screenshots. Finally, we conclude with a brief summary.

2. Background

To help readers understand the VR-TH learning system, this section supplies some background information. First, VR features and current VR technologies are briefly reviewed. Then, the benefits of a role-playing based virtual learning environment are presented. Finally, we discuss how such benefits can be realized with VR technologies.

2.1 Features and types of VR technologies

Burdea & Coiffet (2003) defined virtual reality as I3, for "Immersion-Interaction-Imagination". Virtual reality (VR) is understood as the use of 3D graphics systems in combination with various interface devices to provide the effect of immersion in an interactive virtual environment (Pan et al., 2006). For users to interact with VR worlds, interfaces must be specially designed and real-time feedback must be offered. Sherman & Craig (2003) classified immersion into mental immersion and physical (or sensory) immersion; both types play an important part in creating a successful personal experience with a VR world. The visual, auditory, or haptic devices that establish physical immersion respond as the user moves through the scene (Sherman & Craig, 2003). Users interpret visual, auditory, and haptic cues to gather information while using their proprioceptive systems to navigate and control objects in the synthetic environment, thereby accomplishing physical immersion. Mental immersion, on the other hand, refers to the "state of being deeply engaged" within a VR environment (Sherman & Craig, 2003, p. 7). For example, if a VR world is designed for entertainment, success in mental immersion is measured by how involved the user becomes (Sherman & Craig, 2003).
Educators can therefore use VR's immersion to induce learners to engage in learning activities (Hanson & Shelton, 2008). Another feature of virtual reality is real-time interactivity. The ability to provide highly interactive learning experiences is very important and valuable. A virtual reality system detects a user's input and responds instantly, so users immediately see on-screen changes that result from their commands. Moreover, users not only see and manipulate 3D objects on the screen; they can also touch and feel them through other human sensory channels (Burdea, 1999). Virtual reality not only allows an immersive and interactive learning environment, it also facilitates problem solving by stimulating imagination. VR is especially helpful for problems that require high imagination and problem-solving ability. Jonassen (2000) considered that technologies have intrinsic properties and act as cognitive tools that help learners consciously elaborate on what they are thinking and engage in meaningful learning. A VR environment can therefore trigger the mind's capacity to experience and imagine, in a creative sense, nonexistent things. In short, VR is well suited to conveying difficult abstract concepts because of its visualization abilities (Burdea & Coiffet, 2003).

2.2 Development tool selection

Programming language and platform selection is critical. The language chosen must have libraries, whether built-in or developed by third parties, to realize the features discussed above. At the beginning of this research we conducted a quick survey of several language and tool candidates. Table 1 lists the advantages and disadvantages of these candidates.

Second Life
  Advantages:
  - A large set of tools is available to users. A library provides a rich set of objects, textures and scripts; these objects can be applied directly or serve as templates that users can continue improving upon.
  - Allows fast and easy integrations.
  - Flexible and expandable. People can expand or update their community easily.
  Disadvantages:
  - A proprietary platform. Private servers require a license fee.
  - Distractions. There are numerous kinds of communities in Second Life, and not all of them provide educational services. Learners may be distracted by communities that provide entertainment services and forget their original goal.
  - Second Life is a big virtual space and, just like a real society, griefing, pranksters and spam happen every day.
  - Needs a relatively large viewer application.

Flex / ActionScript / PV3D
  Advantages:
  - Flash Player has the highest penetration rate of any browser plug-in.
  - Backward compatibility and easy integration. A Flash application can integrate with, or be integrated into, existing Flash/HTML learning systems.
  - Good multimedia support.
  - Relatively easier to implement than C++ or Java.
  Disadvantages:
  - Limited support for 3D graphics acceleration; powerful computer systems are needed.
  - The PV3D API has memory-leak problems.

C++ / OpenGL / DirectX
  Advantages:
  - Well supported by 3D graphics accelerators; efficient graphics rendering pipeline.
  - Programmers have the most control over the program and can create their own effects, filters, special animations and other tricks.
  - (DirectX) Rich multimedia, networking and I/O support.
  Disadvantages:
  - (OpenGL) No native sound or network support.
  - Relatively difficult to code.
  - (DirectX) A proprietary platform; systems developed with DirectX may have issues running under non-Windows operating systems.
  - Good development tools (e.g. Visual Studio) are expensive.

Java / Xj3D
  Advantages:
  - Good network support; cross-platform.
  - Animation libraries and templates are available.
  - Free development tools and libraries.
  Disadvantages:
  - Native mesh data compression is not supported. In fact, the standard XML-based mesh file is even bigger than uncompressed formats such as SMF or PLY.

Virtools
  Advantages:
  - Relatively simple to build with; source projects are more intuitive to understand because Virtools project components are graphical, as opposed to textual in C++/Java.
  - Less programming skill required.
  - Provides project preview functionality: programmers can see scenes or effects immediately.
  Disadvantages:
  - Relatively low control over the application. Programmers may only use built-in components; custom scripts are possible but lose the advantage of simplicity.
  - A proprietary platform with limited resources.
  - An external plug-in is needed, and only Windows and MacOS are currently supported.
  - Development software is expensive.

Table 1: Development tool comparison.

The platform we chose is Flash/ActionScript. According to a survey conducted by Adobe Systems, Flash Player is the most widely installed browser plug-in; with an average penetration rate of 98% (statistics for version 9), it can be treated as standard software on every PC. This extremely high compatibility, together with good native network support, multimedia support, and cross-platform operation, makes Flash/ActionScript the best candidate.

2.3 Role-playing based virtual learning environment

Role-playing is one way to engage in situated learning. Studies have shown that the immersive environments provided by role-playing or situated-learning applications create a stronger sense of presence, which motivates learners and thereby causes them to process course materials more deeply. In addition, role-playing learning involves group interaction and the development of social skills. Shih and Yang (2008) pointed out that learners prefer role-playing based learning because a role-playing game provides a context and offers opportunities to act as other roles. Younger learners are very familiar with 3D graphical representations of characters, called avatars (Burdea & Coiffet, 2003; Turkle, 2007). Learners can act as a certain character in order to better understand its way of life (Burdea & Coiffet, 2003), and they can perceive others within the simulation through avatars (Sherman & Craig, 2003). Pan et al. (2006) have shown that when learners feel safe in a virtual environment, they will express what they think and feel through their virtual characters, which stimulates their creativity and imagination. A role-playing based virtual learning environment not only provides these benefits; it also provides relaxation and fun.

2.4 Realizing a role-playing virtual learning environment with VR

VR technologies provide excellent support for building virtual environments. The immersion and simulation features of VR enable a high level of realism and friendly interactivity, and offer life-like situated learning experiences. VR enables easy creation of specific characteristics and personalities (Holmes, 2007), which are essential in role-playing. Two example role-playing learning systems are worth mentioning, as they have successfully deployed VR technologies and achieved very effective virtual learning environments. The first is the Virtual Big Beef Creek (VBBC) project developed by Campbell, Collins, Hadaway, Hedley, and Stoermer (2002). VBBC simulates a real estuary in which users navigate, gather data and information, and learn about ocean science. Users can choose among different avatars to explore the underwater environment, and every avatar has a different viewpoint and navigation constraints. For example, a user who chooses to be a scientist moves as a human and obtains data such as water temperature, while one who chooses to be a bird can leave the sea-level plane and navigate from high above. The second is the Multiplayer Educational Gaming Application (MEGA) created by Annetta, Minogue, Holmes, and Cheng (2009). MEGA provides a virtual environment covering key genetics concepts for a high school biology course. It is a game-based educational tool that, in their study, was used to probe student understanding of genetics through a problem-based crime-scene-investigation mystery. Students navigate and interact in the virtual environment to solve the problem.

2.5 Guidelines for building a successful VR learning environment

A 3D VR learning environment may fail without activities and tasks designed with appropriate pedagogy to meet learners' needs (Shih & Yang, 2008).
Educational software designers and educators face the challenge of how to employ VR features in their 3D VR courses. To address this issue, Simons et al. (2000) give three steps for guided and experiential learning: (1) awaken learners' curiosity; (2) follow learners' curiosity and interest; (3) organize learning activities from which curiosity arises. This concept indicates that intrinsic motivation is seen as crucial in constructivist learning. Intrinsically motivated learners do not necessarily put more effort into their learning or spend more time on it, but their learning performance is qualitatively different. It is also important to note that the learners' perception is crucial.

3. The Virtual Reality Teaching Hospital

In accordance with the concepts and guidelines discussed above, we built a learning system that provides a role-playing based learning environment for medical education. The system is named the Virtual Reality Teaching Hospital (VR-TH). VR-TH allows individual students to take a role in the virtual hospital. Students can walk around, explore the scene, watch videos and images displayed on TV screens inside the virtual hospital, listen to the virtual instructor's lectures, and interact with organ objects and other characters.

3.1 Design of VR-TH

VR-TH is a Flash program written in ActionScript. 3D graphics components in VR-TH are drawn and rendered with the Papervision3D API. The design of VR-TH follows the object-oriented programming paradigm. Figure 1 presents a simplified class diagram of VR-TH; the actual program contains many more classes, attributes and methods, and elements irrelevant to our focus are omitted for space. The teaching hospital consists of seven scenes: the hospital lobby, a waiting room with five doors, and five independent consulting rooms for the circulatory, skeletal, digestive, urinary and respiratory systems respectively. Every scene class contains basic building elements such as walls and floors, as shown in the Lobby class in Fig. 1. Videos are available in the waiting room (the SelectRoom class) and the five consulting rooms. DAE (Collada) models representing doctors, instructors, peer students, furniture, medical equipment and 3D organs are contained in the consulting rooms. All scene classes inherit from the BuildingBase class, which in turn inherits from the PaperBase class. This organization is specially designed for efficient implementation in ActionScript and the Papervision3D API. BuildingBase contains fundamental I/O event handlers; these methods manage player control, player movement

and camera view. BuildingBase also keeps track of system state by maintaining a static Status object. The abstract PaperBase class handles the underlying 3D graphics foundations, including view-frustum setup, camera setup, renderer setup, and viewport-layer setup (an efficient alternative to z-sorting). Finally, PaperBase inherits from Flash's Sprite class, one of the core display-list classes. This relationship is significant for program expandability, which is briefly addressed in section 3.2.5.
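The inheritance chain just described can be sketched as follows. This is an illustrative sketch in TypeScript (a close relative of ActionScript 3), not the actual implementation: the class and member names follow Figure 1, while the bodies and viewport values are invented placeholders.

```typescript
// Shared system state exchanged between scenes (fields per Fig. 1).
class Status {
  currentScene = "Lobby";
  controlMode = "thirdPerson";
}

// Stand-in for Flash's Sprite, the core display-list class.
class Sprite {}

// Handles the underlying 3D graphics foundations (camera, renderer,
// viewport layers). The abstract init() represents scene setup.
abstract class PaperBase extends Sprite {
  protected viewportWidth = 800;   // illustrative values
  protected viewportHeight = 600;
  protected abstract init(): void;
}

// Adds fundamental I/O event handlers and keeps track of system state.
abstract class BuildingBase extends PaperBase {
  static currState: Status = new Status();
  protected keybEvntHandler(key: string): void { /* player movement */ }
  protected mouseEvntHandler(x: number, y: number): void { /* click-to-walk */ }
}

// Each scene supplies its own geometry, models and setup.
class Lobby extends BuildingBase {
  constructor() {
    super();
    this.init();
  }
  protected init(): void {
    BuildingBase.currState.currentScene = "Lobby";
  }
}
```

Because every scene ultimately descends from Sprite, any display object following this contract can be dropped into the hospital, which is what makes the expandability discussed later possible.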

Figure 1: VR-TH class diagram.

The entry point of the program is the VRHospital class, which is responsible for placing the correct scene class on the stage (a Flash concept: a container for the elements to be displayed) and for handling the stage-change events triggered by a scene change.

3.2 VR-TH features

The functionality implemented in VR-TH goes beyond the basic VR features discussed in the previous section. VR-TH provides learners with some unique features that make our system special and more useful.
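The scene-management role of VRHospital (section 3.1) can be pictured with a small sketch. This is TypeScript standing in for ActionScript 3, and the registry, factory functions and scene names are invented for illustration; in the real program the handler would swap display objects on the Flash stage.

```typescript
// Hypothetical sketch of VRHospital's stage-change handling: each scene
// registers a factory, and the handler swaps the active scene.
type Scene = { name: string };
type SceneFactory = () => Scene;

class VRHospital {
  private current: Scene | null = null;
  private factories = new Map<string, SceneFactory>();

  register(name: string, factory: SceneFactory): void {
    this.factories.set(name, factory);
  }

  // Analogous to StageChangeHandler(Event) in Figure 1.
  stageChangeHandler(targetScene: string): void {
    const factory = this.factories.get(targetScene);
    if (!factory) return;      // unknown scene: ignore the event
    this.current = factory();  // in Flash: removeChild(old), addChild(new)
  }

  get activeScene(): string | null {
    return this.current ? this.current.name : null;
  }
}
```

For example, registering a Lobby factory and dispatching a change to "Lobby" makes it the active scene, while an unknown target leaves the stage untouched.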

3.2.1 Multiple view modes and flexible controls

To provide high interactivity and enable multi-angle observation, considerable effort was spent on camera and character control management. Camera and character movements are associated because proper views are necessary for easy and intuitive character control. VR-TH allows users to control the character from a first-person or third-person perspective. In the first-person perspective, the view camera's orientation and location coincide with the virtual character's eyes; Fig. 2 depicts a sample view. In this perspective, users can command the character to walk, run, strafe, turn and circle around a focus (such as an organ to be observed). In the third-person perspective, the view camera is by default located near and above the character and looks at it; Figs. 3 and 4 depict this mode. In addition to controlling the player, users can also control the camera in this perspective: they can freely move the view camera to any location and change its orientation, or set it to circle around an object, which is particularly useful for organ observation. Besides manual control, VR-TH supplies smart player-movement functionality. Regardless of control mode, users can click any point in the scene to trigger automatic movement. For example, if the floor is clicked, the character walks to the clicked location. If a bulletin board or TV screen is clicked, the character walks to a location predetermined for best viewing and the camera then focuses on the board or screen. This operation is depicted in Figs. 3 to 5.
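The click-to-move behaviour above amounts to a simple dispatch on what was clicked. The following TypeScript sketch illustrates the idea; the fixture names and viewing-spot coordinates are invented, not the system's actual values.

```typescript
// Sketch of smart player movement: clicking a fixture walks the character
// to a predetermined best-viewing spot and focuses the camera on it;
// clicking the floor walks to the clicked point.
interface Vec2 { x: number; y: number; }

// Predetermined best-viewing spots for clickable fixtures (hypothetical).
const viewingSpots: Record<string, Vec2> = {
  tvScreen:      { x: 2,  y: 5 },
  bulletinBoard: { x: -3, y: 4 },
};

function resolveClick(
  hitObject: string | null,  // name of the clicked fixture, or null for floor
  point: Vec2                // clicked location on the floor
): { walkTo: Vec2; focus: string | null } {
  if (hitObject !== null && hitObject in viewingSpots) {
    // A fixture was clicked: walk to its viewing spot and focus on it.
    return { walkTo: viewingSpots[hitObject], focus: hitObject };
  }
  // Plain floor click: walk there, keep the current camera focus free.
  return { walkTo: point, focus: null };
}
```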

Figure 2: First-person perspective of the lobby.

Figure 3: Learner is initially watching TV.

Figure 4: Clicking bulletin board triggers automatic movement. Learner walks to a predetermined location and faces bulletin board.

Figure 5: A screenshot from the first-person perspective. As shown, this location gives the best reading position.

3.2.2 Rich multimedia course materials

VR-TH is a multimedia application that delivers information to users through different media. Course materials are presented in the form of text, images, sound, videos, and 3D human organ models.

3.2.3 Interactive course materials

VR-TH offers dynamic 3D organ models and interactive instructors. The appearance, position, scale and orientation of a 3D organ model, as well as the gestures and movements of the instructor, change according to the lecture content. For example, as the instructor teaches the ventricles and atria, the heart model appears and rotates into place to give learners a clear view of them, as depicted in Figs. 7 and 8. As the lecture progresses to the arteries and veins, the heart model rotates and zooms to show these parts, as depicted in Fig. 9.
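The coupling between lecture progress and model pose can be pictured as a cue-to-pose lookup. This TypeScript sketch is illustrative only; the cue names and rotation/scale values are invented, not those used in VR-TH.

```typescript
// Sketch of driving the heart model from lecture cues: each cue maps to
// a target pose for the model, applied as the lecture reaches that point.
interface Pose { rotationY: number; scale: number; }

const heartPoses: Record<string, Pose> = {
  ventriclesAndAtria: { rotationY: 45,  scale: 1.0 }, // rotate for a clear view
  arteriesAndVeins:   { rotationY: 180, scale: 1.5 }, // rotate and zoom in
};

// Returns the pose for the current cue, or a default front view.
function poseForCue(cue: string): Pose {
  return heartPoses[cue] ?? { rotationY: 0, scale: 1.0 };
}
```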

Figure 6: To begin a lesson, the learner walks to a proper spot, either manually or automatically via VR-TH.

Figure 7: A doctor walks in and begins the lecture. He also speaks and makes gestures. A heart model appears when necessary (see Fig. 8).

Figure 8: The doctor is introducing the ventricles and atria. The heart model appears and is then oriented to give learners an appropriate view.

Figure 9: The doctor is introducing the cardiovascular system. The heart model is translated and rotated to give learners an appropriate view.

3.2.4 No setup overhead

Thanks to its implementation and the chosen development platform, VR-TH requires no installation or setup, whereas quite a few VR applications require program installation and complex system-environment setup. In VR-TH, graphics settings, audio settings and key mappings are handled automatically by the Flash player, so users need not worry about technical issues.

3.2.5 Expandability

Scenes in VR-TH are highly independent and communicate through one simple static public object (the Status class shown in Fig. 1). This design makes adding scenes easy. Further, every scene is inherently a Sprite (section 3.1), meaning that any Sprite-type object can be recognized by VR-TH. Hence, VR-TH can take most Flash movies, provided that appropriate stage-change event handlers are set up. To build a custom 3D scene for VR-TH, one only needs to understand the Status class, which is less than 20 lines long. People who are not familiar with Papervision3D or 3D graphics can use another 3D API, or simply build 2D Flash course materials and plug them into VR-TH.

3.2.6 High compatibility and more expandability

The VR-TH program is packaged as a single Flash movie (a .swf file). In addition to being loaded by the Flash player, this .swf file can be embedded in web pages, PowerPoint slides and even Java programs. This feature leads to further expandability: instructors do not have to design new Flash movies in order to extend the learning system, and institutions can easily merge VR-TH into their existing learning systems or learning management systems such as Blackboard or WebCT. This level of compatibility cannot be achieved by C++, Java or other proprietary-platform based systems.

3.2.7 Multiple role selection

As VR-TH is a role-playing based virtual reality learning system, it provides multiple characters for learners to play.

3.3 Screenshots

Additional screenshots are presented to help readers gain a better understanding of VR-TH.

Figure 10: The learner interacts with a pharmacist. Dialogue is played through the computer speakers.

Figure 11: The waiting room. Each door leads to a different consulting room.

Figure 12: The user enters the consulting room for the circulatory system.

Figure 13: A doctor is lecturing on heart diseases. In addition to the lecture slides shown, he speaks (through the computer speakers) and makes gestures.

Figure 14: Learner interacts with the doctor.

Figure 15: A wide angle view of hospital lobby.

4. Conclusion

VR-TH serves as an example of building a role-playing based learning environment for medical education using inexpensive VR technologies. The learning system has the advantages of low development cost, cross-platform operation, high expandability and high compatibility. VR-TH provides an immersive virtual environment in which students learn about human organs: they control the character and view camera, watch and listen to lecture content, and explore and observe human organs. Learners' curiosity and interest are stimulated through realistic models, immersive environments and interesting course content. Future improvements will include opening up more characters for learners to play, allowing collaborative tasks (such as surgery), and refining course content. We also believe it will be beneficial to apply VR technologies to education in other disciplines.

References

Annetta, L. A., Minogue, J., Holmes, S. Y., & Cheng, M. T. (2009). Investigating the impact of video games on high school students' engagement and learning about genetics. Computers & Education, 53 (9), 74-85.

Burdea, G. C. (1999). Haptic feedback for virtual reality. Keynote address, Proceedings of the International Workshop on Virtual Prototyping, Laval, France, 87-96.

Burdea, G. C., & Coiffet, P. (2003). Virtual Reality Technology (2nd ed.). New York: John Wiley & Sons.

Campbell, B., Collins, P., Hadaway, H., Hedley, N., & Stoermer, M. (2002). Web3D in ocean science learning environments: Virtual Big Beef Creek. Proceedings of the 7th International Conference on 3D Web Technology, Tempe, Arizona, USA, 85-91.

Chittaro, L., & Ranon, R. (2007). Web3D technologies in learning, education and training: Motivations, issues, opportunities. Computers & Education, 49 (1), 3-18.

Hanson, K., & Shelton, B. E. (2008). Design and development of virtual reality: Analysis of challenges faced by educators. Educational Technology & Society, 11 (1), 118-131.

Holmes, J. (2007). Designing agents to support learning by explaining. Computers & Education, 48 (4), 523-547.

John, N. W. (2007). The impact of Web3D technologies on medical education and training. Computers & Education, 49 (1), 19-31.

Jonassen, D. H. (2000). Transforming learning with technology: Beyond modernism and post-modernism, or whoever controls the technology creates the reality. Educational Technology, 40 (2), 21-25.

Monahan, T., McArdle, G., & Bertolotto, M. (2008). Virtual reality for collaborative e-learning. Computers & Education, 50 (4), 1339-1353.

Pan, Z., Cheok, A. D., Yang, H., Zhu, J., & Shi, J. (2006). Virtual reality and mixed reality for virtual learning environments. Computers & Graphics, 30 (1), 20-28.

Rauch, U. (2007). Who owns this space anyway? The Arts 3D VL Metaverse as a network of imagination. Proceedings of ED-MEDIA 2007, Vancouver, Canada, 4249-4253.

Sherman, W. R., & Craig, A. B. (2003). Understanding Virtual Reality. New York: Morgan Kaufmann.

Shih, Y.-C., & Yang, M.-T. (2008). A collaborative virtual environment for situated language learning using VEC3D. Educational Technology & Society, 11 (1), 56-68.

Simons, R.-J., van der Linden, J., & Duffy, T. (2000). New Learning. Dordrecht, The Netherlands: Kluwer Academic.

Turkle, S. (2007, August). Constructions and reconstructions of self in virtual reality: Playing in the MUDs. http://web.mit.edu/sturkle/www/constructions.html (accessed August 2007).
