Mobile Learning for All

www.rcetj.org ISSN 1948-075X

Volume 6, Number 1 Spring 2010

Edited by:
Mark van ‘t Hooft, Editor
A. Quinn Denzer, Managing Editor

Special Issue: Handheld Learning 2009 Research Strand Papers

Journal of the Research Center for Educational Technology (RCET) Vol. 6, No. 1, Spring 2010

Editor: Mark van ‘t Hooft, PhD
Managing Editor: A. Quinn Denzer

Advisory Board

Joseph Bowman, Ph.D., State University at Albany
Cheryl Lemke, Metiri Group
Rosemary Du Mont, Kent State University
Robert Muffoletto, Ph.D., Appalachian State University
Ricki Goldman, Ph.D., NYU
Elliot Soloway, Ph.D., University of Michigan
Aliya Holmes, St. John's University

Review Board

Kadee Anstadt, Perrysburg City Schools
Savilla Banister, Bowling Green State University
William Bauer, Case Western Reserve University
Albert Ingram, Kent State University
John Jewell, College of Wooster
Jan Kelly, Mogadore Local Schools
Cindy Kovalik, Kent State University
Annette Kratcoski, Kent State University
Mary Lang, Coleman Foundation
Mary MacKay, Wake County Public School System
Theresa Minick, Kent State University
Jason Schenker, Kent State University
Elizabeth Shevock, Kent State University
Chris Simonavice, Florida State University
Karen Swan, University of Illinois, Springfield
Leonard Trujillo, East Carolina University
Mark van ‘t Hooft, Kent State University
Maggie Veres, Wright State University
Yin Zhang, Kent State University

The Journal for the Research Center for Educational Technology is published twice a year by RCET (http://www.rcet.org). It provides a multimedia forum for the advancement of scholarly work on the effects of technology on teaching and learning. This online journal (http://www.rcetj.org) seeks to provide unique avenues for the dissemination of knowledge within the field of educational technology consistent with new and emergent pedagogical possibilities. In particular, journal articles are encouraged to include video and sound files as reference or evidence, links to data, illustrative animations, photographs, etc. The journal publishes the original, refereed work of researchers and practitioners twice a year in multimedia electronic format. It is distributed free of charge over the World Wide Web under the Creative Commons License (Attribution-Noncommercial-No Derivative Works 3.0 United States) to promote dialogue, research, and grounded practice.


Volume 6, Number 1 Spring 2010

Introduction to the Special Issue Graham Brown-Martin

1

Long Papers

Will Student Devices Deliver Innovation, Inclusion, and Transformation? John Traxler

3

A Classification of M-Learning Applications from a Usability Perspective Robin Deegan and Paul Rothwell

16

Mobile Devices as ‘Boundary Objects’ on Field Trips Nicola Beddall-Hill and Jonathan Raper

28

Mobile Learning at Abilene Christian University: Successes, Challenges, and Results from Year One Scott Perkins and George Saltsman

47

Using Handheld Technologies for Student Support: A Model Jane Lunsford

55

Short Papers

Further Development of the Context Categories of a Mobile Learning Framework Phil Marston and Sarah Cornelius

70

Combining Analogue Realities and Digital Truths: Teaching Kids How to Hold Productive Learning Conversations Using Pictochat on the Nintendo DS Karl Royle, Clair Jenkins, and Julie Nickless

76

Mobile Learning for All Marco Arrigo and Giovanni Ciprì

94


Mobilizing The Open University: Case Studies in Strategic Mobile Development Rhodri Thomas

103

Mobile Technology as a Mechanism for Delivering Improved Quality of Life Andy Pulman

111

A Novel, Image-Based, Voting Tool Based on Handheld Devices Peter van Ooijen and André Broekema

122

Implications of 4G Connectivity Related to M-Learning Contexts Arturo Serrano Santoyo and Javier Organista-Sandoval

129

Fun, Fizzy and Formative Approaches to Assessment: Using Rapid Digital Feedback to Aid Learners' Progression Rowena Blair and Susan McLaren

136

Collaborative Mobile Knowledge Sharing for Language Learners Lyn Pemberton, Marcus Winter, and Sanaz Fallahkhair

144

The Open University Library in Your Pocket Keren Mills and Hassan Sheikh

149

MoLeaP, The Mobile Learning Project Database: A Pool for Projects and Tool for Systematic Description and Analysis of Mobile Learning Practice Judith Seipold and Norbert Pachler

157

Can Nintendo DS Consoles Be Used for Collaboration and Enquiry-Based Learning in Schools? Steve Bunce

172

Towards An Intelligent Learning System for the Natural Born Cyborg Deb Polson and Colleen Morgan

185


RCETJ 6 (1), 94-102

Mobile Learning for All

Marco Arrigo and Giovanni Ciprì
Italian National Research Council - Institute for Educational Technology

Abstract

This paper presents research on accessibility design for a mobile learning activity carried out at the Italian National Research Council, Institute for Educational Technology. In particular, we introduce some considerations about the methodology and the design steps used to build educational tools on mobile devices that are fully accessible to students with special needs using a compact screen reader on a Smartphone. Briefly, we outline the common problems of accessing the services and information of an online learning management system through a Smartphone, and then we introduce a mobile learning environment, Accessible Mobile Learning (AMobiLe), which we have designed with specific features for visually impaired students. One of the main aims of our research is to explore and evaluate ways of using mobile devices to stimulate collaborative learning, as well as to remove barriers for disabled students in order to reduce the digital divide.

Keywords

Mobile Learning; Accessibility; Design for All; Inclusion; Multimodal Interface

Introduction

Information and Communication Technologies (ICT) have transformed the world we live in. They can provide significant economic benefits by increasing productivity and attracting substantial financial investment. In today's society, where information plays an important role, being able to fully use ICT opens up opportunities in people's everyday lives. Thus, access to ICT is very important in order to participate fully in the Information Age and take advantage of what it has to offer. Consequently, people need not only more powerful computers to benefit from the pervasive presence of ICT, but also systems in which Human-Computer Interaction (HCI) is well designed with respect to interaction modality. In the literature, multimodality has been studied in different ways.
First, although multimedia is often associated with multimodality (Carbonell, 2006), it is important to distinguish between them: multimodality concerns the ways in which we input information, while multimedia concerns the simultaneous presence of different types of output (audio, video, etc.). Moreno and Mayer (2007) describe another interesting aspect of multimodality in didactic contexts, where learning can be empowered when multiple perceptual channels are involved. Our paper focuses on multimodality in the traditional HCI sense: communication with computer systems through the human perceptual and input modalities commonly used to interact with the real world.

Although improving the quality of HCI can benefit all users, it is far more important for people with special needs. Despite the reported benefits of ICT, according to Kaye (2000) the use of these technologies by people with disabilities remains relatively low. Most of the barriers to the use of ICT are connected to the issue of access in terms of system interaction. In order to improve the accessibility of these technologies, the interface design can become multimodal. Moreover, when a multimodal design is applied in conjunction with new mobile devices and assistive technologies, new application scenarios open up, particularly in educational contexts. Wireless technologies support new learning experiences that would be impossible in a desktop and/or wired environment. In fact, mobile devices offer an opportunity for learning on the go. They also add new educational opportunities because they are personal, portable, and permit new forms of interaction between everyone involved in the learning process and their respective surrounding environments.

In the literature, there are several studies concerning accessibility through mobile devices in learning contexts. For example, the MOBIlearn project (Lonsdale et al., 2004) aims to improve access to knowledge for a large number of users, including people with special needs, by giving them ubiquitous access to appropriate learning objects. Using innovative paradigms and interfaces, the authors provide information and services by linking a variety of media (e.g. text, video, or prerecorded audio). The MotFal project (Malliou, Miliarakis, Savvas, Sotiriou, & Stratakis, 2004) is a joint initiative of pedagogical and technological experts, educators, and psychologists to investigate the possibilities of using mobile platforms with Internet access for educational purposes at the school level. This project was developed with a strict level of accessibility. While using a PDA connected to the Internet, a student visiting a historical building can access supplementary information about its artefacts: s/he can access videos, teacher-predefined historical reconstructions, or surf freely on the Internet to look for related web sites.
Barbieri, Bianchi, Carella, Ferra, and Sbattella (2005) use a set of system interfaces (vocal, gestural, and tactile) to overcome barriers to accessibility. They have designed an e-learning platform to support users with different disabilities (blind, visually impaired, deaf, and dyslexic students). The AccessSight project (Klante, Krosche, & Boll, 2004) proposes a system that supports both blind and sighted users in a tourist experience, providing the same information to each user. To achieve this, the authors present tools that transform the data related to a visit so that they are presented in the modality the tourist needs. Arato, Juhasz, Blenkhorn, Evans, and Evreinov (2004) introduce the Java-Powered Braille Slate Talker, a "new" device that allows Braille input for visually impaired people. The proposed system is based on an ordinary handheld device with a fixed-layout plastic guide placed over the touch screen; through the transparent plastic film they have designed an accessible interface to control a set of multimedia device tools. Other interesting work concerning learning accessibility on mobile devices is CHAT - Cultural Heritage fruition & e-learning applications of new Advanced (multimodal) Technologies (Ardito, Pederson, & Costabile, 2006; Ardito, Pederson, Costabile, & Lanzilotti, 2006), a software infrastructure that aims to provide adaptable learning services based on the user's preferred way of interacting as well as the physical context.

In this paper we introduce AMobiLe (Accessible Mobile Learning), a fully accessible online environment for mobile learning. Through AMobiLe, we investigate and assess ways of using mobile devices to support all students in their didactic activities, with an empowered multimodal interface system.
In particular, we combine a well-designed GUI (in conformity with accessibility standards) with a 'live' Text To Speech (TTS) engine to improve the learning experience and offer significant opportunities to overcome some of the learning barriers. In the next section we present the AMobiLe system, describing its features as well as the design methodology used. This is followed by a description of the multimodal interface of the system, highlighting its accessibility features, as well as our conclusions.

The AMobiLe System

According to Universal Design principles, a well-designed system must be useful and marketable to people with diverse abilities; it must accommodate a wide range of individual preferences and abilities, and it must be easy to understand, regardless of the student's experience, knowledge, language skills, or level of concentration. Following these guidelines, we designed the Accessible Mobile Learning (AMobiLe) environment, an online environment for mobile learning with specific features for disabled students. We developed AMobiLe to support disabled students in their learning activities during on-site experiences. One of the main aims of the project is to explore and evaluate ways of using mobile devices to stimulate collaborative learning, as well as to remove some of the barriers experienced by disabled students in order to reduce the digital divide.

The project focuses particularly on student mobility and on contextualized information. Mobility means being within and moving around the places that are the object of study, while contextualization is also important because a student's geographical position changes the learning context and, consequently, that student's learning experiences. Moreover, giving this opportunity to all students means enabling each one of them to achieve social integration.

In the AMobiLe project we use Smartphones equipped with GPS in order to link all activities carried out by the students to a specific location inside an area of interest. In particular, the students can access the AMobiLe system both through desktop computers, when they are in the classroom or at home, and through a Smartphone with GPS during on-site learning activities. Through the AMobiLe system, teachers design learning activities for their students and then define the points of interest (POIs) correlated to each learning activity. A learning activity consists of a set of questions about historical buildings and/or art styles. Thus, during their visiting experience, students have to answer the questions prepared by their teachers. In the AMobiLe system the students gather information to carry out the learning activities.
Therefore, when students visit a site, they can use mobile devices to gather textual notes, photos, and audio recordings. We designed a Mobile Note Tool so that students can take multimedia notes, depending on their abilities and/or preferences. Moreover, during the on-site experience, the user is also supported in his/her learning activities by a vocal navigator tool that constantly communicates (visually as well as by TTS) his/her position on the map, the distance from each learning activity POI, and the position of other students who have accessed AMobiLe on a mobile device. When the notes are published in the system, they are collected in an online space.

The notes can be used in two ways: to build learning activity hypermedia, and/or to build a personal student blog for a specific mobile learning experience. In the first case, the system automatically selects some of the notes that the students published online in AMobiLe; the selected notes can then be synthesized into a hypermedia artifact that represents the best answer for a learning activity. Therefore, the main issue in this case is how to choose the notes. Figure 1 illustrates an example of the process we used. First of all, the POI area is divided into rectangular zones; we then look for the zone with the maximum number of answers (notes) for a particular teacher question. These zones, clustered by the learning activity questions, are the "best answer zones" for the students who have had the mobile learning experience. Thus, these zones seem to be more effective representations for achieving the intended learning outcomes. The next step is to choose the subset of notes in each best answer zone that is most relevant for the corresponding question.
To achieve this, we use the following two criteria: heterogeneous notes (different media types: textual, audio, photo), but not more than one for each student; and as many different authors as possible. All of the above criteria are used only to build the initial synthesis of the hypermedia. In fact, the process of choosing the most representative answers should be as democratic as possible. To bring this about, we have designed a tool that can be used to modify the most representative answer sets. In particular, students can vote for the notes they think are the most relevant. The votes are then used to recalculate the most representative answer set for each learning task. Consequently, notes not selected in the first automatic compilation can become part of subsequent versions of the synthesized learning task hypermedia. In addition, through the automatic generation of these hypermedia, the notes allow the students to carry out the learning tasks and support impaired students in problem-solving activities.
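The selection criteria and the vote-driven recalculation described above can be sketched as follows. This is only an illustrative sketch: the Note class, its field names, and the cap of three notes per answer set are our own assumptions, not details taken from the AMobiLe implementation.

```python
# Hypothetical sketch of the note-selection criteria: heterogeneous
# media, at most one note per author, and (after voting) a preference
# for the most-voted notes. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Note:
    author: str       # student who published the note
    media: str        # "text", "audio", or "photo"
    votes: int = 0    # votes received from classmates

def select_best_answer_set(zone_notes, max_notes=3):
    """Pick a heterogeneous subset of the notes in a best-answer zone."""
    ranked = sorted(zone_notes, key=lambda n: n.votes, reverse=True)
    chosen, seen_authors, seen_media = [], set(), set()
    # First pass: favour notes that add both a new author and a new medium.
    for note in ranked:
        if note.author in seen_authors or note.media in seen_media:
            continue
        chosen.append(note)
        seen_authors.add(note.author)
        seen_media.add(note.media)
        if len(chosen) == max_notes:
            return chosen
    # Second pass: relax the media constraint, keep one note per author.
    for note in ranked:
        if note.author in seen_authors or len(chosen) == max_notes:
            continue
        chosen.append(note)
        seen_authors.add(note.author)
    return chosen
```

Re-running the selection after each voting round would let newly up-voted notes displace the initial automatic compilation, as the paper describes.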


Figure 1: Best Answer Zone Detection
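The zone-detection step illustrated in Figure 1 can be sketched as a simple grid count. The function name, the fixed cell size, and the coordinate layout below are our assumptions; the paper does not publish the algorithm's code.

```python
# Illustrative sketch of best-answer-zone detection (Figure 1).
# Assumption: the POI area is gridded into fixed-size rectangular
# cells and each published note carries the (x, y) position at which
# it was taken; none of these names come from the AMobiLe code.
from collections import Counter

def best_answer_zone(note_positions, origin, cell_size):
    """Return the grid zone holding the most notes for one teacher
    question, together with its note count."""
    counts = Counter()
    for x, y in note_positions:
        # Map the note position to the rectangular cell containing it.
        zone = (int((x - origin[0]) // cell_size),
                int((y - origin[1]) // cell_size))
        counts[zone] += 1
    return counts.most_common(1)[0]  # ((col, row), number_of_notes)
```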

Design of the Multimodal Interface

The research we present in this paper reflects considerations on accessibility issues we observed during two years of experimentation with another innovative mobile learning environment (the MoULe system) developed at the Italian National Research Council, Institute for Educational Technology. The MoULe system is an online environment for collaborative learning that enables educational activities, based on the exploration of a geographical place, to be carried out using Smartphones and portable devices (Arrigo et al., 2008; Gentile et al., 2007).

The AMobiLe prototype was designed to increase user accessibility. In order to reach a high level of interaction (approaching human-human interaction), and in accordance with multimodal interface guidelines (Reeves, Lai, Larson, & Oviatt, 2006) and Universal Design principles, the AMobiLe system provides both complementary and redundant information. The system produces information redundancy so that every graphical and audio element has a corresponding textual description. Since image perception is very difficult for the visually impaired and, likewise, sound perception is difficult for partially deaf people, it is essential to provide a good alternative text description; when we were not able to supply one, we removed the multimedia information. In general, the information in the AMobiLe system is available in at least two modalities.

Furthermore, we designed an adaptable multimodal interface that provides different ways of interacting, depending on user preferences as well as user abilities. In particular, the graphical interface was designed to fully interact with most commercial TTS (Text To Speech) software. The vocal output can support blind and vision-impaired students, especially when they first begin to use the application. Therefore, integrating a high-quality TTS engine that provides reliable access to all PDA applications, windows, and controls is one of the main issues to consider in order to increase accessibility. To this end, in our testing phase on the mobile devices we used Mobile Speak for Pocket PC (MSP), which we chose for its versatility, screen reader fidelity, and stability.

In general, for each screen view the system provides a content description and the available interaction controls. During our research we noticed that, for a blind user who accesses software with TTS support, the best way to select content and interact with the PDA is through the tabulation (TAB) key. In fact, every time the user changes the element focus, the screen reader describes the control type (button, input text box, etc.) and vocalizes its text name and the information associated with it. In this way the user can get a screen overview. Thus, it is very important to design a coherent tabulation order to support disabled people in their interaction with AMobiLe. However, it is only through experience that the user discovers which keys give the fastest access.

We tried to design a clear multimodal interface with simple ways of interacting, using straightforward language to provide information, so that the TTS output can be easily understood. Furthermore, in the graphical interface design we chose a color combination that maximizes the contrast between background and text, to increase legibility for people with visual impairments. The user can also choose his/her favorite color scheme from the preference menu. In addition, the user is notified of all application events (login, errors, warnings, etc.) through a visual message, an audio message, and/or audio feedback.
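The TAB-driven screen overview described above can be modelled compactly: each control carries a type, a name, and an associated description, and moving focus yields the utterance the screen reader vocalizes. The class, field names, and utterance format below are illustrative guesses, not the actual AMobiLe control set or MSP's exact phrasing.

```python
# A minimal model of tab-order navigation with TTS feedback.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Control:
    kind: str         # "button", "input text box", ...
    name: str         # visible label, also used as TTS text
    description: str  # extra information associated with the control

class TabOrder:
    def __init__(self, controls):
        self.controls = controls  # listed in a deliberate, coherent order
        self.focus = -1

    def next_utterance(self):
        """Advance focus with TAB and return what the TTS would speak."""
        self.focus = (self.focus + 1) % len(self.controls)
        c = self.controls[self.focus]
        return f"{c.kind}: {c.name}. {c.description}"
```

Designing the control list in a deliberate order is exactly the "coherent tabulation" requirement: a blind user builds the screen overview one utterance at a time, so the order must mirror the task flow.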

Figure 2: Interface of the Mobile Device Application

Since the PDA touch screen is difficult to use for people who have physical or sensory impairments, we decided to control only part of the touch screen in AMobiLe. We propose an accessible graphical schema on the screen, based on colored square zones, for easy interaction with the application (Figure 2a). This interaction schema is the same for all application windows; only the corresponding function changes, and we make sure that the action corresponding to each command is well described so that the user can choose the right function to reach his/her goal. Moreover, to make the colored squares easy to find, we applied a thin transparent plastic film over the screen with raised transparent rubber markers that match the locations of the colored squares (Figure 2b). In this way, we have created virtual hardware keys on the touch screen that reduce interaction errors and help blind individuals manage the application controls.

When the user has to input information into the system (e.g. to create a text note), s/he can choose from four different ways of writing. The first uses the hardware keyboards integrated into all Smartphone models; these are designed to be as accessible as possible, have tactile references, and are familiar to users of PC keyboards. The second uses the on-screen keyboards provided by the operating system; this modality is useful only for users who do not have severe visual disabilities (e.g. those with color blindness or partial sight). The other two modalities are provided by the MSP screen reader: "simulated keyboards," which use the PDA hardware keys, and "virtual keyboards," which are based on virtual screen keyboards to insert the characters. All four input modalities allow users to insert any letter, number, or symbol.

Web access to the system is based on ASP technology in the .NET framework and was designed to meet the strictest web accessibility standards, conforming to the highest accessibility level. Thus, AMobiLe is accessible with widely available standard browsers (e.g. Internet Explorer, Firefox, and Safari). In fact, all of the content published in the system is validated to meet the WAI AAA conformance level of the WCAG 1.0 (Web Content Accessibility Guidelines) standard. In our testing of the web interface, we used the Jaws screen reader to allow visually disabled people, and users who cannot otherwise access a GUI, to fully use the system. In particular, Jaws supports students during their navigation of and interaction with the system. Both students and teachers who access AMobiLe through the web interface (Figure 3) use the same entry point.
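Returning to the fixed square-zone touch scheme of Figure 2a: resolving a tap amounts to hit-testing its coordinates against a small table of rectangles, one per command, with everything outside those rectangles ignored. The zone layout, pixel sizes, and command names below are purely illustrative assumptions, not the real AMobiLe layout.

```python
# Sketch of the colored-square touch scheme: the controlled part of
# the screen is a few large rectangles, each bound to one command.
# Zone geometry and command names are hypothetical.

# (left, top, right, bottom) in screen pixels -> command name
ZONES = {
    (0,     0, 120, 120): "read current position",
    (120,   0, 240, 120): "new note",
    (0,   120, 120, 240): "list POIs",
    (120, 120, 240, 240): "confirm",
}

def command_for_tap(x, y):
    """Return the command under the tap, or None for uncontrolled areas."""
    for (left, top, right, bottom), command in ZONES.items():
        if left <= x < right and top <= y < bottom:
            return command
    return None
```

Because the zones are large, fixed, and aligned with the raised rubber markers on the plastic film, a blind user can locate them by touch, and stray taps outside the zones simply do nothing.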

Figure 3: The AMobiLe Web Interface

Furthermore, at login the system selects each user's profile and activates a set of permitted functions and information. Through the web access, the teacher can add a new learning activity or evaluate student activity. When s/he adds new content, s/he defines the learning activity and links to it the questions that the students have to answer to accomplish their educational task. Once the new learning activity is published, it becomes available to all registered students, whether they access the system through the web interface or use a mobile device. Moreover, the teacher can navigate and evaluate the hypermedia artifacts that the students produce during their learning activities. Finally, the teacher can observe the development of each class's hypermedia in order to evaluate students' contributions over time; this is a synthesis of the community knowledge-building process.

On the other hand, students who access the system using a web browser will find the learning activities that the teacher has specified for them. They can view and/or edit the hypermedia artifacts they have created for each learning activity in which they are involved. Furthermore, each student can participate in the building of the class hypermedia through a voting procedure, and, finally, s/he can view her/his classmates' hypermedia.

Testing

The design of the test phase is based on the assumption that the subjects involved do not only use the system but also play an active part in the creation of the framework. In fact, we are convinced that the real perception of a tool's usability belongs exclusively to the final user. For this reason, we designed two testing phases: the first is pre-testing with a limited number of users, and the second involves testing with students at a school. The pre-testing phase was held in spring 2009. It involved two visually impaired users and two sighted users who used the AMobiLe system to test usability, provide feedback on difficulties in using the vocal interface, report bugs, and suggest improvements. After reviewing the pre-testing results, we corrected several bugs and made changes to some parts of the program interface to improve usability. Since the AMobiLe system is still at a developmental stage, we cannot report any real findings in this paper. We are currently testing the system in a more formal learning context (December 2009 - April 2010), involving students from the Institute for the Blind in Palermo. The testing has been designed to allow both visually impaired and sighted students to have an on-site experience with a mobile device and work collaboratively on a learning task. Due to limited project funds we have only 12 Smartphones with GPS aerials and EDGE connections available; this was taken into account when we developed our testing methodology.
Conclusions

People with disabilities will benefit from the significant social, cultural, and economic benefits of ICT as long as information and services are designed appropriately. Multimodality can play an important role in improving the accessibility of emergent technologies such as mobile devices. In this paper we have presented AMobiLe, a fully accessible online environment for mobile learning. The system has been implemented as a multimodal software environment accessible both through desktop computers, used by students in the classroom or at home, and through mobile devices, used to support on-site learning activities. Moreover, the adoption of Universal Design principles means that all users can benefit from this system. In particular, the AMobiLe system can be used and accessed by students with special needs as well as those without, and all can provide interpretations according to their various points of view, using hypermedia notes. As reported by a partially blind user who tested the AMobiLe system:

"…from the point of view of users with impairments such as blindness, the AMobiLe is important because its accessibility allows blind users to participate fully in the learning activities of a class. Using GPS technology, the system could indicate whether at a point of interest there are accessibility supports, like 'Loges,' which are special paths designed to assist the mobility of the blind, or tactile maps with scale reproductions of historical buildings. The system indicates whether there are accessibility supports of this kind at points of interest. This system can also indicate public transport connections to the main points of interest or the distance from other users of the system by means of an integrated vocal navigator. This system can also be used by choosing between different colours on the screen depending on the needs of the user."
Careful management of the touch screen means that a blind student cannot accidentally activate a function he does not require. Furthermore, our experience has led us to the conviction that adopting assistive technologies, as well as following specific design guidelines, can play an important role in the integration and social rehabilitation of people with sensory dysfunctions and improve their personal autonomy. Thus, we have presented considerations and proposed techniques that we have found useful in increasing the accessibility of a mobile learning environment for people with special needs, in particular the visually impaired. We hope that some of the ideas presented here will stimulate other researchers to explore access strategies within their own work. In our opinion, multimodality and accessibility are essential for integration: multimodality overcomes the limitations of individual interfaces by combining raised touch-screen keys, GPS navigation, mobile phones and handheld devices for entering and consulting data, and different color options for those who suffer from color blindness.

References

Arato, A., Juhasz, Z., Blenkhorn, P., Evans, G., & Evreinov, G. (2004). Java-powered braille slate talker. In Proceedings of ICCHP 2004: Computers Helping People with Special Needs (pp. 506-513). Paris, France.

Ardito, C., Pederson, T., & Costabile, M. F. (2006). CHAT – Towards a general-purpose infrastructure for multimodal situation-adaptive user assistance. In T. Pederson, H. Pinto, M. Schmitz, C. Stahl, & L. Terrenghi (Eds.), Proceedings of the International Workshop on Modelling and Designing User Assistance in Intelligent Environments (MODIE 2006) (pp. 27-31). Saarbruecken, Germany: Saarland University. Retrieved from http://www.itu.dk/people/tped/pubs/ArditoEtALMODIE2006wspaper_w_confinfo.pdf

Ardito, C., Pederson, T., Costabile, M. F., & Lanzilotti, R. (2006). CHAT: Cultural heritage fruition & e-learning applications of new advanced (multimodal) technologies. In Proceedings of the National Event on Virtual Mobile Guides. Turin, Italy.

Arrigo, M., Gentile, M., Seta, L., Fulantelli, G., Di Giuseppe, O., Taibi, D., & Novara, G. (2008). Some consideration on a mobile learning experience in a secondary school. In J. Traxler, B. Riordan, & C. Dennet (Eds.), Proceedings of mLearn 2008: 7th International Conference on Mobile Learning (pp. 20-27). Wolverhampton, United Kingdom: University of Wolverhampton.

Barbieri, T., Bianchi, A., Carella, F., Ferra, M., & Sbattella, L. (2005). MultiAbile: A multimodal learning environment for the inclusion of impaired e-learners using tactile feedbacks, voice, gesturing, and text simplification. Paper presented at Assistive Technology Shaping the Future, AAATE 2005, Lille, France.

Carbonell, N. (2006). Ambient multimodality: Towards advancing computer accessibility and assisted living. Universal Access in the Information Society, 5, 96-104. Retrieved from http://www.springerlink.com/content/8018618424v8753v/fulltext.pdf

Gentile, M., Taibi, D., Seta, L., Arrigo, M., Fulantelli, G., Di Giuseppe, O., & Novara, G. (2007). Social knowledge building in a mobile learning environment. In Proceedings of the Second International Workshop on Mobile and Networking Technologies for Social Applications (MONET 2007). Vilamoura, Algarve, Portugal.

Kaye, H. S. (2000). Computer and Internet use among people with disabilities (Disability Statistics Report 13). Washington, DC: U.S. Department of Education, National Institute on Disability and Rehabilitation Research. Retrieved from http://dsc.ucsf.edu/pdf/report13.pdf

Klante, P., Krosche, J., & Boll, S. (2004). AccessSight: A multimodal location-aware mobile tourist information system. In Proceedings of ICCHP 2004: Computers Helping People with Special Needs (pp. 287-294). Paris, France.


Lonsdale, P., Baber, C., Sharples, M., Byrne, W., Arvanitis, T. N., Brundell, P., & Beale, R. (2004). Context awareness for MOBIlearn: Creating an engaging learning experience in an art museum. In J. Attewell & C. Savill-Smith (Eds.), Mobile learning anytime, anywhere: A book of papers from mLearn 2004 (pp. 115-118). London, United Kingdom: Learning and Skills Development Agency.

Malliou, E., Miliarakis, A., Savvas, S., Sotiriou, S., & Stratakis, M. (2004). The MOTFAL project: Mobile technologies for ad-hoc learning. In J. Attewell & C. Savill-Smith (Eds.), Mobile learning anytime, anywhere: A book of papers from mLearn 2004 (pp. 119-122). London, United Kingdom: Learning and Skills Development Agency.

Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19, 309-326.

Reeves, L. M., Lai, J., Larson, J. A., & Oviatt, S. (2006). Guidelines for multimodal user interface design. Communications of the ACM, 47(1), 57-59.
