
Technology and Disability 26 (2014) 211–219 DOI 10.3233/TAD-140420 IOS Press

Virtual mobility trainer for visually impaired people

Reinhard Koutny∗ and Klaus Miesenberger
Institute Integrated Study of the JKU, Linz, Austria

Abstract. This paper presents a prototype [1] of a location-based, context-aware system that supports blind and visually impaired people in improving their mobility skills, and in particular enhances traditional mobility training. The system supports annotating pre-defined routes with the information provided in standard mobility training sessions for blind people. Information given in a person-to-person session can thus be reused later and even shared with other people. A person who needs to know a certain route by heart can go back to the stored mobility training information to better remember it and learn to manage the route independently. The virtual mobility trainer thus makes the information provided by a human instructor available repeatedly, independent of time and tied to location. This paper presents a first prototype which allows the design of routes accessible to blind and visually impaired people so that they can be used for mobility training. Furthermore, the tool supports advanced orientation tasks that assist blind people in an unknown environment. The Digital Graffiti framework [9] was used as the underlying framework; it supports the required annotation of maps with virtual landmarks.

Keywords: Assistive technology, mobility training, navigation, localization

1. Introduction

Navigation and orientation are crucial activities that human beings need to accomplish daily. The usual way to perform these tasks involves the visual sense, which is used, e.g., to recognize known places and buildings, or signs and structures familiar to the person: the sign indicating a bus stop, or a store of a known supermarket chain. Vision is central to keeping control over every aspect of a trip. People with visual disabilities need support or alternatives for these techniques. Mobility training is a well-proven method that allows blind or visually impaired people to gain such alternative skills for independent and safe traveling. People learn to orient themselves along haptic (e.g. white cane) or acoustic cues in the environment. Practicing how to avoid dangerous situations is of utmost importance. A mobility and orientation instructor conducts these trainings, which is costly in terms of time and money.

∗ Corresponding author: Reinhard Koutny, Altenberger Straße 69, 4040 Linz, Austria. Tel.: +43 732 2468 3761; Fax: +43 732 246 23761; E-mail: [email protected].

In this paper, mobility is defined as the ability to physically move from one place to another. People who are mobile have to navigate from one place to another; therefore navigation is an important term as well, which includes the task of figuring out where one actually is, called positioning. However, one still needs to know where to go in order to travel successfully. In Austria about 7,800 people have a severe visual impairment which makes it impossible for them to use their eyes for navigation and orientation [10]. A well-proven approach to supporting people with visual disabilities in increasing their mobility skills is mobility training. Many individuals in this group use white canes to discover the immediate surrounding environment. However, the correct use of a white cane alone is not sufficient for adequate mobility. White canes can be used for micro-positioning, but mainly to avoid obstacles. The person still needs to know the environment and store it as a mental map: a mental image of routes, of objects such as buildings that help the person orient, and of obstacles blocking the way. This mental map is essential for every person whose mobility is limited by a lack of eyesight. Mobility training fuels the generation of mental maps and helps to strengthen the self-confidence of the person by re-practicing routes [1].

ISSN 1055-4181/14/$27.50 © 2014 – IOS Press and the authors. All rights reserved

2. State-of-the-art

Detailed research and studies show that there are several approaches dealing with this topic. However, traditional approaches like white canes or guide dogs aim at detecting and avoiding obstacles [12]. This still requires knowledge of the environment; the user would be lost without knowing his/her way from the current location to the destination. Obstacle detectors [7] are well-proven adaptations of the traditional approaches, but they do not cover a broader range of use. Environmental image converters [12] have downsides in terms of response time, due to the enormous processing power needed to transform a sequence of images into meaningful representations understandable by the user. Besides, it is still a big challenge to additionally communicate distance. Orientation and navigation approaches suffer from inaccurate or expensive positioning methods. Tag-based infrastructures [14] are costly and time-consuming to set up [12]. Teleassistance systems [1] unconditionally need a sighted human operator, which is costly and decreases the independence of the user. On the other hand, some of today's orientation and navigation approaches are quite cost-effective and can be operated on a smartphone. These devices are capable of positioning and of communicating with the user acoustically or via tactile signals. Usually they rely on GPS for positioning, which works fine outdoors and provides a cheap basis for navigation, but approaches of this kind have fundamental downsides too, especially when it comes to indoor operation or accuracy in general. Applications in this category include MyWay [14], a system that allows blind users to create routes solely based on GPS points, and Ariadne GPS [1], which provides information about the surroundings, like street names, and the distance to preselected coordinates.

3. Motivation

The goal is to provide a way to assist visually impaired people in practicing the routes which they recently learned during a mobility training lesson. For that, GPS-based approaches look most promising, since the lack of accuracy seems less important: localization does not need to be accurate to the meter, as the instructions from one waypoint to the next use cues involving the environment, like the curbs of the sidewalk. Indoors, the framework used as a basis implements positioning via Wi-Fi. On this basis, our "Virtual Mobility Trainer" [1] uses the hardware of today's smartphones to perform the positioning task and to communicate the required information in the form of acoustic and tactile signals to the blind or visually impaired user. In order to deliver a solution offering full tool support throughout the whole process, an easy-to-use editor for sighted people was developed as well, which allows the intuitive creation of routes for the target group of the "Virtual Mobility Trainer". Unlike applications which provide orientation and navigation features solely relying on GPS, this approach is based upon the process of classic mobility training. It is meant to simplify and support the process of remembering routes, which avoids several downsides of traditional GPS-based applications, mainly arising from their lack of accuracy and reliability, since it involves mobility trainers in creating routes, verifying their correctness and enriching them with essential hints on how to get from one waypoint to the next.

4. Conceptual solution

The "Virtual Mobility Trainer" [1] consists of two applications. The first one, called "Route Creator", allows a trainer or another qualified person to define the routes of a mobility training session. The second part is the "Virtual Mobility Training", an application which allows the blind or visually impaired user to train routes and explore the environment.

4.1. Storage of the location-based representation of the training

One basic objective was the definition of a system capable of storing the relevant information of a training session, reflecting the user's mental image of the surrounding environment. This model of the relevant training information uses the Digital Graffiti framework, which allows the definition of items tied to a certain geographical position. Besides the underlying technology, however, there are various issues and challenges which need to be taken into account. First of all, the model shall be flexible enough to store all the important information, even for various target groups.


There might be users who are interested in a different kind of information or level of detail. Some users might want to practice a route; others might want to explore their environment. It is crucial to store the information in an extensible way to meet the requirements and goals of every individual user. On the other hand, the system shall take the non-expert user by the hand and offer guidance which makes it easier for him or her to create new elements holding the information. Moreover, it is easier for a user who wants to do the training to remember a limited number of item types. Considering all these thoughts, it was decided that the model stores route information in separate item types and not as optional information. In particular, four item types were defined, each with a specific purpose and behavior. Items of every type share common features and kinds of stored information.

4.1.1. General item information

Every item has a name, which does not need to be unique; the motive for that rests upon the characteristics of Digital Graffiti items. Each item has two different ranges, defined by two radii, each with a different purpose. Consequently, an item can only describe a circular area. If an object, like a building, is elongated, its shape can be better approximated by two or three similar circular items holding the same name and description. The first of the two radii specifies at which distance the item is transferred from the server to the mobile device. From this moment on, the user can retrieve information about this element, like its orientation, distance or description. This first radius is called the visibility radius. The second one specifies when the user gets actively informed about the item. This radius is usually smaller and communicates the arrival at an area or object to the user.
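The two-radius model just described can be sketched as follows. This is a minimal illustration, not the Digital Graffiti API: class and field names are hypothetical, and the flat-earth distance approximation stands in for whatever the framework actually uses.

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    """Sketch of a location-based item with the two ranges described above."""
    name: str                    # need not be unique
    lat: float
    lon: float
    visibility_radius_m: float   # item is sent to the device inside this range
    popup_radius_m: float        # user is actively notified inside this range

    def distance_m(self, lat: float, lon: float) -> float:
        """Equirectangular approximation; adequate for ranges of a few hundred metres."""
        k = 111_320.0  # metres per degree of latitude (approx.)
        dx = (lon - self.lon) * k * math.cos(math.radians(self.lat))
        dy = (lat - self.lat) * k
        return math.hypot(dx, dy)

    def state(self, lat: float, lon: float) -> str:
        d = self.distance_m(lat, lon)
        if d <= self.popup_radius_m:
            return "notify"    # arrival is actively announced
        if d <= self.visibility_radius_m:
            return "visible"   # information retrievable on request
        return "hidden"
```

A user roughly 5 m from an item with a 10 m pop-up radius would be in the "notify" state; at 100 m the item would merely be "visible".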
The purpose is that the system either wants to warn the user of an object which might stand in his or her way, or the user has reached a virtual item and the system wants to announce this circumstance. This radius is called the pop-up radius. Since too many options and input possibilities lead to a more complex and less intuitive system, the visibility radius is hidden and set to a fixed, relatively high value, so that it spans a large area and the user can see items even if they are further away. The pop-up radius is, depending on the type of item, either editable or fixed.

4.1.2. Waypoint

The first item type is the waypoint. A waypoint is the basic item out of which a route is built. There is always one start waypoint, and every waypoint is part of exactly one route and has a certain position within it. A waypoint is not used for multiple routes, as it holds information on how to get to the next waypoint in the sequence, which is unique. All waypoints are capable of storing a name, a description and the route information. These navigation cues are meant to guide the user from one waypoint to the next; a cue could, for example, state that the user can orient himself or herself by following the curb of the sidewalk. The user can request the orientation towards the next waypoint at all times, which is communicated via spatial sound. This makes it simple for the user to perform the route training: he or she just has to follow the instructions stored in every waypoint. Since this location-based system uses GPS and every item is linked to a position, the system always knows where the user is and can provide the additional information that was stored manually in the location-based item. Therefore, the user is able to retrieve orientation and positioning information to make sure he or she is on the right way. This item type has a fixed pop-up radius.

4.1.3. Landmark

Blind or visually impaired people usually use, amongst other things, their well-trained hearing to orient themselves: a fountain, for instance, makes a characteristic noise and may be placed in the center of a square. However, besides acoustic landmarks, there are many other objects which can be used to determine the current position. For instance, one could identify a coffeehouse by its distinctive odor. These landmarks are an important component of traditional mobility training, not only for orientation but also to raise confidence that one is on the right track.
Therefore, the user who creates the model can define landmark items as well, indicating objects which can be identified by the blind or visually impaired user of the virtual mobility training. Landmarks do not need to be unique, and the pop-up radius is editable by the user who creates the model of the training session.

4.1.4. Obstacle

Another important thing for the user to know is the position of obstacles blocking his or her way. A garbage can standing in one's way, for example, could cause severe injuries. Even more critical are hanging or overhanging objects like mailboxes or branches: obstacles of this kind cannot be identified by white cane users and require additional guidance. Obstacle items can be placed at the position of such objects to warn the user against them. They have an editable pop-up radius, and the object does not need to be unique either, because there is probably more than one garbage can, and they need not be distinguished from one another.

4.1.5. Point of interest

Trainees might not only want to remember routes; they may also want to know more about their environment even if this is not crucial for getting from one place to another. For example, a person wants to get from his or her home to the nearest grocery store and has to cross a square on the way. The name of the square, and the fact that there is a quite important church at the square, might be interesting for this person even if he or she is not very religious. Normally, the person would need someone to tell him or her that there is a church, or would have to randomly ask passersby whether there are any interesting sights, buildings or places in the near surroundings. For that reason, it is possible to define items which are not strictly required for the route training but provide additional information for the interested trainee who wants to explore his or her environment. This type of item has an editable pop-up radius, which makes it possible to describe buildings, statues or larger areas like a football stadium. In addition, the point of interest item provides a category property, which allows the person who creates such an item to choose among categories like lecture room, restaurant, library, sight and many more. The trainee can retrieve this information too.

4.2. System to create routes

Another task is to provide a usable system to create these location-based items. There are various approaches to achieve this, and all have their strengths and weaknesses. An application running on a conventional desktop computer has the advantage of a big screen and a lot of processing power.
The downside is that the person who wants to create a new route or other items either needs to remember all objects and places relevant for later use, or has to take notes manually. The first option is neither user-friendly nor fail-safe: humans recognize important things more easily when they see them. If a person does a test walk to gather all the important information for the items, he or she would easily notice a garbage can blocking the way; without notes, the person would very likely have forgotten this obstacle by the time he or she creates the route at home. On the other hand, if the person takes notes, it is quite cumbersome to write the same thing twice, once during the test walk and once while defining the route. Another option would be a browser application, which could run both on a desktop computer and on the smartphone. However, browser apps still suffer from usability and performance issues. There are approaches which try to address these issues, like PhoneGap [1], but they still have weaknesses when it comes to multi-touch support for zooming. Furthermore, programmers would have to write device- or platform-dependent extensions to use features like spatial audio for orientation. Last but not least, a native application for smartphones would be an option too. It would utilize the smartphone hardware best, both in terms of performance and usability: native applications can use multi-touch capability and the native GUI, which makes it easy for the user to perform route creation the way he or she is used to operating any other application on the phone. Furthermore, a mobile application running on the same device as the training application increases the accuracy of the GPS positioning of the items. Depending on the surrounding buildings and environment, GPS positioning varies in precision, due to the error caused by the model of the geoid or the reflection of the GPS signal. These types of errors can be at least partly compensated by using a mobile application for route creation: if the person who creates the route walks the whole route on foot and creates items at their actual locations, the application can use the GPS location of the smartphone for the current item. Since it uses the same method of positioning as the training application, the effects of these error types are at least partly cancelled out.
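The error-cancellation argument can be illustrated numerically: a systematic GPS offset shifts both the item position recorded at creation time and the position measured during training, so it drops out of the relative distance between them. The coordinates and bias below are toy numbers in a flat 2-D frame, chosen only for illustration.

```python
import math

# True positions (metres, flat 2-D frame) and a shared systematic GPS bias.
true_item = (100.0, 50.0)
true_user = (103.0, 54.0)
bias = (7.5, -4.2)  # same offset affects creation and training measurements

recorded_item = (true_item[0] + bias[0], true_item[1] + bias[1])
measured_user = (true_user[0] + bias[0], true_user[1] + bias[1])

true_dist = math.dist(true_item, true_user)            # 5.0 m
apparent_dist = math.dist(recorded_item, measured_user)

# The common-mode bias cancels: the apparent distance equals the true one.
assert abs(apparent_dist - true_dist) < 1e-9
```

Only the error components shared by both measurements cancel this way; independent noise (e.g. multipath at training time) of course remains.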
Therefore, it was decided to develop a mobile route creator application which runs on the same platform as the training application and provides this feature. Nevertheless, the person creating the route is able to move the position of every item later if he or she wants to; for that reason, the route creator application is capable of loading existing annotations and routes for later modification.

4.3. Outdoor route training

The actual training application is about generating the mental map via repetition of the routes defined during route creation. As mentioned before, waypoints build up a route, and therefore the user can train a route by sequentially walking from one waypoint to the next. Furthermore, he or she should have the alternative of practicing only parts of the route, namely those parts he or she does not feel comfortable with. Therefore, every route training basically starts with the first waypoint, but the trainee always has the opportunity to switch forward or backward in the list of waypoints. This means that the user can organize the training completely flexibly. Every waypoint offers navigation cues which guide him or her from one waypoint to the next, but one can skip waypoints and retrieve the orientation and distance of an arbitrary waypoint anyway. The orientation is usually communicated intuitively via spatial sounds; however, for noisy environments the user has the opportunity to fall back on vibration signals. These do not provide the same amount of information, such as in which direction the user has to turn, but the user can still find the right direction, as the phone vibrates when the user faces towards the waypoint.

4.4. Orientation via spatial sound

In particular, the orientation via spatial sound works the following way. Whenever the user wants to check whether he or she is heading in the right direction, he or she can start the orientation mode. The trainee then needs to plug in stereo earphones and is meant to hold the smartphone pointing in the same direction as his or her line of sight. As the user turns, the volume of each channel changes separately with the user's orientation. For example, if the next waypoint is on the right-hand side, the volume of the right channel is at a higher level. Due to the shape of the human pinna, sounds from behind are perceived at a lower volume; moreover, the pinna influences the tone and its frequency spectrum as well. Our solution imitates these phenomena, so that sounds from behind are quieter and duller: the gain and the acoustic color are adapted according to the actual orientation of the smartphone.
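The channel-gain adaptation described above can be sketched as follows. The gain curves and the low-pass cutoff used to stand in for "duller" are assumptions for illustration; the paper does not publish the authors' actual audio code.

```python
import math

def channel_gains(heading_deg: float, target_bearing_deg: float):
    """Sketch: map the relative bearing to the next waypoint onto stereo gains.

    Returns (left_gain, right_gain, lowpass_cutoff_hz). Sounds from behind
    are attenuated and given a lower cutoff, i.e. quieter and duller.
    """
    rel = math.radians((target_bearing_deg - heading_deg) % 360.0)
    # Simple linear pan: straight ahead -> equal gains; target to the
    # right of the line of sight -> right channel louder.
    right = (1.0 + math.sin(rel)) / 2.0
    left = 1.0 - right
    behind = math.cos(rel) < 0.0               # more than 90° off the heading
    atten = 0.5 if behind else 1.0             # quieter from behind
    cutoff_hz = 2000.0 if behind else 8000.0   # duller from behind
    return left * atten, right * atten, cutoff_hz
```

With the waypoint due right of the user's heading, essentially all energy goes to the right channel; with the waypoint directly behind, both channels are attenuated and the sound is low-pass filtered.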
The user is supposed to use the phone like a magic wand pointing along the line of sight; by turning, the user can easily determine the direction to the next waypoint. The smartphone uses the built-in digital compass to perform these calculations. Since the main focus lies on determining the right orientation, while a fully correct production of 3D sound is secondary, state-of-the-art techniques like HRTFs and distance information [17] have been set aside, and this simple but sufficient and inexpensive approach was chosen.

4.5. Exploration of the environment

Since the user might not only like to practice the route again and again, he or she also has the option to explore his or her environment. If the user wants to know which objects and places are nearby, he or she can start a special mode which comes in addition to the conventional training mode. This mode offers the feature of virtually exploring the surroundings using the phone's digital compass: the user is able to retrieve information about objects which lie in his or her line of sight. If the trainee turns further, he or she gets informed that there is another object about which information can be requested as well. Besides the name, type and description, the user also has the option of inquiring about the current distance to the object.

4.6. Interface designed for ease of use

Since this solution aims at meeting the requirements of users who are usually non-technicians, the two applications need to be designed in an especially intuitive way. The route creator and the training application have different target user groups; therefore, two separate applications were developed, each of them fully adapted to the needs of its target group. The reason is that a route-creating person would usually not use the training application often, and the other way round. So each user group can start its preferred application separately, and there is less overhead in each of the two program parts.

4.6.1. Route creator application

Users of the route creator application are sighted persons who know the environment and have at least a good understanding of what mobility training is all about and which information blind or visually impaired people need to walk safely and complete a mobility training session successfully. Therefore, the application needs to be intuitively usable by a sighted person with a good understanding of the context who does not necessarily know a lot about computing, although this person should be able to use a modern smartphone.
The route creator application (see Fig. 1) is designed to be usable in quite a similar way to standard applications like Google Maps on smartphones, and uses standard user interface elements respecting user interface guidelines for mobile applications [17]. The map showing an overview of all created items behaves the same way as the Google Maps map view: one can zoom in via two-finger gestures or alternatively via zoom buttons, and scroll via swipe gestures. Moreover, additional information is displayed by pressing on items, and one can move them by dragging them over the map. All input options are narrowed down as far as possible. This means that there are no configuration options which are barely used and would only confuse the user.

4.6.2. Training application

The training application uses spatial sounds for orientation, text-to-speech to communicate information to the user, and vibration signals to warn the user about, or inform him or her of, reaching an item, or rather an object or place. The trainee controls the app via eight simple gestures. Every user input is confirmed by audio feedback, so that the user always knows which mode is active. What is more, every action the user performs is reversible. Therefore, the training application is completely usable by a blind person. Moreover, for sighted persons this application offers a visual component giving a summary of all items; this optional view allows persons creating a route to quickly check their inputs. Hence, the application can be used by both target groups: sighted persons who want to check whether the items are placed correctly, and blind or visually impaired users who want to train a route or explore the environment.

4.7. Overcoming challenges

Some conceptual challenges of this approach were identified beforehand, during the requirements elicitation phase, which need to be considered.

4.7.1. Overloading the hearing

Interviews with blind persons showed that overloading the hearing of a blind person is quite critical: a blind or visually impaired person uses his or her hearing to identify dangerous situations. Therefore, wearing earplugs during the whole training session would be rather dangerous. This is why new events are communicated to the user via vibration signals. Every time a new event arises, the user gets informed via vibrations, and each message type has a different signal: arriving at a waypoint has one special signal, approaching an obstacle another one. The user thus knows that a new message is available and he or she can listen to it.
The user stops walking and either plugs in the earphones and is then safely able to listen to the message, or uses special earphones like bone conduction phones in the first place. If the user wants to orient him- or herself, he or she stops, plugs in the earphones and starts the orientation mode using spatial sound. This approach makes sure that the user does not expose himself or herself to danger by listening to information provided by the training application, while the application can still provide content-rich information.
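The per-event-type signalling described above can be sketched as a table of vibration patterns. The timings are invented for illustration; the paper does not specify the actual patterns. On Android, tuples like these would map onto the millisecond `long[]` pattern accepted by the platform vibrator API.

```python
# Hypothetical vibration patterns (ms: initial delay, then on/off phases).
# Each message type gets a distinct rhythm, so the user can tell event
# types apart without putting in earphones.
VIBRATION_PATTERNS = {
    "waypoint_reached":  (0, 300),                     # one long pulse
    "obstacle_ahead":    (0, 100, 80, 100, 80, 100),   # three short pulses
    "landmark_nearby":   (0, 150, 120, 150),           # two medium pulses
    "point_of_interest": (0, 80),                      # single short tick
}

def pattern_for(event_type: str):
    """Return the vibration pattern for an event, defaulting to a short tick."""
    return VIBRATION_PATTERNS.get(event_type, (0, 80))
```

Keeping the patterns short and rhythmically distinct matters here, since the whole point is that the user can classify the event before deciding whether to stop and listen.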

4.7.2. GPS accuracy

As mentioned before, GPS is not precise enough to be relied on alone. It is influenced by many factors, including the difference between the shape of the earth and the geoid model used, arrival time errors, numerical errors, atmospheric effects, signal shadowing and multipath errors, of which the last two are the most serious. The errors emerging from the difference between the shape of the earth and the geoid model, and others, can partly be compensated by creating items using the actual location of the smartphone running the route creator application. However, there is still some inaccuracy which makes it impossible to fully trust a route based on GPS coordinates alone. Therefore, each waypoint offers orientation cues which allow the user to find the way to the next waypoint even if the reported location differs from the real one. For example, one does not need an absolutely precise location given the information that one can orient oneself along the curb of the sidewalk to get to the next waypoint: blind users experienced in white cane walking can identify such cues even if they would have expected the curb some meters earlier. Besides, this solution aims at assisting mobility training rather than completely replacing it, hence one can presume that the trainee has already walked the route at least once before starting the application and roughly knows which places might be dangerous.
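One plausible way to combine an imprecise fix with the stored cues is to compare the fix's reported accuracy against the waypoint's pop-up radius, and to fall back on the textual cue whenever GPS alone cannot confirm arrival. The thresholds and the three-way split below are assumptions for illustration, not the authors' implementation.

```python
def arrival_hint(distance_m: float, fix_accuracy_m: float,
                 popup_radius_m: float, cue: str) -> str:
    """Sketch: decide what to tell the user, given GPS uncertainty.

    distance_m      -- apparent distance to the waypoint from the GPS fix
    fix_accuracy_m  -- reported accuracy of the fix (e.g. 68% error radius)
    popup_radius_m  -- the waypoint's pop-up radius
    cue             -- the stored orientation cue, e.g. "follow the curb"
    """
    if fix_accuracy_m <= popup_radius_m and distance_m <= popup_radius_m:
        return "arrived"                      # fix precise enough to trust
    if distance_m <= popup_radius_m + fix_accuracy_m:
        return f"near waypoint; cue: {cue}"   # plausible arrival, give the cue
    return f"keep going; cue: {cue}"
```

This mirrors the argument in the text: when the fix is poor, the curb-following cue does the work that raw coordinates cannot.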

5. Implementation

Both applications are developed on Android and based on Digital Graffiti [9], which is available for Android too. It allows the creation of location-based, general-purpose information items which can be created, received and modified on mobile devices. For that reason the device needs GPS; trilateration via mobile networks is recommended in addition. Some kind of mobile internet connection is required too, since the devices receive new items from a server. Furthermore, a digital compass is needed for orientation, as well as a stereo headphone jack, vibration notification and a touchscreen for user input.

6. Concept of operations

The "Route Creator" [1] application provides an intuitive user interface (see Fig. 1) for sighted persons like mobility trainers. One can create, modify and edit items of all four item types.

Fig. 1. Screenshots of route creator.

It is based on the Google Maps API [5], which allows developers to create their own map applications offering almost the same user experience as the official Google Maps app when operating the map view. In particular, the "Route Creator" application allows pinch-to-zoom and, as an alternative, traditional two-button zooming. Besides that, the user can rearrange items via drag and drop, and show additional information and edit it by pressing on items. In general, this application was designed to meet the requirements of users who do not have a technical background, so its interface is simplified as far as possible and reduces the input elements to a reasonable minimum.

The whole "Training" [1] application can be controlled via gestures (see Fig. 1), and all actions are confirmed by acoustic and tactile feedback. The user can undo every action and can repeatedly listen to every message he/she has received. This application basically consists of three different modes. The training mode is supposed to guide the user along the route. It outputs information over the acoustic and tactile channels: the user gets informed about new events, like entering the range of an item, via vibration notification, and if he/she is interested in the event, he/she can perform a gesture which starts the playback of an acoustic description. Furthermore, the user has the opportunity to obtain additional information, like a description of the distance to the next waypoint. The exploration mode focuses on supporting the user in getting familiar with his/her surrounding environment. The user can explore the surroundings by holding the device in the line of sight and turning himself/herself. If there are objects straight ahead, the user gets informed via vibration notification; gestures initiate the playback of different types of information about the object. For instance, one could ask for the distance or for a more detailed description, communicated via the earphones. The orientation mode offers support to users who are not sure about their position and orientation: it indicates the direction to the next waypoint via spatial sound and vibration signals.
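The three-mode control flow with undoable actions and replayable messages can be sketched as a small state machine. The mode names come from the paper; the class structure and method names are assumptions for illustration.

```python
class TrainerApp:
    """Sketch of the gesture-driven control flow of the training application."""

    MODES = ("training", "exploration", "orientation")

    def __init__(self):
        self.mode = "training"
        self._history = []   # previous modes, so every action is reversible
        self._messages = []  # received messages, replayable at any time

    def switch_mode(self, mode: str) -> str:
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self._history.append(self.mode)
        self.mode = mode
        return f"mode: {mode}"   # spoken confirmation in the real app

    def undo(self) -> str:
        """Revert the last mode switch, mirroring 'every action is reversible'."""
        if self._history:
            self.mode = self._history.pop()
        return f"mode: {self.mode}"

    def announce(self, text: str) -> str:
        """Record a message so the user can listen to it again later."""
        self._messages.append(text)
        return text

    def repeat_last(self) -> str:
        return self._messages[-1] if self._messages else "no messages"
```

In the real application each returned string would be rendered as speech, and each state change confirmed with the tactile feedback described above.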

7. Evaluation This first informal user evaluation involved four high school students, two of them blind and two of them with visual disabilities, in addition two teachers took part. Both teachers were familiar with the concept of mobility training. Despite of the small number of participants, user tests are a useful tool for verification purpose of the user interface. Regarding to Nielson [10], also a small number of evaluators is useful


to identify most of the usability issues, which was the focus of this evaluation.

The Virtual Mobility Trainer for Visually Impaired People is supposed to provide a user interface and interaction concept which enables blind people or people with visual disabilities to undertake training and exploration activities on their own in an environment which is not fully known to them. Therefore, usability is a key aspect which was considered from the very beginning of this project. The user evaluation was used to test the user interface and the interaction concept and to highlight flaws and weaknesses, but also to gauge user acceptance. On that account, the following key aspects were identified as crucial for the evaluation:

– Usability: Is the concept of operations clear and comprehensible?
– Usability: Are the gestures easy to perform?
– Benefit for the user: Is the training application beneficial for orientation?
– Benefit for the user: Is the training application beneficial for exploration of the environment?
– Benefit for the user: Is the inclusion of spatial sound for orientation beneficial?

Besides that, the test persons were also asked for weaknesses they identified and for their own suggestions for improvement.

The evaluation consisted of the following tasks:

– Dry practice of the gestures with print-outs made on a Braille printer.
– The actual user test: a one-to-one test run of one test person and one instructor. During the test run, the test persons were assigned to carry out the following tasks:
  1. Switch via gestures to the orientation activity
  2. Perform orientation via spatial sound to get the direction to the first waypoint
  3. Perform a gesture to get additional information, including the distance to the first waypoint
  4. Switch via gestures to the training activity
  5. Go to the first waypoint and stop after getting the confirmation that the first waypoint is reached
  6. Listen to the message queue, which includes, among others, the message that the first waypoint is reached
  7. Switch via gestures to the orientation activity
  8. Perform orientation via spatial sound to get the direction to the second waypoint
  9. Switch via gestures to the training activity
  10. Go to the second waypoint
  11. Stop after getting a message that an obstacle is ahead
  12. Listen to this message
  13. Switch to the exploration activity, explore the surroundings, name the buildings next to the test route and point in their direction
  14. Switch via gestures to the orientation activity
  15. Perform orientation via spatial sound to get the direction to the second waypoint
  16. Switch to the training activity, go to the second waypoint and stop after reaching it.
  At the same time, questions were asked to identify usability flaws and weaknesses.
– A questionnaire covering issues like the understandability of the concept of operations, the ease of interaction, and the benefit of the different features of the application, including the orientation mode, the exploration mode and the spatial sound for orientation. These questions were answered on a 5-level Likert scale ranging from "very much = 1" to "not at all = 5". It also included a section with open questions inviting the test persons to suggest their own ideas for improvement.

The outcome of this evaluation was that the menu structure was found understandable (mean 1.5) and the test persons found the application and the concept very helpful for exploration of the environment (mean 1). However, there are still some weaknesses: orientation during walking was found less precise and therefore less helpful, but still beneficial (mean 2.75). Orientation with spatial sound was also found beneficial (mean 2.75). The decision to use a set of eight gestures was only partly successful: the test persons with visual impairment had no problems (mean 1.5), but the blind test persons struggled to perform the gestures correctly after a dry training of 30 min (mean 3).

The informal interviews during the training revealed the reasons for the weaknesses of this first functional prototype. Thirty minutes of preparation was too short; during the actual training the students got better.
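As a rough illustration of how a spatial-sound cue such as the one used in the orientation mode can be produced, the bearing to the waypoint relative to the user's heading can be mapped to stereo channel gains with a constant-power pan law. This is a generic sketch, not the authors' implementation:

```python
import math

def pan_gains(relative_bearing_deg):
    """Constant-power stereo gains (left, right) for a cue at the given
    bearing relative to the user's heading; negative bearings are to the
    left. Bearings outside the frontal half-plane are clamped to hard
    left/right."""
    b = max(-90.0, min(90.0, relative_bearing_deg))
    theta = math.radians(b + 90.0) / 2.0  # 0 = hard left, pi/2 = hard right
    return math.cos(theta), math.sin(theta)
```

With a constant-power law the two gains always satisfy left² + right² = 1, so the cue keeps roughly the same perceived loudness as it moves; a waypoint straight ahead yields equal gains of about 0.707 on both channels.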
GPS was sometimes imprecise. The test persons were therefore able to determine the theoretically correct direction, but it was sometimes displaced relative to their real position. This made a relatively big difference in our test run because of the small distances (∼15–30 m) between the waypoints; it might matter less in a real-world scenario. The concept of using gestures for blind persons still has some room for improvement: as blind persons hardly do handwriting, gestures based on letters require more practice for them. Some concepts of interaction are highly dependent on the user; some prefer vibration signals over sounds and vice versa. Therefore, configuration to a certain user would make perfect sense. A smartphone whose touchscreen is tangibly distinguishable from the frame would also avoid some problems with the gestures.
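The impact of GPS error at these short waypoint distances can be quantified with the standard haversine formula for great-circle distance. A minimal sketch (generic geodesy, not code from the prototype):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points,
    using the haversine formula and a mean Earth radius."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2.0) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2.0) ** 2)
    return 2.0 * r * math.asin(math.sqrt(a))
```

A typical smartphone GPS error of around 10 m is a large fraction of a 15–30 m leg, which explains the displaced directions observed in the test run; over a 200 m leg the same positional error would change the indicated bearing far less.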

8. Further work

In addition to addressing the issues revealed by the evaluation, an accessible "Route Creator" is planned. It will allow blind users to define routes on the fly: they basically walk along a yet undefined route, and at every important location or branch they can leave an item, e.g. a waypoint, and add an audio description through the built-in recorder.

It is also intended to combine this project with another project currently running at our institute, called Viator [12], which involves partners such as the Institute for Software Engineering at JKU, the Central European Institute for Technology in Vienna, the Austrian Federal Railways ÖBB, the public transport company Linz Linien, and the Upper Austrian transport authority OÖVV. Viator is based on DG [9] too, but it aims at providing holistic assistance during the whole process of traveling with different means of public transport, including trams, buses and trains. It offers special features to people with limited mobility, like blind people or wheelchair users, but it is meant to be usable by every person using public transport. Therefore, it features route planning and notifications to change means of transport, gives directions at stations especially adapted to the needs of people with limited mobility, and is aware of delays, informing the user and re-planning the route if required.

Navigation at stations, in particular, is where both projects could mesh with each other. Blind persons or persons with visual impairment might want to perform mobility training to get to know the environment and to ensure that they are able to get from one platform to another independently of the instructions on the phone, especially when they need to do that every day.
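The planned on-the-fly route definition could be backed by a simple data structure that stores annotated items in walking order. The following sketch is purely illustrative; the class and field names are assumptions, not part of the existing prototype:

```python
import time
from dataclasses import dataclass, field

@dataclass
class RouteItem:
    """One annotated point dropped while walking a new route."""
    lat: float
    lon: float
    kind: str          # e.g. "waypoint", "obstacle", "landmark"
    audio_file: str    # path to the recorded spoken description
    timestamp: float = field(default_factory=time.time)

class RouteRecorder:
    """Collects items in walking order while the user records a route."""

    def __init__(self, name):
        self.name = name
        self.items = []

    def drop_item(self, lat, lon, kind, audio_file):
        """Append an item at the user's current position."""
        item = RouteItem(lat, lon, kind, audio_file)
        self.items.append(item)
        return item
```

Because items are kept in recording order, the stored list can later be replayed directly as a training route, with each item's audio description played back when its range is entered.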


References

[1] Adobe Inc., PhoneGap, http://html.adobe.com/edge/phonegap-build/, 2012.
[2] Ariadne GPS, Ariadne GPS, http://www.ariadnegps.eu/, 2013.
[3] N. Ayob, A. Hussin and H. Dahlan, Three layers design guideline for mobile application, International Conference on Information Management and Engineering (ICIME '09), 2009.
[4] M. Bujacz, P. Baranski, M. Moranski, P. Strumillo and A. Materka, Remote guidance for the blind: A proposed teleassistance system and navigation trials, Conference on Human System Interactions, 2008.
[5] Google Inc., Google Maps Android API, https://developers.google.com/maps/documentation/android/, 2012.
[6] A. Honda, T. Shiose, Y. Kagiyama, H. Kawakami and O. Katai, Design of human-machine system for estimating pattern of white cane walking, ICROS-SICE International Joint Conference, 2009.
[7] T. Hoydal and J. Zelano, An alternative mobility aid for the blind: the 'ultrasonic cane', Proceedings of the 1991 IEEE Seventeenth Annual Northeast Bioengineering Conference, 1991.
[8] R. Koutny, Virtual Mobility Trainer for Visually Impaired People, Master Thesis, University of Linz, 2013.
[9] W. Narzt and W. Wasserburger, Digital Graffiti – A comprehensive location-based travel information system, REAL CORP 2011, 2011.
[10] J. Nielsen, Heuristic evaluation, in: J. Nielsen and R.L. Mack (eds.), Usability Inspection Methods, John Wiley & Sons, 1994.
[11] OEBSV, braille.at: Statistik Augenkrankheiten, Blindheit und Sehbehinderung, http://www.braille.at/braille/augen-medizin/statistik, 2012.
[12] G. Pomberger et al., VIATOR, http://www2.ffg.at/verkehr/projektpdf.php?id=771&lang=en, 2010.
[13] V. Pradeep, G. Medioni and J. Weiland, Robot vision for the visually impaired, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2010.
[14] Schweizerischer Blinden- und Sehbehindertenverband, MyWay, http://ebs.beratung.www.sbv-fsa.ch/de/node/1061, 2013.
[15] J. Stewart, S. Bauman, M. Escobar, J. Hilden, K. Bihani and M.W. Newman, Accessible contextual information for urban orientation, Proceedings of the 10th International Conference on Ubiquitous Computing, ACM, http://doi.acm.org/10.1145/1409635.1409679, 2008.
[16] P. Strumillo, Electronic interfaces aiding the visually impaired in environmental access, mobility and navigation, 3rd Conference on Human System Interactions (HSI), 2010.
[17] M. Talbot and W. Cowan, On the audio representation of distance for blind users, Proceedings of the 27th International Conference on Human Factors in Computing Systems, ACM, http://doi.acm.org/10.1145/1518701.1518984, 2009.
