2nd World Conference on Information Technology
A prototype for a blind navigation system
F. Henriques1, H. Fernandes1*, H. Paredes1, P. Martins1, J. Barroso1,2
1 University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
2 Knowledge Engineering and Decision Support Research Group, ISEP, 4200-072 Porto, Portugal
Abstract
In recent years, the mobility and navigation of people with visual impairments have received growing attention, a trend likely to intensify in the years to come. Most navigation systems fall short in accuracy and in the availability of contextual information suitable for blind users, which makes current commercial systems unsuitable for this scenario. The main goal of the SmartVision project is to fill this gap by providing accurate and contextually rich information about the environment around the user's current location, simplifying navigation, increasing the overall accuracy of the system, and keeping the user away from dangerous locations. The required technology is available today, and the prototype follows a modular structure that has already been developed. This paper presents a way of integrating all the available modules in order to create a fully usable prototype for the SmartVision project.
Keywords: navigation system; blind; contextual information; decision; integration
1. Introduction
Accessibility is an increasingly important concern, and our society is ever more aware of the limitations that visually impaired people face in terms of mobility. Current technology can make everyday life easier by providing a wider range of services and offering more ways to carry out ordinary, everyday tasks. Systems that efficiently assist the navigation of blind users are still at an early stage of development: current commercial systems do not provide enough accuracy or relevant contextual information during navigation, and they require a significant amount of training from the user.

To fill this gap and address these limitations, we propose a system named SmartVision that aims to enable safe and intuitive navigation for blind users. Its main features include providing information about the environment around the user, enhancing the accuracy of the user's location, and offering safe ways to reach a desired destination. The system combines different technologies, divided into modules, that work together to feed the user with indications that safely and accurately guide him while moving around, in both familiar and unfamiliar environments. The deliberate redundancy of information increases the overall reliability of the system: even if one of the technologies fails, an equivalent one is used to keep the system running efficiently.

This paper focuses on the integration of all existing modules through a decision module, responsible for the interaction between all the components of the system. The document consists of 7 sections. Section 2 presents an overview of the related work in this field. Section 3 describes the SmartVision project and the prototype's architecture. Section 4 presents the human-computer interface, including the menu used by the blind user to interact with the prototype. Section 5 presents the integration requirements and our solution, the decision module. Section 6 describes the integration of all the modules, the way the information is processed, and the methods used in the decision module. Section 7 presents conclusions about the work done so far and future work.
* Corresponding author. Tel.: (+351) 259 350 383; fax: (+351) 259 350 356. E-mail address: [email protected]
2. Related Work
Assistive technology helps people with disabilities in everyday life, supporting activities such as communication, education, work and recreation. It can help these users achieve greater independence and enhance their quality of life. Among the assistive technologies available nowadays, we focus on those that help blind or visually impaired people with their mobility.

Several commercial and research and development (R&D) projects that currently define the state of the art in outdoor navigation for visually impaired people provide users with instructions to reach a desired destination, or simply facilitate the task of moving around in the environment. These systems can be classified according to their technological features or according to the kind of information they provide to the user. For blind users, the main difference between systems is whether or not they provide information about local obstacles in the environment surrounding the user.

BrailleNote GPS [1], StreetTalk [2], Trekker [3], Navigator [4] and Drishti [5], [6] are GPS-based systems that assist the navigation of visually impaired people without providing information about surrounding obstacles. Their primary components are a PDA or laptop specially designed or adapted for people with visual disabilities, a Bluetooth GPS receiver, and purpose-built software for configuration, orientation and route mapping. Output is delivered through a Braille display or a speech synthesizer. Navigation systems with local obstacle information provide better knowledge of the local environment, increasing the quality of the information given to the blind user to overcome local obstacles. Several techniques are used to detect objects and measure their distances, such as multiple ultrasonic sensors (sonar) [7] and Laser Range Scanners (LRS) [8].
Computer vision (CV) techniques [9], [10], [11], [12], [13] have also been used to increase the amount of environmental information that can be given to assist navigation. Although efforts have clearly been made to use stereo vision to help blind people navigate safely and accurately, there is still a lack of contextual information given to the user during navigation.

3. SmartVision Project
A system to assist the navigation of visually impaired people is currently being developed at the University of Trás-os-Montes and Alto Douro (UTAD), in Portugal. This project, named SmartVision, has the main goal of helping blind people move around in familiar or unfamiliar environments, feeding the blind user with information about the environment and with navigation instructions. The main difference between this system and related systems is that the SmartVision prototype provides contextual information and points-of-interest (POI) such as zebra crossings, building entrances, public services or monuments. This information helps the user make decisions quickly, based on the data provided by the different modules. Even if one module fails to return valid information for a period of time, a priority hierarchy allows another, similar module with lower accuracy to be used instead.

The navigation system is built with a modular structure that collects and processes information to assist the blind user's navigation. The modules use different technologies to accomplish different tasks, all with the common goal of enhancing the navigation and orientation of the user, providing contextual information and keeping him away from dangerous locations. The location module is responsible for obtaining geographic coordinates in real time, giving the navigation system the ability to check for new data about the current location of the user [14], [15].
This module uses various technologies to retrieve the current location, such as the Global Positioning System (GPS) for outdoor positioning and WiFi for indoor positioning. It also uses Radio-Frequency Identification (RFID) for both indoor and outdoor navigation; RFID is the most precise technology used in this module and can enhance the accuracy of the GPS and WiFi technologies (Fig. 1).
Fig. 1 - Structure of location module.
The GIS module is a geographic information system consisting of a database that stores geographical features and a web application used to view and manage all the information. This web application is also responsible for creating an XML (Extensible Markup Language) representation of all the geographical features, used by the navigation system at run time to provide navigation instructions to the blind user. This XML representation of the database contains geographical information such as the coordinates associated with RFID tags, the location of a WiFi access point, or a list of points-of-interest [16]. The file is provided by the server that hosts the web application and the database, and the navigation system updates it over an Internet connection whenever a newer version of the XML is found. Since the urban environment is always changing, the system must be kept up to date with new obstacles or changes in the environment.

The navigation module is responsible for route tracing and for providing notifications about POIs at the current location. This module gets coordinates from the location module and geographical context from the GIS module to check whether there are any points-of-interest the user needs to be warned about. Its main goal is to provide navigation capabilities to the user, that is, to keep the blind user informed about places, services, obstacles, etc., around him and to create a safe route for him to follow.

The vision module provides orientation to keep the blind user on a safe route, in both indoor and outdoor environments. To achieve this, it uses stereo vision sensors to detect fiducial marks on the pavement. When a mark is correctly detected, the blind user is informed about the amount of correction he must apply to his current route.
This value is obtained from the slope of the line, in the image, between the user's location and the fiducial mark [17]. The module also uses stereo vision to detect obstacles and objects that the blind user may encounter while moving around [18].

Interaction is handled by an interface module, responsible for the communication between the system and the user. The interface module uses text-to-speech technology to transform all relevant information into voice, with simple messages that can be quickly understood by the user. When the user starts the system to navigate, he receives information about his orientation in the environment and about the context around him. The system can inform him about points-of-interest, route corrections for safe navigation, alerts about dangerous zones, and so on. The module also provides an interactive menu so that the blind user can explore the navigation system and find the options that best suit his needs at the moment, making it possible to define routes and obtain points-of-interest anytime and anywhere. The module additionally uses haptic sensor systems to alert the blind user without saturating his hearing with large amounts of voice commands [19].

4. Menu interface development
The navigation system must be usable by any user, which requires a simple and intuitive interaction between the two parts [20]. To achieve this, an interactive menu was developed that can be used to control all the features of the navigation system. A command-pad (Fig. 2) was also developed to provide physical interaction with the user.
Fig. 2 - SmartVision system command control
The SmartVision menu is split into three levels in order to address the previously discussed aspects of accessibility and usability: the main menu, a list of categories, and a list of options. The main menu contains the option "Go To", used by the blind user to create a route to a specific POI chosen from a list of POIs provided by the system, and the option "Nearby Points-of-Interest", which provides information about all the points-of-interest around the user within a radius that the user can specify. From this list of nearby POIs the user can also make a selection and create a route, similarly to the "Go To" option. The menu also has an "Options" entry for system configuration, namely of the search radius. To close the menus, the user presses the back button on the command-pad. All these options are split into categories and, further, into lists of options that the blind user selects to perform a specific task. The menu has a size appropriate to the blind user's limitations, and all options and navigation feedback are provided to the user through text-to-speech synthesis.

The menu options and categories are filled with data from the XML representation of the GIS module. One way to keep the menu from being overloaded with options is to retrieve information only for a specific area. The difference between these two approaches is that the second does not cover as wide an area as the first, providing fewer POIs and creating smaller menus.

5. Module integration
One of the main characteristics of the prototype is that the various modules were developed individually and independently. The modules have different implementations and different types of information that must be exchanged. To connect the modules and support the information exchange between them, one last module was created.
This is the decision module, responsible for exchanging messages between all the modules and for making decisions that ensure a good overall performance of the system. Its main purpose is to create an abstraction layer under which all the other modules work together while the system stays under control. The module is independent and has the main task of controlling and interacting with all the other modules: the data from the available modules is interpreted, and the decision module decides what information the prototype must provide to the blind user. Internally, the module is split into two different tasks:

Routing: the decision module is responsible for exchanging information between the modules. As a practical example, to control the navigation module, the decision module must obtain the geographical position from the location module. This information can then be used to find points-of-interest in the neighborhood, which are delivered to the user through the interface module.

Decision-making: the decision task is used every time the system needs to prevent a malfunction. A practical example is a situation where the locations provided by the location module would overload the navigation module and, in consequence, overload the user with more information than he can follow. This task is also responsible for filling the menu with content from the other modules, such as georeferences from the maps stored in the navigation module.

When developing the decision module it is very important to keep in mind that it must provide transparent access to, and exchange between, the different technologies available in the different modules. Every time a module requires data provided by another one, the decision module must interpret the request, even when the data is represented in different types according to each module's output, and the information must be forwarded in the correct format.
The solution adopted was the use of wrappers: middleware that facilitates the integration of different technologies by interpreting the data, formatting it and forwarding it in a type that the recipient can understand. To integrate the various modules that compose the SmartVision prototype, it was necessary to devise a strategy to create relations between all the modules and integrate them with the decision module. In this process each module was transformed into a Dynamic Link Library (DLL) and connected to the decision module [21]. All the modules were individually developed in the C++ and C# programming languages, which allow the creation of DLLs. The system integration was carried out according to the following steps:

5.1. Preparing module integration
As previously mentioned, all modules were exported as DLLs in order to be connected to and used by the decision module. Each module provides methods that allow module initialization. After initialization, modules receive information through the arguments of their public methods and export data by firing events each time new information is available. Each module is represented by one main DLL, and the decision module only needs to be connected to it; in other words, the dependencies of each module do not need to be known by the decision module (Fig. 3).
Fig. 3 - Integration of all modules
When integrating the modules, another aspect to consider is the data flow between them. All the libraries that represent the modules must provide methods for initialization, export only relevant information, and provide ways to be fully configured and controlled by the decision module in real time, in order to follow priorities and hierarchy, create routes and refresh module data. Only public methods such as the module's start or stop must be accessible to the decision module. The modules' internal methods need to be private, as they are not needed by the decision module and, from the developer's point of view, may cause a system malfunction if used inappropriately.

5.2. Integration of the location module
The location module must provide the user's location as geographical coordinates. The decision module needs to manage the amount of location updates in order not to overload the other modules. The relation between these two modules also includes the need for the decision module to refresh the location data (Fig. 4).
Fig. 4 - Location module integration
5.3. Integration of the navigation module
The navigation module has the main task of generating navigation instructions that the user can follow. To do this it needs the user's location from the location module, so the decision module collects that information and provides it to the navigation module. Using the user's location, the navigation module can initiate a search for points-of-interest and, if any are found, they are sent back to the decision module. Every time the user chooses to create a route, his request is sent to the navigation module with two locations: the route's initial and final locations. If the route is created successfully, the navigation module updates the decision module with navigation instructions throughout the navigation. Note that navigation requires a large amount of information exchange, such as coordinates and POIs; the decision module needs to exchange all this information and keep all the involved modules updated during this process (Fig. 5).
Fig. 5 - Navigation module integration
5.4. Integration of the vision module
The vision module provides orientation instructions to the user. After the module is initialized, the only communication between the decision and vision modules occurs when a new trajectory correction instruction is triggered (Fig. 6). This event occurs when the correction angle between the blind user and the fiducial marks on the pavement changes, according to the categories described in Fig. 7.
Fig. 6 - Vision module integration
Fig. 7 - Correction angle intervals
5.5. Integration of the interface module
The interface module, like the navigation module, is a module with which the user has direct contact. It provides an interface to the decision module for performing tasks such as starting navigation, requesting contextual information or configuring the navigation system. The decision module, in turn, needs to refresh the data in the menu as well (Fig. 8).
Fig. 8 - Interface Module integration
5.6. Decision module internal management
The decision module, as its name implies, needs to make decisions in order to control the information exchange and ensure a good overall performance of the system. There are a few internal tasks that this module needs to accomplish to ensure reliable performance (Fig. 9) while handling triggered events and information exchange effectively. The module is able to receive and format information before forwarding it to the appropriate recipient module.
Fig. 9 - Decision module tasks
Priority-based management enables the system to choose what information must be forwarded to each module and what feedback the user should receive through the interface module. As an example, consider that coordinates obtained from RFID tags have a higher priority than coordinates retrieved from the GPS antenna, and are the ones that, when present, should be delivered to the decision module. Another practical case occurs when the RFID location sub-module fails but the vision module has found marks on the pavement; in this case the orientation provided by the vision module has priority over the coordinates retrieved from the location module. Next in the hierarchy are the coordinates obtained from GPS and WiFi.

The "System control" task is used when the navigation system needs to be initialized, along with each module, and all events prepared to be triggered. The "Messaging" task is responsible for routing information between modules (information exchange). The decision module also logs all crashes and errors that occur while the navigation system is running. "Information control" is used to prevent system overload, namely when the messages from the location system cannot all be processed by the navigation module; only coordinates captured within certain time intervals are accepted. The system also needs to check that the information is updated in all modules. All these internal control features allow safe navigation instructions to be given to the blind user while he is navigating. The redundancy present in the system was deliberately created so that, if one module crashes or stops functioning, another similar module can keep the user on a safe route.

6. Final considerations
The main objective of this work was to create the first fully working and integrated prototype for the SmartVision project.
To achieve this, we encapsulated all modules as dynamic libraries and created a "decision module" responsible for controlling the whole system and managing all the other modules. The decision module plays a major role in the hard task of managing the amount of information that all the modules produce, in order to maintain reliable performance. At the moment, each module has a fully functional and tested DLL release, and the system integration is complete. In parallel, an interface menu was created for the interaction between the system and blind users and to test the first prototype, following HCI guidelines.

Future tests with real blind users are needed to enhance the overall reliability and intuitiveness of the system. This is the immediate next step, which we are starting to arrange: contacts have been made with blind people to test and evaluate the system in real conditions and to proceed with any refinements that may arise. A test scenario is being assembled on the UTAD campus specifically for this purpose.

Acknowledgements
This research was supported by the Portuguese Foundation for Science and Technology (FCT), through the project RIPD/ADA/109690/2009 – "Blavigator: a cheap and reliable navigation aid for the blind".
References
1. Sendero GPS, http://www.senderogroup.com, last viewed on January 2011.
2. StreetTalk, http://www.freedomscientific.com, last viewed on January 2011.
3. Trekker, http://www.humanware.ca, last viewed on January 2011.
4. R. Kowalik, S. Kwasniewski, "Navigator – A Talking GPS Receiver for the Blind", Lecture Notes in Computer Science, Springer Berlin / Heidelberg, vol. 3118, pp. 446–449, 2004.
5. L. Ran, S. Helal, S. Moore, "Drishti: An Integrated Indoor/Outdoor Blind Navigation System and Service", Proceedings of the Second IEEE International Conference on Pervasive Computing and Communication (PerCom'04), pp. 23–31, 2004.
6. B. Moulton, G. Pradhan, Z. Chaczko, "Voice Operated Guidance Systems for Vision Impaired People – Investigating a User-Centered Open Source Model", JDCTA: International Journal of Digital Content Technology and its Applications, vol. 3, no. 4, pp. 60–68, 2009.
7. N. Molton, S. Se, M. Brady, D. Lee, P. Probert, "A stereo vision-based aid for the visually impaired", Image and Vision Computing, vol. 16, pp. 251–263, 1998.
8. N. Bourbakis, D. Kavraki, "A 2D Vibration Array for Sensing Dynamic Changes and 3D Space for Blinds' Navigation", Proceedings of the 5th IEEE Symposium on Bioinformatics and Bioengineering (BIBE'05), pp. 222–226, 2005.
9. S. Krishna, G. Little, J. Black, S. Panchanathan, "iCARE – A wearable face recognition system for individuals with visual impairments", Assets '05: Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility, 2005.
10. J. Zelek, "Seeing by touch (haptics) for wayfinding", Proceedings of the International Congress Series, vol. 1282, pp. 1108–1112, 2005.
11. F. Dellaert, B. Walker, "SWAN – System for Wearable Audio Navigation", http://sonify.psych.gatech.edu/research/swan/, last viewed on January 2011.
12. S. Meers, W. Koren, "A Vision System for Providing 3D Perception of the Environment via Transcutaneous Electro-Neural Stimulation", Proceedings of the Eighth International Conference on Information Visualisation (IV'04), pp. 546–552, 2005.
13. N. Trichakis, S. Taplidou, E. Korkontzila, D. Bisias, L. Hadjileontiadis, "SmartEyes: an enhanced orientation and navigation system for blind or visually impaired people", IEEE Computer Society, CSIDC 2003 Final Report, pp. 1–20, 2003.
14. J.M.H. du Buf, J. Barroso, J.M.F. Rodrigues, H. Paredes, M. Farrajota, H. Fernandes, J. José, V. Teixeira, M. Saleiro, "The SmartVision Navigation Prototype for the Blind", Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion (DSAI2010), Oxford, United Kingdom, 2010.
15. J. Faria, S. Lopes, H. Fernandes, P. Martins, J. Barroso, "Electronic white cane for blind people navigation assistance", Proceedings of the World Automation Congress 2010 (WAC2010), Kobe, Japan, 2010.
16. H. Fernandes, T. Adão, N. Conceição, H. Paredes, P. Araújo, J. Barroso, "Using GIS platforms to support accessibility: the case of GIS UTAD", Proceedings of the International Conference on Universal Technologies (UNITECH2010), pp. 71–81, ISBN 978-82-519-2546-4, Tapir Academic Press, Oslo, Norway, 2010.
17. A. Penedo, P. Costa, H. Fernandes, A. Pereira, J. Barroso, "Image segmentation in systems of stereo vision for visually impaired people", DSAI2009: Proceedings of the 2nd International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion, ISBN 978-972-669-913-2, pp. 149–156, Lisbon, Portugal, UTAD Publishing Services.
18. P. Costa, H. Fernandes, V. Vasconcelos, P. Coelho, J. Barroso, L. Hadjileontiadis, "Landmarks detection to assist the navigation of visually impaired people", accepted for publication in the proceedings of HCI International 2011, Orlando, Florida, USA, 2011.
19. H. Fernandes, N. Conceição, H. Paredes, A. Pereira, P. Araújo, J. Barroso, "Providing accessibility to blind people using GIS", accepted for publication as invited paper in Universal Access in the Information Society (UAIS), Springer, 2010.
20. J. Jacko, G. Salvendy, R. Koubek, "Modelling of menu design in computerized work", Interacting with Computers, vol. 7, no. 3, pp. 304–330, 1995.
21. http://msdn.microsoft.com/en-us/library/ms173184(v=vs.80).aspx, last viewed on February 2011.