Pedestrian crossing aid device for the visually impaired

Song Han Jun, Ponghiran Wachirawit
Korea Advanced Institute of Science and Technology, Department of Electrical Engineering, Daejeon, Republic of Korea
[email protected], [email protected]

I. Introduction and Related Works

Loss of sight is considered the most severe sensory disability [1,2]. According to official statistics, there are around 252,000 visually impaired (VI) people in South Korea [3]. Every day these people face many difficulties, including local navigation; indeed, the inability to travel independently is cited as the most significant barrier in daily life [4]. The VI, who mostly rely on surrounding crowds and sound signaling systems, have a particularly hard time in places where accidents can easily occur, such as crosswalks. Considering that sound signaling systems are installed at only 7.2% of crosswalks in South Korea and that 46.9% of those lack regular maintenance [5], the likelihood of a safe crossing is remarkably low. Figure 1 shows sound signaling systems with poor maintenance.

Figure 1 Sound signaling systems with poor maintenance

In recent years, several studies have been carried out to enhance pedestrian safety. Technology based on electronic devices that enhance individual safety in travel is commonly known as electronic travel aids (ETAs). Many approaches have been proposed to convey the user's surroundings, including ultrasonic sensors [6,7], the global positioning system [8,9,10], radio-frequency identification (RFID) based position measurement systems [11,12], and computer vision [13,14,15]. Although numerous systems have been developed to aid the VI, these ETAs cannot fully overcome navigation concerns because of their drawbacks. An evaluation of ultrasonic-sensor-based systems revealed that they generate excessive feedback that overloads the user's attention [16]. Geographic-information-based systems are highly dependent on prior location knowledge and can therefore be used only in specific areas. Also, there is no well-established framework for computer-vision-based systems. Thus, this research aims to establish a robust real-time visual object detection algorithm and to develop a pedestrian crossing aid device based on that algorithm. The aid device complements the use of a guiding cane, providing the VI with brief recorded messages to cross the road safely.

II. Research Question

Which detector approach is the most suitable for detecting pedestrian signals on a mobile device? How can road crossing behavior be modeled, and how can the position obtained from the detector be used to deliver essential information to the VI so that they can cross the road safely?


III. Method

A. System overview

An illustration and flow chart of the pedestrian crossing aid system are shown in Figure 2. The system comprises three main processes: candidate selection, a pedestrian signal verifier, and a decision scheme. Candidate selection picks out all possible candidates for the pedestrian signal and sends their positions to the pedestrian signal verifier. The verifier confirms the existence of the pedestrian signal and discards all mismatched candidates. The coordinates of the confirmed signal are then sent to the decision scheme, which determines an appropriate speech output. Each process is detailed in the following sections.
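To make the data flow concrete, the following C++ skeleton shows one pass of the per-frame pipeline. The three stage functions are hypothetical names introduced for illustration, not the authors' actual API; sketches of their bodies accompany Sections III-C through III-E.

    #include <opencv2/opencv.hpp>
    #include <string>
    #include <vector>

    // Hypothetical stage interfaces (illustrative names only); sketches of
    // their bodies are given with Sections III-C to III-E below.
    std::vector<cv::Rect> selectCandidates(const cv::Mat& frameBgr);          // top-down color segmentation
    bool verifySignal(const cv::Mat& frameGray, const cv::Rect& roi);         // bottom-up LBP cascade check
    std::string decideMessage(const cv::Mat& frameBgr, const cv::Rect& roi);  // light color + state machine

    void processFrame(const cv::Mat& frameBgr) {
        cv::Mat gray;
        cv::cvtColor(frameBgr, gray, cv::COLOR_BGR2GRAY);

        // 1. Cheap color-based proposals over the whole frame.
        for (const cv::Rect& roi : selectCandidates(frameBgr)) {
            // 2. Expensive verification only inside each proposal.
            if (!verifySignal(gray, roi))
                continue;
            // 3. Interpret the light and emit a brief spoken message.
            std::string message = decideMessage(frameBgr, roi);
            // speak(message);  // speech output is platform-specific
            break;  // one confirmed signal per frame suffices
        }
    }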

Figure 2 Pedestrian crossing aid device illustration and flowchart

B. Object detection framework

To develop the crossing aid device, object detection is the critical process for obtaining the position of the pedestrian signal from an input image. Choosing an inappropriate detector can greatly degrade real-time performance and endanger the user. In this research, a combination of top-down and bottom-up detection was used, because using either approach alone has several drawbacks [17], while integrating the two can provide real-time speed as well as detection accuracy. There have been several attempts to combine top-down and bottom-up approaches [18,19,20]; they commonly use top-down detection to reduce the search area and bottom-up detection to find the object of interest. This research replicates a similar framework, using candidate selection as the top-down stage and the pedestrian signal verifier as the bottom-up stage.

C. Candidate Selection

Candidate selection is the first process applied to an input frame. It selects candidates for pedestrian signals based on their color properties using threshold segmentation. First, candidate selection partitions the image into non-overlapping regions [22]; in this work, a region is defined as a homogeneous group of connected pixels with respect to their color properties in HSV color space. Second, regions with color properties similar to those of pedestrian signals are kept as possible candidates, while mismatched areas are discarded from further computation. To detect pedestrian signals accurately, 200 samples of pedestrian signals against different backgrounds were collected as a statistical reference, and the characteristic parameters were widened by 10% to prevent loss of the detection target. Finally, the set of possible positions is returned as the result of candidate selection. This set includes both true and false detections, but the false detections are removed by the subsequent pedestrian signal verifier.
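As an illustration of this stage, here is a minimal OpenCV (C++) sketch of the color-based candidate selection. Reusing the light-color HSV ranges reported in Section III-E as the candidate colors, and the minimum-area filter, are our assumptions; the paper does not publish its exact segmentation parameters.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Segment pixels whose HSV values match a pedestrian-signal light, group
    // them into connected regions, and return the bounding boxes of those
    // regions as candidates.
    std::vector<cv::Rect> selectCandidates(const cv::Mat& frameBgr) {
        cv::Mat hsv;
        cv::cvtColor(frameBgr, hsv, cv::COLOR_BGR2HSV);

        // Threshold segmentation for the two light colors (ranges from Sec. III-E).
        cv::Mat greenMask, redMask, mask;
        cv::inRange(hsv, cv::Scalar(70, 40, 150), cv::Scalar(90, 255, 255), greenMask);
        cv::inRange(hsv, cv::Scalar(0, 80, 150), cv::Scalar(6, 255, 255), redMask);
        cv::bitwise_or(greenMask, redMask, mask);

        // Group matching pixels into connected regions and keep their boxes.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Rect> candidates;
        for (const auto& contour : contours) {
            cv::Rect box = cv::boundingRect(contour);
            if (box.area() > 50)  // discard tiny noise regions (heuristic threshold)
                candidates.push_back(box);
        }
        return candidates;
    }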


D. Pedestrian Signal Verifier

To achieve effective recognition of various pedestrian signals, machine-learning-based detection was employed to confirm the signals. A machine-learning-based detector can be trained to handle multiple pedestrian signals of different shapes and sizes. In this research, the local binary pattern (LBP) was selected as the detection feature. LBP outperforms many other features in terms of precision and is widely used in various applications, e.g., face recognition [23]. Even though LBP has slightly lower detection accuracy than the traditional Haar feature, it requires less computation and is therefore suitable for resource-limited hardware such as an embedded system. First, LBP features of pedestrian signals were used to train a cascade of classifiers with AdaBoost, which was adopted to increase detection performance and reduce computation time. The result is a robust real-time machine-learning detector. This detector searches all the candidates obtained from candidate selection and instantly rejects mismatched ones; whenever a search window matches the trained criteria, the same check is repeated with tighter constraints over successive cascade stages. Finally, the position of the pedestrian signal is obtained and passed to the decision scheme to generate a guiding message. Figure 3 shows the process of the detection framework.
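A minimal OpenCV (C++) sketch of the verification stage follows, assuming a cascade trained offline on LBP features (e.g., with opencv_traincascade -featureType LBP) and stored in a hypothetical file signal_lbp_cascade.xml. Restricting detectMultiScale to the candidate box is what keeps the bottom-up search cheap; the cascade's successive stages implement the repeated checks with tighter constraints described above.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Confirm a candidate region with an LBP cascade classifier. The cascade
    // file name is a placeholder; frameGray must be an 8-bit grayscale image.
    bool verifySignal(const cv::Mat& frameGray, const cv::Rect& candidate) {
        // Load the trained cascade once and reuse it across frames.
        static cv::CascadeClassifier cascade("signal_lbp_cascade.xml");
        if (cascade.empty())
            return false;  // cascade file missing: reject everything

        // Search only inside the candidate box, clipped to the frame.
        cv::Rect roi = candidate & cv::Rect(0, 0, frameGray.cols, frameGray.rows);
        if (roi.area() == 0)
            return false;

        std::vector<cv::Rect> hits;
        cascade.detectMultiScale(frameGray(roi), hits,
                                 1.1,  // scale step between search windows
                                 3);   // min neighbours: require repeated detections
        return !hits.empty();
    }

Loading the cascade as a static local keeps the per-frame cost limited to the sliding-window search itself, which matters on the embedded hardware targeted here.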

Figure 3 Process of the object detection framework

E. Decision Scheme

After a pedestrian signal has been located in the input frame, the decision scheme performs two tasks: identifying the light color and deciding on an appropriate response for the VI. Because green and red lights have distinctive color properties, threshold segmentation was employed again, but with different constraints: a light is classified as green or red if its color properties in HSV color space fall within a certain range. The statistical references for the HSV ranges were calculated from 200 samples of lights under different illuminations. Based on the collected data, green covers the range from (H=70, S=40, V=150) to (H=90, S=255, V=255), and red covers the range from (H=0, S=80, V=150) to (H=6, S=255, V=255). Figure 4 shows examples of pedestrian signals after thresholding with the green and red HSV ranges.
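As a sketch of the color interpretation, the following OpenCV (C++) function applies the reported HSV ranges to the verified signal region. Picking the color with the larger matching pixel count is our assumption; the paper does not state the exact decision rule.

    #include <opencv2/opencv.hpp>

    enum class LightColor { Green, Red, Unknown };

    // Classify the light inside the verified signal box using the HSV ranges
    // reported in the text (green: H 70-90, S 40-255, V 150-255;
    // red: H 0-6, S 80-255, V 150-255).
    LightColor interpretLight(const cv::Mat& frameBgr, const cv::Rect& signal) {
        cv::Mat hsv;
        cv::cvtColor(frameBgr(signal), hsv, cv::COLOR_BGR2HSV);

        cv::Mat greenMask, redMask;
        cv::inRange(hsv, cv::Scalar(70, 40, 150), cv::Scalar(90, 255, 255), greenMask);
        cv::inRange(hsv, cv::Scalar(0, 80, 150), cv::Scalar(6, 255, 255), redMask);

        int green = cv::countNonZero(greenMask);
        int red = cv::countNonZero(redMask);
        if (green == 0 && red == 0)
            return LightColor::Unknown;  // light off (blinking phase) or occluded
        return green > red ? LightColor::Green : LightColor::Red;
    }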

Figure 4 Examples of pedestrian signals after thresholding with the green and red color ranges


As soon as the light color has been interpreted, analysis of the light changing pattern is performed as the final process of the system. For effectiveness, pedestrian crossing behavior is described by a finite state machine: any change of the light color causes a transition from one state to another, and the next state is decided from the present light color and the previous information. For example, a change from red to green prompts a waiting user to start crossing. For each state transition, the system provides the VI with a brief message relevant to that transition. If a change violates the possible transition behavior (e.g., a change from red to blinking), the system enters a safety state and tells the user to act accordingly.
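A minimal sketch of such a finite state machine is given below. The state set, transitions, and messages are illustrative assumptions consistent with the behavior described above, not the authors' exact design; LightColor is the enum from the Section III-E sketch.

    #include <string>

    enum class LightColor { Green, Red, Unknown };  // as in the Sec. III-E sketch
    enum class State { WaitRed, Crossing, Safety };

    // One transition of the crossing state machine. Any color change that is
    // not an expected transition falls through to the safety state.
    State step(State s, LightColor light, std::string& message) {
        message.clear();  // no announcement unless a transition occurs
        switch (s) {
        case State::WaitRed:
            if (light == LightColor::Red)
                return s;  // keep waiting
            if (light == LightColor::Green) {
                message = "Green light. You may start crossing.";
                return State::Crossing;
            }
            break;  // unexpected change (e.g., red to blinking) -> safety
        case State::Crossing:
            if (light == LightColor::Green)
                return s;  // keep crossing
            if (light == LightColor::Red) {
                message = "Red light. Stop and wait at a safe position.";
                return State::WaitRed;
            }
            break;
        case State::Safety:
            break;  // remain in the safety state until the signal is clear again
        }
        message = "Signal unclear. Please stop and wait.";
        return State::Safety;
    }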


IV. Results and Discussion

The pedestrian crossing aid device is currently under development. However, the detection algorithm was successfully implemented in C++ with the OpenCV library and combined with a simple version of the decision scheme to test its functionality. Under the assumption that detection begins at the crosswalk, the aid device detected pedestrian signals at real-time speed and was able to help a user cross the road safely. An evaluation shows 77.3% object detection accuracy, 96.2% color interpretation accuracy, and 74.4% overall accuracy. These numbers indicate that the proposed detection algorithm works in practice and that the aid device can be developed further to increase its reliability. As a next step, the crossing aid program should be made compatible with a mobile device. Furthermore, additional features such as depth detection and geographic information should be integrated to increase the user's safety while traveling.

V. Relevance in ICCHP Context

The crossing aid device aims to overcome navigation concerns of the VI, especially at crosswalks where accidents can easily occur. By employing computer vision techniques and an automated decision system, we aim to bring today's technology to bear on supporting the VI in everyday life.

VI. References

1. Engelberg, Alan L. Guides to the Evaluation of Permanent Impairment. American Medical Association, 1988. 277-304.
2. Strumillo, Pawel. "Electronic interfaces aiding the visually impaired in environmental access, mobility and navigation." Human System Interactions (HSI), 2010 3rd Conference on. IEEE, 2010. 17-24.
3. Korean Ministry of Health and Welfare. Statistics Korea. Sectoral indicators, 2013. Web. 26 Dec. 2013.
4. Bujacz, M., et al. "Remote mobility and navigation aid for the visually disabled." Proc. 7th Intl Conf. on Disability, Virtual Reality and Assoc. Technologies with ArtAbilitation, in PM Sharkey, P. Lopes-dos-Santos, PL Weiss & AL Brooks (Eds.). 2008. 263-70.
5. Korean Road Traffic Authority. Statistics of sound generating systems in South Korea, 2008. Web. 23 Dec. 2013.
6. Shoval, Shraga, Iwan Ulrich, and Johann Borenstein. "NavBelt and the Guide-Cane [obstacle-avoidance systems for the blind and visually impaired]." Robotics & Automation Magazine, IEEE 10.1 (2003): 9-20.
7. Ulrich, Iwan, and Johann Borenstein. "The GuideCane - applying mobile robot technologies to assist the visually impaired." Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on 31.2 (2001): 131-136.
8. Loomis, Jack M., Reginald D. Golledge, and Roberta L. Klatzky. "GPS-based navigation systems for the visually impaired." (2001).
9. Mayerhofer, Bernhard, Bettina Pressl, and Manfred Wieser. "ODILIA - A Mobility Concept for the Visually Impaired." Computers Helping People with Special Needs. Springer Berlin Heidelberg, 2008. 1109-1116.
10. Ran, Lisa, Sumi Helal, and Steve Moore. "Drishti: an integrated indoor/outdoor blind navigation system and service." Pervasive Computing and Communications, 2004. PerCom 2004. Proceedings of the Second IEEE Annual Conference on. IEEE, 2004.
11. Kulyukin, Vladimir, et al. "RFID in robot-assisted indoor navigation for the visually impaired." Intelligent Robots and Systems, 2004 (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on. Vol. 2. IEEE, 2004.
12. Chang, Tsung-Hsiang, et al. "iCane - A partner for the visually impaired." Embedded and Ubiquitous Computing - EUC 2005 Workshops. Springer Berlin Heidelberg, 2005.
13. Coughlan, James M., and Huiying Shen. "Crosswatch: a system for providing guidance to visually impaired travelers at traffic intersections." Journal of Assistive Technologies 7.2 (2013): 131-142.
14. Praveen, Gnana, and Roy P. Paily. "Blind navigation assistance for visually impaired based on local depth hypothesis from a single image." Procedia Engineering. Vol. 64. 2013. 351-360.
15. Shen, Huiying, et al. "A mobile phone system to find crosswalks for visually impaired pedestrians." Technology and Disability 20.3 (2008): 217-224.
16. Peng, En, et al. "A smartphone-based obstacle sensor for the visually impaired." Ubiquitous Intelligence and Computing. Springer Berlin Heidelberg, 2010. 590-604.
17. Wang, Liming, et al. "Object detection combining recognition and segmentation." Computer Vision - ACCV 2007. Springer Berlin Heidelberg, 2007. 189-199.
18. Fidler, Sanja, et al. "Bottom-up segmentation for top-down detection." Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013.
19. Oliva, Aude, et al. "Top-down control of visual attention in object detection." Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on. Vol. 1. IEEE, 2003.
20. Carreira, Joao, et al. "Semantic segmentation with second-order pooling." Computer Vision - ECCV 2012. Springer Berlin Heidelberg, 2012. 430-443.
21. Navalpakkam, Vidhya, and Laurent Itti. "An integrated model of top-down and bottom-up attention for optimizing detection speed." Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. Vol. 2. IEEE, 2006.
22. Glasbey, Chris A., and Graham W. Horgan. Image Analysis for the Biological Sciences. New York: John Wiley & Sons, 1995.
23. Ahonen, Timo, Abdenour Hadid, and Matti Pietikainen. "Face description with local binary patterns: Application to face recognition." Pattern Analysis and Machine Intelligence, IEEE Transactions on 28.12 (2006): 2037-2041.

Song Han-Jun is an M.S. student in the Unmanned System Research Group, Department of Aerospace Engineering, at Korea Advanced Institute of Science and Technology (KAIST). He received his B.S. in Mechanical Engineering and Electrical Engineering as a double major from KAIST in 2014. His research interests include robotics and autonomous systems.

Ponghiran Wachirawit is currently a B.S. student in Electrical Engineering at KAIST. His research interests focus on computer architecture, SoC design, and VLSI for multimedia applications.
