Standardising Electronic Travel Aid Interaction for Visually Impaired People

Simon HARPER

A thesis submitted to The University of Manchester Institute of Science and Technology for the degree of Master of Philosophy.

Supervisor: Dr P N Green

Department of Computation

1998

Abstract

Orientation, Navigation and Mobility are perhaps three of the most important aspects of ordinary human life. Most of the information that aids navigation, and most of the cues for active mobility, are passed to humans via the most complex sensory system, the vision system. This visual information forms the basis for most navigational tasks, and so an individual with impaired vision is at a disadvantage because appropriate information about the environment is not available.

This project is concerned with addressing this inequality by proposing and modelling a system of mobility extensions integrated into a Personal Digital Assistant, to provide appropriate Orientation, Navigation and Mobility cues, such that current disparate technologies can be integrated into one useful system.

Declaration

No portion of the work referred to in this thesis has been submitted in support of an application for another degree or qualification of this or any other university or other institution of learning.

Acknowledgements

I would like to thank:

• Peter Green for being my supervisor, for the expert amendments to my thesis, for the suggestions on how to proceed, and for his support (I now know not to use ‘;’ unless explicitly needed).

• Gareth Evans and Peter Green for giving me the chance to write this thesis by employing me on the MOOSE project with the Computer Systems Design Group here at UMIST, and for their help on Software Engineering and Systems Modelling matters.

• Paul Blenkhorn and Gareth Evans for enabling me to develop an interest in disability and visual impairment with the Technology for Disabled People Unit, again at UMIST, and for their help on Disability matters.

• William Love for ‘taking me to task’ when my models weren’t right.

• My fellow Research Assistants for their help and support.

On a more personal note I’d like to thank:

• Steve Pettitt for checking it all, being the sounding board, for his general support, and for constantly saying “chop this out, you’re talking waffle!”.

• Chris Smith for buying the beer, and helping me drink it.

• Sis Corna for making the tea, and putting me in my basket.

• Paul Blagborough for the walking, and helping me drink Chris’s beer.

• Andy, Marie, and Ben for stress relief.

• Kristel De Cat for chips, mayonnaise, chocolate, and beer (all Belgian naturally!).

• The staff of ‘This and That’ for lots of curry.

• My family, friends, friends who are like family, and everyone else not explicitly mentioned but who helped although they may not have realised it.

Preface

I came to this project taking a rather different route than the normal 18-year-old student. After seven years working for British Coal as a Computer Liaison Officer within the Sales and Despatch arm of their Marketing Department, I decided to formalise my education by taking a series of Open University and Nottingham Polytechnic courses to enable me to attend UMIST full-time. I was accepted into the second year of the Software Engineering course in 1993. After completing my second year successfully I transferred to the Master of Engineering course in Software Engineering, and gained a 2.1 MEng (Hons) Software Engineering in 1996. Each summer at UMIST I was employed by the Technology for Disabled People Unit, and inevitably this field became the focus of my academic and employment aspirations.

After 1996 I was employed full-time by the Technology for Disabled People Unit, and in 1997 I transferred to the MOOSE project to continue similar work and this thesis. Although carried out at the same time, this thesis is completely separate from my research work on the MOOSE project in the Computer Systems Design Group.

My immediate aim is to continue on to write a PhD in Rehabilitation engineering (possibly by continuing this project), and then to be employed at an academic institution concerned with research into rehabilitation issues.

Contents

1 Introduction
  1.1 The Goal of the Project
  1.2 The Context of the Problem
  1.3 Rehabilitation Overview
  1.4 Modelling a Mobility System
  1.5 Model Based Object Oriented System Engineering Overview
  1.6 How to Proceed
  1.7 A Final Note on the Language Used
2 Visual Impairment
  2.1 Introduction
  2.2 What is Visual Impairment?
  2.3 Visual Impairment
  2.4 How Many People are Visually Impaired?
  2.5 How Can This Project Help
3 Travel
  3.1 Introduction
  3.2 Travel in the Context of Visual Impairment
  3.3 The Travel Task
  3.4 Conclusion
4 Electronic Travel Aids
  4.1 Introduction
  4.2 Historical Overview of Travel Aids
  4.3 Electronic Travel Aids
    4.3.1 Route Planning
    4.3.2 Obstacle Detection and Avoidance
    4.3.3 Orientation and Waypoints
    4.3.4 Information Points
    4.3.5 In-Route Guidance
  4.4 Conclusions
5 The Model Based Object Oriented Systems Engineering (MOOSE) Method
  5.1 Introduction
  5.2 The MOOSE Method
  5.3 The MOOSE Lifecycle
6 System Definition
  6.1 Introduction
  6.2 System Overview
  6.3 System Refinement
  6.4 Describing a Journey
    6.4.1 The Journey
  6.5 Moving from a User's View to a Modelling View of the System
    6.5.1 Gateway POLI
    6.5.2 Standalone POLI
    6.5.3 Standalone POLI with Additional Information
  6.6 Technical Overview
7 A Model for a Unified Personal Orientation and Localisation Interface for Visually Impaired People
  7.1 Introduction
  7.2 Model Description
  7.3 Requirements Analysis
    7.3.1 Functional Requirements
    7.3.2 Non-functional Requirements
    7.3.3 Design Decisions
    7.3.4 Design Objectives
  7.4 A Final System Specification
    7.4.1 Overview
    7.4.2 Summary
8 Testing the Model
  8.1 Introduction
  8.2 Using a MOOSE Executable Model for Testing
  8.3 General Testing
    8.3.1 Tests to validate independent stimulus-response paths through the model
    8.3.2 Tests to investigate context sensitive responses
    8.3.3 Tests to investigate the effect of corrupt or erroneous inputs
    8.3.4 Tests to explore potentially anomalous timing dependent behaviour
    8.3.5 Tests to test the correct initialisation of a model
  8.4 Simulation Using the MOOSE Animator
  8.5 Testing the Model by Simulating a Journey
  8.6 Results of Model Execution and Simulation
  8.7 Summary
9 Technical Issues and Model Transformation
  9.1 Introduction
  9.2 Technical Issues
    9.2.1 System Communications
    9.2.2 Required Data
    9.2.3 Cost and Specification
    9.2.4 Personal Digital Assistants
    9.2.5 Problems with Spatial Visual Representation of Information
    9.2.6 Alternatives to Visual Representation
    9.2.7 Problems with Speech in Embedded Systems
    9.2.8 Possible Solutions
    9.2.9 Commercial Issues
    9.2.10 Product Design Issues
    9.2.11 Sound Localisation
  9.3 Model Transformation
    9.3.1 Transformation Overview
    9.3.2 Transformation Issues
    9.3.3 External Interface
    9.3.4 Model Elements in an Implemented System
    9.3.5 Device Identification
  9.4 Summary
10 Conclusions and Further Work
  10.1 Conclusions
  10.2 Further Work
    10.2.1 Moving to a Full Implementation
    10.2.2 User Feedback/Testing
    10.2.3 Enhancements
11 Appendices
  11.1 Appendix 1 – MOOSE Notation
    11.1.1 Introduction
    11.1.2 The Object Interaction Diagram Hierarchy
    11.1.3 Extending the Mechanisms of Object Orientation
    11.1.4 Objects - Notation
    11.1.5 The Model Dictionary
12 References
13 Bibliography

List of Illustrations

Figure 1: Brambring's General Locomotion Problems of Blind Persons
Figure 2: Breaking Down the Travel Task
Figure 3: Stages and Sequence of a Journey
Figure 4: The MOOSE Method
Figure 5: Schematic of Use and Interactions
Figure 6: User Device
Figure 7: Mapping from Travel Tasks to Model Elements
Figure 8: Gateway POLI User View
Figure 9: Gateway POLI Intermediate Step
Figure 10: Gateway POLI Model View
Figure 11: Standalone POLI User View
Figure 12: Standalone POLI Intermediate Step
Figure 13: Standalone POLI Model View
Figure 14: Standalone POLI Device (with Additional Information) User View
Figure 15: Standalone POLI Device (with Additional Information) Intermediate Step
Figure 16: Standalone POLI Device (with Additional Information) Model View
Figure 17: External POLI View
Figure 18: Interaction Objects
Figure 19: Standalone POLI Interface
Figure 20: Gateway POLI Interface
Figure 21: User POLI Device
Figure 22: Additional Information
Figure 23: A Standalone POLI Device Broadcast Transmission
Figure 24: Information Moving to the User's Portable Device
Figure 25: Information Arriving in the User's Portable Device
Figure 26: Passing Received Information
Figure 27: Passing Contact Information
Figure 28: Collecting Returned Information
Figure 29: Progress of Returned Information
Figure 30: Returning Information to User's Portable Device
Figure 31: Storing Returned Information
Figure 32: Collecting Information
Figure 33: Getting Information Ready for Output
Figure 34: Outputting Information
Figure 35: Output Information from POLI
Figure 36: Output Information to User
Figure 37: Schematic of User Device

List of Tables

Table 1: Visual Impairment Breakdown
Table 2: The Travel Task Model
Table 3: Problems with Speech in Embedded Systems
Table 4: Solutions to Speech Problems
Table 5: Object Type Definitions

List of Code Fragments

Code Fragment 1: Device Information
Code Fragment 2: Communications Structure
Code Fragment 3: Use Remote Data

List of Examples

Example 1: Possible Device Information
Example 2: Returned Packet
Example 3: Remote Data Request

1 Introduction

1.1 The Goal of the Project

This project is concerned with proposing a research model to aid in proving the following assertion:

“A universal interface to aid visually impaired users to interact with a number of different mobility systems is: needed, possible, logical, and novel.”

This thesis will consider methods of providing appropriate means of facilitating independent mobility (navigation and orientation) for visually impaired people. This will be accomplished by proposing a system to integrate current and future mobility devices and aids together such that they can be seamlessly used by a visually impaired traveller.

The aim of this project is to study the interaction, and methods of enhancing the interaction, between visually impaired people and real world environments. Moreover, it aims to support this study by examining a useful system in which Personal Digital Assistants (PDAs) incorporate mobility operations and are used as part of a mobility system.

Therefore, the aims of the project are:

• To examine issues in supporting the mobility of visually impaired people using computer-based technology. To do this, the issues of how a journey is accomplished by both sighted and visually impaired people must be considered.

• To model a series of mobility extensions, in both hardware and software (as required), to a Personal Digital Assistant. These extensions will pay specific attention to enabling visually impaired people to navigate an environment more easily. Moreover, the extensions will be general and will be proposed in such a way as to accommodate different platforms (if possible). These platforms may be based on conventional operating systems such as Windows 95™, Windows CE™, or the Psion Series 5.

This approach will have the benefits of:

• Enabling a useful assistive device for visually impaired people to be modelled and evaluated.

• Facilitating the study of methods of navigating environments by visually impaired people.

• Providing a strong platform of research to allow future embedded assistive device development.

In order to achieve the goals outlined in this section, an overview of the context of the problem may be helpful.

1.2 The Context of the Problem

Basic independent mobility is vital throughout all sections of society. Mobility and the ability to navigate complex environments enable humans to accomplish many different physical and mental goals. As will be shown, vision is important to independent mobility, and in all probability this is a major reason why many people see visual impairment as a state to be pitied and feared.

Individual orientation and navigation between points in the urban environment is often taken for granted by a large percentage of society. Environments are designed with an abstract, stylised and average view of an individual or group in mind, which confines the evolution of the environment to assisting that ‘general case’ individual. While the ‘general case’ person is assisted in their navigation of, and orientation within, an environment, departures from this ‘general case’ are not assisted and indeed may be hindered. For example, signs to aid navigation around an external area are erected to allow sighted people to identify their location within that area. However, because blind or visually impaired people are often not considered in the general case, nothing appropriate is provided for them.

Individuals within society who are not covered by this ‘general case’ need a standard provision of appropriate features, mirroring those used by individuals covered by the general case. The needs of all individuals are different and should be taken into account when any design is undertaken, although addressing these needs may be constrained by cost.

All individuals within society are required to interact with the state in similar ways. To exclude sections of society because they do not fit a general case, and because their individuality has been disregarded, suggests a lack of understanding. While these people are expected to support society (financially as well as psychologically), they are excluded to a greater or lesser extent from reaping the benefits of being part of it, and this is inherently unacceptable.

As mobility is a key factor in all aspects of life it is the focus of this project to propose a system to allow blind or visually impaired people to orient themselves in, and navigate through, complex urban environments.

Now that the goals and context of the project have been outlined, a brief note on rehabilitation and disability will help to clarify some of the problem areas addressed by the project.

1.3 Rehabilitation Overview

There have always been problems associated with the presentation of visual information to visually impaired people, and with a visually impaired person’s subsequent interaction with this information and the environment. Many solutions have been proposed, and in some cases implemented, to make this interaction possible. The problem was magnified with the advent of complex internal and external urban environments and the requirement for greater interaction with visual information. Many devices exist to assist visually impaired people in navigating environments. These systems will be discussed later, but they mainly rely on complex and very specific technology which is both required by the system and cumbersome to carry.

A number of research institutes and software companies are working on solutions to the problems of navigational information for visually impaired people. These initiatives focus on using speech and sounds to audibly map an area, although tactile and Braille displays are also being developed. The size and speed of new computer systems allow a significant proportion of system resources to be directed into assisting visually impaired people’s interaction with the environment. However, small devices like hand-held organisers and other embedded devices have been neglected, as it is perceived that they do not have the resources to spare on system-wide support for visually impaired people. This means that useful devices, sometimes vital devices, with a high degree of portability and a small amount (by today’s standards) of computational power, such as personal digital assistants (PDAs), are overlooked in the context of useful mobile assistive devices.

1.4 Modelling a Mobility System

Many systems are complex to develop, and in this respect a universal interface to aid visually impaired users to interact with a number of different mobility systems is no different.

However, as previously stated, this project is concerned with proposing an interface system, and then using modelling, simulation and design techniques to support that proposition. This means that the system will not be taken to full development (due to cost and time constraints), and so it will be necessary to model it by considering the logical functionality and abstracting away the implementation-specific details which can often limit a good design.

In light of the previous sections, the objective is then to propose a system and supporting infrastructure that can assist visually impaired people in making journeys. Although the system will not be fully developed, a reasonably detailed logical design will be produced, which will then be rendered executable by techniques discussed in Chapter 5, enabling the behaviour of the system to be validated. In order to develop a design model of the proposed system that can be converted into an executable form, the Model Based Object Oriented Systems Engineering (MOOSE) method for computer system design has been used.

1.5 Model Based Object Oriented System Engineering Overview

The MOOSE method is an approach to the development of embedded systems based on the Object Oriented Paradigm. However, in contrast to most object oriented methods, MOOSE can deal effectively with the hardware and software elements [MORRa95][MORRb95].

Much attention has been paid to the development of methods for engineering the hardware and software components that make up computer systems [YOUR89][BOOCH94][HAREL87][LOU95]. Methods that consider the concurrent development of both the software and the hardware, in order to provide an integrated approach, have recently been proposed [MORRc95]. This co-design methodology is mainly, but not exclusively, aimed at embedded system development [MORR93][MORRe95][GREEN95]. MOOSE has a number of especially attractive characteristics that make it a good choice for this project. It deals with complex hardware and software systems that are not the focus of other methods [WARD85][HATLE87], and clearly the system in this thesis will contain substantial amounts of both. However, the lifecycle of the MOOSE method is particularly appropriate for the project since it involves the development of an abstract object model (known as a behavioural model) of the system under development. This model captures the required functionality of the system and can then be rendered executable to enable the system’s behaviour to be validated against the functional requirements. Clearly this is very helpful in the context of this project [UMIST97][MORR96].
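To illustrate the general idea of an executable behavioural model (only the idea, not the MOOSE notation or tooling itself), the following minimal Python sketch shows an abstract object whose stimulus-response behaviour can be exercised directly, so that its responses can be checked against a functional requirement before any implementation detail is fixed. All names here are hypothetical.

```python
# Minimal sketch of an executable behavioural model (not MOOSE notation):
# an abstract object reacts to stimuli with responses, so required behaviour
# can be validated against functional requirements before any hardware or
# software partitioning is decided. All names are hypothetical.

class BehaviouralModel:
    """Toy stimulus-response model of a travel-information object."""

    def __init__(self):
        self.stored_information = []

    def stimulus(self, event, payload=None):
        """Dispatch an incoming stimulus and return the model's response."""
        if event == "broadcast_received":
            self.stored_information.append(payload)
            return ("store", payload)
        if event == "user_requests_output":
            if self.stored_information:
                return ("output", self.stored_information[-1])
            return ("output", "no information available")
        return ("ignore", None)


if __name__ == "__main__":
    model = BehaviouralModel()
    # Validate one stimulus-response path against a functional requirement:
    # information received from the environment must be stored and then
    # reported back to the user on request.
    assert model.stimulus("broadcast_received", "bus stop ahead") == ("store", "bus stop ahead")
    assert model.stimulus("user_requests_output") == ("output", "bus stop ahead")
    print("behavioural model responses satisfy the requirement")
```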

The applications envisaged for MOOSE include the expanding areas of consumer products that require computer-based control, for example: telephony; entertainment systems; personal digital assistants etc. MOOSE tries to achieve fitness for purpose through good design practice involving relevant consultation, review and simulation [MORRd95][MORRf95], and therefore is an ideal methodology for this thesis.

1.6 How to Proceed

Because of the inter-disciplinary nature of this thesis a number of background areas need to be covered. As such the issues of visual impairment, mobility, and mobility aids will be presented to provide an insight into the complex nature of the problem.

Next, a discussion of the development details is presented, including a guide to the ‘Model Based Object Oriented Systems Engineering’ approach to systems development. These details range from a proposition for a system through to a discussion of the requirements for just such a system. The thesis continues with a discussion of the actual system development and simulation, and the problems encountered will then be presented.

Finally, the thesis ends by discussing any conclusions that can be drawn at this time, along with presenting accomplishments and failings of the project and a consideration of future enhancements to the system.

1.7 A Final Note on the Language Used

At this point, before continuing further, it is necessary to highlight aspects of the changing nature of terminology within the rehabilitation-engineering field. It is not intended to present any new material here; however, this thesis will conform where appropriate to the style proposed in the initial preface of Alistair Edwards’ book, ‘Extra Ordinary Human Computer Interaction’ [EDWa95].

2 Visual Impairment

2.1 Introduction

Many people have some knowledge of visual impairment and blindness; for most people just the phrase ‘the blind’, ‘blind person’ or ‘visually impaired’ generates a number of different emotions based mainly on an individual’s social development. While this may be overlooked for the general population, researchers and technologists involved in this field must not have any false assumptions about visual impairment. Visual impairment is a complex individual state which still enables people to be productive individuals in all contexts even if a greater number of obstacles are placed in front of them. In this chapter it is intended to give an overview of visual impairment, clarify some terms, and present some information on the numbers of people with visual impairments. The objective is to discuss visual impairment as a background to the technical aspects of the project.

It may be useful to suggest some definitions of what is meant by ‘disabled people’ to give a general overview of visual impairment. According to the United Nations Declaration on the Rights of Disabled Persons:

“The term ‘disabled person’ means any person unable to ensure by himself or herself, wholly or partly, the necessities of a normal individual and/or social life, as a result of a deficiency, either congenital or not, in his or her physical or mental capabilities.” (UN, 1981) [EDWa95]


The Americans with Disabilities Act (ADA) is more specific. A Disability is:

“A physical or mental impairment which substantially limits one or more major life activities.” [EDWa95]

It can be seen that even the ADA definition could, in effect, refer to most of us at some point in our lives. As people become older, they may lose the use of some faculties. Alternatively, it could be argued that moving to a country where a person does not speak the native language could be classed as a disability, as it limits a “major life activity”. How many people wear glasses? Without glasses, major life activities would surely be more limited. However the definitions are applied, they are still just that: fixed, general case definitions. It is suggested, therefore, that what really should be addressed is the need to provide methods to allow major life activities to be as unlimited as possible, regardless of disability.

Visually impaired people are handicapped in their efforts to find employment[KIR95], to interact more fully with society at large[BEA95], to move freely around areas without assistance[BEN95], and in countless other ways. They are disabled by their visual impairment but are handicapped by some individuals within society and indeed, by society itself[BAR89][RNIB92].

Basic independent mobility is vital throughout all sections of society. Mobility and the ability to navigate complex environments enable humans to accomplish many different physical and mental goals. To enable easier interaction between blind people and the environment would therefore remove a limit on a large part of a major life activity [BEN95]. As additional benefits, employment and personal contact may also be enhanced.

2.2 What is Visual Impairment?

In general, very few blind people can see nothing at all. Some can see the periphery only, some may see a blur, or a patchwork. However a person is affected by visual impairment, it is both physically and mentally a very individual state.

Some people do not like to be called blind or partially sighted. These individuals feel the terms blind or partially sighted are misleading and in some cases derogatory. For this reason, they prefer to be called visually impaired or people with a visual impairment, suggesting that their sight does not work, or does not work as well as it should.

There is, however, a distinction to be made between blindness and partial sight. Blindness means a high degree of vision loss. Although blind people will have much less sight than normal, it is very rare for there to be no vision at all. The World Health Organisation defines ‘profound blindness’ as the inability to distinguish fingers at a distance of three metres or less [RNIBa96].

There are a number of variations on this in the real world: it may be possible for an individual to see these fingers from the side but not when observing them directly in front, or vice versa. ‘Partial sight’ is a less severe loss of vision, and consequently partially sighted people can see more than blind people but less than fully sighted people [RNIBa96]. The World Health Organisation defines ‘partial sight’ or ‘severe low vision’ as the inability to distinguish fingers at a distance of six metres or less.

2.3 Visual Impairment

‘Visual impairment’, ‘blindness’, ‘partial sight’ - these are all descriptions that have differing meanings to different individuals, most of these meanings being incorrect or exaggerated, and many have connotations of pity. The people behind all these ‘tags’ are often not seen as individuals but distorted by the conflicting emotions generated by the ‘label’. While this may be understandable, in a modern society it is not acceptable[MAU92].


There is a distinct difference between a disability and a handicap, although most people would not necessarily see this. The term visual disability is used to describe the fact that an individual is unable to see very clearly, whereas many people consider a visual handicap to be a state imposed upon an individual by society [RNIBa96]. The term ‘handicap’ is used to emphasise that this state is imposed on an individual by other people or by society, rather than by a person’s lack of sight.

There are many causes of visual impairment; people are either born with a visual impairment or become visually impaired, perhaps due to a disease or as the result of an accident. The majority of people, in the UK at least, lose their sight in the later years of life due to macular degeneration. This form of visual impairment normally affects the central vision, so an individual must use peripheral vision to see complex visual information [RNIBa96]. Consequently, reading and all detailed work become difficult, and so magnification or other types of aid for visual input must be used. Other conditions like cataracts, retinal detachment or diabetic retinopathy also affect the clarity of vision. Retinitis pigmentosa is another visually impairing disease, and leads to a condition known as tunnel vision, in which an individual can only see what is directly in front of them. This obviously makes mobility difficult, as it becomes far harder to spot potential hazards in the periphery of vision.

Being visually impaired and becoming visually impaired are by definition very individual states. One person’s experience of visual impairment will not be the same as another’s, and so vast generalisations about these conditions are unhelpful in the context of visual impairment as an experience[SACKS95]. Each person, as in most life activity, must find appropriate solutions for themselves, and this project is intended to provide the choice of another possibly appropriate solution for a number of individuals.

Many myths exist with regard to visually impaired people. Visually impaired people do not have any special senses or out-of-the-ordinary sensory powers. As with everything, the more a sense is used the more adept the user becomes. Blind people do not feel faces to establish a tactile pattern of an individual. Seventy-seven percent of blind people retain enough sight to recognise people at a close distance. Very few visually impaired people read Braille: in fact, only around thirteen thousand people in the UK do so. There are obviously many barriers to having a guide dog, so most visually impaired people use a cane; in fact, only four thousand people in the UK have a guide dog.

Visually impaired people can perform everyday ‘normal’ jobs just like anyone else in society, although four out of five visually impaired people of working age are unemployed[RNIBb96]. In this age of new technology, there now exists the scope for visually impaired people to participate more fully in everyday employment and activity. Blind and partially sighted people are exactly like everyone else, with similar aspirations and views, or more to the point, a variety of differing views and aspirations that makes up the community at large. These aspirations and views may currently be coloured by the lack of enabling provision within society[BAR89].

2.4 How Many People are Visually Impaired?

Visual impairment is one of the most common causes of disability worldwide. The figures in ‘Table 1: Visual Impairment Breakdown’ show the breakdown of registered blind and partially sighted people within the UK, although no demographic information is presented. There are a quarter of a million people in the UK who are registered as visually impaired. However, the UK actually has nearly one million people entitled to register as visually impaired, and 1.7 million with a seeing difficulty. This represents over three percent of the UK population. Worldwide there are around forty million visually impaired people, and it is estimated that there could possibly be many more [RNIBb96].

Table 1: Visual Impairment Breakdown

Country             Date        Blind      Partially Sighted   Both
England             31/03/91    126 828    93 777
Scotland            31/03/92    19 506     6 210
Wales               31/03/90    8 511      6 206
Northern Ireland    31/03/90    2 497      869
United Kingdom                  166 709    107 062             273 771


In Britain, more than twenty thousand children grow up with a visual impairment, and there are two hundred vision-related accidents per day in the UK alone. Although these may seem like high figures, only eight percent of visually impaired people are born with any impairment. By far the largest number of visually impaired people fall into the senior citizen category; in fact, sixty-six percent of people with impaired vision are over seventy-five. This means that more than one person in five over the age of seventy-five has some form of visual impairment.

2.5 How Can This Project Help

To add mobility extensions to a Personal Digital Assistant (PDA) such that the user interface is universal for all mobility aids using the system, regardless of the third-party development using it, seems an appropriate way of addressing the problem of mobility in a visually impaired context. Extending a PDA such that it may provide appropriate mobility information for a visually impaired user must take into account the individual using it. It must also provide some form of mobility information suitable for people with limited vision and, in addition, aid people with tunnel vision in identifying usable mobility information more easily. These extensions must be easy to use, as older people (lacking experience with computer technology) may have need of them, and the information should be provided frequently and in a simple format [BEN95]. The system must fit in with other non-electronic mobility aids such as the cane or dog, as such devices are still very useful for object detection and avoidance and are preferred by many users [BEN95][SACKS95]. The output from these extensions must also not be of the Braille type, or of any other representation type not met in the everyday life of a sighted person (for instance Moon, another Braille-like method of reading), as a very limited number of visually impaired people can read them.

It is obvious that such extensions cannot be implemented without reference to the external environment, and so applicable options must be evaluated. It is also obvious that this is a vast and complex problem, and thus an across-the-board solution will not be found in this project. However, a start will be made.
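To make the idea of a single, universal interface shared by many mobility aids more concrete, the sketch below shows how differently sourced aids could present their information through one common interface on a PDA, so that the user always interacts with the same simple output style. This is an illustration of the principle only, with entirely hypothetical names; it is not the design developed later in this thesis.

```python
# Illustrative sketch only: a hypothetical common interface that different
# third-party mobility aids could implement, so a PDA can present all of
# their output to the user in one consistent, simple form. None of these
# class or method names come from the thesis.

from abc import ABC, abstractmethod


class MobilityAid(ABC):
    """Anything that can contribute travel information to the PDA."""

    @abstractmethod
    def current_message(self) -> str:
        """Return a short, simple piece of travel information."""


class ObstacleDetector(MobilityAid):
    def current_message(self) -> str:
        return "obstacle two steps ahead on the left"


class WaypointBeacon(MobilityAid):
    def current_message(self) -> str:
        return "bus stop for route 42, next bus in five minutes"


def present_to_user(aids):
    # A real device would use speech or another non-visual output;
    # printing stands in for that here.
    for aid in aids:
        print(aid.current_message())


present_to_user([ObstacleDetector(), WaypointBeacon()])
```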

3 Travel

3.1 Introduction

A discussion of travel is presented in this chapter so that a basis for developing a system proposal can be formed. It is proposed that travel should be thought of not as a single activity but as a sequence of different activities. This means there must be a number of different technologies (both simple and complex) implemented to support effective travelling by visually impaired people. These are discussed in Chapter 4 (Electronic Travel Aids), based on the framework established in this chapter. The objective of this chapter is to analyse and suggest a model for how individuals travel, to review travel specifically in the context of people with a visual impairment, and to make suggestions about travel in the context of the suggested ‘travel model’.

Most animals survive through movement in some form or another. A degree of mobility is essential for most species, and this is no different for humans. Much human activity is accomplished through the ability to travel from one point to another. Travel, then, is essential not because of the human capacity for mobility, but because of the enabling aspects implicit in this mobility. Everyday life tasks usually require some degree of mobility; these may be as simple as moving from one room in a house to another, physical exercise, or shopping. However, people must be independently mobile to achieve a personal sense of independence and self-worth (teenagers rush to buy their first cars, and children board a bus unaccompanied for the first time, precisely to achieve this independence). Not only does mobility affect an individual’s perception of self, it also enables the vast majority of people to earn a living by enabling them to travel to work or to perform a work-related activity. Personal independent mobility might also be said to be very important in maintaining an individual’s quality of life.

Travel can be thought of as the whole experience of moving from one place to another, and in this context a successful journey is one in which the desired location is easily reached with minimal or no other external human (as opposed to, say, a guide dog) involvement. Conventionally, travel can be separated into two aspects, those of (1) mobility and (2) orientation. Mobility is thought of as movement within an environment, and orientation is concerned with knowing about the environment in a useful way.

Orientation can be thought of as knowledge of the basic spatial relationships between objects within the environment. It is used as a term to suggest a comprehension of a travel environment or objects that relate to travel within the environment. How a person is oriented for travel is crucial to successful travelling. Information about position, direction, desired location, route, route planning etc. are all bound up with the concept of orientation.

Mobility, on the other hand, suggests an ability to move within the local environment. Mobility is the ability to move, and as such knowledge of immediate objects and obstacles, of the formation of the ground (holes, stairs, flooring etc.), and of dangers both moving and stationary is required for a successful travel experience.

Most information provided to aid the independent traveller is in the form of visual cues. These cues may range from graphical signs, to coloured lights and road markings, to printed public transport information. Whatever type of information is presented, it is normally conveyed using only visual means. Presenting information in this way does not take into account the around two percent of the UK population who have a visual impairment. Therefore, the visually impaired traveller has a number of additional, society-imposed handicaps to overcome before and during a successful journey.

Environments need to be usefully described regardless of the physical senses that are used to facilitate a journey, and this is especially true in complex urban environments and in the provision of public transport facilities. The objective is to allow individuals to quickly and accurately perceive an environment such that successful journeys can be made.

3.2 Travel in the Context of Visual Impairment

Visually impaired people undertake a journey in a slightly different way, and using a number of different cues, compared with sighted people. Because the travel task is second nature to most sighted people, and is learnt implicitly from an early age, its actual mechanics are often not explicitly considered. When studying travel in the context of a visually impaired person, however, knowledge of how visually impaired people actually travel (i.e. without instruction) is important to the understanding of supplementary travel aids.

Visually impaired people have no preview of upcoming objects or obstacles, and therefore the use of some type of preview device is important [JANS84]. Major concerns when travelling unassisted are down-steps, kerbs, stairs and the like. Hedges, walls and other obstructions are not normally thought of, but in the context of visually impaired travel the issue of ‘inside track’ obstacles is important and needs to be recognised. Because of this lack of a preview, visually impaired people sometimes make bodily contact with the environment. Consequently the stride length, and therefore the walking speed, of a visually impaired person is less than that of a sighted person, as is the continuity of progress [HEY83]. Travel for visually impaired people can be taxing, and experiments often show a rise in heart rate with any travel task (although this is less for a familiar route [BRAM84]). This may also explain why visually impaired people orient themselves to a waypoint about every 40 metres, as opposed to every 100 metres for sighted people [BRAM84].

Although visually impaired people do not exhibit ‘super-human’ powers of sensory perception using the other senses, studies do show that an increased use of mental maps is present. The use of hearing is likewise increased and movement is centred on getting to a point and not moving along an edge[JANS84]. Due to the nature of travelling without vision, less information on the environment and more relating to the visually impaired person is used, and this is known as an egocentric view of the environment[BRAM84].

Visually impaired persons also use more temporal and egocentric terminology, and less spatial and environmental terminology, in defining points [BRAM84], and make explicit statements about distance more often. Body rotation is also used to describe parts of a journey, and route descriptions are more complex when given by a blind person. The route is broken into a greater and more complex number of stages than when sighted people describe it, confirming the importance of a large number of fixed points to a visually impaired traveller [BRAM84]. Obstacle information is also more specific and present in greater detail when a route is described by a visually impaired person. Moreover, visually impaired people also use simple information more frequently than complex information [BEN95].

The lack of vision, even with a primary assistive mobility device, does limit the travel experience. In studies, many visually impaired people relate that they would normally only travel independently in man-made urban environments (with regular features) and not in countryside environments. In addition to this preference for urban environments, many also stated that they would normally only travel unassisted in areas that were familiar[JANS84][BRAM84].

Although visual impairment is a blanket phrase covering many kinds of sight impairment, it does not make a distinction between congenital impairment and adventitious (that is, acquired) impairment. A study of individuals visually impaired from birth (congenitally visually impaired) highlights the fact that some differences exist between the two types. Many congenitally visually impaired people find it difficult to track their position against spatial information, although there is no significant loss of mobility. The study does, however, suggest that adventitiously visually impaired people, who had previous visual experience, are better at decoding spatial information. This has obvious implications for the usefulness of pre-planning devices, such as tactile maps, for congenitally visually impaired people [DODD82].

It seems, therefore, that any device that purports to be a travel aid for visually impaired people must by definition address one or more of the issues of travelling as a visually impaired person, as previously discussed.

4 An ‘acquired’ impairment


3.3 The Travel Task

The actual task of travelling is often overlooked, as has been previously stated. Because it is implicitly learnt from an early age, many people overlook the importance of breaking it down into manageable portions to represent how people travel. Currently the travel task is thought of as being grouped into two areas, these being mobility and orientation. It is the contention of this chapter that the travel task is far more complex than this, and, like Brambring[BRAM84], it is considered that a series of interrelated tasks is performed. The view proposed here differs somewhat from that of Brambring, who considers travel as being split into a tree of differing tasks (see Figure 1), in that it is thought that the travel task can best be likened to a ‘flow of travel’ (Table 2: The Travel Task Model, page 20). In addition, it is suggested that devices to support travel by visually impaired people should be grouped according to which part of the travel task they aid (see Chapter 4). This will help to show which areas are currently neglected in the provision of primary and secondary electronic travel aids.

[Figure 1: Brambring's general locomotion problems of blind persons. The original figure shows a tree whose labels are: Locomotion Problems of the Blind; Perception of Objects; Detection of Obstacles; Process of Orientation; Identification of Landmarks; Spatial Orientation; Geographic Orientation.]

Most people are able to describe a frequently travelled simple route, but it is something not normally done, as a sighted individual typically performs the journey without thinking explicitly about the actual route. However, if a simple journey is examined then a number of similar parts can be distinguished. These parts are not specific to the journey but to the process of performing the journey, and it is these that are of interest. To clarify this concept of a ‘flow of travel’, a short journey will now be described (from the author’s office to the train station and onto a train). This decomposition is illustrated in Figure 2. The sequence can be broken down into a series of sub-tasks that represent this ‘flow of travel’.

Key to Description Breakdown (print style denotes travel activity):
• Object detection and avoidance
• Waypoint/Orientation point
• Complex Information given or processed
• Direction Information
• Distance Information

The starting point and destination are defined implicitly, and any pre-planning is implicit as it is a journey made many times. “While all the time making sure I don’t walk into anything or into anybody. Walk from my desk to the door of my office, open the door and continue a short distance past the lifts until turning right and proceeding down a long corridor. At the point where I reach a set of double doors I continue through them and down a flight of stairs, turn 180 degrees and down another flight and then turn 180 degrees and then down another flight. I am now at ground level and exit the building by walking straight out of the single door. On the outside I turn right and make for the pedestrian crossing turn left to cross it, and turn right again to continue about 500m to the train station. I enter through the double doors and move directly to the electronic departure time display, find the time and platform of the train I wish to board, and follow the signs to that location. Here I wait until the train has arrived, and then, when the doors have opened and are clear of disembarking passengers, board it.”

Figure 2: Breaking Down the Travel Task

It is a simple journey, but one that is adequate to break down into parts that will describe the entire travel task. It may now be useful to re-examine the journey description, highlighting certain relevant words and phrases, to see how the journey breaks down.

Breaking down the task in this way allows a slightly clearer view of how journeys can be sectionalised. What is now needed, however, is some kind of flow model to allow a more complete way of thinking about the travel task, and therefore to allow Electronic Travel Aids to be categorised. An initial textual breakdown can be seen in ‘Figure 3: Stages and Sequence of a Journey’ (Pg. 19).


1. Define Start and End points
2. Pre-planning of journey (either by maps or preliminary assisted travel, descriptions or memory)
3. Start Journey
   3.1. While detecting and avoiding objects both static and moving
      3.1.1. Orientation to start point
      3.1.2. Derive complex information that may be present
      3.1.3. Define initial direction to first waypoint
         3.1.3.1. Move distance required in direction required
         3.1.3.2. Continually update distance and direction
         3.1.3.3. Achieve first waypoint
      3.2.1. Orientation to next waypoint
      3.2.2. Derive complex information that may be present
      3.2.3. Define initial direction to next waypoint
         3.2.3.1. Move distance required in direction required
         3.2.3.2. Continually update distance and direction
         3.2.3.3. Achieve next waypoint
   And so on until…
      3.3.1. Orientation to end point
      3.3.2. Derive complex information that may be present
      3.3.3. Define initial direction to end point
         3.3.3.1. Move distance required in direction required
         3.3.3.2. Continually update distance and direction
         3.3.3.3. Achieve end point
4. Finish journey

Figure 3: Stages and Sequence of a Journey
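Purely as an illustration, the flow in Figure 3 can be rendered as a short Python sketch. Everything below (the Waypoint type, the distances, the stub obstacle-detection step) is hypothetical and is intended only to show the looping structure of the travel task, not any real implementation.

# A minimal, illustrative rendering of Figure 3 as executable pseudocode.
# All names and figures here are hypothetical stand-ins for real sensing and guidance.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Waypoint:
    name: str
    distance_m: float                      # distance from the previous point
    complex_info: Optional[str] = None     # e.g. a timetable at an information point


def detect_and_avoid_obstacles() -> None:
    """Stage 3.1: performed continually throughout the journey."""
    pass  # stands in for cane / ETA / guide-dog input


def travel(route: List[Waypoint]) -> None:
    # Stages 1 and 2: start, destination and pre-planning are implicit in `route`.
    for target in route:
        # Stages 3.x.1 and 3.x.2: orient, and take in any complex information present.
        print(f"Orienting towards {target.name}")
        if target.complex_info:
            print(f"  processing information: {target.complex_info}")

        # Stage 3.x.3: move the required distance, continually updating.
        remaining = target.distance_m
        while remaining > 0:
            detect_and_avoid_obstacles()
            remaining -= 1.0               # one stride; direction updates omitted
        print(f"  achieved {target.name}")
    print("Journey finished")              # Stage 4


if __name__ == "__main__":
    travel([
        Waypoint("office door", 5),
        Waypoint("pedestrian crossing", 60),
        Waypoint("departure board", 500, complex_info="platform and time"),
        Waypoint("train door", 80),
    ])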

This description is however a little verbose and so ‘Table 2: The Travel Task Model’ (Pg. 20) is presented to allow a more succinct form of description.

In the model, waypoints, orientation points and information points are all intended to represent some form of information giving object. A waypoint for instance may be just an arbitrary (implicit) point (say, where two roads/tracks meet) or it may be a specific (explicit) point intended to be a waypoint (a beeping sound marker, say). It is however intended that the information point will represent some form of device that gives complex information (for example a timetable, or street map ‘information point’).

Although not explicitly stated in the model, the concept of a ‘way-edge’ should also be considered. Previously, only the concept of some discrete fixed point has been considered. It is asserted, however, that individuals may also use a continuous or large-scale object as a kind of waypoint. This could be called a ‘way-edge’, as it is possible that this object is followed until it ends or some other factor or object is met. For instance, an individual may walk along the edge of a wall, using the entire structure as a reference point to where they are (a sequence of closely spaced waypoints, if you will).

Finally, the concept of ‘track’ should be considered. This is the route from one information point or waypoint to the next. Following the route down a road until the next point is discovered can be thought of as a track. It is therefore important that individuals keep on track until the next waypoint is achieved. The track may bend or have a number of turns, but if the next waypoint is missed the individual goes astray.
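To show how these concepts fit together, the following sketch (hypothetical names and types throughout, not drawn from any existing system) represents waypoints, way-edges, information points and the track between them as simple data structures.

# Hypothetical data types for the concepts introduced above.
from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class Waypoint:
    """A discrete fixed point: implicit (a junction) or explicit (a sound marker)."""
    name: str
    explicit: bool = False


@dataclass
class WayEdge:
    """A continuous, large-scale object followed until it ends, e.g. a wall."""
    name: str
    length_m: float


@dataclass
class InformationPoint:
    """Gives complex, two-way information, e.g. a timetable board."""
    name: str
    content: str


@dataclass
class Track:
    """The route from one point to the next; it may bend or turn."""
    start: Union[Waypoint, WayEdge, InformationPoint]
    end: Union[Waypoint, WayEdge, InformationPoint]
    turns: List[str] = field(default_factory=list)


# Example: follow a wall (a way-edge) from the office to a timetable board.
wall = WayEdge("corridor wall", 40.0)
board = InformationPoint("departure board", "time and platform of the next train")
leg = Track(start=wall, end=board, turns=["right at the double doors"])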

Table 2: The Travel Task Model

1. Decide on start and destination
This is just the act of deciding where an individual wants to go, and from where they will start.

2. Pre-Plan the route
This may be with the aid of a map, or with descriptions of directions from other people, from information already known by the individual (or from being shown the route by someone who already knows it).

3. Start the journey (throughout the journey obstacle detection and avoidance must be performed)
Maintain safety by detecting obstacles and avoiding them. Do this at all stages of the journey.

4. Orient to a waypoint, information point or orientation point
Can either be a starting point, destination or mid-point in the journey. It can be a waypoint, a ‘way-edge’, a sign or information-giving component like a timetable board, or can be a physical orientation mark. Perform the following sequence on achieving each waypoint.

5. Decide on distance and direction to next point
Orient in the correct direction and have an idea of the distance until the next waypoint. Follow the track. Continue to detect the direction and distance throughout.

6. In-Route Guidance
In-route guidance may or may not occur. It could be achieved using a simple paper map, or other more sophisticated methods.

7. Moving to next point
Move to the next point using information about the ‘track’ given by the environment and waypoint information. Continue to detect the direction and distance throughout.

8. Achieve next point
This can be the next waypoint or the destination. Start towards the next waypoint, and continue to maintain all safety measures.

Note: waypoints may give information on direction and distance to the next point. An Information Point, however, gives information in a two-way fashion.


3.4 Conclusion

As can be seen, a journey involves very many complex real-time challenges. However, a dissection of the journey presented in this chapter has enabled a number of different elements to be identified, based on the concepts of mobility and orientation. From this investigation it was found that the travel task could be broken down into:

1. Route Planning
Plan a journey, and decide on a route beforehand, based on maps and/or previous knowledge of the route or journey.

2. Obstacle Detection and Avoidance
While travelling, constantly detect and avoid any obstacles, either stationary (lampposts, walls, etc.) or moving (people, cars, etc.).

3. Orientation and Waypoints
The journey will be sectioned into waypoints/way-edges. These allow some means of orientation, and travellers naturally divide a journey into sections with a waypoint as the start and finish (road junctions, landmarks, etc.).

4. Information Points
Information points are points along a journey where information about the journey is available (timetables, next bus information, etc.).

5. In-Route Guidance
In-route guidance is sometimes performed. This may be by asking for directions or by carrying a map.
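Purely as a compact summary of this decomposition (an illustrative sketch only, not part of the model itself), the five elements above could be written as a simple enumeration; the example device grouping uses generic device names and is informal.

# An illustrative enumeration of the five elements identified above.
from enum import Enum, auto


class TravelTaskElement(Enum):
    ROUTE_PLANNING = auto()
    OBSTACLE_DETECTION_AND_AVOIDANCE = auto()
    ORIENTATION_AND_WAYPOINTS = auto()
    INFORMATION_POINTS = auto()
    IN_ROUTE_GUIDANCE = auto()


# Chapter 4 groups devices informally along these lines, for example:
example_grouping = {
    "tactile map": TravelTaskElement.ROUTE_PLANNING,
    "long cane": TravelTaskElement.OBSTACLE_DETECTION_AND_AVOIDANCE,
    "audible sign": TravelTaskElement.ORIENTATION_AND_WAYPOINTS,
    "timetable board": TravelTaskElement.INFORMATION_POINTS,
    "taped route description": TravelTaskElement.IN_ROUTE_GUIDANCE,
}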

Quite obviously, the travel task is a very complex sequence of stages where large amounts of complex information about the changing environment and the destination are constantly factored and updated. Although some of the detail will naturally be lost in any attempt to derive a general purpose model, it is asserted that the ‘Travel Task Model’ (as described above) represents a concise, understandable and structured basis for any future discussion on individual unassisted travel.


4 Electronic Travel Aids

4.1 Introduction

Over the past century there have been many developments in the provision of travel (mobility and orientation) aids for visually impaired people. These have ranged from the simple cane to advanced electronic aids.

While the development of other devices to aid visually impaired people in their everyday lives has been increasing, and in some cases solutions providing sensory supplementation have been very effective (from Braille through to “electronic reading machines”), truly adequate solutions to travelling as a visually impaired person have not yet been developed.

Consequently, complex sets of devices and aids have been produced to aid easy and successful travelling. These range from planning aids like maps and charts, to orientation aids such as street signs, pedestrian markings and road furniture.

Environments are complex, and vision is used to the exclusion of many other senses by the vast majority of the population. A number of devices have already been developed to address some of the difficulties faced by visually impaired people with regard to travel. These devices will be reviewed and catalogued with reference to the travel model presented in chapter 3, such that each type of device and its place in the travel task can be more fully understood.

4.2 Historical Overview of Travel Aids

The nature of visual impairment is very complex, but in some areas (such as reading) sensory supplementation devices have proved successful. In the field of travel, while there have been some limited successes (mainly in the areas of obstacle detection and avoidance), little progress has been made. This is not necessarily because devices have not been developed, but because of the complex nature of the problem and society’s lack of will (mainly for monetary reasons) to provide suitable alternatives for visually impaired people.

Travel is one major and obvious area that can be difficult or limited due to visual impairment. As such there has been much, and early, research into the provision of devices to aid the individual. Although guide dogs and canes are often linked with visually impaired people when thinking about mobility, these were not the first formalised methods of supporting mobility. Some form of stick for probing when travelling has been used informally for centuries, as opposed to long and short canes of a standard length. The long cane, however (now the most used mobility device), did not come into popular usage until the middle of the twentieth century (circa 1950)[BRAM84], and as previously stated guide dogs still only number around four thousand in the UK.

Simple Electronic Travel Aids (ETAs) have been in development since 1897[WAR84]. Real and more complex developments occurred after the Second World War and through the 1950s and 60s[DUP63]. With the advent of remote sensing in the form of ultrasound (sonar) and RADAR, more research effort was directed at the problems of remote sensing of the environment for visually impaired people. Advances in electronics and circuit miniaturisation also aided the development of these devices into portable mobility machines, and a number of devices using these technologies were developed, such as the ‘Franklin Institute Electronic Cane’[GIB63], the ‘Mowat Sensor’[KAY84], and the ‘Pathsounder’[WAR84].

Through the 1960s and 70s obstacle detection devices continued to be developed using a variety of sensing methods, notably lasers. However, advances in the pre-planning of routes were also taking place, and with the advent of ‘capsule paper’ (which expands when heated) tactile maps could be produced more easily than with previously existing methods. Later, in the 1980s and 1990s, further research allowed a form of tactile map that talked to be developed.


Recently, through the early 1990s, the focus has switched from mobility and obstacle detection to orientation and location. These systems (and circa 1998 they are numerous), called ‘Audible Signs’, ‘Sound Buoys’ and the like, transmit some form of remote signal once a user gets into range of the device, which then delivers an audible message, either as a tone or as speech (described later). While these systems do solve some problems, and each sign is relatively inexpensive, it can be expensive to place them extensively in an environment.

The increasing power and shrinking size and price of general-purpose computer equipment through the 1980s and 90s have enabled many advances to be made in the area of blind mobility. Although current devices seem to be directed at making obstacle detection more accurate, there are a number of projects that have tried to provide other useful information. These try to give accurate positional information based on electronic maps stored in general-purpose computers, using the Global Positioning System (GPS) to allow a fix to be made[MOB97]. Problems exist with the accuracy of GPS, and so Differential GPS (DGPS) will be available from 1998. This is still not completely satisfactory: although the accuracy (6 metres) is significantly better than that of GPS (100 metres), it is still not good enough to provide the pin-point accuracy needed for travel (a 6 or 100 metre error will still allow a user to fall down a flight of stairs).
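To illustrate why these error figures matter, the short sketch below compares an assumed precision needed to confirm a street-level landmark with that needed to confirm a hazard such as a stair-head. The precision thresholds (20 m and 1 m) are illustrative assumptions, not figures from the literature.

# Illustrative only: can a positioning fix confirm a waypoint to the precision needed?

def fix_is_usable(error_radius_m: float, required_precision_m: float) -> bool:
    """A fix is only useful if its error radius is within the precision required."""
    return error_radius_m <= required_precision_m

# Confirming a street-level landmark (assumed ~20 m) versus a stair-head (assumed ~1 m):
for system, error in [("GPS", 100.0), ("DGPS", 6.0)]:
    print(system,
          "landmark:", fix_is_usable(error, 20.0),
          "stair-head:", fix_is_usable(error, 1.0))
# Neither system resolves a stair-head; only DGPS resolves the street-level landmark.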

There are currently many research projects worldwide looking into ways of providing access to mobility and orientation information for visually impaired people. These involve technologies as diverse as GPS, sound signs, vision substitution systems, obstacle detection and avoidance, vision enhancement, optical recognition, and computer networking, through to vision research at NASA, biotechnology, genetics and nanotechnology (described later).


4.3 Electronic Travel Aids

Now that an understanding of the ‘Travel Task’ has been established (chapter 3), the discussion can move on to consider the ETAs that have been developed, are currently available, or are being developed. As previously stated, these ETAs will be discussed with reference to the ‘Travel Task Model’. From the model, it can be seen that the categories can be partitioned into:

1. Route Planning
2. Obstacle Detection and Avoidance
3. Waypoint/Orientation Point Provision (including distance and direction information)
4. Information Point Provision
5. In-Route Guidance Providers

This last category may be just a combination of sophisticated route planning and waypoint provision, or it could involve portable maps etc.

4.3.1 Route Planning

Preparation for any task usually leads to a more successful outcome than executing the task without a plan, and planning a route is no exception, especially when traversing an unknown area. This pre-planning is useful to both sighted and visually impaired individuals alike. Visually impaired people do, however, have a disadvantage, as most pre-planning takes the form of consulting maps, diagrams and charts.

It has been well established that tactile maps can provide a means of making available some of the spatial knowledge inherent in graphical information to visually impaired people[BLISS63] (similar techniques are also used in writing[PES78]). These maps take the form of a series of raised edges, which allow visually impaired people to feel the spatial arrangement of features present on the map[BEN79]. Different types of maps are currently available and new forms of tactile maps have been produced in an attempt to aid readability and portability[GOLL91].


However, problems still exist with tactile maps. One of these problems is centred on the resolution of the diagram, the physical size of any textual descriptions, and the limit to the size of description this consequently implies. A method to overcome this limit on textual descriptions became realisable with the advent of cheaper computing technology, and a number of research projects were instituted in the early to mid 1990s. These projects focused on the goal of using supplementary speech to overcome the problems of label length[KOCH96].

4.3.1.1 Electronic Tactile Maps

An example of a mapping technique known as ‘Talking-Tactile Maps’ (TTM), and its successor ‘aMie’, used a standard multipurpose IBM-compatible PC connected to a touch-sensitive pad with a tactile paper overlay. Raised areas on the paper are associated with electronic areas defined in the TTM system, and these areas then have speech labels associated with them. The paper overlay is attached to the overlay keyboard (which is plugged into the computer). Once the user presses an area on the paper, the overlay keyboard is also pressed at the same time and a signal is sent to the TTM system. The software also provides for multiple levels of speech labels around similar tactile overlays[BLEN94], such that the TTM system output may change but the paper overlay does not require changing. Additional advantages are also gained, as the maps can be implemented as sequences of maps with speech labels dynamically loaded (in suitable cases). Therefore, a number of floor plans for a building could be entered into the system and the floor plan for a specific floor loaded dynamically on request. This of course can only be useful if the tactile paper overlay does not need to change dynamically (in content) while in use, as would be the case with road maps[CRA82].
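A rough sketch of the idea is given below. It is not the actual TTM or aMie software: the region identifiers, labels and class names are all hypothetical, and the sketch simply shows how raised regions might map to several levels of speech label while label sets are swapped without changing the overlay.

# A hypothetical sketch of the talking-tactile-map idea described above.
from typing import Dict, List

LabelSet = Dict[str, List[str]]   # region id -> speech labels, one per detail level

ground_floor: LabelSet = {
    "A1": ["Entrance", "Entrance: double doors, step up"],
    "B2": ["Lifts",    "Lifts: call button on the right"],
}
first_floor: LabelSet = {
    "A1": ["Stairwell", "Stairwell: three flights down to ground level"],
    "B2": ["Office",    "Office: third door on the left"],
}

class TalkingMap:
    def __init__(self, labels: LabelSet) -> None:
        self.labels = labels

    def load(self, labels: LabelSet) -> None:
        """Swap label sets while the same tactile overlay stays in place."""
        self.labels = labels

    def press(self, region: str, level: int = 0) -> str:
        """Return the speech label for the pressed region at the requested level."""
        options = self.labels.get(region, ["No label here"])
        return options[min(level, len(options) - 1)]

ttm = TalkingMap(ground_floor)
print(ttm.press("B2"))           # "Lifts"
ttm.load(first_floor)            # the user asks for the first-floor plan
print(ttm.press("B2", level=1))  # "Office: third door on the left"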

Further developments have taken place on these product types up to the current day (circa 1998), and indeed they litter the ‘ICA Maps and Diagrams for Blind and Visually Impaired People’ conference[HOL96]. Although electronic tactile maps are at present the main focus of pre-planning technology, some systems have been developed that do not require the tactile element. These systems do not require touch-sensitive pads or tactile overlays but instead provide information about the graphic/map in an audible format.


4.3.1.2 Electronic Audible Maps

By examining an image using digital signal processing it is possible to convert the visual representation of an area of the image into sound. This processing can occur such that the image is converted in a specific manner (say from left to right, top to bottom). After this conversion has taken place, a sound sequence is produced such that the image has been converted into a ‘sound-image’. To prove that the sound is a true representation of the image, it can be converted back and a graphic reformed (with some degradation)[MEI92]. Although this is not immediately useful as an electronic travel aid, due to the complexity of maps and the resolution required, it is still worth considering.
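A toy sketch of this kind of scheme is given below. It is not the algorithm of any particular published system: it simply scans a tiny greyscale ‘map’ column by column, mapping row position to pitch and brightness to loudness, with all parameter values chosen arbitrarily.

# A hypothetical sketch of the image-to-sound idea: scan a small greyscale image
# left to right, mapping row position to pitch and brightness to loudness.

import math
from typing import List, Tuple

def image_to_sound(image: List[List[float]],
                   low_hz: float = 200.0,
                   high_hz: float = 2000.0) -> List[List[Tuple[float, float]]]:
    """For each column, return (frequency, amplitude) pairs for its bright pixels."""
    rows = len(image)
    sound_sequence = []
    for col in range(len(image[0])):
        partials = []
        for row in range(rows):
            brightness = image[row][col]           # 0.0 (dark) .. 1.0 (bright)
            if brightness > 0.0:
                # Top rows map to high pitch, bottom rows to low pitch.
                frac = 1.0 - row / max(rows - 1, 1)
                freq = low_hz * math.pow(high_hz / low_hz, frac)
                partials.append((freq, brightness))
        sound_sequence.append(partials)
    return sound_sequence

# A 3x4 'map' with a bright path through it: each inner list is one image row.
tiny_map = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
for column, partials in enumerate(image_to_sound(tiny_map)):
    print(column, partials)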

The conventional spoken word is also a much-used pre-planning ‘tool’. Information about a route given by someone who has already travelled it provides a large amount of information to sighted and visually impaired people alike. These spoken narratives are widely used in mobility and orientation training, and in assisting visually impaired people on new routes. In everyday life, however, any description given by a sighted person may be useless to visually impaired people if it relies on visual spatial cues, and as previously stated many visually impaired people are egocentric in relation to their personal travel.

Although this may initially seem problematic, there are methods to overcome this information mismatch on both the journey teacher’s side, and on the part of the journey student[MIL88]. Methods such as guided imagery[SOM90] allow formalised training to be given to both parties to make the information transfer more understandable. However, problems arise with ‘teachers’ not trained in any description techniques.


4.3.1.3 Magnetic Audio Tapes

These problems can be overcome by using tape recorders to allow accurate, intelligible descriptions of journeys at both the pre-planning and in-route stages of the journey. Standard portable tape recorders can be used to store this information and then to play it back at an appropriate time[LEV89]. These tape devices can also be used more ‘intelligently’ as part of complex systems. Such systems allow the route to be previewed and then replayed automatically in-route by activating the tape when certain waypoints (electronic, and placed in the environment) are reached[MIL88].
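As a simple illustration of such a system, the sketch below (hypothetical waypoint identifiers and messages, loosely echoing the journey described in chapter 3) keys pre-recorded route segments to environmental waypoints and replays the matching segment when each waypoint is sensed.

# A hypothetical sketch of the 'intelligent tape' idea above: pre-recorded route
# segments are keyed to electronic waypoints and replayed when each one is reached.

route_segments = {
    "waypoint-1": "Leave the building by the single door and turn right.",
    "waypoint-2": "Cross at the pedestrian crossing, then turn right again.",
    "waypoint-3": "Continue about 500 metres to the station entrance.",
}

def on_waypoint_detected(waypoint_id: str) -> None:
    """Called when an in-environment marker is sensed; plays the matching segment."""
    segment = route_segments.get(waypoint_id)
    if segment:
        print(f"[playing] {segment}")   # stands in for audio playback

for detected in ["waypoint-1", "waypoint-2", "waypoint-3"]:
    on_waypoint_detected(detected)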

Although standard tape devices are usable and appropriate in some cases, problems exist when a large amount of detailed and descriptive information is needed in a random sequence based on location. With training and practice many people can understand speech at much higher rates than are normally used in everyday activity. It follows, then, that a system to deliver speech at higher rates would be advantageous. This concept was originally implemented to allow faster access to talking books[JOH77], but the technology can be used just as well in the context of travel.

A Note Clarifying Route Planners and In-Route Guidance

All of these pre-planning devices can also be used as In-Route Guidance Providers. Some may need slight modification to make them portable, while others may need no modification at all. Because of the dual nature of the devices they will only be discussed in this Section and not subsequently, even though they may logically also fit into a later Section.

4.3.2 Obstacle Detection and Avoidance

Since the middle of the twentieth century obstacle detection and avoidance has been the focus of much research and development into mobility and orientation. The objective of obstacle detection is to provide a means for visually impaired people to preview the upcoming environment. The goal of this body of research is to represent visual stimuli in some other way, for example through the use of an electronic vision prosthesis. However, this has so far proved largely impossible. Other systems have been developed that, while useful, give a less complex view of the environment than can be expected through visual stimuli.


These systems range from experiments with sonar through to the use of lasers to detect obstacles in the environment. There are currently a number of systems and devices to aid a visually impaired person in obstacle avoidance, and these focus on the use of ultrasound to give an audible (varying-tone) picture of the immediate upcoming environment[WHIT95].
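As an illustration of the varying-tone principle, the sketch below converts an ultrasonic echo delay into a distance and then into a tone whose pitch rises as an obstacle gets closer. The range and frequency figures are assumptions, not the parameters of any real device.

# An illustrative mapping from echo delay to an audible tone, in the spirit of the
# ultrasonic aids described above (all figures are assumed).

SPEED_OF_SOUND_M_S = 343.0

def echo_delay_to_distance(delay_s: float) -> float:
    """Distance to an obstacle from the round-trip time of an ultrasonic pulse."""
    return delay_s * SPEED_OF_SOUND_M_S / 2.0

def distance_to_tone_hz(distance_m: float,
                        max_range_m: float = 4.0,
                        near_hz: float = 2000.0,
                        far_hz: float = 200.0) -> float:
    """Closer obstacles give a higher-pitched tone; out of range gives silence (0)."""
    if distance_m >= max_range_m:
        return 0.0
    nearness = 1.0 - distance_m / max_range_m
    return far_hz + nearness * (near_hz - far_hz)

for delay in [0.002, 0.01, 0.03]:           # round-trip times in seconds
    d = echo_delay_to_distance(delay)
    print(f"{d:.2f} m -> {distance_to_tone_hz(d):.0f} Hz")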

The "Holy Grail" of obstacle detection and avoidance, and for that matter research into visual impairment in general, is the production of vision systems that accurately represent a physical visual environment. The logical conclusion of this research is the production of an electronic device to physically interface with the brain. This is sometimes known as Electro-cortical stimulation.

4.3.2.1 Electronic Vision

Some research has been performed with the intention of stimulating nerve fibres such that the visual cortex is "fooled" into believing that a fully functioning eye is present. This research, performed in the United Kingdom from as long ago as the early 1970s, has investigated the possibility of providing an electronic device that in some way provides reasonable visual stimulus to the brain[KAY84].

This research led to the development of an electro-cortical prosthesis that has been tested on a human subject. The research shows that when an area of the visual cortex is stimulated the patient sees a white spot of light and that, if the eyes are kept steady, its perceived position remains fixed relative to the head. If a different point is stimulated a new spot ‘appears’. The most recent visual prosthesis was implanted in April 1982. It uses row and column logic and provides one hundred and eighty output channels that stimulate areas of the brain, hence enabling some form of limited vision based on the appearance of these white spots[DON83].

Advances in the fields of computer vision and image processing have enabled researchers to pursue the goal of analysing complex information in real time, even with a portable computer[DEE84]. However, these systems are not yet fast or accurate enough to provide feedback that is sufficiently rich in information to assist in navigation.

Since the mid-1980s researchers have judged that computer vision to facilitate mobility and orientation (specifically object detection and avoidance) was becoming more realisable, and hence a series of projects was initiated to research this field[TOU84]. Although to date there are no commercial systems that use computer vision and image processing techniques to give feedback in the real world in the context of mobility, some usable research systems have been developed and others are in development[POLL84].

These systems rely on edge detection to accurately describe upcoming obstacles. In all of the systems a camera captures the physical environment, either as continuous image frames or as a single image snapshot. Edges (and in some cases other more subtle components of the image) are then processed based on a known set of rules (to identify vehicles, paths, etc.), thus creating a computer-generated virtual representation of the physical environment. After this the information can be relayed to the user either as a spoken textual description, as some sort of audible landscape, or in some cases as tactile stimuli[ADJ92].
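A toy sketch of such a pipeline is shown below. The edge detector and the rule set are deliberately trivial placeholders and do not correspond to any specific research system: the sketch only illustrates the frame-to-edges-to-rules-to-feedback flow described above.

# A hypothetical sketch of the pipeline: frame -> edges -> rules -> feedback.

from typing import List

Frame = List[List[int]]   # a tiny greyscale image, values 0..255

def detect_vertical_edges(frame: Frame, threshold: int = 80) -> List[int]:
    """Return the column indices where neighbouring pixel values change sharply."""
    edges = []
    for col in range(len(frame[0]) - 1):
        if any(abs(row[col] - row[col + 1]) > threshold for row in frame):
            edges.append(col)
    return edges

def classify(edges: List[int], frame_width: int) -> str:
    """A placeholder rule set: infer a rough description from edge positions."""
    if not edges:
        return "path appears clear"
    if all(c < frame_width // 2 for c in edges):
        return "obstacle to the left"
    if all(c >= frame_width // 2 for c in edges):
        return "obstacle to the right"
    return "obstacle ahead"

def feedback(description: str) -> None:
    print(f"[spoken] {description}")     # could equally drive tones or tactile output

snapshot: Frame = [
    [30, 30, 200, 200, 30, 30],
    [30, 30, 200, 200, 30, 30],
]
feedback(classify(detect_vertical_edges(snapshot), frame_width=6))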

Electronic mobility and orientation devices, and specifically those concerned with obstacle detection and avoidance, have not yet reached a level of sophistication whereby they can be used independently of other non-electronic aids (that is, as the primary travel aid). Presently these non-electronic aids are the long and short canes and the guide dog (seeing-eye dog). Therefore, most electronic devices are used by visually impaired people as secondary travel aids, and so most obstacle detection aids are developed within this context[BEN63].

4.3.2.2 Robotics

Technology to aid mobility and orientation is often developed in a different context from that in which it is used. This has been the case with sonar and RADAR, and the crossovers between subjects allow orientation and mobility devices to be developed more easily.


For instance, consider mobile robots: these robots need to sense obstacles in order to avoid them in factories. Many robots are equipped with standard navigational devices. These may be bumper switches, which detect small collisions between the robot and an obstacle, or ultrasound, infra-red (discussed later) or radar[MCM96] to detect objects at a distance. Hence, much of the technology used for their development can also be applied to the task of mobility and orientation for visually impaired people[MCM96]. Obviously there are advantages when using these types of technology with intelligent users, as a user can make a more complex and informed decision based on their knowledge of the travel task. There are also disadvantages, in that the interface between the stimuli collected by the system and the feedback provided by the system can be complex and difficult to implement.

A number of systems for obstacle detection have been developed, mainly using (as previously stated) sonar, ultrasound, and lasers. These developments took place from the middle of the twentieth century onwards, and the resulting devices are still used and relevant today.

4.3.2.3 Ultrasound and Sonar

Many obstacle detection systems used ultrasound because shorter distances (
