Using a Common Accessibility Profile to Improve Accessibility
A Thesis Submitted to the College of Graduate Studies and Research
in Partial Fulfillment of the Requirements
for the Degree of Master of Science
in the Department of Computer Science
University of Saskatchewan
Saskatoon
By David W. Fourney
© David W. Fourney, November 2007. All rights reserved.
Permission to Use

In presenting this thesis in partial fulfilment of the requirements for a Postgraduate degree from the University of Saskatchewan, I agree that the Libraries of this University may make it freely available for inspection. I further agree that permission for copying of this thesis in any manner, in whole or in part, for scholarly purposes may be granted by the professor or professors who supervised my thesis work or, in their absence, by the Head of the Department or the Dean of the College in which my thesis work was done. It is understood that any copying or publication or use of this thesis or parts thereof for financial gain shall not be allowed without my written permission. It is also understood that due recognition shall be given to me and to the University of Saskatchewan in any scholarly use which may be made of any material in my thesis.

Requests for permission to copy or to make other use of material in this thesis in whole or part should be addressed to:
Head of the Department of Computer Science
176 Thorvaldson Building
110 Science Place
University of Saskatchewan
Saskatoon, Saskatchewan
Canada S7N 5C9
Abstract
People have difficulties using computers. Some have more difficulties than others. There is a need for guidance in how to evaluate and improve the accessibility of systems for users. Since different users have considerably different accessibility needs, accessibility is a very complex issue. ISO 9241-171 defines accessibility as the “usability of a product, service, environment or facility by people with the widest range of capabilities.” While this definition can help manufacturers make their products more accessible to more people, it does not ensure that a given product is accessible to a particular individual.

A reference model is presented to act as a theoretical foundation. This Universal Access Reference Model (UARM) focuses on the accessibility of the interaction between users and systems, and provides a mechanism to share knowledge and abilities between users and systems. The UARM also suggests the role assistive technologies (ATs) can play in this interaction.

The Common Accessibility Profile (CAP), which is based on the UARM, can be used to describe accessibility. The CAP is a framework for identifying the accessibility issues of individual users with particular system configurations. It profiles the capabilities of systems and users to communicate. The CAP can also profile environmental interference to this communication and the use of ATs to transform communication abilities. The CAP model can be extended as further general or domain-specific requirements are standardized. The CAP provides a model that can be used to structure various specifications in a manner that, in the future, will allow computational combination and comparison of profiles. Recognizing its potential impact, the CAP is now being standardized by the User Interface subcommittee of the International Organization for Standardization and the International Electrotechnical Commission.
Acknowledgements

My graduate education, including this work, would not have been possible without the support and caring of several people.

I would like to thank my wife, Colleen, for her love and patience. At the start of this journey, I had been advised that graduate school is hard on a marriage. Colleen’s dedication and support helped me to focus on my work in the most stressful times.

I would like to thank Dr. Jim Carter for believing in me. From championing my admission to the College of Graduate Studies to advocating for my eventual admission into the Master’s program, I would have given up the fight if it had not been for his unwavering support. Further, I would like to thank Jim for his patience, guidance, and support over the past five years, which have culminated in this work.

Thank you to the other members of my Committee. Thanks to both Dr. John Cooke and Dr. Jean-Paul Tremblay for taking time out of their retirement to support me. Thanks also to Dr. Gregg Vanderheiden of the University of Wisconsin-Madison for agreeing to examine this Thesis, and to Dr. Grant Cheston, a big thank you for agreeing to chair the examination on short notice.

Finally, I want to thank the members of ISO/TC 159/SC 4/WG 5 and ISO/JTC 1/SC 35 for their contributions to this work. The ongoing development of ISO/TS 16071 and ISO 9241-171 during the period of my studies led to several ideas that eventually made their way into this work. The interest in my work from ISO/JTC 1/SC 35 boosted my confidence and my desire to continue the effort.
In memory of my grandmother, Ella.
She was a mother, teacher, author, and community leader who never allowed her own disabilities to limit her dreams. While she will never benefit from this work, she was a source of inspiration for it.
Contents

Permission to Use   i
Abstract   ii
Acknowledgements   iii
Contents   v
List of Tables   ix
List of Figures   xii
List of Abbreviations   xiii
List of Listings   xiv

1 Introduction   1
  1.1 Improving Accessibility of Computing Systems   1
    1.1.1 Usability and Accessibility   1
    1.1.2 Creating Accessibility   3
    1.1.3 (Dis)Abilities   4
    1.1.4 Handicaps   8
    1.1.5 Focusing on Abilities to Minimize Handicaps   9
  1.2 Research Objectives   9
  1.3 Thesis Organization   9
    1.3.1 Defining Common Accessibility Profile Structure and Contents   10
    1.3.2 Qualifying Information in Relationships within CAPs   10
    1.3.3 Developing Example CAPs   10
    1.3.4 Validating the CAP   11
    1.3.5 Additional Materials   11

2 Background   12
  2.1 Modelling Usability   12
    2.1.1 Usability and User-Centred Development   13
      2.1.1.1 Usability in ISO 9241-11   13
      2.1.1.2 Principles of User-Centred Development   14
    2.1.2 Universal Access Reference Model (UARM)   15
      2.1.2.1 Systems   16
      2.1.2.2 Users   16
      2.1.2.3 Interaction   16
      2.1.2.4 Handicaps   17
      2.1.2.5 Contexts   18
      2.1.2.6 Environments   19
  2.2 Further Considerations on UARM Components   20
    2.2.1 Users   20
      2.2.1.1 The User’s Interface   21
      2.2.1.2 The User’s Profile   21
      2.2.1.3 Life Changes   23
    2.2.2 Systems   24
      2.2.2.1 The System’s Interface   24
      2.2.2.2 Processing   25
      2.2.2.3 The System’s Interaction Components   25
      2.2.2.4 System Stored Data   26
    2.2.3 Interactions   27
    2.2.4 Environment   30
      2.2.4.1 User Environment   30
      2.2.4.2 System Environment   31
    2.2.5 Context   31
  2.3 Multi-System Models of Interaction   34
    2.3.1 UARM and Assistive Technologies   34
    2.3.2 AT and Accessibility   38
    2.3.3 Specifying a Total System   40
  2.4 Common Accessibility Profile (CAP)   40
    2.4.1 Describing Input Receptors, Output Transmitters, and Processing Functions   41
      2.4.1.1 Interacting Components   42
      2.4.1.2 Direction   43
      2.4.1.3 Modality   43
      2.4.1.4 Properties   44
    2.4.2 Using Channels in the CAP   46
      2.4.2.1 Considering the Need for Channels   46
      2.4.2.2 Properties of Channels and the CFs they Connect   52
    2.4.3 Applying the CAP   56
    2.4.4 Applying the CAP to Identifying Handicaps   56
      2.4.4.1 Applying the CAP to Selecting ATs   57
      2.4.4.2 Applying the CAP to Managing ATs   58
    2.4.5 Tying It All Together   60
  2.5 Next Step   60

3 CAP Structure and Specification   62
  3.1 CAP Structure   62
    3.1.1 Identification Information   64
    3.1.2 Linkages   65
    3.1.3 Type-Specific Information   66
  3.2 CAP Specification Details   66
    3.2.1 CAPO High Level CAP   66
    3.2.2 Interacting Component CAP(s)   66
    3.2.3 Component Feature CAP(s)   68
      3.2.3.1 Modality CF-Specific Information   73
      3.2.3.2 Language CF-Specific Information   77
      3.2.3.3 Syntax for Adding Capability-Specific Information to the CAP   81
      3.2.3.4 Examples of Capability Specific Information   87
    3.2.4 Additional Properties   93
  3.3 Summary   93

4 Relationships within CAPs   95
  4.1 Operations on CAPs   96
    4.1.1 Unary Operations   96
      4.1.1.1 Required (SHALL)   96
      4.1.1.2 Optional (MAY)   97
      4.1.1.3 Exclusion (NOT)   98
      4.1.1.4 Other Candidate Unary Operations   99
      4.1.1.5 Summary   100
    4.1.2 Binary Operations on CAPs that Users Interact With   101
      4.1.2.1 Included (AND)   101
      4.1.2.2 Substitutable (OR)   101
      4.1.2.3 Mutually Exclusive (XOR)   102
      4.1.2.4 Summary   103
  4.2 Evolving the CAP Structure   103

5 Example CAPs   113
  5.1 Users   114
    5.1.1 Using Personas to Develop Sample User CAPs   115
    5.1.2 Clark   115
      5.1.2.1 Clark’s Description   115
      5.1.2.2 Clark’s CAP   117
    5.1.3 Johann   120
      5.1.3.1 Johann’s Description   120
      5.1.3.2 Johann’s CAP   122
    5.1.4 Pam   126
      5.1.4.1 Pam’s Description   126
      5.1.4.2 Pam’s CAP   127
    5.1.5 Tae   135
      5.1.5.1 Tae’s Description   135
      5.1.5.2 Tae’s CAP   136
    5.1.6 Summary   141
  5.2 Systems   141
    5.2.1 The CAP Approach to Systems   142
    5.2.2 Standard System   143
      5.2.2.1 Standard System Description   143
      5.2.2.2 Standard System CAP   144
    5.2.3 System with Some Additional Software AT Support   157
      5.2.3.1 System with Software AT Support Description   157
      5.2.3.2 System with Software AT Support CAP   158
    5.2.4 System with Additional Software and Hardware AT Support   170
      5.2.4.1 System with Software and Hardware AT Support Description   170
      5.2.4.2 System with Software and Hardware AT Support CAP   171
    5.2.5 Summary   179
  5.3 Environments   179
    5.3.1 The CAP Approach to Environments   180
    5.3.2 Conference Exhibition Hall   181
      5.3.2.1 Conference Exhibition Hall Description   181
      5.3.2.2 Conference Exhibition Hall CAP   182
    5.3.3 A Darkened Kitchen   188
      5.3.3.1 Darkened Kitchen Description   188
      5.3.3.2 Darkened Kitchen CAP   189
    5.3.4 Summary   191
  5.4 Next Steps   191

6 Validation   192
  6.1 Validity and Reliability   192
    6.1.1 Validity   193
      6.1.1.1 Face Validity   193
      6.1.1.2 Content Validity   194
      6.1.1.3 Criterion-Oriented Validity   194
      6.1.1.4 Construct Validity   195
    6.1.2 Reliability   196
      6.1.2.1 Test-Retest Reliability   196
      6.1.2.2 Inter-Rater Reliability   196
      6.1.2.3 Expert Evaluation of Reliability   197
  6.2 Evaluating the CAP via ISO/IEC Expert Voting   197
  6.3 Metrics   199
    6.3.1 The Standards Approach to Metrics   200
      6.3.1.1 Approaches to Usability Metrics in Standards   201
    6.3.2 The CAP’s Approach to Metrics   202
      6.3.2.1 Need for Standardized Capability Names   203
      6.3.2.2 Precision of CAP-Based Metrics   204
      6.3.2.3 Use of Logical Operators   205

7 Discussion and Conclusion   206
  7.1 Discussion   206
  7.2 Contribution and Future Directions   208
    7.2.1 Future Directions of Applying CAPs   208
    7.2.2 Developing a User CAP   209
    7.2.3 Acquiring / Developing System / Environment CAPs   210
    7.2.4 Using CAPs to Customize a System   211
    7.2.5 Comparing CAPs to Evaluate Potential Systems / ATs   212
    7.2.6 Using CAPs to Specify Legal / Contractual Requirements   215
    7.2.7 Tools to Support these Applications   215
  7.3 Conclusion   216

References   226

A Glossary   227

B Example of Gesture in Interaction   229
  B.1 History   229
  B.2 Gameplay   229

C Notes on CAP Implementation   230
  C.1 Data Format   230
  C.2 Data Security   230
  C.3 A Secure CAP?   231

D Additional Attributes Beyond those Needed to Compute CAPs   232
  D.1 CAPUSE Type-Specific Information   232
  D.2 CAPSYS Type-Specific Information   232
  D.3 CAPENV Type-Specific Information   233

E ISO/IEC FCD 24756   234
  E.1 Notes   234

F Metric Scales   235
  F.1 Scales   235
    F.1.1 Nominal Scale   235
    F.1.2 Ordinal Scale   235
    F.1.3 Interval Scale   236
    F.1.4 Ratio Scale   236
    F.1.5 Absolute Scale   237
    F.1.6 Nominal-Categorical Scale   238
List of Tables

1.1 Categories of Definitions of “Disability”   5

2.1 Subdivisions of “Media”   44

3.1 High Level CAPO Structure   63
3.2 Interacting Component CAPIC Structure   63
3.3 IC Component Feature CAPCF Structure   64
3.4 CAP General Format   64
3.5 Linkages   65
3.6 CAPO High Level Specification Format   66
3.7 CAPIC Specification General Format   67
3.8 CAPCF Specification General Format   69
3.9 Subdivisions of “Media” Revisited   74
3.10 Examples of Media   77
3.11 Example ISO 639-2 Language Codes   79
3.12 Example ISO 15924 Script Codes   80
3.13 Frequency as an Example IR/OT Capability-Specific Information   88
3.14 Frequency and Intensity as Example IR/OT Capability-Specific Information   89
3.15 Time-Out as an Example IR/OT Capability-Specific Information   89
3.16 Force as an Example IR/OT Capability-Specific Information   90
3.17 Cursor Requirements as an Example IR/OT Capability-Specific Information   91
3.18 Frequency as an Example PF Capability-Specific Information   92

4.1 Summary of Unary Operators   100
4.2 Summary of Binary Operators   103
4.3 CAPCF Specification General Format   105
4.4 CAPCF Modality-Specific Information Identification   107
4.5 CAPCF Capability-Specific Information Identification   109
4.6 CAPCF Processing-Specific Information Identification   111

5.1 The CAPO   113
5.2 A CAP for Clark   116
5.3 A CAP for Clark’s Modality Input Capabilities   117
5.4 A CAP for Clark’s Modality Output Capabilities   118
5.5 Clark’s Only Modality Record   119
5.6 Structure of CAPUSE for “Clark”   119
5.7 A CAP for Johann   120
5.8 Johann’s Visual Modality Input   122
5.9 Johann’s Visual Modality Input Media   123
5.10 Johann’s Auditory Modality Input   124
5.11 Johann’s Auditory Modality Input Media   124
5.12 Johann’s Auditory Modality Input Capabilities   125
5.13 Structure of CAPUSE for “Johann”   126
5.14 A CAP for Pam   126
5.15 Pam’s Tactile Modality Input   128
5.16 Pam’s Tactile Modality Input Media   128
5.17 Pam’s Tactile Modality Input Capabilities   129
5.18 Pam’s Auditory Modality Input   130
5.19 Pam’s Auditory Modality Input Media   130
5.20 Pam’s Auditory Modality Output   131
5.21 Pam’s Auditory Modality Output Media   132
5.22 Pam’s Auditory Modality Output Capabilities   133
5.23 Pam’s Tactile Modality Output   134
5.24 Pam’s Tactile Modality Output Media   134
5.25 Structure of CAPUSE for “Pam”   135
5.26 A CAP for Tae   135
5.27 Tae’s Visual Modality Input   137
5.28 Tae’s Visual Modality Input Media   137
5.29 Tae’s Monochrome Visual Modality Input Capabilities   138
5.30 Tae’s Magnification Capabilities for Visual Modality Input   139
5.31 Tae’s Speech-Based Auditory Modality Input Capabilities   140
5.32 Structure of CAPUSE for “Tae”   141
5.33 A CAP for a “Standard System”   144
5.34 Tactile Input   145
5.35 Keyboard Input Modality   146
5.36 Keyboard Input Modality Capabilities   147
5.37 Left Mouse Button Input Modality   147
5.38 An Ergonomically Right-Handed Mouse for a Medium-Sized Hand   149
5.39 Auditory Input Capabilities   150
5.40 Auditory Input Modality   150
5.41 Auditory Modality Input Capabilities   151
5.42 Visual Output   152
5.43 Visual Modality Output   153
5.44 Visual Modality Output Capabilities   153
5.45 Auditory Output   154
5.46 Auditory Output Modality   155
5.47 Auditory Output Capabilities   156
5.48 Structure of CAPSYS for “Standard System”   157
5.49 A CAP for “System with Some Additional Software AT Support”   159
5.50 Auditory Input   159
5.51 Auditory Modality Input   160
5.52 Auditory Input Modality Capabilities   161
5.53 Visual Output Modality Capabilities   161
5.54 Earphone-Based Auditory Output   162
5.55 Earphone-Based Auditory Output Modality   163
5.56 Auditory Output Modality Capabilities   164
5.57 A Screen Reader CAP   165
5.58 Screen Reader Input Modalities   166
5.59 Screen Reader Output Modalities   166
5.60 Screen Reader Output Capabilities   167
5.61 Screen Reader Capabilities   168
5.62 A Screen Magnifier CAP   169
5.63 Structure of CAPSYS for “System with Some Additional Software AT Support”   170
5.64 A CAP for a “System with both Additional Software and Hardware AT Support”   172
5.65 A HeadMouse is Just a Mouse   173
5.66 A HeadMouse can Simulate the Left Mouse Button   173
5.67 HeadMouse Tactile Modality Capabilities   174
5.68 An Onscreen Keyboard CAP   175
5.69 Onscreen Keyboard Input Modalities   175
5.70 Onscreen Keyboard Output Modalities   176
5.71 Onscreen Keyboard Capabilities   177
5.72 Onscreen Keyboards Transform their Input   178
5.73 Structure of CAPSYS for “System with Software and Hardware AT Support”   179
5.74 A CAP for a “Conference Hall”   182
5.75 Auditory Input CAP   183
5.76 Auditory Modality Capabilities   184
5.77 A CAP for Using a Headset in a “Conference Hall”   185
5.78 Micro-Environment’s Auditory Input   186
5.79 Micro-Environment’s Auditory Input Modality   186
5.80 Micro-Environment’s Auditory Input Capabilities   187
5.81 Structure of “Conference Hall” Environment CAP   187
5.82 A CAP for a “Darkened Kitchen”   189
5.83 Illumination of the “Darkened Kitchen”   190
5.84 Structure of “Darkened Kitchen” Environment CAP   191

F.1 A Classification of Scales   235
List of Figures

2.1  A first attempt at a reference model  17
2.2  Adding context to a reference model  18
2.3  The "Universal Access Reference Model"  19
2.4  A model of a user  20
2.5  A model of a system  24
2.6  "Communication Theory" view of interaction  27
2.7  Media channels and handicaps  28
2.8  Shared context and accessibility  32
2.9  Not all context is shared  33
2.10 Accessibility interfacing and processing  34
2.11 Assistive technology in the UARM  35
2.12 Environment in a multi-system model  36
2.13 A model of assistive technology  37
2.14 Using AT to help interface between components  38
2.15 Components of accessibility  39
2.16 A fully accessible system  47
2.17 A system with access issues for the user  48
2.18 Multiple channels not interfering with each other  49
2.19 Auditory channel conflict (insufficient channel capacity)  50
2.20 Adding a boom microphone auditory modality to the system  51
2.21 User capabilities for multiple input channels  52
2.22 Identifying handicaps based on the CAP  57
3.1  Visualization of the CAP structure  93

4.1  A four-level CAP structure  104
B.1 The arcade game “Police 911” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
List of Abbreviations

ACCLIP  IMS Accessibility and Learner Information Profile
ACCMD   IMS Accessibility Metadata
AT      Assistive Technology
ANSI    American National Standards Institute
CAP     Common Accessibility Profile
CAPAT   Assistive Technology CAP Specification
CAPCF   Component Feature CAP Specification
CAPENV  Environment CAP Specification
CAPIC   Interacting Component CAP Specification
CAPIR   Input Receptor CAP Specification
CAPOT   Output Transmitter CAP Specification
CAPPF   Processing Function CAP Specification
CAPSYS  System CAP Specification
CAPUSE  User CAP Specification
CD      ISO Committee Draft
CF      Component Feature
CIF     Common Industry Format for Usability Test Reports
DPI     Disabled Peoples' International
FCD     ISO Final Committee Draft
FLOSS   Free/Libre/Open-Source Software
IC      Interacting Component
IM      Instant Messaging
IR      Input Receptor
ICF     International Classification of Functioning, Disability, and Health
IEC     International Electrotechnical Commission
IMS     Instructional Management Systems Global Learning Consortium, Inc.
ISO     International Organisation for Standardisation
ISO/TS  ISO Technical Specification
LIP     IMS Accessibility for Learner Information Package
OT      Output Transmitter
PAN     Personal Area Network
PC      Personal Computer
PDA     Personal Data Assistant
PF      Processing Function
RA      Registry Authority
SI      Système International
UARM    Universal Access Reference Model
UNS     User Needs Summary
URL     Uniform Resource Locator
USB     Universal Serial Bus
W3C     World Wide Web Consortium
WHO     World Health Organization
XML     Extensible Markup Language
List of Listings C.1 Example XML Declaration for a CAP . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Chapter 1
Introduction

People can have difficulties using computers. These people can be of any age, ability, or background. Some people have more difficulty than others.
1.1 Improving Accessibility of Computing Systems
Numerous studies have identified different usability issues. One study comparing the usability experience of several web sites for both younger and older adults found that overall usability was slightly more than twice as good for non-seniors as it was for seniors (Nielsen, 2002). A survey of the role of computer-based Assistive Technology (AT) in Canadian post-secondary environments found that, among post-secondary students with disabilities, although 95% of the 800 respondents use a computer, only a quarter currently use an AT with their computer and half report needing one (Fichten, Asuncion, & Barile, 2001). Research experience with children suggests that product usability is closely related to children's enjoyment (Hanna, Risden, Czerwinski, & Alexander, 1998).

While some will blame the people who have difficulty, others will blame the computer. What is important is to remove the difficulty without focusing on blame. Designing universally usable systems is one way to do that.
1.1.1 Usability and Accessibility
Different people have different impressions of usability and accessibility. The two concepts are closely related, yet separate and distinct.

International Organisation for Standardisation (ISO) 9241-11 is an international standard that describes how to identify the information required when specifying or evaluating usability in terms of measures of user performance and satisfaction. This standard provides guidance on how to describe the context of use of the product and the measures of usability in an explicit way. It also includes an explanation of how the usability of a product can be specified and evaluated as part of a quality system (International Organization for Standardization [ISO], 1998a).
The definition of usability in ISO 9241-11 describes a composite of effectiveness, efficiency, and satisfaction with which specified users achieve specified goals in particular environments:

usability: the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO, 1998a)

The three terms effectiveness, efficiency, and satisfaction refer respectively to how well the system does the job for the user, how easily the job is done for the user, and how users feel about the process overall. In this context, the notions of effectiveness and efficiency are highly coupled. Thus, usability focuses on specific users and may ignore or exclude other users.

Usability problems impact all users equally, regardless of ability. A person with a disability is not disadvantaged to a greater extent by usability issues than a person without a disability (Thatcher et al., 2002). A usability issue becomes an accessibility issue when it has a disproportionate effect on people with disabilities (Vanderheiden, 2007).

When accessibility is separated from usability, it is often perceived as a "special case" of usability where special designs are needed to fit the requirements of users with specific types of disability. This view suggests that specific users with specific disabilities should be required to purchase a significantly more expensive version of common consumer products which has been specially designed to meet their needs. This is not practical because each user group is so small that no product can be provided at reasonable cost to the consumer (or producer) (Vanderheiden, 1990). In addition, this approach does not meet the needs of the majority of consumers with disabilities because it excludes the widest possible range of users: specific designs for specific groups of users with disabilities by definition miss other specific groups of users with disabilities.
Sometimes this "special case" perspective can lead designers to believe that everything needs to be designed to be used by everyone. This is based on a misperception that everything needs to be designed so that it is accessible to every possible person with a disability. It may be "impractical, if not impossible, to design everything so that it is accessible by everyone regardless of their limitations" (Vanderheiden, 1990) and it may be "unreasonable to design everything so that it can be used by everyone" (Vanderheiden, 1990).

The danger with the "special case" view of accessibility is twofold:

• A product could be "usable" without being "accessible." That is, when designers view accessible design as "outside of" usable design, they have "permission" to design an inaccessible product.

• A product could be "accessible" without being "usable." This occurs when a designer of an "accessible" product does not account for the usability requirements of the specific population for which the product is designed.
However, many people believe that there should be no distinction between usability and accessibility ("Accessibility is good usability" (Moulton, Huyler, Hertz, & Levenson, 2002)) and thus that the term "usability" should encompass the meaning of both terms. According to ISO Technical Specification (ISO/TS) 16071 [1], which provides guidelines and recommendations for the design of systems and software that will give users with disabilities greater accessibility to computer systems, accessibility is defined as:

accessibility: usability of a product, service, environment or facility by people with the widest range of capabilities (International Organization for Standardization [ISO], 2003b)

Accessibility focuses on the widest range of users, recognizing that different users have different needs and may require different accommodations. It is reasoned that, if usability is about producing products and systems that are easy to use and perform the function for which they were designed, then accessible design is about producing products and systems that are usable by all persons regardless of (dis)ability. While usability focuses on a society of "average" users to the exclusion of those who do not fit the "average", accessibility focuses on single individual users with the goal of producing products that are usable by everyone.

This definition suggests that the solution lies somewhere between the extremes discussed above: not designing to exclude users with disabilities, not designing specific products for specific users with disabilities, but designing for the widest possible range of users regardless of disability. It suggests that it is possible to design everything for everyone, or at least to get as close as possible. However, such a goal requires designers to consider the needs of all users, including users with disabilities, from the beginning. For the purpose of this Thesis, the ISO/TS 16071 definition of accessibility will be used.
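The ISO 9241-11 decomposition of usability into effectiveness, efficiency, and satisfaction lends itself to simple quantitative measures. The following Python sketch is illustrative only: the particular metrics (task completion rate, goals completed per minute, mean of 1-to-5 survey ratings) are common operationalizations assumed for this example, not formulas prescribed by the standard.

```python
# Illustrative operationalization of ISO 9241-11's three usability components.
# The specific metrics below are this example's assumptions, not mandated
# by the standard, which leaves the choice of measures to the evaluator.

def effectiveness(goals_achieved, goals_attempted):
    """How well the system does the job: task completion rate."""
    return goals_achieved / goals_attempted

def efficiency(goals_achieved, total_minutes):
    """How easily the job is done: goals completed per minute of effort."""
    return goals_achieved / total_minutes

def satisfaction(survey_scores):
    """How users feel about the process overall: mean of 1-5 ratings."""
    return sum(survey_scores) / len(survey_scores)

# One specified user pursuing specified goals in a specified context of use:
print(effectiveness(8, 10))        # 0.8
print(efficiency(8, 40))           # 0.2
print(satisfaction([4, 5, 3, 4]))  # 4.0
```

Because the standard leaves the choice of measures open, an actual evaluation would define such metrics as part of specifying the context of use.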
1.1.2 Creating Accessibility
There are three views of how to create accessibility. Each view has a different focus: the user, the product, or the interaction between the user and the product (Carter & Fourney, 2004b).

Focus on the User: This view forces the user to work around the usability of the computing agent and corresponds to bio-medical and functional-rehabilitative models of disability; that is, a disability is an "illness" to be "treated" or "fixed." The user seems to bear the "blame" for any problems. In this view, the "barrier" is perceived to somehow be an attribute of the user, not of the system or environment (e.g., Vitense, Jacko, & Emery, 2002 refer to "individuals with barriers limiting one or more channels of perception, such as a visual impairment").

[1] In 2007, ISO/TS 16071 will be succeeded by ISO 9241-171 (International Organization for Standardization [ISO], 2006f).
In this view, meeting the needs of users and universal access is often seen as a burden rather than as a design goal. For example, International Organisation for Standardisation (ISO) and International Electrotechnical Commission (IEC) Guide 71, which describes a set of guidelines used within ISO to ensure the needs of older persons and people with disabilities are addressed when developing standards, states that accessibility involves ". . . extending standard design to people with some type of performance limitation" (2001b). ISO/IEC Guide 71 places the focus of "the problem" squarely on the user. This view has the distinct disadvantage of blaming users for their limitations (Hong Kong Equal Opportunities Commission, 1999).

Focus on the Product: This view focuses on the usability of the computing agent and away from the user. It forces the computing agent to resolve usability problems. However, it continues to see human-computer interaction as a two-party affair and assigns blame for any problems to the computing agent. For example, the ISO/TS 16071 definition of "accessibility" quoted above has the advantage of focusing the "problem" of accessibility on the usability of the computing agent and away from the user; however, it continues to see human-computer interaction as a two-party affair and still assigns blame to one of them.

Focus on the Interaction between the User and the Product: The problem of accessibility is neither the "fault" of the user nor of the system. The focus is on ". . . removing barriers that prevent people . . . from participating in substantial life activities . . . ." (Bergman & Johnson, 1995). There is a third party involved, the "barrier": something which handicaps the interaction between the user and the system. This interaction can be described as a negotiation among parties where each party brings its own "terms" into the negotiation and a working partnership is sought.
Attributing "blame" to one side or the other is undesirable. Parties can only communicate if they have some shared context, which may include various types of knowledge and abilities. Anything missing from the shared context of the interaction will handicap the communication between the user and the product (Carter & Fourney, 2004b).

Focusing on the interaction between the user and the product means that adaptations need not be one-sided. In the world of computing there are two different kinds of accessibility: physical accessibility (i.e., the "stuff outside the box," such as the keyboard or mouse) and logical accessibility (i.e., software which does not require any further adaptation). A focus on the interaction between the user and the product encompasses both of these views.
1.1.3 (Dis)Abilities
To have successful interactions, there needs to be a suitable matching of user and system capabilities. Many people focus on disabilities rather than abilities because they assume that a disability leads to failure. When used in a personal sense (i.e., "disabled person"), the word "disabled" negates a person, and largely dismisses any abilities the person may possess. "Disability" implies that a person is not able to do some things or anything (Baldwin, 2000).

Most definitions of disability touch on only one aspect of disability and not the full range of issues that contribute to it. A review of contemporary definitions suggests that they reflect four paradigms (summarized in Table 1.1) which fall mainly into one of two categories of focus: the Individual or the Society (Rioux, 1997).
Table 1.1: Categories of Definitions of "Disability" (adapted from Rioux, 1997)

Category I: Individual as unit of analysis
  Biological or Medical model: Emphasis on attributes in the individual.
  Functional or Rehabilitation model: Emphasis on promoting or restoring fuller functioning in the individual.

Category II: Society as unit of analysis
  Environmental model: Attention directed to ecological barriers (social, economic, political, institutional and legal) which can result in disability.
  Human rights model: Focus is on the rights to which all people, including people with disabilities, are entitled.

Definitions that focus on the individual as the unit of analysis tend to centre on either a bio-medical model with emphasis on the attributes of the individual, or on a functional-rehabilitative model with emphasis on the promotion or restoration of fuller functioning in the individual. Such definitions frame disability with reference to the individual and in terms of individual deficits. Most of the models in this category assume a "norm" below which a person must fall to be identified as a person with a disability. Medical definitions hold that the etiology of disability resides in the individual as a consequence of events such as disease, accident, war, genetic structure, birth trauma, or other acute causes, and that by identifying the etiology involved, disability can be "treated", "cured", or "prevented". The functional-rehabilitative approach emphasizes the actions or activities that an individual can perform (with or without assistance) as a result of a bio-medical condition (Hong Kong Equal Opportunities Commission, 1999).
Definitions that focus on society as the unit of analysis tend to centre either on an environmental model, with attention directed to ecological barriers (social, economic, political, institutional and legal) which can result in disability, or on a human rights model, with emphasis on the rights to which all people, including those with disabilities, are entitled. Such definitions focus not on the individual but on the social, economic, political, institutional and legal conditions that can result in disability. Environmental definitions recognize that a physical, sensory or intellectual impairment will not limit many individuals as much as being denied an education, the right to employment, or the right to marry and have a family of their own. Human rights approaches to disability are
premised on the recognition of a set of fundamental rights to which all people are entitled regardless of individual characteristics, consequently reducing the need for a specific definition of disability (Hong Kong Equal Opportunities Commission, 1999).

While a medical model view of disability ignores the imperfections and deficiencies of the environment (United Nations Commission for Social Development on Disability, 1993), definitions with a focus on society recognize that "disability has too long been viewed as a problem of the individual and not the relationship between an individual and his/her environment" (Disabled Peoples' International [DPI], 1993). Thus, unlike definitions that focus on an individual's deficits, definitions that focus on society require that the structural conditions in a society that result in disability be addressed and ameliorated.

The International Classification of Functioning, Disability, and Health (ICF) presents a third model, one that attempts to combine what is true of both the medical and social approaches without reducing the entire notion of disability to either one's aspects. This "biopsychosocial model" views disability and functioning as outcomes of interactions between health conditions and contextual factors, each of which impacts how disability is experienced by the individual. Contextual factors include both environmental factors (e.g., social attitudes, legal structures, built or natural environment, climate, etc.) and personal factors (e.g., personal attitude, gender, age, education, past behaviour, etc.). Thus, in the ICF model, disability involves dysfunction at one or more levels of human functioning: physical impairment, activity limitations, and participation restrictions (World Health Organization [WHO], 2002).

On their own, although both are partially valid, neither a medical nor a social focus fully captures the complexity of disability. Their combination, as suggested by the ICF, does not fully grasp the notion either. The reality is that "both the causes and consequences of disability vary throughout the world" (United Nations Commission for Social Development on Disability, 1993). Table 1.1 shows little agreement over the definition of disability because issues such as culture and economics change the view of what constitutes a "disability", and of whether or not a person has a disability, from one society to another. In essence, ". . . disability is a social construct" (Kaplan, n.d.):

Most people believe they know what is and is not a disability. If you imagine 'the disabled' at one end of a spectrum and people who are extremely physically and mentally capable at the other, the distinction appears to be clear. (Kaplan, n.d.)

Just because one is perceived to have a disability in one society does not necessarily mean one is also perceived to have a disability (or even the same disability) in another. For example, the island of Martha's Vineyard once had a strain of hereditary deafness such that a large number of residents were born deaf. As a result, most people on the island knew at least one person who was deaf and a large number of islanders were fluent in the local Sign Language. At a time when the mainland United States did not allow deaf persons to hold office, much less vote, there was little social differentiation based on hearing status on Martha's Vineyard (Groce, 1985).
Typical definitions of disability refer to an impairment that results in loss of function in one or more major life activities (e.g., walking, seeing, hearing, learning, etc.). The implication of such definitions is that what limits people with disabilities is the inability to see, walk, hear, and so on. While that has obvious validity, the evidence is clear that people with disabilities can live full, productive lives. Disability is both a problem at the level of one's body and a complex social phenomenon.

Underestimating the innate abilities of people with disabilities is rooted in archaic notions from a time when it was difficult for people with disabilities to participate in their community. Underestimating people with disabilities lowers expectations of what people can achieve, creates stereotypes and self-fulfilling prophecies, and attaches a stigma to having a disability (Goodwin, 1997).

Disabilities have to be contextualized. As the World Health Organization (WHO) points out, "Disability is always an interaction between features of the person and features of the overall context in which the person lives, . . ." (WHO, 2002). If, in a given context, you do not need to use a certain ability, then any impairments to that ability are irrelevant. What is relevant are the abilities an individual brings into the context.

Before 1980, the word "disabled" was used to label minority groups who had conditions that made them physically, intellectually, or mentally different. In the 1980s, the term "people with disabilities" was introduced as social recognition of this minority group as people living in the community increased. Toward the end of the 1990s, changes in technologies pushed the term "disability" out the door. Words such as "ability", "inclusion", and "normalization" have forced a reconsideration of "disabled". This created a new awareness, focusing on the person's abilities first, rather than the disability (Baldwin, 2000).
The work of Benjamin Bloom and his colleagues provides a list of abilities to draw on. They identify three domains of educational activities: cognitive, affective, and psychomotor. The cognitive domain has six major categories and involves knowledge and the development of intellectual skills. The affective domain has five major categories and includes the manner in which we deal with things emotionally, such as feelings, values, appreciation, enthusiasms, motivations, and attitudes. The psychomotor domain has seven major categories and includes physical movement, co-ordination, and use of the motor-skill areas. Development of these skills requires practice and is measured in terms of speed, precision, distance, procedures, or techniques in execution. Each major category of skill can be thought of as a degree of difficulty: one must be mastered before the next can take place (Bloom et al., 1956).

["Disability"] is a label of liability. "Ability" is a word of action and asset in the minds and eyes of individuals (Baldwin, 2000).

Instead of focusing on obvious weaknesses, a focus on abilities recognizes the strengths of an individual (West, 1999) and the opportunities flexibility can create (Gibilisco, 2003). In essence, the idea that there is "more than one way to skin a cat" (Clemens, 1889) applies.
1.1.4 Handicaps
Historically, the terms "disability" and "handicap" have been misused interchangeably. A handicap is not a disability, nor is it caused by a disability. The source of a handicap is the interaction between a person and some other object (including another person). Although the term "handicap" is often associated with persons with disabilities (United Nations Commission for Social Development on Disability, 1993), persons without disabilities also experience handicaps. The constitution of Disabled Peoples' International (DPI) defines "handicap" as:

. . . the loss or limitation of opportunities to take part in the normal life of the community on an equal level with others due [to] physical or social barriers (DPI, 1993)

A badly designed system also handicaps users (Thimbleby, 1995). As computers become part of a user's environment, the interaction between users and computers can be seen as a source of handicap; "human-computer interaction is an impoverished affair" (Taubes, 2000). The keyboard and mouse, our primary points of contact, are limiting, single-mode interactions for use by a species accustomed to multimodal social interaction. In essence, the computer's own shortcomings often create the barriers users, with or without disabilities, experience (Carter & Fourney, 2004b).

For several historical reasons, in the English language the term "handicap" has acquired a variety of negative political and emotional connotations. For this reason, the term "barrier" has often been used instead. Unfortunately, without qualification (e.g., the type of barrier), this English term does not fully communicate both the environmental and social impacts that the word "handicap" does.

Handicaps are anything that may interfere with the accessibility of interactions. A handicap may have one or many sources among the system, user, interaction, and/or environment.
From a universal access perspective, reducing any handicaps to an interaction is far more important than attributing blame (Carter & Fourney, 2004b).

A handicap can provide different levels of interference at different times. On the one hand, a handicap may provide full, partial, or no interference. Full interference means that no interaction between the user and the system can occur: all possible means of interaction are blocked. Partial interference means that only a portion of the many ways the user and system can interact are available. If the ways the user and system can interact are so limited that there are no compatible ways for them to interact, a partial interference can still be seen as a full interference. If there is no interference at all, full interaction can occur.

On the other hand, a handicap may be temporary, progressive, or permanent. This temporal dimension means that, across time, the level of interference can change as the situation changes. Partial interferences may become full interferences or no interferences; similarly, a given level of interference may never change (Carter & Fourney, 2004b).

This Thesis will focus on handicaps to interactions rather than disabilities of users.
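The observation that a partial interference with no remaining compatible channels amounts to a full interference can be sketched as a simple set computation. The following Python sketch is illustrative only; the function name and channel names are hypothetical, not part of the UARM or CAP notation.

```python
# Illustrative sketch: classify the level of interference a handicap imposes,
# given the interaction channels each party can use. Channel names and the
# function name are hypothetical, chosen for this example.

def interference_level(user_channels, system_channels, blocked):
    """Return 'none', 'partial', or 'full' for a given set of blocked channels."""
    shared = user_channels & system_channels  # channels both parties can use
    usable = shared - blocked                 # channels the handicap leaves open
    if not shared or not usable:
        return "full"     # no compatible way left to interact
    if usable == shared:
        return "none"     # the handicap does not touch any shared channel
    return "partial"

user = {"visual", "auditory", "tactile"}
system = {"visual", "auditory"}

print(interference_level(user, system, set()))                   # none
print(interference_level(user, system, {"auditory"}))            # partial
print(interference_level(user, system, {"visual", "auditory"}))  # full
```

The last call shows a partial interference (only some channels blocked) that is effectively full, because no compatible channel remains between the parties.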
1.1.5 Focusing on Abilities to Minimize Handicaps
If handicaps interfere with the accessibility of interactions, and interactions occur where parties have some shared context, then there is a need to focus on the abilities that are shared between the user and the system. The context may include various types of knowledge and abilities. The effect of such a focus would be to minimize handicaps to interactions and maximize accessibility. Accomplishing this effect requires measurement, but suitable measurement systems do not yet exist.
1.2 Research Objectives
Since computer users need standardized ways of identifying their access needs to systems, and of identifying systems (and their components) that can meet those needs, there is a need for guidance in how to evaluate and improve the accessibility of systems for users. The notion of "evaluation" suggests the need for descriptors that can be applied to objectively measure accessibility. The notion of "improvement" suggests the need for metrics based on sound principles (e.g., Basili & Rombach, 1988) which can detect change, and algorithms to drive and interpret them. This Thesis will develop a standardized way to describe the needs and capabilities of users and systems.
1.3 Thesis Organization
This Thesis is organized into the following chapters:

• Chapter 1 provides introductory material, motivation, and research objectives for the Thesis.
• Chapter 2 discusses accessibility, develops an approach to modelling accessibility (i.e., the Universal Access Reference Model (UARM)), and introduces the Common Accessibility Profile (CAP) as a means to record specifications of the model.
• Chapter 3 defines the structure and contents of information within a CAP that can be used to describe users, systems, assistive technologies, and environments.
• Chapter 4 defines operators on individual pieces and groups of information within a CAP.
• Chapter 5 develops examples of the specification of CAPs for users, systems, and environments.
• Chapter 6 discusses validation of the CAP.
• Chapter 7 discusses the results and contributions of this Thesis and suggests future work.
1.3.1 Defining Common Accessibility Profile Structure and Contents
Chapter 3 begins by determining what the CAP needs, both in terms of structure and specification. These needs are based on the description of the CAP outlined in Chapter 2. Over several iterations, a structure for the CAP was developed and specified using tables. Candidate codings were gathered to fit the identified needs. These codings were initially derived from various resources, including Chapter 2, various International Standards, and other applicable research. Where an appropriate candidate coding could not be determined, a coding was created to fit the identified need. These codings were refined over several iterations until a generalized specification format was achieved. Analysis from later chapters of this Thesis was then used to correct and further develop this specification.
1.3.2 Qualifying Information in Relationships within CAPs
Chapter 4 provides the ability to qualify information and specify the relationships between different pieces of information within a single CAP. For example, a specific system could be described as supporting more than one language; a CAP must be qualified to determine whether these language capabilities are OR'ed or AND'ed together. This chapter introduces:

• unary operators {SHALL, MAY, NOT} that qualify individual pieces of information, and
• binary operators {AND, OR, XOR} that specify relationships between pieces of information.
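The following Python sketch illustrates the intent of these operators on the language example above. It is a hypothetical sketch only: the function and variable names are invented here, and the CAP's actual notation and semantics are defined in Chapter 4.

```python
# Hypothetical sketch of CAP-style qualifiers and relationships.
# The names below are illustrative, not the CAP's own notation.

def matches(qualifier, other_supports):
    """Evaluate one qualified capability against the other party's support.

    SHALL: the other party must support the capability.
    NOT:   the other party must not depend on the capability.
    MAY:   optional; never causes a mismatch on its own.
    """
    if qualifier == "SHALL":
        return other_supports
    if qualifier == "NOT":
        return not other_supports
    return True  # MAY

# A system requiring a language the user shares:
supports_en = matches("SHALL", True)   # user understands English
supports_fr = matches("SHALL", False)  # user does not understand French

# Binary operators specify how the qualified pieces combine:
assert (supports_en or supports_fr) is True    # OR: either language suffices
assert (supports_en and supports_fr) is False  # AND: both required, so mismatch
assert (supports_en != supports_fr) is True    # XOR: exactly one matches
```

The point of the sketch is that the same two capability statements yield different accessibility verdicts depending on the relationship operator joining them, which is why the CAP must record that operator explicitly.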
1.3.3 Developing Example CAPs
Chapter 5 demonstrates the feasibility of developing meaningful CAPs. These examples fulfill two goals:

1. Evaluating the usability of the CAP approach.
2. Evaluating the completeness and usability of the structure and specification from Chapter 3 and the qualifications and relationships from Chapter 4.

This chapter considered the various examples suggested throughout Chapters 1 and 2. Candidate examples were considered for completeness and applicability to user, system, and environment. Since the number of candidate CAPs was insufficient for all categories, existing examples were modified and new examples were developed.
1.3.4 Validating the CAP
Chapter 6 describes an approach to validation that goes beyond the ability to generate CAPs that is shown in Chapter 5. This approach involved validation by a large international group of experts who formally reviewed and approved the CAP. Validation also considered the appropriateness of the metrics involved in the CAP by considering their theoretical and practical applicability.
1.3.5 Additional Materials
Technical material is provided in the Appendices as needed to allow the main thesis to present a solution in a manner that can be understood by a general audience (i.e., intended CAP users). A glossary of terms is provided in Appendix A.
Chapter 2
Background

This chapter describes a model-based approach to evaluating and improving accessibility.

Section 2.1 begins with a discussion of standards-based approaches to modelling usability and introduces the UARM. Since handicaps occur within the interactions of users and systems, a model based on interactions is necessary to describe accessibility.

Section 2.2 builds on the introduction of the UARM by describing the components of the model in detail. The model contains five components: users, systems, interactions, environment, and context.

Section 2.3 introduces multi-system models of interaction. Up to this point, the chapter assumes one system interacting with one user. Multi-system models illustrate additional systems, such as Assistive Technologies (ATs), interacting with the user. An AT is a program or device that can be added to a system to make it more accessible. The UARM is expanded to encompass a multi-system model and additional components are discussed in detail.

Finally, Section 2.4 introduces the CAP. The CAP describes user-system and user-AT-system accessibility across all users and systems and is based on interactions, which may involve multiple channels. The CAP provides a framework of descriptors which can be applied to objectively measure usability.
2.1
Modelling Usability
It seems that everyone talks about usability, but it is unclear how to achieve it. Individual approaches have been difficult to generalize outside of the context in which they were developed. There is a need for a repeatable method to achieve usability which can be applied across situations. This method should be based on a model that can be used to evaluate usability both for groups of users and for individual users. This will ensure not only general usability but also accessibility for individuals. Since handicaps occur within the interactions of individual users and systems, a model based on interactions is necessary to describe accessibility. This section introduces the model of usability used by this Thesis. It discusses the standards-based model of usability and user-centred development and introduces the UARM. The UARM is both a response to the need for models to support system accessibility evaluation and improvement, and a standards-based user-centred approach.
2.1.1
Usability and User-Centred Development
Usability of a product is directly related to its users, their tasks, and the context in which their tasks are performed. Such differences as the user’s experience, skills, and abilities may influence usability. For example, expertise can change how a user prefers to interact with a product. Users who are unfamiliar with a software application or only use it occasionally may be most comfortable using graphical menus. Users very familiar with the same application (so-called “power users”) may be more comfortable using keyboard shortcuts (Carter, 2004). Different products within the same domain may differ in usability simply by being better suited to certain tasks (Carter, 2004). For example, Word-processor A may be more usable when inserting and editing page headings than Word-processor B. However, Word-processor B may be more usable when changing the appearance of text. Whether or not one product is better than another is based on the tasks users intend to perform with it. While the design of a product may have been focused on one set of tasks, users may expand on this set in actual product use. For example, a spreadsheet application originally intended to support bookkeepers may be used by a teacher to record marks from student assignments (Carter, 2004). The context of use may also influence whether the product is usable (Carter & Fourney, 2004b). For example, observing system designers using a system intended for use by administrative staff may say little about its usability for those administrative staff. The system designers’ knowledge, background, and approach to computer systems, and thus the quality of use they attain with the system under test, are likely to be very different from those of the administrative staff, as is their knowledge of administrative tasks (Macleod, 1994). ISO 9241-11 and ISO 13407 provide direction on how to approach usability (ISO, 1998a; International Organization for Standardization [ISO], 1999a).
2.1.1.1
Usability in ISO 9241-11
ISO 9241-11 describes usability as a composite of effectiveness, efficiency, and satisfaction with which specified users achieve specified goals in particular environments (ISO, 1998a). It recognizes that there are many possible measures of effectiveness, efficiency, and satisfaction and that all measures are subjective to some extent. Rather than using measures as absolute ratings, they can be used to compare between alternatives, such as between an existing system and a new system being developed or between two alternative new systems. The Common Industry Format for Usability Test Reports (CIF) (American National Standards Institute [ANSI], 2001), which has been internationally standardized via ISO/IEC 25062 (International Organization for Standardization [ISO] and International Electrotechnical Commission [IEC], 2006a), provides examples of metrics related to each of these three components of usability.

Effectiveness: ISO 9241-11 defines effectiveness as, “the accuracy and completeness with which users achieve specified goals” (ISO, 1998a). Users should be enabled to use the system to accurately complete an identified set of tasks with particular types of content in various identified environments. The specific user groups, tasks, content types, and environments involved can be identified through requirements analysis. Examples of effectiveness metrics include: completion rates (“the percentage of participants who completely and correctly achieve the goal of the task”), errors (a classification of errors that prevented test participants from completing the task or that required them to “attempt portions of the task more than once”), and assists (instances where participants required assistance to be able to complete the task) (ISO & IEC, 2006a).

Efficiency: ISO 9241-11 defines efficiency as, “the resources expended in relation to the accuracy and completeness with which users achieve goals” (ISO, 1998a). Users hope that new systems will improve their efficiency in accomplishing their tasks. A system should be at least as efficient as its competition and/or any system that it intends to replace. Examples of efficiency metrics include: task time (“the mean time to complete each task”) and the completion rate to mean time-on-task ratio (“the percentage of users who were successful for every unit of time”) (ISO & IEC, 2006a).

Satisfaction: ISO 9241-11 defines satisfaction as, “positive attitudes to the use of the product and freedom from discomfort in using it” (ISO, 1998a).
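Several of the CIF metrics above are simple ratios. The following sketch is illustrative only: the test data and function names are hypothetical, not defined by the CIF, but the arithmetic matches the metric descriptions quoted above.

```python
# Illustrative only: hypothetical usability-test data, not a CIF-defined API.
# Each record: (task completed correctly?, time on task in seconds, assists given)
results = [
    (True, 120.0, 0),
    (True, 95.0, 1),
    (False, 180.0, 2),
    (True, 150.0, 0),
]

def completion_rate(records):
    """Percentage of participants who completely and correctly achieved the goal."""
    return 100.0 * sum(1 for done, _, _ in records if done) / len(records)

def mean_task_time(records):
    """Mean time to complete the task, over all participants."""
    return sum(t for _, t, _ in records) / len(records)

def efficiency_ratio(records):
    """Completion rate per unit of time-on-task (CIF's ratio metric)."""
    return completion_rate(records) / mean_task_time(records)

print(completion_rate(results))  # 75.0
print(mean_task_time(results))   # 136.25
```

Such figures are most meaningful when compared between alternatives (for example, an existing system versus its proposed replacement), not as absolute ratings.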
If given the choice, users will often choose the systems that they are most satisfied with. A system should be at least as satisfying as its competition and/or any system that it intends to replace. The CIF refers to a number of widely used questionnaires as a source of metrics for evaluating satisfaction (ISO & IEC, 2006a).
2.1.1.2
Principles of User-Centred Development
ISO 13407, Human-centred Design Processes for Interactive Systems, provides guidance on human-centred design activities throughout the life cycle of interactive computer-based systems. It is a tool for those managing design processes and provides guidance on sources of information and standards relevant to the human-centred approach. ISO 13407 describes human-centred design as a multidisciplinary activity, incorporating both human factors and ergonomics knowledge and techniques (ISO, 1999a).

ISO 13407 presents a set of four “principles of human-centered design” that characterize the commonalities from among various approaches to user-centred (or human-centred) development (or design) (ISO, 1999a). The list below summarizes the principles from ISO 13407. To improve their clarity and usability, the first principle has been split into two separate principles (Carter, 2004).

• The system must be designed to meet the specific requirements of different groups of users. This includes performing some identifiable set of tasks using some identifiable types of content within some identifiable context(s) in manners that are suited to the unique characteristics of each group of users.

• Users should be actively involved in the development. Meeting user requirements is more important than involving them, but it is usually very difficult to develop a user-centred system without the active involvement of some users.

• Iteration is essential in obtaining and applying successive evaluations in the development of usable systems. Without evaluation there is no way to determine if a system is even slightly usable. With evaluation it is often possible to see how a system’s design (and resulting usability) could be improved.

• User-centred design benefits from applying a multi-disciplinary set of skills. In addition to user and software development (software engineering) skills, there are many other important skills, including (but not limited to): human-computer interaction skills, graphics design skills, cognitive engineering skills, application specialist skills, marketing skills, and management skills.

• User-centred design results in an appropriate allocation of function between users and technology. Human-computer interaction, by definition, requires both the human and the computer to play a part in accomplishing some task. Usability requires that each play a part that is appropriate to their capabilities and that meets the requirements of the users.
2.1.2
Universal Access Reference Model (UARM)
Reference models of human computer interactions should “provide a generic, abstract structure which describes the flow of data between the user and the application, its conversion into information, and the auxiliary support which is needed for an interactive dialogue” (Lynch & Meads, 1986). The UARM illustrates the major functions and relations that are common to all instances of universal access, without forcing a particular design/implementation on any individual instance. This section explains the basis upon which the components and relationships relevant to the UARM were identified and is based on material that the author has developed and published (Carter & Fourney, 2004b). The UARM was originally developed to identify areas requiring further accessibility guidance in International Standards. It identifies a range of different functions that a system must provide to support accessibility for all. Originally these UARM functions were used to analyze the guidance contained in ISO/TS 16071. This analysis identified both further guidance to add, and a possible structure that could be applied, to future versions of ISO/TS 16071 as it evolved towards becoming an international standard (i.e., ISO 9241-171).
2.1.2.1
Systems
Systems are the traditional focus of accessibility. ISO/TS 16071 concentrates on software systems, which can be used as parts of various computing devices. At this point, “system” will refer to generic systems, without specifying the type or composition of these systems. Even if the goal is for computer systems to take the responsibility for being accessible, consideration of more than just the system is necessary. According to Stephanidis, “Universal access refers to the global requirement of coping with diversity in: (i) the characteristics of the target user population (including people with disabilities); (ii) the scope and nature of tasks; and (iii) different contexts of use and the effects of their proliferation into business and social endeavours” (Stephanidis & Savidis, 2001). Each of these aspects is considered within the UARM.
2.1.2.2
Users
Users are people who need or wish to use a system. Users will use a system because it provides some service that resolves a problem or completes a task. Any inability on the system’s part to meet the user’s needs may keep the user from completing their task and achieving task-related goals. For this reason, users are the main component of the model with specific concerns for accessibility. Universal access is important because no two users are exactly alike and an individual user can change over time (Carter & Fourney, 2004b). However, as noted in Chapter 1, meeting the needs of users and universal access is often seen as a burden rather than a design goal. Avoiding a view of accessibility that focuses blame on users or systems allows a third party to be recognized: the interaction between the user and the system.
2.1.2.3
Interaction
The term “interaction” describes the process of communication (or negotiation) that occurs between a user and a system. Interaction is what ties systems to users. Users interact with a system to accomplish a task or set of tasks. Interactions can be considered the tangible components of a task. Tasks may involve a series of individual interactions. Interactions occur in two directions: from the user to the system and from the system to the user. Interactions occur by sending messages across channels.
Interactions are not necessarily serial in nature. Multiple interactions, especially by systems to users, may occur at one time. Multiple interactions may make use of the same or different forms of communication and may reinforce or even duplicate one another. Redundant communication may provide greater potential for universal access but may also provide an unnecessary load on the user. Any failure of one or more interactions to meet the needs of the system, task, and user can result in accessibility problems. Since determining and removing the cause of such a failure can improve accessibility, interactions should be the focus of accessibility.
2.1.2.4
Handicaps
Handicaps are anything that may interfere with the accessibility of interactions. As noted in Chapter 1, a handicap is, “the loss or limitation of opportunities to take part in the normal life of the community on an equal level with others due [to] physical or social barriers” (DPI, 1993). A handicap may have one or many sources among the system, user, interaction, and/or environment. Understanding the handicap to the interaction is more important than attributing the blame. Multiple handicaps may be present during a set of interactions. For example, consider an individual attending a lecture. There are many potential sources of handicaps to the interaction. The lecture may be presented completely aurally and the individual may have a hearing disability (which could be considered a user-related handicap to the interaction). The speaker may not speak clearly (which could be considered a system-related handicap to the interaction). The individual might not understand one of the points being made and may not be allowed to ask immediately for the necessary clarification to continue to understand the lecture (which could be considered an environment-related handicap to the interaction). If the individual received an urgent telephone call in the middle of the lecture or if some loud noise made hearing difficult, some of the lecture might be missed (which could be considered another environment-related handicap to the interaction). Figure 2.1 shows a first attempt at a reference model. It uses the metaphor of a valve to illustrate various levels of interference from a handicap. A fully open valve would represent no interference. A fully-closed valve would represent full interference. Any other setting of the valve would be a partial interference. Just as handicaps can act to restrict interactions, there are other factors that can assist interactions. These factors are referred to as contexts.
Figure 2.1: A first attempt at a reference model (adapted from Carter & Fourney, 2004b)
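The valve metaphor can be read quantitatively: each handicap attenuates an interaction by some amount between fully open and fully closed. The sketch below is an assumption of this illustration, not part of the UARM itself; it simply treats each handicap as an interference level that scales down whatever capacity remains.

```python
# Illustrative sketch of the valve metaphor: interference in [0.0, 1.0],
# where 0.0 is a fully open valve (no interference) and 1.0 a fully
# closed valve (full interference).
def effective_capacity(capacity, interferences):
    """Remaining capacity of an interaction after applying each handicap."""
    for level in interferences:
        if not 0.0 <= level <= 1.0:
            raise ValueError("interference must be between 0.0 and 1.0")
        capacity *= (1.0 - level)  # each handicap scales down what remains
    return capacity

# A lecture heard through background noise (0.3) by a listener with a
# mild hearing loss (0.5): 1.0 * (1 - 0.3) * (1 - 0.5) of the
# interaction remains; a single fully closed valve leaves nothing.
print(effective_capacity(1.0, [0.3, 0.5]))
print(effective_capacity(1.0, [1.0]))
```

Note that when any single valve is fully closed, the remaining capacity is zero regardless of the other settings, matching the observation that a fully handicapped interaction leaves nothing for context to interpret.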
2.1.2.5
Contexts
Users and systems have their own contexts that they use to interpret and fill in gaps in the interactions that they receive (Maskery & Meads, 1992). Context can help to reduce the effect of handicaps that only partially inhibit communication. However, if the interaction is fully handicapped, as indicated in the model by a fully-closed valve, there is no communication for the context to interpret or fill in. In this case, some direct change must be made in the source of communication to at least partially remove or avoid the handicap to the interaction first, before trying to use context. Contexts are, and should be, shared between users and systems. For any interaction to be successful, it requires the user and the system to use some shared context (i.e., symbols/language and/or application knowledge) to make sense of the interaction. For example, both the system and the user may use their context/knowledge of the English language to facilitate communicating textual messages. As illustrated in Figure 2.2, this shared context provides a long-term link between the user and the system.
Figure 2.2: Adding context to a reference model (adapted from Carter & Fourney, 2004b)

If the interactions provided by the system are appropriate for the user and task in a given environment, there will be no handicaps. The necessary content of interactions may be reduced by implicit or explicit reference to a context that is expected to be shared between the user and the system. When this expectation is not met, a contextual gap occurs. The use of shared context may also reduce the effect of handicaps on current interactions. All interactions are interpreted relative to the context of the receiver. Where contextual gaps occur, further interactions making use of a shared context are necessary to clarify the original interaction. Contexts may hinder (as well as help) interactions if they are not appropriate to the interaction. Where context is used inappropriately (such as with colloquialisms that are not shared and are thus taken literally) it may lead to misassumptions. Where context is required but missing, its absence will handicap the interactions. The specific environment in which an interaction occurs can focus attention on specific contexts.
2.1.2.6
Environments
Environments provide additional contexts that focus the user or system on particular portions of their own contexts. Users, interactions and systems may not share the same environment and may each exist within multiple competing environments. An environment may be physical or socio-cultural. Physical environments include built spaces, such as homes, offices, and mechanical plants, as well as the effects of a user’s own physiological state. Changes to a user’s physiological state may be short-term, long-term, ongoing, or permanent. Examples include hunger, illness, progressive loss of sight, or permanent injury. Social and cultural environments include wide-spread attitudes towards the system being used. The term “handicap” emphasizes the, “shortcomings in the environment” (United Nations Commission for Social Development on Disability, 1993). An environment may help or handicap the interaction. It may help the interaction by suggesting the context to use when interpreting the interaction. It may handicap the interaction by introducing distractions and/or incorrect and/or inappropriate suggestions for context. In addition to focusing attention on existing context, environments are a source of additional context. Contextual knowledge of the environment of the user may help the system respond to the user more appropriately. The relationship between the environment and both context and handicaps completes a high-level view of the UARM, as illustrated in Figure 2.3.
Figure 2.3: The “Universal Access Reference Model” (adapted from Carter & Fourney, 2004b)
2.2
Further Considerations on UARM Components
This section builds on the previous introduction to the UARM by describing the components of the model in greater detail. The model contains five components: users, systems, interactions, environment, and context.
2.2.1
Users
A “User” is a person who interacts with a product, service, or environment (ISO & IEC, 2001b). Differences between users and differences experienced by a single user over time provide different accessibility needs. Different users have different capabilities and/or preferences, as well as different task-related needs when using the same system. Short-term, progressive or permanent changes to one’s abilities, skills and/or preferences mean that each individual user may have different accessibility needs at different times. User interactions involve the basic functions identified by Communication Theory (Shannon, 1948). As such, users:

• encode/decode messages using context that they hopefully share with the entities with which they are interacting,

• transmit/receive messages using various media.

Figure 2.4 illustrates the major user functions involved in encoding/decoding and transmitting/receiving messages. The actual transmission/reception of messages involves a combination of physical functions in the user’s interface with the real world. However, each physical function requires a corresponding skill/ability contained within the user’s context. The user’s context (a combination of abilities, skills and preferences) serves as the decoder/encoder. The user’s mind in this model is the source and destination of interactions.

Figure 2.4: A model of a user (adapted from Carter & Fourney, 2004b)
2.2.1.1
The User’s Interface
The function of the user’s interface, in this model, is to create, select and manage the combination of channels (through the user’s physical senses) that are used to interact with the outside world. The interface creates output channels and selects input channels based on the user’s abilities and skills. Each channel requires certain skills to be used successfully. The interface provides the synergy needed across various interaction channels to allow the simultaneous transmission and reception of multiple messages. It also allows the user to focus on particular messages or channels. The communication possibilities and the needs of a user at a given time may be greater than the user’s capabilities. For example, users are seldom capable of receiving information via multiple audio channels simultaneously. A user only capable of receiving information via audio cannot receive as much information at the same time as a user capable of receiving both aural and visual information. The interface’s management function optimizes the set of channels in use to help the user avoid information overload. This includes filtering or ignoring various competing interaction channels, as well as noise within channels.
2.2.1.2
The User’s Profile
Users are complex beings with many interacting skills and preferences. Interfacing with a particular user depends on a comprehensive set of abilities. Bloom’s Taxonomy (Bloom et al., 1956) identifies a widely accepted set of abilities that is useful in evaluating user behaviours. The taxonomy classifies abilities along three domains of activities (cognitive, affective, psychomotor) and allows them to be measured along a continuum from simple to complex. The UARM recognizes these domains within a user’s skills as they directly map to the filters a user applies to enable/disable the channels a user chooses to (or must) use to interact with a system. A user’s preferences map to the affective domain, task abilities and skills to the cognitive domain, and interaction abilities and skills to the psychomotor domain. Bloom’s taxonomy also maps to the usability objectives of ISO 9241-11. Satisfaction, an emotional response, maps to the affective domain. Efficiency, a metric of skill, maps to the psychomotor domain. Effectiveness, a metric of accuracy and completeness, maps to the cognitive domain. In this sense, Bloom’s taxonomy of affective, cognitive, and psychomotor capabilities provides a justification of the metrics provided by ISO 9241-11. Several authors (Edwards, 1995; Jacko & Vitense, 2001; Carter & Fourney, 2004b) also discuss a large number of cognitive, perceptual, physical and psychomotor abilities that make up many of the interaction abilities and skills suggested in the UARM. The UARM identifies three sets of user characteristics that can be combined to profile the user. A user’s profile involves three types of abilities that enable or disable the channels a user chooses to (or must) use to interact with a system.
Interaction abilities and skills allow a user to interact using a particular communication channel. Such skills include a large number of cognitive, perceptual, physical, and psychomotor abilities such as physical extension or dimensions (e.g., hand size, height), education, and literacy. One’s interaction abilities and skills fluctuate as they may be affected by the onset of, and recovery from, short-term or temporary disabilities and illnesses. Within the UARM, the set of interaction abilities and skills is recognized to change over time and in different circumstances. While permanent or temporary physical disabilities may impact certain user interaction abilities, external factors, such as environmental noise, may also handicap the interaction. Thus, what is important to consider is that interactions must use channels that the user currently has the interaction abilities and skills to use.

Task abilities and skills allow a user to make sense of the content of interactions. Tasks provide purposes and understanding for interactions. Tasks are accomplished using particular cognitive capabilities that have been specially developed. Tasks and/or their content may require (or prefer) the use of certain interaction abilities and skills related to certain media types or channels of communication (International Organization for Standardization [ISO], 2002a).

A user’s personal preferences can affect the choice of channel used wherever various channels are available for some interaction. A person’s state of mind has an impact on attention and performance. Attentiveness can affect efficiency and performance on workload demands (Taubes, 2000). A person’s preferences and habits are learned behaviour and/or reflective of the personality and mental model of the user. A user’s preferences and habits will change how an activity is completed and achieved by different users even if the task, environment and characteristics of the users and situation are the same (Inclusion of Disabled and Elderly People in Telematics, 2000). Cultural differences may influence a user’s preferences. User preferences may also change depending on the user’s current emotional state.
The user’s unique set of abilities may hinder or benefit the interaction between the user and the system. From the user’s perspective, an accessible interaction is only possible if there are sufficient channels available that are supported by the user’s interface and context. When compared to the set of channels the system makes available to the user, those abilities that the system and user both share are available for successful interactions. Therefore, shared abilities will open channels, while unshared abilities will close channels.
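The closing observation, that shared abilities open channels while unshared abilities close them, amounts to a set intersection. The sketch below illustrates that reading; the channel names and the flat-set representation are hypothetical simplifications, not part of the UARM.

```python
# Hypothetical sketch: channels each side can currently use, as plain sets.
# An interaction channel is open only if both the user and the system
# support it (shared abilities open channels; unshared abilities close them).
user_channels = {"visual-text", "keyboard", "speech-output"}
system_channels = {"visual-text", "visual-graphics", "keyboard", "audio"}

def open_channels(user, system):
    """Channels available for successful interaction: those both sides share."""
    return user & system

print(sorted(open_channels(user_channels, system_channels)))
# ['keyboard', 'visual-text']
```

An empty intersection would correspond to a fully handicapped interaction: no channel is open, so no communication is possible without some change to one of the two sides.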
2.2.1.3
Life Changes
During the span of one’s life, one may experience any or all of the following six changes (which are not necessarily negative):

Birth: Birth is a point in a person’s life where one would not have an experience of change. In particular, a person would not have an experience of “norm.” It is for this reason alone that many persons born with a disability tend to describe themselves this way only because others do; they have not actually experienced whatever ability had been “lost.”

Accident/Illness: An accident/illness is a sudden, possibly temporary, change in one’s physical status/health. For the person with the accident-related injury or an illness, it is easier to recognize that a change in status has occurred; that is, they differ from both a “social norm” and their “personal norm” (i.e., body schema; Reed & Farah, 1995). Since the person is aware of the change, this requires the person to find assistance with the systems they use and develop coping skills for the changes they have experienced.

Ageing: Ageing (infant to child, child to youth, youth to young adult, young adult to middle age, middle age to elderly) is a gradual ongoing change in a person’s life. Unlike the sudden change brought on by accident, ageing requires recognition that a change has occurred and acceptance of the change. It is always possible that a person experiencing this gradual change is ignorant of the change; such a person may not realize that a system requires adaptation. In addition, coping mechanisms may be slow to develop.

Recovery: A person’s recovery from an illness or an accident-related injury will also change their coping mechanisms and abilities, though not necessarily to their previous status.

Assistance: A part of a person’s coping strategy may be the acquisition of some assistance. This may be anything from hearing aids to personal care assistance. Almost always, it is a partial remediation of the person’s needs; that is, for a given individual, there will be things that are still not done. With the acceptance of assistance, there may be negative consequences such as the perception by others that, with this assistance, there is no disability. For example, others may ignore the need to face a hard of hearing person when communicating simply because it is assumed that a person wearing hearing aids no longer has a hearing problem.

Learning: Learning is a lifelong activity with broad impact. Over the span of a person’s life, change due to formal education and personal discovery also occurs. Such change is not necessarily gradual as learning can occur both over long periods as well as in short “eureka” moments.

The presence/absence of an ability should be seen as a continuous function rather than as the basis for dividing users into distinct groups of “haves” and “have-nots” (Vanderheiden, 1990). The distribution of skill in a specific ability can be described as a curve which includes a small number of those who have an exceptionally high skill, a larger number with mid-range skill, and a long tail of those with little to no skill. Thus, individuals do not fall at the lower or upper end of the distribution overall, but generally fall into different positions depending on the particular ability being measured (Vanderheiden, 1990).
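Vanderheiden’s point, that ability is a continuum rather than a have/have-not split, suggests representing each ability as a level rather than a boolean. The following sketch is purely illustrative: the ability names, the 0.0-1.0 scale, and the notion of a per-channel “demand” are assumptions of this example, not measurements defined by any standard.

```python
# Hypothetical sketch: per-ability skill levels on a 0.0-1.0 continuum,
# instead of dividing users into "haves" and "have-nots".
user_a = {"vision": 0.9, "hearing": 0.1, "fine-motor": 0.8}
user_b = {"vision": 0.2, "hearing": 0.95, "fine-motor": 0.7}

def can_use(profile, ability, demand):
    """True if the user's current level of an ability meets a channel's demand."""
    return profile.get(ability, 0.0) >= demand

# The same two users fall at different points of different distributions:
print(can_use(user_a, "vision", 0.5), can_use(user_b, "vision", 0.5))    # True False
print(can_use(user_a, "hearing", 0.5), can_use(user_b, "hearing", 0.5))  # False True
```

Neither user is globally “able” or “disabled” here; each simply sits at a different position on each ability’s distribution, which is exactly the point of the continuum view.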
2.2.2
Systems
Traditional human-computer interaction models of software systems may be divided into three parts: a front-end interface, the application (processing) logic, and a back-end database. The model, illustrated in Figure 2.5, expands on this basic structure by including interaction components, which are used to provide interaction styles and media necessary for an accessible interface.
Figure 2.5: A model of a system (adapted from Carter & Fourney, 2004b)

Note the similarity between Figure 2.5 and Figure 2.4. In terms of Communication Theory (Shannon & Weaver, 1949), the Interface would be the receiver/transmitter, the Interaction Components together with part of the Processing functionality would be the decoder/encoder, and the Data Stored together with part of the Processing functionality would be the destination/source of all messages. The remainder of this section discusses each of this model’s features.
2.2.2.1
The System’s Interface
A system’s interface is composed of a number of channels that provide the input and output functions of the system which interact with the user either directly or via one or more ATs. Individual channels correspond to particular hardware devices that are part of the computer system. Systems interface with users, for example, through the screen using a graphical user interface. Systems interface with other systems, such as ATs, for example, through connections to these systems and/or their processing component. Systems interface to the environment, for example, through sensors such as temperature controls.
Successful interaction requires that the user’s interface be capable of interacting with the system interface and/or any ATs connected to this interface. The system interface should be aware of, and support, any ATs that are being used to increase accessibility. The system’s interface should manage its combination of channels to achieve a suitable synergy that fulfills the needs of both the user and the system’s processing. This management function should provide sufficient interactions to meet these needs without providing information/interaction overload. This management function should also take into account any ATs that are being used, so as to maintain synergy as much as possible.
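The requirement that the system interface be aware of connected ATs can be read as channel transformation: an AT maps a channel the system offers onto one the user can actually use. The sketch below is hypothetical (the AT names, channel names, and source-to-target representation are inventions of this illustration), but it captures how an AT widens the set of channels reachable by the user.

```python
# Hypothetical sketch: an AT transforms one system channel into another
# channel that the user can receive, e.g. a screen reader turning
# visual text into synthesized speech.
assistive_technologies = {
    "screen-reader": ("visual-text", "speech-output"),
    "captioning": ("audio", "visual-text"),
}

def reachable_channels(system_channels, ats):
    """Channels the user can be offered: direct ones plus AT-transformed ones."""
    reachable = set(system_channels)
    for source, target in ats.values():
        if source in system_channels:
            reachable.add(target)  # the AT adds a transformed channel
    return reachable

print(sorted(reachable_channels({"visual-text", "audio"}, assistive_technologies)))
# ['audio', 'speech-output', 'visual-text']
```

Maintaining synergy, as described above, would then mean managing this widened channel set so that the combination still meets the user’s needs without overload.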
2.2.2.2 Processing
The system’s processing functionality executes the application’s logic for the system. It performs this function by interacting with the data store, making use of interaction components, and interacting with the user through the system’s interface. The application logic specifies how the system will assist the user in performing a variety of application-related tasks. As already noted, these tasks and/or their content may require (or be better suited to) the use of certain types of channels. However, to maintain accessibility, the channels presented to the user must correspond to the user’s interaction components. An additional accessibility concern is that the system’s tasks should be compatible with the user’s task abilities and skills. If the user does not have the required task abilities and skills, then the user must be able to acquire these from the system (through help and/or tutorial functions) before making use of system processing functions involving these tasks.
2.2.2.3 The System’s Interaction Components
The system’s interaction components provide the basic interaction styles and media that can be used by the system’s processing functionality to produce the system’s interface. Each interaction style and type of media may be used any appropriate number of times within the resulting interface. By making use of standard interaction styles and media, it is easier to support accessibility needs either directly or through the use of ATs. Each interaction style and media type has its own accessibility issues that need to be taken into account. The major styles of interaction, except for natural language and gesture, have been standardized within the ISO 9241 series; however, none of these standards contain specific accessibility-related guidance. Web-based interaction styles are standardized by the World Wide Web Consortium (W3C), along with accessibility-related guidance set out in the Web Content Accessibility Guidelines (Chisholm, Vanderheiden, & Jacobs, 1999). Interaction styles are rendered through media to produce the channels that the system interface provides to the user.
2.2.2.4 System Stored Data
A system may have up to four core pieces of data. Two of these, Application Content (including any data and task content) and System Context (i.e., the system’s own context), are required by the system. The other two, Environment Context and a Representation of a User’s Context, while somewhat more optional, have a direct impact on improving the interaction shared by the user and the system.
Application Content Application content is the system’s knowledge of the application domain and includes the data being used and the task being performed. The data being used by the application is a user data storage component (e.g., in a word-processing system, the current document). Information about the task is knowledge of what the application is doing (e.g., word-processing). Providing more than one means of system interaction may require the application content to be separated from the tool itself. An example of this is a browser, which presents information (i.e., a web page) retrieved from a remote source; the information itself has no effect on the browser. The web page content can provide its own full interface, separate from the tool, thus providing more than one means of system interaction. Knowledge of the application content maps directly into a system’s processing functionality. Application content is used by the system’s processing functionality as the system processes information and cooperates with the user to complete the task.
System Context A system maintains information about its own context. A system’s context knowledge includes information about available interaction styles and media, and current system state. Knowledge of the system’s context maps directly into the system’s interaction components. System context is used by the interaction components to define available interaction styles for the interface.
Environment Context A system may maintain information about specialized contexts such as its own environment context. The environment context is one of the two types of data that the system might not have. Knowledge of the system’s environment context maps directly into the system’s interaction components.
User Context Model A system may contain a representation of the user’s context. The user context model is the other of the two types of data that the system might not have. A system’s user context model contains an approximation of the user’s interaction and task abilities, skills, and preferences. This information assists the system in cooperating with the user to complete the task. This representation may be based on current or previous interactions with the user, and/or derived from an analysis of the channels the user has chosen.
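The four kinds of stored data described above can be sketched as a small data structure. The following is a hypothetical Python sketch, not part of the thesis; the class and field names are illustrative only:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class ApplicationContent:
    # Required: domain data and task knowledge.
    user_data: Any              # e.g., the current document
    task: str                   # e.g., "word-processing"

@dataclass
class SystemContext:
    # Required: the system's knowledge of its own state.
    interaction_styles: List[str] = field(default_factory=list)
    media: List[str] = field(default_factory=list)
    state: Dict[str, Any] = field(default_factory=dict)

@dataclass
class SystemData:
    # The environment context and the user context model are the two
    # pieces of data a system might not have, so they default to None.
    application_content: ApplicationContent
    system_context: SystemContext
    environment_context: Optional[Dict[str, Any]] = None
    user_context_model: Optional[Dict[str, Any]] = None

sd = SystemData(ApplicationContent("doc.txt", "word-processing"),
                SystemContext(["menu"], ["visual"]))
```

The two optional fields defaulting to `None` mirror the point that a system may operate without knowledge of its environment or of the user’s context, at the cost of a poorer interaction.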
2.2.3 Interactions
Accessibility depends on the recipient’s ability to create, receive, and interpret interactions. Interaction involves a number of one-way messages passing in both directions between users and systems. The theoretical approach to the definition of an “interaction” can be found in Communication Theory (Shannon, 1948). From the perspective of Communication Theory, the interaction between the user and the system can be described as one-way and transmission-oriented (see Figure 2.6). In this sense, any difficulties (i.e., “noise” in the language of Communication Theory) in the communication channel would inhibit the effective transmission of the system’s content (i.e., the application or service the system provides). The environment is seen as one source of such noise (Shannon, 1948).
Figure 2.6: “Communication Theory” view of interaction (adapted from Shannon & Weaver, 1949)

Shannon’s Communication Theory presents a number of important functions that must take place between two entities attempting to communicate with one another (Shannon, 1948). A message travelling between its source and destination must be:
• encoded (which should be done using a shared context),
• transmitted via a specific medium,
• over a communication channel connecting the sender and the recipient; that is, subject to noise (interference),
• received by its intended recipient (if the recipient is capable of using that medium), and
• decoded (using the recipient’s version of shared context).
If the context of the recipient is known, the choice of media can be limited to those that the recipient can access successfully. Accessibility may be increased by using multiple messages transmitted via different media over different channels, so that the likelihood increases that one or more messages will be received and interpreted. Each channel may have its own set of handicaps, as illustrated in Figure 2.7. A handicap to the interaction may occur where the recipient of a message is unable to make use of it and no alternative or redundant message is available. In addition to other contextual considerations, this may be due to the medium or the channel of the message. The recipient’s capacity to use a medium can be considered a recipient skill and thus a part of the recipient’s context. Environmental noise may interfere with interactions using one or more media, and can create even more handicaps to the interaction. For example, a user’s noisy environment may make information given through an auditory channel useless, or may make receiving such information very difficult.
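The redundancy argument above can be reduced to a few lines of Python. This is an illustrative toy, not from the thesis: a message gets through only if at least one copy survives both the recipient’s sensory limitations and environmental noise.

```python
def message_received(media_used, recipient_media, noisy_media):
    """True if at least one copy of a redundantly sent message gets through.

    media_used:      media the sender transmits the message in
    recipient_media: media the recipient is able to sense (a recipient skill)
    noisy_media:     media currently blocked by environmental noise
    """
    usable = set(media_used) & set(recipient_media)   # recipient can sense it
    surviving = usable - set(noisy_media)             # noise has not blocked it
    return len(surviving) > 0

# A user who cannot hear, in a visually clear environment:
print(message_received(["auditory"], ["visual", "tactile"], []))            # False
print(message_received(["auditory", "visual"], ["visual", "tactile"], []))  # True
# The same redundant message fails if glare blocks the visual channel too:
print(message_received(["auditory", "visual"], ["visual", "tactile"], ["visual"]))  # False
```

The middle case shows the benefit of redundancy: adding a visual copy of an auditory message makes it accessible; the last case shows that noise can still close every remaining channel.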
Figure 2.7: Media channels and handicaps
It is important to consider handicaps as they limit interactions, rather than individual messages. Interaction is a two-way process involving a number of one-way messages, which may or may not be asynchronous. Users interact with a system to use the system; systems interact with a user to respond to user requests. From the user’s perspective, an accessible interaction is only possible if, after filtering the available channels against the user’s real skills, there remains some set of channels that allows full interaction. If there are not enough channels available for full interaction, partial interaction may still be possible. However, if there are not enough channels remaining for a user to access a system, the system may be completely inaccessible to the user.

Since accessibility depends on the message recipient’s ability to receive, interpret, and create interactions, accessibility may be increased by transmitting multiple messages via different media over different channels, in the hope that one or more messages will be received and interpreted. Each medium makes use of its own communication channel. Messages may be combined with one another and/or transmitted simultaneously. One of the roles of these messages is to provide a means to share context.

Handicaps affect interactions, rather than individual messages. If the interaction succeeds, in spite of difficulties with individual messages, then accessibility will be maintained. However, for the sake of efficiency, it is preferable that only successful messages be involved in the interaction. This can be accomplished if the sender of the message is aware of the capabilities of the intended recipient. In the UARM, this involves being aware of shared context.

The single valve in Figure 2.3 on page 19 is a simplification of what is really happening. Each channel is uniquely influenced and thus can have its own handicap to the interaction. As Figure 2.7 shows, it is more correct to depict each channel as having its own valve. A channel is a function of media, style, and content, and results in usability.
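The channel-filtering idea above can be sketched in Python. This is an illustrative sketch under assumed names, not the thesis’s algorithm: channels are filtered against the user’s real skills, and the result classifies the pairing as full, partial, or inaccessible interaction.

```python
def classify_accessibility(system_channels, user_skills, required):
    """Classify an interaction as 'full', 'partial', or 'inaccessible'.

    system_channels: channels the system offers
    user_skills:     channels the user can actually use
    required:        channels needed for full interaction with this system
    """
    usable = set(system_channels) & set(user_skills)   # the open "valves"
    if not usable:
        return "inaccessible"   # no channel remains at all
    if set(required) <= usable:
        return "full"           # every needed channel is usable
    return "partial"            # some, but not all, channels remain

print(classify_accessibility({"visual", "auditory"}, {"visual"}, {"visual"}))                # full
print(classify_accessibility({"visual", "auditory"}, {"auditory"}, {"visual", "auditory"}))  # partial
print(classify_accessibility({"visual"}, {"auditory"}, {"visual"}))                          # inaccessible
```

The three calls correspond to the three outcomes described in the text: enough channels for full interaction, some channels but not all, and no remaining channel at all.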
Media A medium is a way of rendering information to a user. Each medium can be classified in terms of the modality it supports. A modality is the type of sensation the medium uses (e.g., visual, auditory, tactile, olfactory). Each modality only supports content in specific media. This will be discussed in Section 2.4.1.
Style Media render interactions that are based on particular dialogue styles. The ISO 9241 series contains guidance on a variety of dialogue techniques that form the basis for different styles of interaction. The standard interaction styles in application software and operating systems are: menu (International Organization for Standardization [ISO], 1997a), command language (International Organization for Standardization [ISO], 1997b), direct manipulation (International Organization for Standardization [ISO], 1999b), form fill-in (International Organization for Standardization [ISO], 1998b), natural language, and gesture (Carter & Fourney, 2004b). ISO 14915-2 provides further information on controls and links that may be implemented through different styles (International Organization for Standardization [ISO], 2003a).
Content The actual content of an application can be composed of several different media-neutral types of information including: causal, conceptual, continuous action, descriptive, discrete action, event, physical, procedural, relationship, spatial, state, and value (ISO, 2002a).
Usability Usability is the result of combining media, style, and content in a channel such that it is effective, efficient, and satisfying for the user (ISO, 1998a). The effectiveness and efficiency of a system for a user depend on the choice of media, style, and content used to communicate across a channel. Handicaps to interactions directly influence a system’s usability. The presence of a handicap, and the degree to which it interferes with the channel, may impact system effectiveness to the point that the user’s objective cannot be achieved.
2.2.4 Environment
Users interact with systems in an environment. Environments may affect users, systems, and interactions by helping or handicapping the interaction. An environment helps the interaction by suggesting the context of use when interpreting the interaction. It handicaps the interaction by introducing distractions and/or incorrect or inappropriate contexts of use.

Environments may be physical and/or socio-cultural. Physical environments include built spaces such as homes, offices, and mechanical plants, as well as an individual’s physiological state. Social and cultural environments include widespread attitudes towards the system being used.

Users, systems, and interactions may share the same environment but each be affected in different ways. For example, an environment high in electromagnetic noise may not affect the user, but it could affect the system. Similarly, an environment filled with noxious olfactory noise (i.e., a really bad odour) may dramatically affect the user but have no effect on the system.

Users, systems, and interactions may also not share the same environment; each may have a different, separate environment affecting them in different ways. During the period of an interaction, the environment of the user may not be the same as that of the interaction. Simultaneously, the environment of the interaction may not be the same as that of the system. Thus, knowledge of the environment of the user may help the system respond to the user more appropriately.

There is a temporal dimension to an environment. An environment might change during the period of the interaction. Such change could be instantaneous, rapid, or gradual. For example, a room might become ten degrees cooler over an hour; similarly, a water pipe might suddenly burst.
2.2.4.1 User Environment
Although no user or system operates in a vacuum, a single perceived environment may not be shared by both system and user. For a user, the various factors of their environment may assist or hinder their performance with a system. For example, the user’s comfort in the environment (e.g., too hot, too cold), ability to get around the environment (e.g., an accessibly built environment, cleanliness of the space), and ability to focus on the one system (e.g., attention divided among other tasks, presence of noise¹) each contribute to the user’s performance.

A user’s physiological state is part of their physical environment. One’s physiological state is in constant flux, which frequently impacts one’s well-being. Such changes to one’s physiological state may be short-term, long-term, ongoing, or permanent. Examples include being too hot, hungry, sick, or permanently injured.

A user’s socio-cultural milieu is also part of their environment. For example, while using a kiosk system, the presence of a potential user in the line behind the current user could distract the current user from their task. An even longer queue might provide further distraction.

¹ “Noise” in this context may be visual, olfactory, auditory, and/or tactile.

2.2.4.2 System Environment
Knowledge of the environment of the user may help the system respond to the user more appropriately. Knowledge of its own environment may also help the system when responding to the user. A system may maintain information about its own environment. Such information may be “perceived” by the system through various sensors (Culler, Estrin, & Srivastava, 2004), system alarms (Tohma, 2004), or hardcoded information (e.g., “system preferences”). Sensors, such as thermostats, if present, can provide information about the system’s external space. System alarms, if present, can provide information about internal hardware or software status. Hardcoded information can be provided by a programmer, maintainer, or user. Hardcoded information may be entered only once, updated regularly, or updated irregularly. However, unlike sensors or alarms, hardcoded information may not correctly reflect the system’s actual physical environment; it captures only a snapshot in time. A system may store a model of any information it has about the user’s environment. This information assists the system in co-operating with the user to complete the task. Information about the user’s environment may be based on current or previous interactions with the user, and/or derived from an analysis of the channels the user has chosen to use to interact with the system.
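The three sources of environment knowledge named above (sensors, alarms, and hardcoded information) can be sketched as a data structure. This is a hypothetical Python sketch with invented names; the timestamp on hardcoded entries reflects the point that such values are only a snapshot in time.

```python
import time

class EnvironmentContext:
    """Illustrative sketch of a system's environment knowledge."""

    def __init__(self):
        self.sensor_readings = {}   # live values, e.g. {"temperature_c": 21.5}
        self.alarms = []            # internal hardware/software status events
        self.hardcoded = {}         # programmer/maintainer/user-entered values

    def set_hardcoded(self, key, value):
        # Unlike a sensor, a hardcoded value may not reflect the current
        # environment, so record when it was entered alongside the value.
        self.hardcoded[key] = (value, time.time())

    def hardcoded_age(self, key, now=None):
        """Seconds since a hardcoded value was entered; a staleness hint."""
        value, entered = self.hardcoded[key]
        return (now if now is not None else time.time()) - entered

env = EnvironmentContext()
env.set_hardcoded("location", "server room")
```

Keeping the entry time allows the system to treat old hardcoded values with less confidence than fresh sensor readings, in line with the snapshot caveat above.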
2.2.5 Context
Context is the “glue” that ties interactions, systems, and users together. Each party in an interaction has their own context. The theoretical basis for context in the UARM is provided by Weaver’s notion of context in Communication Theory (1949). The problem that context identifies is that each party in an interaction can have their own perspective, which may help or hinder the success of the interaction. This includes the system, which must share its own expectations with users if it is to successfully mediate any interaction. For any channel of transmission to be successful, it requires the sender and receiver to have a shared context — something in common — for the encoding/decoding of messages (Weaver, 1949).
For context to be used to “make sense” of any message, there must be some common knowledge related to the message (e.g., English) and some common skills that help make use of the channel (e.g., literacy). In the case of a computing system, both the user and the system:
• must share knowledge of the application domain,
• may have domain-related skills (Carter & Fourney, 2004b), and
• must share knowledge of the symbols used in the communication (e.g., communicating in Arabic requires both knowledge of Arabic and literacy in oral/written Arabic) (Weaver, 1949).

From Weaver’s approach, the shared context of the interaction requires:
• including the interaction skills for non-computer environments in the user’s context (i.e., remembering that the user is not just interacting with the system, but also with their own physical environment); and
• recognizing that a user can be involved in multiple interfaces simultaneously, such that the user’s full attention cannot be assumed.

Therefore, to communicate, all parties must be able to anticipate something within a shared context. When context is missing, more information must be provided. Anything that is not shared has to be communicated or explained in a way that is shared. From this point of view, the user and system exist in many overlapping contexts (Maskery & Meads, 1992). Each of these contexts can be logically AND’ed or OR’ed such that the interaction between the user and the system can be described in the form of a Venn diagram (see Figure 2.8). Each context represents information about the task at hand, the environment, and so on. Any unshared portion of each party’s context is a potential area for misunderstanding and miscommunication, which may (or may not) lead to a handicap.
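The AND/OR view of overlapping contexts maps naturally onto set operations. The following is an illustrative Python sketch; the context items are invented examples, not from the thesis:

```python
# Each party's context as a set of shared-context items.
user_context = {"English", "literacy", "banking domain", "touch input"}
system_context = {"English", "literacy", "banking domain", "mouse input"}

shared = user_context & system_context    # AND: the common ground
combined = user_context | system_context  # OR: everything in play
unshared = user_context ^ system_context  # potential miscommunication

print(sorted(shared))    # ['English', 'banking domain', 'literacy']
print(sorted(unshared))  # ['mouse input', 'touch input']
```

The symmetric difference (`unshared`) is exactly the unshared portion of each party’s context described above: here, an input-method mismatch that may (or may not) lead to a handicap.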
[Figure 2.8: Venn diagram showing the overlap of the user’s context and the system’s context]